Avoid sending user comments twice by evaluating them directly from the list of all comments #241

Open
gentlementlegen opened this issue Jan 17, 2025 · 2 comments



gentlementlegen commented Jan 17, 2025

> Yeah, that's why we could leave `allComments` so it makes more sense to the LLM, but remove `userComments` and specify that the LLM should evaluate the comments of user `x`.
>
> If I understand correctly, comments include an id and an author, so the LLM would know which comments to evaluate, but I might be wrong; maybe it will miss some comments.

Originally posted by @whilefoo in #225 (comment)

To evaluate a user's comments, we currently send the LLM the full list of comments under the issue and then, separately, the list of that user's comments. Since the user's comments are already in the full list, we send the same content twice. Instead, we should state in the prompt which user is being evaluated and let the LLM extract their comments from the full list. This would greatly reduce the number of tokens used, allowing us to send more data.
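A minimal sketch of what the deduplicated prompt construction could look like. The `Comment` shape and the `buildEvaluationPrompt` helper are hypothetical, not the plugin's actual types or prompt wording:

```typescript
interface Comment {
  id: number;
  author: string;
  body: string;
}

function buildEvaluationPrompt(allComments: Comment[], targetUser: string): string {
  // Serialize the full comment list once; each entry carries its id and
  // author, so the model can pick out the target user's comments itself.
  const serialized = allComments
    .map((c) => `[#${c.id}] @${c.author}: ${c.body}`)
    .join("\n");

  return [
    `Below are all comments on the issue, one per line, as "[#id] @author: body".`,
    `Evaluate only the comments authored by @${targetUser}; treat the rest as context.`,
    ``,
    serialized,
  ].join("\n");
}
```

Because the comment list is sent only once, the per-user cost is reduced to the single instruction line naming the target user.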

We could also look into prompt caching, although the mechanism may be specific to the OpenAI API.
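If we use OpenAI's prompt caching, which (as documented for caching-eligible models) automatically reuses the longest shared prompt prefix across requests, we would want the static comment list first and the varying per-user instruction last. A hedged sketch, assuming the `openai` Node SDK; the model name and message wording are placeholders:

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

async function evaluateUser(serializedComments: string, targetUser: string) {
  return openai.chat.completions.create({
    model: "gpt-4o", // assumption: any caching-eligible model
    messages: [
      // Static prefix, identical across every user evaluated on the same
      // issue, so successive requests can hit the prompt cache:
      { role: "system", content: `All comments on the issue:\n${serializedComments}` },
      // Varying suffix, one per evaluated user:
      { role: "user", content: `Evaluate only the comments authored by @${targetUser}.` },
    ],
  });
}
```

With this ordering, evaluating several users on one issue repeats only the short trailing instruction, while the large comment-list prefix is cached.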


ubiquity-os-beta bot commented Jan 17, 2025


0x4007 commented Jan 17, 2025

@shiv810 please edit the spec with any clarifications, as it seems you have the best understanding of the cache mechanism.
