Yeah, that's why we could keep `allComments` so it makes more sense to the LLM, but remove `userComments` and specify that the LLM should evaluate the comments of user `x`.
> If I understand correctly, comments include an id and author, so the LLM would know which comments to evaluate, but I might be wrong; maybe it will miss some comments.
Originally posted by @whilefoo in #225 (comment)
To evaluate a user's comments, we currently send the whole list of comments under the issue, followed by the list of that user's comments. This sends the same content twice, since the user's comments are already in the full comment list. Instead, we should state in the prompt which user we are evaluating and let the LLM extract their comments during evaluation, which would greatly reduce the number of tokens used and allow us to send more data.
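A minimal sketch of what the deduplicated prompt could look like. The `Comment` shape and the `buildEvaluationPrompt` helper are hypothetical (the plugin's actual types may differ); the point is that `allComments` is sent once, with `id` and `author` fields, and the prompt names the user to evaluate instead of repeating their comments:

```typescript
// Hypothetical comment shape; the real plugin's type may differ.
interface Comment {
  id: number;
  author: string;
  body: string;
}

/**
 * Builds the evaluation prompt. The full comment list is sent once;
 * instead of repeating the user's comments, the prompt names the user
 * and asks the model to pick their comments out of the full list.
 */
function buildEvaluationPrompt(allComments: Comment[], username: string): string {
  const serialized = JSON.stringify(allComments, null, 2);
  return [
    `Evaluate the relevance of the comments authored by "${username}".`,
    `Each comment has an "id" and an "author". Only score comments where`,
    `"author" equals "${username}", and return a JSON object mapping each`,
    `of those comment ids to a relevance score between 0 and 1.`,
    ``,
    `Comments:`,
    serialized,
  ].join("\n");
}
```

Compared to sending `userComments` alongside `allComments`, this roughly halves the tokens spent on the evaluated user's comments, at the cost of relying on the model to filter by `author` correctly, which is the risk raised in the quoted comment above.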
We could also look into prompt caching, although it might be specific to the OpenAI API.
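As far as I know, OpenAI's prompt caching is applied automatically once a prompt is long enough, and it caches the shared prompt prefix, so the main thing we control is message ordering: put the large, stable content (instructions plus the full comment list) first and the short per-user instruction last, so repeated calls for different users can reuse the cached prefix. A sketch, assuming the `openai` npm package and the serialized comment list from the hypothetical helper above:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Keep the large, shared content (instructions + full comment list) in a
// stable prefix so repeated calls can hit the provider's prompt cache;
// only the short per-user instruction at the end varies between calls.
async function evaluateUser(serializedComments: string, username: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // assumption: any chat model would work here
    messages: [
      {
        role: "system",
        content:
          "You evaluate the relevance of GitHub comments. " +
          "Each comment has an id, an author, and a body.\n\n" +
          `Comments:\n${serializedComments}`,
      },
      {
        role: "user",
        content: `Score the comments authored by "${username}" and return a JSON object mapping their ids to scores between 0 and 1.`,
      },
    ],
  });
  return response.choices[0].message.content;
}
```

Other providers handle this differently (Anthropic, for instance, uses explicit `cache_control` markers on content blocks), so the stable-prefix ordering is the portable part if we want to stay provider-agnostic.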