The prompt construction code in toolbench/inference/LLM/tool_llama_model.py#L97-#L103:
for message in conversation_history:
    role = roles[message['role']]
    content = message['content']
    if role == "System" and functions != []:
        content = process_system_message(content, functions)
    prompt += f"{role}: {content}\n"
prompt += "Assistant:\n"
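A minimal sketch of one possible fix, assuming the ReAct-style "Thought / Action / Action Input" layout of the training data and OpenAI-style function_call fields (name, arguments); this is an illustration, not the repository's actual patch:

for message in conversation_history:
    role = roles[message['role']]
    content = message['content']
    if role == "System" and functions != []:
        content = process_system_message(content, functions)
    # Re-attach the action details stored under 'function_call' so the
    # assistant turn matches the training-time format (assumed field names).
    if role == "Assistant" and message.get('function_call'):
        call = message['function_call']
        content = (f"{content}\n"
                   f"Action: {call['name']}\n"
                   f"Action Input: {call['arguments']}")
    prompt += f"{role}: {content}\n"
prompt += "Assistant:\n"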
When the role is assistant, the content included in the prompt contains only the Thought and omits the Action and Action Input, because the action details are stored under the function_call key of the message rather than in content.
The conversation_history construction code is at toolbench/inference/LLM/tool_llama_model.py#L116-#L123.
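For illustration, an assistant entry in conversation_history as described here would look roughly like the following (assumed OpenAI-style field names and hypothetical values; not the repository's actual code):

assistant_message = {
    'role': 'assistant',
    # Only this Thought text reaches the prompt in the loop above.
    'content': "Thought: I need to check the current weather first.",
    # The Action and Action Input live here and are currently dropped.
    'function_call': {
        'name': 'get_weather',             # the Action
        'arguments': '{"city": "Paris"}',  # the Action Input
    },
}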
This bug makes the assistant portion of the inference-time prompt inconsistent with the prompt used during training, potentially degrading evaluation performance.
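Concretely (with a hypothetical tool name and arguments), an assistant turn that the training data serializes as

Assistant: Thought: I need to check the current weather first.
Action: get_weather
Action Input: {"city": "Paris"}

is rendered by the inference code above as just

Assistant: Thought: I need to check the current weather first.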