[Question]: Groq: client.chat.completions.create function location. #16628
Comments
The source code that handles user messages with function calls through tool use when using a Groq model is the streaming chat loop of the OpenAI LLM class (which Groq reuses), where `is_function = False` is initialized before the response loop:
```python
# state initialized before the loop
is_function = False
content = ""
tool_calls = []  # accumulated ChoiceDeltaToolCall objects

for response in client.chat.completions.create(
    messages=message_dicts,
    **self._get_model_kwargs(stream=True, **kwargs),
):
    response = cast(ChatCompletionChunk, response)
    if len(response.choices) > 0:
        delta = response.choices[0].delta
    else:
        if self._is_azure_client():
            continue
        else:
            delta = ChoiceDelta()

    # check if this chunk is the start of a function call
    if delta.tool_calls:
        is_function = True

    # update using deltas
    role = delta.role or MessageRole.ASSISTANT
    content_delta = delta.content or ""
    content += content_delta

    additional_kwargs = {}
    if is_function:
        tool_calls = update_tool_calls(tool_calls, delta.tool_calls)
        if tool_calls:
            additional_kwargs["tool_calls"] = tool_calls

    yield ChatResponse(
        message=ChatMessage(
            role=role,
            content=content,
            additional_kwargs=additional_kwargs,
        ),
        delta=content_delta,
        raw=response,
        additional_kwargs=self._get_response_token_counts(response),
    )
```

This code is responsible for processing the streaming responses from the `client.chat.completions.create` call: it accumulates content and tool-call deltas and yields a `ChatResponse` for each chunk.
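The `update_tool_calls` helper accumulates those streamed fragments. A minimal dict-based sketch of the idea (the real helper in the OpenAI integration operates on `ChoiceDeltaToolCall` objects, so this is illustrative only):

```python
from typing import Dict, List

def merge_tool_call_deltas(
    tool_calls: List[Dict], deltas: List[Dict]
) -> List[Dict]:
    """Accumulate streamed tool-call fragments by their index."""
    for delta in deltas or []:
        idx = delta["index"]
        if idx >= len(tool_calls):
            # first chunk for this tool call: start a new entry
            tool_calls.append(
                {"index": idx, "id": delta.get("id"), "name": "", "arguments": ""}
            )
        entry = tool_calls[idx]
        fn = delta.get("function", {})
        # names and JSON argument strings arrive in pieces; concatenate them
        entry["name"] += fn.get("name") or ""
        entry["arguments"] += fn.get("arguments") or ""
    return tool_calls

calls: List[Dict] = []
calls = merge_tool_call_deltas(calls, [{"index": 0, "id": "call_1", "function": {"name": "multiply"}}])
calls = merge_tool_call_deltas(calls, [{"index": 0, "function": {"arguments": '{"a": 3,'}}])
calls = merge_tool_call_deltas(calls, [{"index": 0, "function": {"arguments": ' "b": 7}'}}])
# calls[0]["arguments"] is now '{"a": 3, "b": 7}'
```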
That is not the information I am looking for. I want the source code of `client.chat.completions.create`.
The source code for `_chat` is:

```python
def _chat(self, messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponse:
    all_kwargs = self._get_model_kwargs(**kwargs)
    message_dicts = to_openai_message_dicts(messages)

    response = self._client.chat.completions.create(
        messages=message_dicts,
        stream=False,
        **all_kwargs,
    )
    openai_message = response.choices[0].message
    message = from_openai_message(openai_message)

    return ChatResponse(
        message=message,
        raw=response,
        additional_kwargs=self._get_response_token_counts(response),
    )
```

This method uses the `client.chat.completions.create` call on the underlying OpenAI-compatible client to send the converted messages and build the `ChatResponse`.
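For context, a minimal usage sketch that exercises this path (the model name is only an example):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.groq import Groq

# Groq subclasses the OpenAI-compatible LLM, so .chat() eventually routes
# through _chat() above, which calls client.chat.completions.create.
llm = Groq(model="llama3-70b-8192", api_key="your-groq-api-key")

response = llm.chat([ChatMessage(role="user", content="What is 2 + 2?")])
print(response.message.content)
```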
For Groq, we can see that it is just a light wrapper around the OpenAI LLM:
llama_index/llama-index-integrations/llms/llama-index-llms-groq/llama_index/llms/groq/base.py (line 7 in be7f61a)

Looking at the OpenAI LLM class, we can see that the tools are prepared here:
llama_index/llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/base.py (line 823 in be7f61a)

That method is called from the base class, and the tool calls are then parsed from the response here:
llama_index/llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/base.py (line 877 in be7f61a)
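Concretely, "preparing the tools" produces the OpenAI-style `tools` payload that Groq's OpenAI-compatible endpoint accepts. An illustrative example of what ends up in the request body (the `multiply` tool and its schema are made up for this sketch):

```python
# Illustrative `tools` payload; real schemas are generated from each
# LlamaIndex tool's metadata (name, description, parameter JSON schema).
tools = [
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "Multiply two integers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer"},
                    "b": {"type": "integer"},
                },
                "required": ["a", "b"],
            },
        },
    }
]
# This list is passed as the `tools` kwarg to client.chat.completions.create.
# The "instruction" for function calling travels as this structured schema,
# not as extra hand-written prompt text on the LlamaIndex side.
```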
Thank you for the quick response. I was actually able to find those files.
I am interested in looking at the prompt that ultimately gets sent to the Groq API (around the call at line 429 of that file), once it is no longer formatted with LlamaIndex structures and classes. I am interested in that piece of text. But I am having trouble tracing back the function `client.chat.completions.create`. If I try to print its location, I get sent to https://github.com/openai/openai-python/blob/main/src/openai/_utils/_utils.py, which clearly is not the place for the Groq API call I am looking for. I am particularly interested in the instruction sent to the Groq API which details how to structure the output so that it can properly do function calling. Any help with this would be greatly appreciated. Cheers
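A practical way to inspect that payload is to enable the openai SDK's debug logging, since the Groq integration sends requests through that client (model name and key are placeholders):

```python
import os

# The openai SDK logs outgoing requests (method, URL, JSON body) at debug
# level when OPENAI_LOG is set before the client library is imported.
os.environ["OPENAI_LOG"] = "debug"

from llama_index.core.llms import ChatMessage
from llama_index.llms.groq import Groq

llm = Groq(model="llama3-70b-8192", api_key="your-groq-api-key")
llm.chat([ChatMessage(role="user", content="hello")])
# The debug output includes the final request body, with the `tools`
# schemas, exactly as it is sent to Groq's OpenAI-compatible endpoint.
```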
Question
Hi, I am trying to get a full understanding of how a function call message is handled, in particular when using Groq with a FunctionCalling agent. I want to understand the structure of the message passed to the most basic LLM agent in the chain, where the user message is handed to the agent and then passed down to the underlying LLM.
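For reference, a typical setup of this kind looks roughly like the following (the tool, model name, and key are illustrative placeholders, not the exact code from the original report):

```python
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.groq import Groq

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = Groq(model="llama3-70b-8192", api_key="your-groq-api-key")
tools = [FunctionTool.from_defaults(fn=multiply)]

# The agent decides when to call the tool; the LLM receives the tool
# schema via the OpenAI-style `tools` kwarg on each chat request.
agent = FunctionCallingAgentWorker.from_tools(tools, llm=llm).as_agent()
response = agent.chat("What is 3 times 7?")
```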
I was able to trace the user input up until the file
https://github.com/run-llama/llama_index/blob/67c7e50e782f9ce12e1fd76b4ac3a131a505f19b/llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/base.py
Up to the function call on line 429
However, I am unable to find where the definition of `client.chat.completions.create` is when I am using a Groq model, even when I try to print the location of the function in Python.
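A snippet along these lines reproduces that lookup (the exact code is an assumption; `inspect.getfile` and the private `_get_client` helper may differ from what was actually run):

```python
import inspect

from llama_index.llms.groq import Groq

llm = Groq(model="llama3-70b-8192", api_key="your-groq-api-key")
client = llm._get_client()  # assumed private helper returning the openai SDK client

# `create` is wrapped by the @required_args decorator, which is defined in
# openai/_utils/_utils.py, so inspect resolves to the wrapper's file rather
# than the resource class that actually implements the method.
print(inspect.getfile(client.chat.completions.create))
```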
Doing so, I get sent to an openai module. On my local computer it is not a link:
lib/python3.11/site-packages/openai/_utils/_utils.py
which is not part of LlamaIndex and is also not Groq.
Can you please point me to the correct source code that handles the user message with function calls through tool use with Groq?