langgraph/how-tos/pass-run-time-values-to-tools/ #681
Replies: 11 comments 28 replies
-
The document introduction mentions the usage of Runnable and RunnableConfig: "To pass run time information, we will leverage the Runnable interface. The standard runnables methods (invoke, batch, stream etc.) accept a 2nd argument which is a RunnableConfig." But it is not clear from the tutorial if (or where) these objects are actually being used.
-
Regarding the documentation at https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/: can you show me how to pass run-time values to tools when working with LangGraph?
-
Is this method still valid?
-
I am able to read the state, but unable to update it. Please help.
-
Should we use …
-
Hi friends: this function's artifact is a list of Documents.
How can I define it to return another data type, like JSON or a pandas DataFrame? Would appreciate some help here! Thanks
-
Using version 0.2.4, my tool is always being invoked with the InjectedState parameter, even though the tool call schema shows that it's not included:

```python
from langgraph.prebuilt import InjectedState
```

Tool call schema:
-
Is that valid? When passing a Pydantic schema to @tool(args_schema=...), it still prompts me for the state as a missing argument.
-
Using v0.2.4, I implemented a custom tool node like the one in the documentation.

```python
import json

from langchain_core.messages import ToolMessage


def call_tool(state: State):
    tools_by_name = {...}
    messages = state["messages"]
    last_message = messages[-1]
    output_messages = []
    tool_results = {}
    for tool_call in last_message.tool_calls:
        try:
            result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
            output_messages.append(
                ToolMessage(
                    content=json.dumps(result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
            tool_results.update(result)
        except Exception as e:
            output_messages.append(
                ToolMessage(
                    content="",
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                    additional_kwargs={"error": str(e)},
                )
            )
    return {
        "messages": output_messages,
        **tool_results,  # A way to update the state with the tool results
    }
```

And then, when I need to inject state into a tool:

```python
@tool
def generate_chart(request: str, state: Annotated[dict, InjectedState]):
    ...
```

It outputs the error:

```
{
  "error": "1 validation error for generate_chartSchema\nstate\n  field required (type=value_error.missing)"
}
```

I suspect it is caused by my custom tool node implementation. Is there a way to solve this issue?
-
I'm trying to do the same thing with create_tool_calling_agent. Nothing works; maybe someone knows if it is possible to achieve similar behavior. The state would first be passed from the graph to the agent and then to the tools.
-
Is there a straightforward way to modify InjectedState inside the tool, or is it only for reading the state? What is the suggested way to modify the state right from the @tool?