-
I have this tool:

```python
from typing import Callable, List, Optional

from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool
from langgraph.prebuilt import ToolNode
from pydantic import Field


def _print_func(question: str, answer_options: List[str]) -> None:
    print(question)
    print(answer_options)


def get_input() -> str:
    return input()


class HumanInputRun(BaseTool):
    """Tool that asks the user for input."""

    name: str = "human"
    description: str = (
        "You can ask a human for an answer to a clarifying question. "
        "The input should be a question and the answer options for the human."
    )
    # the prompt function receives the question and its answer options
    prompt_func: Callable[[str, List[str]], None] = Field(default_factory=lambda: _print_func)
    input_func: Callable = Field(default_factory=lambda: input)

    def _run(
        self,
        question: str,
        answer_options: List[str],
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> dict:
        """Use the Human input tool."""
        self.prompt_func(question, answer_options)
        answer = self.input_func()
        return {
            "clarifying_question_answer": answer,
            "clarifying_question": question,
            "answer_options": answer_options,
        }


tool = HumanInputRun(input_func=get_input)
tools = [tool]
tool_node = ToolNode(tools)
```

I have this tool and I want to update the state with what this tool returns. However, `ToolNode` only returns the tool output wrapped in a `ToolMessage`. What I want to do is to update the graph state with this dict. How can I do that?
Replies: 11 comments 19 replies
-
If you want to update the state directly, you should probably add this as a node in the graph rather than as a tool.
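To illustrate that suggestion, here is a minimal, self-contained sketch in plain Python (no LangGraph imports; the question text and the `human_input_node` name are made up). A graph node simply returns a partial state update, and the graph merges it into the state:

```python
from typing_extensions import TypedDict


class State(TypedDict):
    clarifying_question: str
    clarifying_question_answer: str


def human_input_node(state: State) -> dict:
    """A graph node, not a tool: whatever it returns is merged into the state."""
    question = "Which database do you use?"
    answer = "postgres"  # in a real graph this would come from input()
    return {"clarifying_question": question, "clarifying_question_answer": answer}


# LangGraph merges the returned partial update into the state; plain merge here:
state: State = {"clarifying_question": "", "clarifying_question_answer": ""}
state = {**state, **human_input_node(state)}
print(state["clarifying_question_answer"])  # postgres
```

With a node there is no `ToolMessage` wrapping at all, so the returned dict lands in the state directly.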
-
I have the same problem.
-
@hwchase17 Is there any plan to add the capability to update state from tools in the future? I'm not sure if `ToolMessage` will have all the information returned from tools.
-
This should be addressed.
-
Just FYI, I ended up implementing a follow-up node to address this issue.
-
I had to implement something unconventional, but it works. I created a tool using the `@tool(response_format="content_and_artifact")` directive. The tool only receives parameters and returns them as a tuple, like this: `return "some_tool_message", params`. After that, I created a separate node that always executes after the tool call to handle the actual functionality. Essentially, the tool serves as an entry trigger, while the node handles the real logic. Honestly, this approach feels odd, but it works. I haven't found a way to modify the state directly within the tool call itself. All my attempts so far have been unsuccessful. Here's the code example:

```python
from typing import Tuple

from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode
from langgraph.types import Send


# My tool
@tool(response_format="content_and_artifact")
def tool_a(param1: int, param2: int) -> Tuple[str, dict]:
    """Collect the parameters and pass them along as the artifact."""
    tool_params = {
        "param1": param1,
        "param2": param2,
    }
    return "some_message", tool_params


# My node
def tool_a_node(state: State):
    # Modify state here
    # Send AI messages from here
    pass  # Add your functionality


# Mapping data from the tool trigger to the actual node for processing
def map_data_from_tool_to_node(state: State):
    tool_message = state["messages"][-1]
    tool_data = tool_message.artifact
    return Send("tool_a_node", tool_data)


# Adding the tool and connecting it to the node
builder.add_node("tool_a", ToolNode([tool_a]))
builder.add_node("tool_a_node", tool_a_node)
builder.add_conditional_edges("tool_a", map_data_from_tool_to_node, ["tool_a_node"])
```

**Important:** If anyone knows a better way to modify the state directly from the tool call, I'd appreciate the insights. So far, this workaround is the best I could come up with.
-
Had the same question, and I am not sure if this was solved already, but what is wrong with `InjectedState` and the approach explained here with examples? I believe we can pass state directly to the tool. The rest is the same as with the default `ToolNode`.
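As a rough illustration of the idea in plain Python (the `InjectedState` marker class and the `model_visible_args` helper here are made-up stand-ins, not the real LangChain schema logic): arguments annotated as injected are hidden from the model's tool schema and filled in by the runtime instead:

```python
import inspect
from typing import Annotated


class InjectedState:
    """Stand-in marker, mimicking langgraph.prebuilt.InjectedState."""


def my_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
    # the model only supplies 'query'; the runtime injects 'state'
    return f"{query} (user={state['user']})"


def model_visible_args(fn) -> list[str]:
    """Args the LLM would see: those without the InjectedState marker."""
    visible = []
    for name, param in inspect.signature(fn).parameters.items():
        metadata = getattr(param.annotation, "__metadata__", ())
        if InjectedState not in metadata:
            visible.append(name)
    return visible


print(model_visible_args(my_tool))  # ['query']
print(my_tool("ping", state={"user": "alice"}))  # ping (user=alice)
```

This is why `InjectedState` lets a tool read the state, but, as the thread discusses, reading alone does not solve writing state updates back.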
-
The only thing missing is I would like to trigger the reducer instead of doing async function toolNodeWithState(state: typeof GlobalState.State) {
const tools = await generateTools(
context,
() => state,
(newState) => Object.assign(state, newState),
);
const toolNodeWithConfig = new ToolNode(tools);
state.messages = (await toolNodeWithConfig.invoke(state)).messages;
return state;
} |
-
@MrAlekhin I'm not doing direct state modification. I'm respecting the LangGraph philosophy: you just "return" the state modifications you want your tool to apply (more below).

@sgaseretto this is how I made the modification. First you need to define `InjectedStateModifier`. That's the annotation used to tell LangChain that the tool parameter is injected by us and not the AI. The actual data being injected is something similar to my `StateModifier`:

```python
from typing import Any, Callable, Dict, Type

from langchain_core.tools import InjectedToolArg


class InjectedStateModifier(InjectedToolArg):
    pass


class StateModifier:
    def __init__(self, state_schema: Type, parsed_state_schema: Dict[str, Callable]):
        self._state_schema = state_schema
        self._parsed_state_schema = parsed_state_schema
        self._state: Dict[str, Any] = {}

    def push_statechange(self, s: Dict[str, Any]):
        # a helper that merges tool states respecting the operators (if any)
        _inject_tool_results(self._parsed_state_schema, self._state, s)

    @property
    def pending_state(self):
        return {**self._state}
```

Look at how the out-of-the-box `InjectedState` or `InjectedStore` works for an example. Then modify the `ToolNode` class to create a `StateModifier` and inject it into all tool calls. Simply mimic the existing `_inject_store()` code to see how to inject that class into the parameter:

```python
async def _afunc(
    self,
    input: Union[
        list[AnyMessage],
        dict[str, Any],
        BaseModel,
    ],
    config: RunnableConfig,
    *,
    store: BaseStore,
) -> Any:
    statemodifier: StateModifier = StateModifier(self._state_schema, self._parsed_state_schema)  # <---
    tool_calls, output_type = self._parse_input(input, store, statemodifier)
    if output_type == "list":
        raise Exception('output_type == "list" is not supported')
    outputs = await asyncio.gather(
        *(self._arun_one(call, config) for call in tool_calls)
    )
    statemodifier.push_statechange({self.messages_key: outputs})  # <---
    return statemodifier.pending_state
```

The last line is what the `ToolNode` actually returns to LangGraph: a message plus any state returned by the tool. In the end, it is super simple to use once that `ToolNode` is modified:

```python
import operator
from typing import Annotated, Any, Optional, Type

from langchain_core.callbacks import AsyncCallbackManagerForToolRun
from langchain_core.messages import BaseMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import BaseTool
from langgraph.graph.message import add_messages
from langgraph.prebuilt import InjectedState, InjectedStore
from langgraph.store.base import BaseStore
from pydantic import BaseModel, Field, SkipValidation
from typing_extensions import TypedDict


class StateForTool(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    author: str
    elem: Annotated[list[int], operator.add]


class GetTableDetailsToolM(BaseModel):
    table_logical_name: str = Field(..., description="")
    state: Annotated[StateForTool, InjectedState] = Field()
    store: Annotated[Any, InjectedStore, SkipValidation] = Field()
    statemodifier: Annotated[Any, InjectedStateModifier, SkipValidation] = Field()


class GetTableDetailsTool(BaseTool):
    name: str = "get_table_details"
    description: str = """..."""
    args_schema: Type[BaseModel] = GetTableDetailsToolM

    async def _arun(
        self,
        table_logical_name: str,
        state: Annotated[StateForTool, InjectedState],
        store: Annotated[BaseStore, InjectedStore],
        statemodifier: Annotated[StateModifier, InjectedStateModifier],
        config: RunnableConfig,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        # this is the state this tool wants to push
        statemodifier.push_statechange({
            "elem": 69,
            "author": "Alessandro"
        })
        return "This is what my tool returns"
```

In the above example, the tool can decide to push some state change on its own. Please note that if there are multiple tool calls at the same time modifying `elem`, they will be correctly aggregated because my implementation also checks the Annotated fields' operators. Once `ToolNode` is patched, you can easily modify state without having to deal with follow-up nodes or messages-list inspection.
-
What about using https://langchain-ai.github.io/langgraph/how-tos/update-state-from-tools/ from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.types import Command
from langchain_core.tools import tool
from langchain_core.tools.base import InjectedToolCallId
from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig
from typing_extensions import Any, Annotated
class State(AgentState):
# user provided
last_name: str
# updated by the tool
user_info: dict[str, Any]
@tool
def lookup_user_info(
tool_call_id: Annotated[str, InjectedToolCallId], config: RunnableConfig
):
"""Use this to look up user information to better assist them with their questions."""
user_id = config.get("configurable", {}).get("user_id")
if user_id is None:
raise ValueError("Please provide user ID")
if user_id not in USER_ID_TO_USER_INFO:
raise ValueError(f"User '{user_id}' not found")
user_info = USER_ID_TO_USER_INFO[user_id]
return Command(
update={
# update the state keys
"user_info": user_info,
# update the message history
"messages": [
ToolMessage(
"Successfully looked up user information", tool_call_id=tool_call_id
)
],
}
) |
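For readers without LangGraph installed, the `Command(update=...)` idea can be pictured with a small stand-in (`FakeCommand` and the sample `USER_ID_TO_USER_INFO` data below are made up; in the real library the graph applies the update through the state's reducers):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class FakeCommand:
    """Stand-in for langgraph.types.Command: carries state deltas."""
    update: dict[str, Any] = field(default_factory=dict)


USER_ID_TO_USER_INFO = {"u1": {"last_name": "Doe"}}  # example data


def lookup_user_info(tool_call_id: str, user_id: str) -> FakeCommand:
    if user_id not in USER_ID_TO_USER_INFO:
        raise ValueError(f"User '{user_id}' not found")
    return FakeCommand(update={
        "user_info": USER_ID_TO_USER_INFO[user_id],
        "messages": [("tool", "Successfully looked up user information", tool_call_id)],
    })


state: dict[str, Any] = {"messages": [], "user_info": {}}
cmd = lookup_user_info("call_1", "u1")
# the graph would apply the update via the state's reducers; plain merge here:
state["user_info"] = cmd.update["user_info"]
state["messages"] += cmd.update["messages"]
print(state["user_info"])  # {'last_name': 'Doe'}
```

The key difference from the earlier workarounds: the tool returns the desired state delta as a value, and the framework, not the tool, performs the mutation.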
-
Hi folks! We have now added support for updating the state from tools using the new `Command` type. Please see this how-to guide for reference: https://langchain-ai.github.io/langgraph/how-tos/update-state-from-tools/. Let me know if you have any questions / feedback!