Function tool callback #16637

Open · wants to merge 3 commits into base: main
docs/docs/examples/tools/function_tool_callback.ipynb (new file: 162 additions, 0 deletions)
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Function call with callback\n",
"\n",
"This is a feature that allows applying some human-in-the-loop concepts in FunctionTool.\n",
"\n",
"Basically, a callback function is added that enables the developer to request user input in the middle of an agent interaction, as well as allowing any programmatic action."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install llama-index-llms-openai\n",
"%pip install llama-index-agents-openai"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.tools import FunctionTool\n",
"from llama_index.agent.openai import OpenAIAgent\n",
"from llama_index.llms.openai import OpenAI\n",
"import os"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"sk-\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Function to display to the user the data produced for function calling and request their input to return to the interaction."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"def callback(message):\n",
"\n",
" confirmation = input(f\"{message[1]}\\nDo you approve of sending this greeting?\\nInput(Y/N):\")\n",
"\n",
" if confirmation.lower() == \"y\": \n",
" # Here you can trigger an action such as sending an email, message, api call, etc. \n",
" return \"Greeting sent successfully.\"\n",
" else:\n",
" return \"Greeting has not been approved, talk a bit about how to improve\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Simple function that only requires a recipient and a greeting message."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"def send_hello(destination:str, message:str)->str:\n",
" \"\"\"\n",
" Say hello with a rhyme \n",
" destination: str - Name of recipient\n",
" message: str - Greeting message with a rhyme to the recipient's name\n",
" \"\"\" \n",
"\n",
" return destination, message\n",
"\n",
"hello_tool = FunctionTool.from_defaults(fn=send_hello, callback=callback)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI()\n",
"agent = OpenAIAgent.from_tools([hello_tool])"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I attempted to send a hello message to Karen, but it seems the greeting has not been approved. Let's try to come up with a different greeting that might be more suitable. How about \"Hello Karen, your smile shines like the sun\"? Let's send this message instead.\n"
]
}
],
"source": [
"response = agent.chat(\"Send hello to Karen\")\n",
"print(str(response))"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I have successfully sent a hello message to Joe with the greeting \"Hello Joe, you're a pro!\"\n"
]
}
],
"source": [
"response = agent.chat(\"Send hello to Joe\")\n",
"print(str(response))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
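The notebook gathers approval interactively through input(), but the same hook can gate a tool programmatically. Below is a minimal sketch under that assumption; the allow-list and the approval_callback name are hypothetical, and it relies on this PR's callback parameter together with the notebook's send_hello:

    from llama_index.core.tools import FunctionTool

    APPROVED_RECIPIENTS = {"Karen", "Joe"}  # hypothetical allow-list

    def approval_callback(result):
        # result is whatever send_hello returned: (destination, message)
        destination, _message = result
        if destination in APPROVED_RECIPIENTS:
            return "Greeting sent successfully."
        return "Greeting has not been approved, talk a bit about how to improve"

    guarded_tool = FunctionTool.from_defaults(fn=send_hello, callback=approval_callback)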
llama-index-core/llama_index/core/tools/function_tool.py (24 additions, 8 deletions)

@@ -12,7 +12,6 @@

 AsyncCallable = Callable[..., Awaitable[Any]]

-
 def sync_to_async(fn: Callable[..., Any]) -> AsyncCallable:
     """Sync to async."""

@@ -44,6 +43,7 @@ def __init__(
         fn: Optional[Callable[..., Any]] = None,
         metadata: Optional[ToolMetadata] = None,
         async_fn: Optional[AsyncCallable] = None,
+        callback: Optional[Callable[[Any], Any]] = None,
     ) -> None:
         if fn is None and async_fn is None:
             raise ValueError("fn or async_fn must be provided.")
@@ -62,6 +62,13 @@ def __init__(
             raise ValueError("metadata must be provided.")

         self._metadata = metadata
+        self._callback = callback
+
+    def _run_callback(self, result: Any) -> Any:
+        """Executes the callback if provided and returns its result."""
+        if self._callback:
+            return self._callback(result)
+        return ""

     @classmethod
     def from_defaults(
@@ -73,6 +80,7 @@ def from_defaults(
         fn_schema: Optional[Type[BaseModel]] = None,
         async_fn: Optional[AsyncCallable] = None,
         tool_metadata: Optional[ToolMetadata] = None,
+        callback: Optional[Callable[[Any], Any]] = None,
     ) -> "FunctionTool":
         if tool_metadata is None:
             fn_to_parse = fn or async_fn
@@ -90,7 +98,7 @@ def from_defaults(
                 fn_schema=fn_schema,
                 return_direct=return_direct,
             )
-        return cls(fn=fn, metadata=tool_metadata, async_fn=async_fn)
+        return cls(fn=fn, metadata=tool_metadata, async_fn=async_fn, callback=callback)

     @property
     def metadata(self) -> ToolMetadata:
@@ -109,19 +117,27 @@ def async_fn(self) -> AsyncCallable:

     def call(self, *args: Any, **kwargs: Any) -> ToolOutput:
         """Call."""
-        tool_output = self._fn(*args, **kwargs)
+        tool_output = self._fn(*args, **kwargs)
+        final_output_content = str(tool_output)
+        callback_output = self._run_callback(tool_output)
+        if callback_output:
+            final_output_content += f" Callback: {callback_output}"
         return ToolOutput(
-            content=str(tool_output),
+            content=final_output_content,
             tool_name=self.metadata.name,
             raw_input={"args": args, "kwargs": kwargs},
             raw_output=tool_output,
         )

     async def acall(self, *args: Any, **kwargs: Any) -> ToolOutput:
-        """Call."""
-        tool_output = await self._async_fn(*args, **kwargs)
+        """Async Call."""
+        tool_output = self._fn(*args, **kwargs)
+        final_output_content = str(tool_output)
+        callback_output = self._run_callback(tool_output)
[Inline review thread on the callback call above]

Collaborator: Probably the callback should be async? Otherwise this will block the event loop (probably not ideal).

Contributor (author): In my view, yes. We developed this feature so that when a FunctionTool is called, it can request user input that will influence the result or execution of that function. That said, it also makes sense for it not to be asynchronous; in our case, we use synchronous calls. If you believe it should be handled asynchronously, I can change it without any problems.

Collaborator: I think in a lot of use cases, people are using something like FastAPI to serve APIs, and you wouldn't want this callback to halt the entire server. It probably makes sense to let the user provide either a sync or an async callback, and have llama-index handle the conversion either way: if a sync function is provided, we can make it a "fake" async function with a wrapper; if an async function is provided, we can make it sync using from llama_index.core.utils import asyncio_run and callback_output = asyncio_run(async_fn(tool_output)).

Contributor (author): Thinking this point through: for someone using the asynchronous call, it conceptually doesn't make sense to stop the application. So an async callback in the async call path is a different situation. I will make the adjustment.
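A minimal sketch of the dual sync/async handling proposed above. The helper names are hypothetical, and stdlib asyncio stands in for the llama_index.core.utils.asyncio_run mentioned in the comment:

    import asyncio
    import inspect
    from typing import Any, Callable, Optional

    def run_callback_sync(callback: Optional[Callable[[Any], Any]], result: Any) -> Any:
        # Sync path: drive an async callback to completion; call a sync one directly.
        if callback is None:
            return ""
        if inspect.iscoroutinefunction(callback):
            return asyncio.run(callback(result))
        return callback(result)

    async def run_callback_async(callback: Optional[Callable[[Any], Any]], result: Any) -> Any:
        # Async path: await an async callback; run a sync one in a worker thread
        # so it does not block the event loop.
        if callback is None:
            return ""
        if inspect.iscoroutinefunction(callback):
            return await callback(result)
        return await asyncio.to_thread(callback, result)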

+        if callback_output:
+            final_output_content += f" Callback: {callback_output}"
         return ToolOutput(
-            content=str(tool_output),
+            content=final_output_content,
             tool_name=self.metadata.name,
             raw_input={"args": args, "kwargs": kwargs},
             raw_output=tool_output,
@@ -157,4 +173,4 @@ def to_langchain_structured_tool(
             func=self.fn,
             coroutine=self.async_fn,
             **langchain_tool_kwargs,
-        )
+        )
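For reference, after this change a tool call's content is the stringified function result plus the callback's note. A rough illustration using the notebook's hello_tool, assuming the user approves at the prompt:

    output = hello_tool.call(destination="Joe", message="Hello Joe, you're a pro!")
    # output.content is roughly:
    #   ('Joe', "Hello Joe, you're a pro!") Callback: Greeting sent successfully.
    # output.raw_output is still the bare (destination, message) tuple.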
llama-index-core/llama_index/core/tools/types.py (3 additions, 0 deletions)

@@ -122,6 +122,9 @@ def _process_langchain_tool_kwargs(
langchain_tool_kwargs["description"] = self.metadata.description
if "fn_schema" not in langchain_tool_kwargs:
langchain_tool_kwargs["args_schema"] = self.metadata.fn_schema
#Callback dont exist on langchain
if "callback" in langchain_tool_kwargs:
del langchain_tool_kwargs["callback"]
return langchain_tool_kwargs

def to_langchain_tool(