Replies: 4 comments 5 replies
-
Hey @wh1pp3rz! 👋 I'm here to help you with your coding questions and bug fixes. Let's squash those bugs together! To filter out tool input parameters from the event stream, you can use the `excludeNames`, `excludeTags`, and `excludeTypes` properties of `streamEvents`. Here is an example of how to use these properties to filter out tool input parameters:

```typescript
import { RemoteRunnable } from "@langchain/core/runnables/remote";

const remoteChain = new RemoteRunnable({
  url: "https://your_hostname.com/path",
});

const logStream = await remoteChain.streamEvents(
  {
    question: "What is a document loader?",
    chat_history: [],
  },
  // LangChain runnable config properties
  {
    // Version is required for streamEvents since it's a beta API
    version: "v1",
    // Optional, chain-specific config
    configurable: {
      llm: "openai_gpt_3_5_turbo",
    },
    metadata: {
      conversation_id: "other_metadata",
    },
  },
  // Optional additional streamLog properties for filtering outputs
  {
    excludeNames: ["tool_input"],
    excludeTags: [],
    excludeTypes: [],
  }
);

for await (const chunk of logStream) {
  console.log(chunk);
}
```

In this example, the `excludeNames` option tells `streamEvents` to drop events emitted by runs named `"tool_input"`.
-
@dosu unfortunately that doesn't work. It would appear that the implementation of streaming events in the anthropic package does not tag the tool input messages.
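Since the run tags can't be relied on here, one possible client-side workaround is to inspect each streamed chunk itself. This is only a sketch: it assumes a recent `@langchain/core` where `AIMessageChunk` exposes a `tool_call_chunks` array, and the interfaces below are simplified stand-ins for the real event payloads, not the actual LangChain types.

```typescript
// Simplified stand-ins for the streamEvents payload shapes (illustrative only).
interface ToolCallChunk {
  name?: string;
  args?: string; // partial JSON for the tool input
}

interface MessageChunkLike {
  content: string;
  tool_call_chunks?: ToolCallChunk[];
}

interface StreamEventLike {
  event: string; // e.g. "on_chat_model_stream"
  data: { chunk?: MessageChunkLike };
}

// Keep only plain assistant text: a chunk carrying tool input has entries in
// tool_call_chunks, so such chunks are skipped instead of being concatenated.
function extractDisplayText(events: StreamEventLike[]): string {
  let text = "";
  for (const ev of events) {
    if (ev.event !== "on_chat_model_stream") continue;
    const chunk = ev.data.chunk;
    if (!chunk) continue;
    if (chunk.tool_call_chunks && chunk.tool_call_chunks.length > 0) continue;
    text += chunk.content;
  }
  return text;
}

// Mocked events: the middle chunk carries tool input and is dropped.
const events: StreamEventLike[] = [
  { event: "on_chat_model_stream", data: { chunk: { content: "Looking that up" } } },
  {
    event: "on_chat_model_stream",
    data: {
      chunk: {
        content: '{"query": "John Doe"',
        tool_call_chunks: [{ name: "search", args: '{"query": "John Doe"' }],
      },
    },
  },
  { event: "on_chat_model_stream", data: { chunk: { content: " for you." } } },
];

console.log(extractDisplayText(events)); // "Looking that up for you."
```

Whether the Anthropic integration actually populates `tool_call_chunks` on every tool-input chunk would need to be verified against the version in use.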
-
@dosu I also have this problem. Is there another fix for this? The tool calls get mistaken for normal chunk content in the streamEvents API.
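Another workaround sketch, under the assumption that the Anthropic model streams structured content blocks (so a chunk's `content` can be an array of typed blocks rather than a plain string): keep only the `"text"` blocks and drop the tool-input ones. The block shapes below are illustrative, not a confirmed LangChain API.

```typescript
// Illustrative content-block shapes, loosely modeled on Anthropic's streaming
// block types ("text", "tool_use", "input_json_delta").
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; input?: string }
  | { type: "input_json_delta"; partial_json?: string };

// Extract only human-readable text from a chunk's content, whether it is a
// plain string or an array of typed blocks; tool-input blocks are discarded.
function textFromChunkContent(content: string | ContentBlock[]): string {
  if (typeof content === "string") return content;
  return content
    .filter((b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text")
    .map((b) => b.text)
    .join("");
}

console.log(textFromChunkContent("plain text")); // "plain text"
console.log(
  textFromChunkContent([
    { type: "text", text: "Calling the tagging tool" },
    { type: "input_json_delta", partial_json: '{"customerId": "1"' },
  ])
); // "Calling the tagging tool"
```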
-
@jacoblee93 Can you help here? I can't imagine that this behaviour is by choice.
-
Description
I'm building a ReAct agent using LangGraph, with the option to let users select between OpenAI, Google GenAI, or Anthropic models. I'm noticing that when using the ChatAnthropic model with streamEvents, the tool input message chunks are streamed as normal message chunks, with no real way to filter them out. Is there a way to avoid streaming the input parameters for tool calls? It looks weird in the context of a chat output. I've examined the events being streamed, but there doesn't seem to be anything that distinguishes the tool-call output, so the tool call params end up concatenated onto the end of the output that suggests a tool is being called. Here's an example of the streamed output from the code snippet above:
I want the output not to include the tool parameters, in this case `{"query": "John Doe", "limit": 1}` and, further down, `{"customerId": "1", "tag": "VIP"}`; however, there isn't any way of distinguishing these chunks in the event stream. Any suggestions?

System Info
```
langchain@0.2.12 | MIT | deps: 16 | versions: 284
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/
keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.2.12.tgz
.shasum: 3fac0b9519a070689b6dd679d5854abc57824dcf
.integrity: sha512-ZHtJrHUpridZ7IQu7N/wAQ6iMAAO7VLzkupHqKP79S6p+alrPbn1BjRnh+PeGm92YiY5DafTCuvchmujxx7bCQ==
.unpackedSize: 5.4 MB

dependencies:
@langchain/core: >=0.2.11 <0.3.0
@langchain/openai: >=0.1.0 <0.3.0
@langchain/textsplitters: ~0.0.0
binary-extensions: ^2.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langchainhub: ~0.0.8
langsmith: ~0.1.30
ml-distance: ^4.0.0
openapi-types: ^12.1.3
p-retry: 4
uuid: ^10.0.0
yaml: ^2.2.1
zod-to-json-schema: ^3.22.3
zod: ^3.22.4

maintainers:
dist-tags:
latest: 0.2.12
next: 0.2.3-rc.0

published 6 days ago by basproul [email protected]
```