Replies: 5 comments 4 replies
-
Do something like this: https://github.com/cocktailpeanut/dalai/blob/de8351215b476cbc25af4263bda53a4aa771cc72/index.js#L444 and call io.emit in the handleNewToken callback.
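A minimal sketch of that idea with socket.io, reusing the streaming setup shown later in this thread. The event names (prompt, token, end) and the port are illustrative assumptions, not taken from the linked dalai code.

```typescript
import { Server } from "socket.io";
import { CallbackManager } from "langchain/callbacks";
import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage } from "langchain/schema";

// Hypothetical socket.io server; the port is an arbitrary choice for this sketch.
const io = new Server(3001);

io.on("connection", (socket) => {
  socket.on("prompt", async (prompt: string) => {
    const chat = new ChatOpenAI({
      streaming: true,
      callbackManager: CallbackManager.fromHandlers({
        // Forward each token to the client that sent the prompt as soon as it arrives.
        // io.emit("token", token) would broadcast it to every connected client instead.
        async handleLLMNewToken(token: string) {
          socket.emit("token", token);
        },
        async handleLLMEnd() {
          socket.emit("end");
        },
      }),
    });
    await chat.call([new HumanChatMessage(prompt)]);
  });
});
```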
-
Hi, you can find two examples here
-
Hi,
-
There are 2 ways:
```typescript
import { CallbackManager } from "langchain/callbacks";
import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage } from "langchain/schema";

export const run = async () => {
  const chat = new ChatOpenAI({
    maxTokens: 25,
    streaming: true,
    callbackManager: CallbackManager.fromHandlers({
      async handleLLMNewToken(token: string) {
        console.log({ token });
      },
      async handleLLMEnd(output) {
        console.log("End of stream.", output);
      },
    }),
  });
  const response = await chat.call([new HumanChatMessage("Tell me a joke.")]);
};
```
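For comparison, here is a minimal sketch of the same callback pattern with the plain OpenAI completion model instead of the chat model. It assumes the same older langchain package layout used above and is an illustrative variant, not necessarily the second way the author had in mind.

```typescript
import { CallbackManager } from "langchain/callbacks";
import { OpenAI } from "langchain/llms";

// Illustrative variant: streaming with the completion-style OpenAI model.
export const runCompletion = async () => {
  const model = new OpenAI({
    maxTokens: 25,
    streaming: true,
    callbackManager: CallbackManager.fromHandlers({
      // Called once per generated token while the response streams.
      async handleLLMNewToken(token: string) {
        console.log({ token });
      },
    }),
  });
  // The full text is still returned once streaming has finished.
  const response = await model.call("Tell me a joke.");
  console.log(response);
};
```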
-
Hi, I am pretty new to coding and I have no idea how streaming works. Right now this will output/stream the logs on the server, so how would I get it to the client? I am using Nuxt.js, by the way. Can someone enlighten me with some example code?