
How to use memory with VectorDBQAChain #1068

Closed · Answered by jteso
jteso asked this question in Q&A

Sharing my solution in case others are facing a similar problem. The most effective way I found to support follow-up questions is to swap VectorDBQAChain for ConversationalRetrievalQAChain, which takes a chat_history input alongside each question:

// Import paths vary slightly across langchain versions.
import { OpenAI } from "langchain/llms/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";

const llm = new OpenAI({ temperature: 0 });
// vectorStore is any LangChain vector store you have already populated.
const chain = ConversationalRetrievalQAChain.fromLLM(
  llm,
  vectorStore.asRetriever(),
);

// Track prior turns so the chain can resolve follow-up questions.
const chat_history = [];

const question = "Your question here";
const res = await chain.call({ question, chat_history });
chat_history.push(`Question: ${question}. Answer: ${res.text}`);

await chain.call({ question: "your follow-up question", chat_history });
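
If you would rather not manage chat_history by hand, ConversationalRetrievalQAChain also accepts a memory object at construction time and records each exchange automatically. A minimal sketch, assuming the same vectorStore as above (import paths again depend on your langchain version):

import { OpenAI } from "langchain/llms/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";

const llm = new OpenAI({ temperature: 0 });
const chain = ConversationalRetrievalQAChain.fromLLM(
  llm,
  vectorStore.asRetriever(),
  {
    // memoryKey must match the chain's chat_history input key.
    memory: new BufferMemory({ memoryKey: "chat_history" }),
  },
);

// No chat_history argument needed; the memory supplies prior turns,
// which the chain condenses with the follow-up into a standalone query.
await chain.call({ question: "Your question here" });
await chain.call({ question: "your follow-up question" });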

Answer selected by jteso