How to pass data from database to Langchain like context/source #6378
Unanswered
teomikitenko
asked this question in Q&A
Replies: 1 comment 1 reply
To pass data from a database to LangChain as context, you can: fetch the stored rows, wrap them in `Document` objects, wire a small retriever into a runnable chain, and invoke the chain. Here's an example of how you can achieve this:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda, RunnableMap, RunnablePassthrough } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import pg from "pg"; // only needed once fetchEmbeddingsFromDatabase queries a real database

// Step 1: Retrieve stored documents/embeddings from the database
async function fetchEmbeddingsFromDatabase() {
  // Implement your logic to fetch rows from the database
  return [
    { pageContent: "Harrison worked at Kensho" },
    { pageContent: "Bears like to eat honey." },
  ];
}

const embeddingsData = await fetchEmbeddingsFromDatabase();

// Step 2: Create Documents from the retrieved rows
const documents = embeddingsData.map((data) => new Document(data));

// Step 3: Set up the runnable chain
const retriever = {
  invoke: async (input) => {
    // Implement your logic to select the relevant document for the input
    return documents[0].pageContent; // Example: return the first document's content
  },
};

const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "Answer the question based on only the following context:\n\n{context}"],
  ["human", "{question}"],
]);

const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();

const setupAndRetrieval = RunnableMap.from({
  context: new RunnableLambda({
    func: (input) => retriever.invoke(input),
  }).withConfig({ runName: "contextRetriever" }),
  question: new RunnablePassthrough(),
});

const chain = setupAndRetrieval.pipe(prompt).pipe(model).pipe(outputParser);

// Step 4: Invoke the chain
const response = await chain.invoke("Where did Harrison work?");
console.log(response);
// Harrison worked at Kensho.
```

This code demonstrates how to retrieve embeddings from a database, create `Document`s, set up a retriever, and pass the context to LangChain for generating answers in a RAG application [1][2][3].
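The toy retriever above always returns the first document. A minimal sketch of a similarity-based selection, assuming each row fetched from the database carries a precomputed embedding alongside its text (`EmbeddedRow`, `cosineSimilarity`, and `retrieveMostSimilar` are illustrative names, not LangChain APIs):

```typescript
// Hypothetical row shape: stored text plus its precomputed embedding.
interface EmbeddedRow {
  pageContent: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the pageContent of the row most similar to the query embedding.
function retrieveMostSimilar(rows: EmbeddedRow[], queryEmbedding: number[]): string {
  let best = rows[0];
  let bestScore = -Infinity;
  for (const row of rows) {
    const score = cosineSimilarity(row.embedding, queryEmbedding);
    if (score > bestScore) {
      bestScore = score;
      best = row;
    }
  }
  return best.pageContent;
}
```

You would embed the incoming question with the same embedding model used at write time, then call `retrieveMostSimilar` inside the retriever's `invoke` instead of returning `documents[0]`.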
I'm building my own RAG application but have run into trouble. The logic of my app: the user pushes a button and uploads a document (text); I then generate embeddings from that document and write them to the database (I use Convex). Now I want to fetch those embeddings for the document from the database and pass them to the LLM to generate an answer. But I can't find any example of how to do this. I only found examples that use ConvexVectorStore and turn it into a retriever, but in my case that doesn't work (for several reasons). I just need to take the embeddings from the document and pass them as context, and I don't know how; the LangChain docs/Discord haven't given me an answer. The chatbot on the LangChain site recommends creating a CustomRetriever class, but why doesn't the documentation cover this? Please help.
Windows 10, using Next.js, Convex, and LangChain
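The flow described above (fetch stored text and embeddings from the database, hand the text to the LLM as context) can be sketched in plain TypeScript with no vector store or custom retriever class; `StoredDoc` and `formatDocumentsAsContext` are illustrative names, not Convex or LangChain APIs:

```typescript
// Hypothetical shape of a row fetched from the database: the stored document
// text plus its embedding (the embedding is only needed if you re-rank
// or filter rows client-side before building the prompt).
interface StoredDoc {
  text: string;
  embedding: number[];
}

// Join the stored texts into a single context block. The resulting string
// can be passed straight into a prompt's {context} variable.
function formatDocumentsAsContext(docs: StoredDoc[]): string {
  return docs.map((d) => d.text).join("\n\n");
}
```

Once the rows are fetched, the embeddings themselves never need to reach the model; only the joined text is interpolated into the prompt.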