How can I fall back to prompting the LLM without vector store context when the similarity score is below a certain threshold in ConversationalRetrievalQAChain in LangChain.js?
Here's my code:

```typescript
const retriever = filters
  ? vectorStore.asRetriever(1, filters)
  : vectorStore.asRetriever(1);
const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
  memory,
});
```
One approach is to use the ScoreThresholdRetriever. From the documentation:

> Specify the desired threshold and the ScoreThresholdRetriever will only return documents with scores above the threshold.

With this retriever, no explicit "fallback" logic is needed: when no document clears the threshold, the documents are simply omitted from the prompt, and the LLM answers from the prompt alone.

Alternatively, you can manually retrieve the documents from the vector store, check their similarity scores, and construct the desired prompt yourself before invoking the chain. This gives you more flexibility in the implementation.
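A sketch of the ScoreThresholdRetriever setup, reusing the `vectorStore`, `model`, and `memory` from your question (the `minSimilarityScore` value here is an arbitrary example; the meaningful range depends on your vector store and embedding model):

```typescript
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { ScoreThresholdRetriever } from "langchain/retrievers/score_threshold";

// Replace the plain asRetriever() call with a score-thresholded retriever.
const retriever = ScoreThresholdRetriever.fromVectorStore(vectorStore, {
  minSimilarityScore: 0.8, // example threshold; tune for your store/embeddings
  maxK: 1,                 // return at most one document, as in your code
});

const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
  memory,
});

// When no document scores above the threshold, the chain receives an empty
// context and the LLM answers from the question (and memory) alone.
const res = await chain.call({ question: "..." });
```

Note that `ScoreThresholdRetriever.fromVectorStore` does not take the filter argument the way `asRetriever(1, filters)` does; if you need filtering, check whether your vector store supports passing a filter through the retriever's options.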
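The manual alternative can be sketched as follows, again assuming the `vectorStore`, `model`, `chain`, and `filters` from your question. It uses the vector store's `similaritySearchWithScore` method, which returns `[document, score]` pairs; the threshold value is an assumption you would tune, and note that for some stores a *lower* score means a closer match, so check your store's score semantics:

```typescript
// Assumed threshold; adjust for your embedding space and store.
const SCORE_THRESHOLD = 0.8;

const question = "...";

// Retrieve the top document together with its similarity score.
const results = await vectorStore.similaritySearchWithScore(question, 1, filters);

// Keep only documents that clear the threshold.
const relevant = results.filter(([, score]) => score >= SCORE_THRESHOLD);

let answer;
if (relevant.length > 0) {
  // A relevant document exists: use the retrieval chain with context.
  const res = await chain.call({ question });
  answer = res.text;
} else {
  // No document is similar enough: fall back to the bare LLM.
  answer = await model.call(question);
}
```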