Fall back to prompting the LLM when the similarity score drops below a threshold


How can I fall back to prompting the LLM without vectorStore context when the similarity score drops below a certain threshold in ConversationalRetrievalQAChain in LangChain.js?

Here's my code:

const retriever = filters
  ? vectorStore.asRetriever(1, filters)
  : vectorStore.asRetriever(1);

const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
  memory,
});

1 Answer

Answered by Andrew Nguonly

One approach is to use the ScoreThresholdRetriever. From the documentation:

const retriever = ScoreThresholdRetriever.fromVectorStore(vectorStore, {
  minSimilarityScore: 0.9, // Finds results with at least this similarity score
  maxK: 100, // The maximum K value to use. Choose it based on your chunk size to make sure you don't run out of tokens
  kIncrement: 2, // How much to increase K by each time. It'll fetch N results, then N + kIncrement, then N + kIncrement * 2, etc.
  filter: filters
});

Specify the desired threshold and the ScoreThresholdRetriever will only return documents whose scores are above it. The prompt can then be set up so that no explicit "fallback" logic is needed: below-threshold documents are simply omitted from the context, and the model answers from its own knowledge. For example:

const prompt = PromptTemplate.fromTemplate(
`You are an AI assistant. Use the following context when answering the question:

{context}

Question: {question}`
);
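To tie this together with the chain from the question: in the LangChain.js API, `ConversationalRetrievalQAChain.fromLLM` accepts a `qaTemplate` option for the question-answering step, so the prompt above can be passed there alongside the ScoreThresholdRetriever. A sketch, assuming `model`, `vectorStore`, `filters`, and `memory` are defined as in the question:

```javascript
// Combine the ScoreThresholdRetriever with a custom QA template.
const retriever = ScoreThresholdRetriever.fromVectorStore(vectorStore, {
  minSimilarityScore: 0.9,
  maxK: 100,
  kIncrement: 2,
  filter: filters,
});

const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
  memory,
  // When no document clears the threshold, {context} is simply empty,
  // so the model answers without context and no separate fallback path is needed.
  qaTemplate: `You are an AI assistant. Use the following context when answering the question:

{context}

Question: {question}`,
});
```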

Alternatively, you can manually retrieve the documents from the retriever, check the similarity scores, and construct the desired prompt prior to invoking the chain. This may give you more flexibility in the implementation.
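A minimal sketch of that manual approach. The `buildPrompt` helper below is hypothetical (not part of LangChain), and the threshold decision is factored into a plain function so it is easy to test; the retrieval call assumes `similaritySearchWithScore`, which LangChain.js vector stores expose:

```javascript
// Hypothetical helper: keep only documents that clear the threshold and
// build the corresponding prompt string. If nothing clears the threshold,
// fall back to a prompt with no context block at all.
function buildPrompt(docsWithScores, minScore, question) {
  // docsWithScores: array of [document, score] pairs, as returned by
  // vectorStore.similaritySearchWithScore in LangChain.js
  const relevant = docsWithScores
    .filter(([, score]) => score >= minScore)
    .map(([doc]) => doc.pageContent);

  if (relevant.length === 0) {
    // Fallback: plain prompt, no retrieved context
    return `You are an AI assistant.\n\nQuestion: ${question}`;
  }

  return (
    `You are an AI assistant. Use the following context when answering the question:\n\n` +
    `${relevant.join("\n\n")}\n\nQuestion: ${question}`
  );
}

// Usage sketch (assumes an initialized vectorStore and model):
// const docsWithScores = await vectorStore.similaritySearchWithScore(question, 4, filters);
// const prompt = buildPrompt(docsWithScores, 0.9, question);
// const answer = await model.invoke(prompt);
```

Note that whether a higher score means "more similar" depends on the vector store (some return distances, where lower is better), so the comparison direction should be checked against the store in use.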

References

  1. Similarity Score Threshold (LangChain)