Returning document sources using LCEL


I am implementing the example provided here: https://python.langchain.com/docs/templates/neo4j-advanced-rag

However, I'd like to enhance it to also return the sources (i.e., the retrieved context) that were supplied to the model. I went through the documentation here: https://python.langchain.com/docs/use_cases/question_answering/sources#adding-sources, but couldn't work out how to apply it to my code.
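For reference, the approach on that page looks roughly like this (using the docs' plain RAG prompt and llm, without the history handling I need; format_docs just joins the retrieved documents' text):

from langchain_core.runnables import RunnableParallel, RunnablePassthrough


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain_from_docs = (
    RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
    | prompt
    | llm
    | StrOutputParser()
)

# .assign() keeps the retrieved documents in the output dict under "context"
rag_chain_with_source = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)

Here is my current code: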

from operator import itemgetter

import streamlit as st
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import ConfigurableField, RunnableParallel
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel

# The azure_* settings and the vectorstores (typical_rag, parent_vectorstore, etc.)
# come from the template and are defined elsewhere
msgs = StreamlitChatMessageHistory()  # Streamlit-backed chat history used below

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an AI chatbot having a conversation with a human."),
        MessagesPlaceholder(variable_name="history"),
        (
            "human",
            "Given this history: {history} and\nthis context:\n{context}\n"
            "answer the question below\nQuestion: {question}\n"
            "by strictly following this instruction: Answer the question based "
            "only on the context and nothing else. If you cannot answer, "
            "simply say: I don't know.",
        ),
    ]
)

model = AzureChatOpenAI(
    openai_api_type="azure",
    deployment_name=azure_chat_deploy_name,
    openai_api_version=azure_api_version,
    openai_api_key=azure_api_key,
    azure_endpoint=azure_base,
)


retriever = typical_rag.as_retriever().configurable_alternatives(
    ConfigurableField(id="strategy"),
    default_key="typical_rag",
    parent_strategy=parent_vectorstore.as_retriever(),
    hypothetical_questions=hypothetic_question_vectorstore.as_retriever(),
    summary_strategy=summary_vectorstore.as_retriever(),
)
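As an aside, a specific retrieval strategy can then be selected at invocation time:

# Pick one of the alternatives by its key, e.g. the parent-document strategy
parent_retriever = retriever.with_config(configurable={"strategy": "parent_strategy"})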


chain = (
    RunnableParallel(
        {
            "context": itemgetter("question") | retriever ,
            "question": itemgetter("question"),
            "history": itemgetter("history")
        }
    )
    | prompt
    | model
    | StrOutputParser()
)

# Add typing for input
class Question(BaseModel):
    question: str


chain = chain.with_types(input_type=Question)  # Context was undefined; the chain outputs a plain string here

chain_with_history = RunnableWithMessageHistory(
    chain, 
    lambda session_id: msgs,
    input_messages_key="question",
    history_messages_key="history"
)



# Render current messages from StreamlitChatMessageHistory
for msg in msgs.messages:
    st.chat_message(msg.type).write(msg.content)

if user_question := st.chat_input():
    st.chat_message("human").write(user_question)
    config = {"configurable": {"session_id": "any"}}

    response = chain_with_history.invoke({"question": user_question}, config)
    
    print("Response:",response)
    st.chat_message("ai").write(response)

Any help/pointers would be greatly appreciated. Thanks!


1 Answer

Answer from cavalier:

Not sure why my question received a downvote, but for anyone looking for this, here is the answer: I modified the code as below so the retrieved context is added to the chain's output as well.

chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
        "history": itemgetter("history"),
    }
    | RunnableParallel(
        {
            "response": prompt | model,
            # Pass the retrieved documents through to the output as well
            "context": itemgetter("context"),
        }
    )
)

The context can then be retrieved from the model output as below:

model_output = chain_with_history.invoke({"question": user_question}, config)
response = model_output["response"].content
provided_context = " ".join(document.page_content for document in model_output["context"])
print(f"Context sent to the model: {provided_context}")