This is my code:
import pinecone
from langchain_community.vectorstores import Pinecone
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import ConversationalRetrievalChain
pc = pinecone.Pinecone(api_key=secret['PINECONE_API_KEY'],
environment=secret['PINECONE_ENV'])
index = pc.Index(secret['PINECONE_INDEX_NAME'])
embeddings = OpenAIEmbeddings(openai_api_key=secret['OPENAI_API_KEY'])
model = ChatOpenAI(model_name='gpt-4-turbo-preview')
docsearch = Pinecone.from_existing_index(index_name=secret['PINECONE_INDEX_NAME'], embedding=embeddings, namespace=secret['PINECONE_NAMESPACE'])
search_kwargs = {'k': 25, 'namespace': secret['PINECONE_NAMESPACE']}
retriever = docsearch.as_retriever(search_kwargs=search_kwargs)
qa = ConversationalRetrievalChain.from_llm(llm=model, retriever=retriever)
qa({'question': prompt, 'chat_history': chat})
When I run this locally, I get a proper response, but when I run it in Docker, I get 'Index' object has no attribute 'configuration'.
These are the versions I'm running:
langchain==0.1.7
langchain-community==0.0.20
pinecone-client==3.0.2
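Since the error only appears in the container, one thing worth ruling out is a version mismatch between the local environment and the Docker image (e.g. the image pulling a different pinecone-client than the one pinned locally). A small diagnostic sketch that can be run in both environments to compare what is actually installed (the helper name is my own, not part of any library):

import importlib.metadata

def installed_version(pkg: str) -> str:
    """Return the installed version of *pkg*, or 'not installed' if absent."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return "not installed"

# Print the versions actually present in this interpreter's environment.
for pkg in ("langchain", "langchain-community", "pinecone-client"):
    print(pkg, installed_version(pkg))

If the container prints a different pinecone-client version than the host, that would explain behavior diverging only under Docker.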
Any thoughts? This has been racking my brain all morning.