I've been working on a Langchain/Pinecone integration for a bit here, and this code was previously working for me. It seems to have broken sometime in the past month or so. Here's a stripped-down version that I can run via rails runner:
begin
  llm_options = { completion_model_name: "gpt-4-1106-preview", chat_completion_model_name: "gpt-4-1106-preview" }
  llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_ACCESS_TOKEN"], default_options: llm_options)
  client = Langchain::Vectorsearch::Pinecone.new(
    api_key: ENV["PINECONE_API_KEY"],
    index_name: "development-topic-test",
    environment: "us-east-1-aws",
    llm: llm
  )
  result = client.ask(question: "how do you reverse an array in ruby?", namespace: "topic-markets")
  puts result.inspect
rescue => e
  puts "RESCUED: #{e}"
end
This results in the output:
RESCUED: unknown keyword: :prompt
Things I've checked:
- Environment variables are correct
- The environment is correct and the index name exists in my Pinecone environment
- The namespace exists in that Pinecone index
- Also tried GPT model "gpt-4-turbo-preview" -- yielded the same result
- Tried running the same code in a staging environment with different configuration values -- yielded the same result
Any ideas?
If I had to guess, a method signature changed somewhere in one of the underlying APIs, and some calling code hasn't been updated to match.
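One way to check that suspicion is to print the gem versions actually loaded in the app and compare them against the latest releases. A minimal sketch, assuming the gems involved are named langchainrb and pinecone (match these to your Gemfile):

```ruby
# Print the loaded version of each gem involved in the integration.
# The gem names below are assumptions -- adjust them to whatever your
# Gemfile actually declares.
%w[langchainrb pinecone].each do |name|
  spec = Gem.loaded_specs[name]
  puts "#{name}: #{spec ? spec.version : "(not loaded)"}"
end
```

Running this via rails runner shows at a glance whether you're several releases behind.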
For anyone out there who comes across this, the answer was as simple as updating the gem(s).
I performed a bundle update in my Rails app, which updated the following gems:
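For a more conservative fix than a full bundle update, you can update just the gems involved and then confirm what changed. A sketch, assuming the gem names are langchainrb and pinecone (adjust to your Gemfile):

```shell
# Update only the integration gems, leaving other dependencies pinned
# (gem names assumed; match them to your Gemfile):
bundle update langchainrb pinecone

# Confirm the versions that are now resolved in Gemfile.lock:
bundle list | grep -Ei 'langchain|pinecone'
```

Scoping the update this way keeps the rest of Gemfile.lock stable, which makes it easier to tell which upgrade actually fixed the error.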