I want to build a retriever in LangChain and want to use an already deployed FastAPI embedding model. How could I do that? This is what I tried:
import requests

from langchain_community.vectorstores import DocArrayInMemorySearch

# This doesn't work: requests.post() returns a Response object,
# not an Embeddings instance that LangChain can call.
embeddings_model = requests.post("http://internal-server/embeddings/")
db = DocArrayInMemorySearch.from_documents(chunked_docs, embeddings_model)
retriever = db.as_retriever()
You can create a custom embeddings class that subclasses the BaseModel and Embeddings classes. embed_documents() and embed_query() are abstract methods in the Embeddings class, so they must be implemented. The OllamaEmbeddings class is a simple example of how to create a custom embeddings class. You can then use your custom embeddings class just like any other embeddings class.
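Here is a sketch of what that could look like for your deployed endpoint. The URL and the request/response schema (a JSON body with a "texts" list in, an "embeddings" list of vectors out) are assumptions — adapt them to whatever your FastAPI service actually expects. The try/except import fallback is only there so the snippet runs standalone; in a real project you would subclass LangChain's Embeddings directly.

```python
from typing import List

import requests

try:
    from langchain_core.embeddings import Embeddings
except ImportError:
    # Fallback stub so this sketch runs without LangChain installed;
    # with LangChain present, the real base class is used instead.
    class Embeddings:  # type: ignore
        pass


class RemoteAPIEmbeddings(Embeddings):
    """Embeddings class that delegates to a remote FastAPI service.

    Assumes the service accepts POST {"texts": [...]} and returns
    {"embeddings": [[...], ...]} -- adjust to your API's schema.
    """

    def __init__(self, url: str = "http://internal-server/embeddings/"):
        self.url = url

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Send the whole batch in one request; one vector per input text.
        response = requests.post(self.url, json={"texts": texts})
        response.raise_for_status()
        return response.json()["embeddings"]

    def embed_query(self, text: str) -> List[float]:
        # Reuse the batch endpoint for a single query string.
        return self.embed_documents([text])[0]
```

You can then pass an instance of it wherever LangChain expects an embeddings object:

```python
embeddings_model = RemoteAPIEmbeddings()
db = DocArrayInMemorySearch.from_documents(chunked_docs, embeddings_model)
retriever = db.as_retriever()
```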