Frameworks to perform LLMOps with LangChain?


Best practices for small LLM / RAG Projects

I would like to use something like MLflow to keep track of the ML lifecycle of my RAG system, which was built with LangChain. Since I want to test different embedding functions and other parameters like chunk size, what is the best way to log these models accordingly?
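To make this concrete, this is roughly how I imagine tracking the different configurations (the configs and the metric name here are just placeholders for whatever evaluation I end up using):

```python
import mlflow

# Placeholder configurations I want to compare (embedding function, chunk size, ...)
configs = [
    {"embedding_model": "all-MiniLM-L6-v2", "chunk_size": 512},
    {"embedding_model": "all-mpnet-base-v2", "chunk_size": 1024},
]

for cfg in configs:
    with mlflow.start_run():
        mlflow.log_params(cfg)
        # ... build embeddings / retriever / chain from cfg and run my evaluation ...
        mlflow.log_metric("retrieval_score", 0.0)  # placeholder for my eval metric
```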

I would also like to load retrievers from artifacts, the way MLflow allows for models. I tried hosting locally with LangServe, which is great, but after reviewing the models in LangChain I cannot recreate them from artifacts. Or can I?
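Roughly, this is the round trip I am after; `chain` here stands in for my actual RAG chain / retriever, which is built elsewhere:

```python
import mlflow

# Log the LangChain object as an MLflow artifact ...
with mlflow.start_run():
    # `chain` is whatever LangChain object I built earlier (retriever + prompt + LLM)
    model_info = mlflow.langchain.log_model(chain, artifact_path="rag_chain")

# ... and later recreate it from the artifact store
reloaded_chain = mlflow.langchain.load_model(model_info.model_uri)
print(reloaded_chain.invoke("What does the document say about X?"))
```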

I would be happy if you could share your best practices with me :)

I tried to log with mlflow.langchain, but custom chains like {'context': retriever, 'question': RunnablePassthrough()} ... generate this error:

TypeError: cannot pickle 'weakref.ReferenceType' object
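For reference, this is roughly the code that produces the error; `retriever` and `llm` are created earlier (a vector store retriever and a chat model), and the prompt is just an example:

```python
import mlflow
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)

# retriever and llm are built elsewhere
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

with mlflow.start_run():
    mlflow.langchain.log_model(chain, artifact_path="rag_chain")  # raises the TypeError above
```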
