How to use a fine-tuned model with langchain_community.llms.Ollama


How do I use a fine-tuned model with langchain_community.llms.ollama.Ollama? The usual way of selecting a model is to pass its name via the model: str parameter.
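For reference, this is how I currently pass a library model; a minimal sketch, with gemma:2b standing in for whichever base model is used:

```python
from langchain_community.llms import Ollama

# Works for any model name published at https://ollama.com/library
# that has already been pulled by the local Ollama server
# (e.g. `ollama pull gemma:2b`)
llm = Ollama(model="gemma:2b")
print(llm.invoke("Hello"))
```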

But the selection appears to be limited to the models listed at https://ollama.com/library.

How can I pass a fine-tuned gemma model as that string parameter?

I followed this video, Ollama - Loading Custom Models, where the author is able to add a quantized version of an LLM to the Mac client of Ollama.

My use case is to fine-tune a gemma:2b model, save it to S3, and serve it from a compute instance as an API. My question is how to load this model into the Ollama instance, specifically:

  • reading the model from a path on the compute instance (roughly what the sketch after this list attempts)
  • Does the model require any quantization on top of it?
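Concretely, this is roughly what I am trying; a minimal sketch only, where the bucket, key, and local paths are placeholders and the fine-tuned weights are assumed to be exported as a GGUF file:

```python
import boto3
from langchain_community.llms import Ollama

BUCKET = "my-model-bucket"                          # placeholder
KEY = "models/gemma-2b-finetuned.gguf"              # placeholder
LOCAL_PATH = "/opt/models/gemma-2b-finetuned.gguf"  # placeholder

# Download the fine-tuned weights onto the compute instance
boto3.client("s3").download_file(BUCKET, KEY, LOCAL_PATH)

# This is where I am stuck: model expects a name known to the local
# Ollama server, not a filesystem path
llm = Ollama(model=LOCAL_PATH)  # does not work
```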

There is 1 answer below

Answered by j3ffyang

You can upload your custom model to HuggingFace, create a Modelfile (https://github.com/ollama/ollama/blob/main/docs/modelfile.md), and then build the model with ollama create.
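A minimal sketch of that flow; the file name finetuned-gemma.gguf and the model name my-finetuned-gemma are placeholders, and it assumes the fine-tuned weights have been exported to GGUF:

```bash
# Modelfile: point Ollama at the local fine-tuned weights
cat > Modelfile <<'EOF'
FROM ./finetuned-gemma.gguf
EOF

# Build/register the model under a custom name
ollama create my-finetuned-gemma -f Modelfile

# Confirm the model is now listed
ollama list
```

Once registered, the custom name should work as the string parameter in LangChain, e.g. Ollama(model="my-finetuned-gemma").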

Here's a reference: https://github.com/ollama/ollama