I'm trying LangChain's JsonOutputParser with an open-source model running locally through llama.cpp.
Here is the sample code:
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from llama_cpp import Llama
llm = Llama(model_path='./mistral-7b-instruct-v0.2.Q4_0.gguf')
# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
# Set up a parser + inject instructions into the prompt template.
parser = JsonOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
# Chain
chain = prompt | llm | parser
# Run
chain.invoke({"query": joke_query})
I'm getting the following error:
TypeError: object of type 'StringPromptValue' has no len()
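From reading the traceback, my suspicion is that the raw llama_cpp.Llama object is not a LangChain Runnable, so the pipe hands its __call__ a StringPromptValue instead of a plain string, and llama_cpp fails when it calls len() on it. If that's right, would wrapping the model in LangChain's community wrapper be the fix? Here is a sketch of what I mean, assuming langchain_community is installed (the temperature and max_tokens values are just guesses on my part):

from langchain_community.llms import LlamaCpp

# LangChain's LlamaCpp wrapper implements the Runnable interface,
# so it can be piped in an LCEL chain, unlike the raw llama_cpp.Llama object.
llm = LlamaCpp(
    model_path="./mistral-7b-instruct-v0.2.Q4_0.gguf",
    temperature=0.0,  # deterministic output should keep the JSON well-formed
    max_tokens=512,
)

chain = prompt | llm | parser
print(chain.invoke({"query": joke_query}))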
I followed the instructions on the LangChain website. Does this technique work with local llama_cpp models? If not, is there another way to get JSON output?
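If the LangChain route doesn't work for local models, one alternative I've been considering is skipping the output parser and using llama-cpp-python's own JSON mode. A sketch of what I have in mind, assuming a recent llama-cpp-python version that supports response_format in create_chat_completion:

import json

from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct-v0.2.Q4_0.gguf")

# response_format constrains the model to emit syntactically valid JSON;
# the key names still have to be requested in the prompt itself.
out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Tell me a joke. Reply as JSON with keys 'setup' and 'punchline'.",
    }],
    response_format={"type": "json_object"},
    temperature=0.0,
)

joke = json.loads(out["choices"][0]["message"]["content"])

Would that be the more reliable route, or is the wrapper approach the intended one?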