JSONQueryEngine of llama-index with AWS Bedrock

I'm trying to follow the guideline provided here:

https://docs.llamaindex.ai/en/latest/examples/query_engine/json_query_engine.html

The aim is to query a complex JSON based on its schema and values with LLMs. The only difference is that in this case the LLM would be a model accessed via AWS Bedrock.

I'm able to set up Bedrock, and it has worked for me in other use cases:

model_id = "anthropic.claude-v2"
model_kwargs =  { 
    "max_tokens_to_sample": 4096,
    "temperature": 0.0
}

llm = Bedrock(
    client=bedrock_runtime,
    model_id=model_id,
    model_kwargs=model_kwargs
)
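
For reference, a plain-string call of the kind that has worked in those other use cases (a minimal sketch, not my actual code):

# A plain string prompt is passed straight through to Bedrock, so this style of call works:
response = llm("\n\nHuman: What does a dashboard typically contain?\n\nAssistant:")
print(response)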

The JSON query engine is set up in the following manner (as shown in the link above):

# JSONQueryEngine lives under llama_index.core.indices.struct_store (see the traceback path below)
from llama_index.core.indices.struct_store import JSONQueryEngine

# json_data_value / json_data_schema hold the complex JSON and its schema mentioned above
raw_query_engine = JSONQueryEngine(
    json_value=json_data_value,
    json_schema=json_data_schema,
    llm=llm,
    synthesize_response=False,
)

But it throws an error for the following query:

raw_query_engine.query(
    "What are the names of the dashboards?",
)
ValueError: Argument `prompt` is expected to be a string. Instead found <class 'llama_index.core.prompts.base.PromptTemplate'>. If you want to run the LLM on multiple prompts, use `generate` instead.

The detailed error is as follows:

ValueError                                Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 raw_query_engine.query(
      2     "What are the names of the dashboards?",
      3 )

File ~/.local/lib/python3.8/site-packages/llama_index/core/base/base_query_engine.py:40, in BaseQueryEngine.query(self, str_or_query_bundle)
     38 if isinstance(str_or_query_bundle, str):
     39     str_or_query_bundle = QueryBundle(str_or_query_bundle)
---> 40 return self._query(str_or_query_bundle)

File ~/.local/lib/python3.8/site-packages/llama_index/core/indices/struct_store/json_query.py:150, in JSONQueryEngine._query(self, query_bundle)
    147 """Answer a query."""
    148 schema = self._get_schema_context()
--> 150 json_path_response_str = self._llm.predict(
    151     self._json_path_prompt,
    152     schema=schema,
    153     query_str=query_bundle.query_str,
    154 )
    156 if self._verbose:
    157     print_text(
    158         f"> JSONPath Instructions:\n" f"```\n{json_path_response_str}\n```\n"
    159     )

File ~/.local/lib/python3.8/site-packages/langchain/llms/base.py:843, in BaseLLM.predict(self, text, stop, **kwargs)
    841 else:
    842     _stop = list(stop)
--> 843 return self(text, stop=_stop, **kwargs)

File ~/.local/lib/python3.8/site-packages/langchain/llms/base.py:797, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    795 if not isinstance(prompt, str):
    796     print(prompt)
--> 797     raise ValueError(
    798         "Argument `prompt` is expected to be a string. Instead found "
    799         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    800         "`generate` instead."
    801     )
    802 return (
    803     self.generate(
    804         [prompt],
   (...)
    812     .text
    813 )

ValueError: Argument `prompt` is expected to be a string. Instead found <class 'llama_index.core.prompts.base.PromptTemplate'>. If you want to run the LLM on multiple prompts, use `generate` instead.

How is the prompt getting modified into a PromptTemplate, which is the basis of this error? At that point the prompt object looks like this:

metadata={'prompt_type': <PromptType.JSON_PATH: 'json_path'>} template_vars=['schema', 'query_str'] kwargs={} output_parser=None template_var_mappings=None function_mappings=None template='We have provided a JSON schema below:\n{schema}\nGiven a task, respond with a JSON Path query that can retrieve data from a JSON value that matches the schema.\nTask: {query_str}\nJSONPath: '
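
As I read the traceback, JSONQueryEngine passes this PromptTemplate object straight to self._llm.predict(). A llama-index LLM would format the template into a string itself, but here the call lands in LangChain's BaseLLM.predict(), which expects a plain string. A minimal sketch of the mismatch as I understand it (the template text is copied from the dump above; the rest is my reading of the traceback, not documented behaviour):

from llama_index.core.prompts import PromptTemplate

# The template JSONQueryEngine builds for the JSONPath step (text copied from the dump above)
json_path_prompt = PromptTemplate(
    "We have provided a JSON schema below:\n"
    "{schema}\n"
    "Given a task, respond with a JSON Path query that can retrieve data "
    "from a JSON value that matches the schema.\n"
    "Task: {query_str}\n"
    "JSONPath: "
)

# json_query.py line 150 (see traceback) effectively does:
#     self._llm.predict(json_path_prompt, schema=schema, query_str=query_str)
# A llama-index LLM's .predict() accepts a PromptTemplate and formats it into a string,
# but LangChain's BaseLLM.predict(text) expects `text` to already be a str, so the
# PromptTemplate object triggers the ValueError above.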

If anyone has been able to get JSONQueryEngine to work with Bedrock LLMs (not via any RAG approach), any insight into what needs to change would be very helpful.
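
I suspect the LLM may need to be constructed with llama-index's own Bedrock class instead of the LangChain one, roughly along these lines (an untested sketch; the class and parameter names here are my assumption, not something I have verified):

from llama_index.llms.bedrock import Bedrock as LlamaIndexBedrock

# Untested sketch: a llama-index LLM, so that JSONQueryEngine can call
# .predict(PromptTemplate, ...) without going through LangChain
llm = LlamaIndexBedrock(
    model="anthropic.claude-v2",
    temperature=0.0,
    max_tokens=4096,
    region_name="us-east-1",  # assumption: region of the Bedrock endpoint
)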

Thanks

The output should have been a list of dashboard names, based on the key 'dashboard' and subkey 'name' in the JSON.
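
For illustration, a hypothetical stand-in for the data and the result I expect (the values are made up; my real JSON is more complex):

# Hypothetical shape of json_data_value
json_data_value = {
    "dashboard": [
        {"name": "Sales Overview"},
        {"name": "Ops Metrics"},
    ]
}

# A JSONPath like $.dashboard[*].name should then return
# ["Sales Overview", "Ops Metrics"]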
