I converted my TensorFlow model to OpenVINO IR like this:
from pathlib import Path
from openvino.runtime import serialize
from openvino.tools.mo import convert_model

ir_path = Path(model_path) / "openVINO/serialized_model.xml"
ov_model = convert_model(cloned_model, input_shape=[[1, 1, 224, 224, 3]])
serialize(ov_model, str(ir_path))  # serialize expects string paths in older releases
And then ran inference like this:
import openvino.runtime as ov

compiled_model = ov.compile_model(ir_path)
infer_request = compiled_model.create_infer_request()

for input_path in frame_paths:
    for state in infer_request.query_state():
        state.reset()
    # Create tensor from external memory
    input_tensor = ov.Tensor(array=get_model_input([input_path], max_sequence_len=1), shared_memory=False)
    # Set input tensor for model with one input
    infer_request.set_input_tensor(input_tensor)
    # infer_request.query_state().reset()
    infer_request.start_async()
    infer_request.wait()
    # Get output tensor for model with one output
    output = infer_request.get_output_tensor()
    output_buffer = output.data
It gave me different results than the original TF model, but when I moved infer_request = compiled_model.create_infer_request() into the main loop, everything seemed fine. I can't pinpoint where the difference comes from, especially since I'm resetting the states. Also, when I call infer_request.query_state(), it returns an empty array.
When I tried to inspect the inputs to look for state tensors, I only got errors saying there is just one tensor (the one I supplied).
My model is LSTM layers on top of EfficientNetV2B0, so on top of a bunch of convolutional layers.
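For reference, a minimal sketch (assuming the OpenVINO 2022.x Python API; ir_path is the IR from above) that checks whether the converted model actually contains state. query_state() only returns entries when the IR holds ReadValue/Assign operations, so finding none would explain the empty array:
from openvino.runtime import Core

core = Core()
model = core.read_model(str(ir_path))
# Stateful IRs carry their state in ReadValue/Assign operations
state_ops = [op for op in model.get_ops()
             if op.get_type_name() in ("ReadValue", "Assign")]
print(f"State ops found: {len(state_ops)}")  # 0 means query_state() will be empty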
Please refer to the documentation link below, which contains reference code snippets related to your use case:
https://github.com/openvinotoolkit/openvino/blob/master/docs/snippets/ov_network_state_intro.py
We recommend that you try implementing the sequence shown in the link above; a rough sketch of the general pattern follows below. If you encounter any issues, please share the error messages and your script so we can help resolve them.
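As a rough sketch of the general stateful-inference pattern (not the linked snippet verbatim; it assumes the IR exposes its LSTM state through query_state(), and sequences is a hypothetical list of per-sequence frame arrays): reset the state once per sequence, then feed the frames one at a time so the hidden state carries across frames within a sequence.
import openvino.runtime as ov

core = ov.Core()
compiled_model = core.compile_model(str(ir_path), "CPU")
infer_request = compiled_model.create_infer_request()

for sequence in sequences:  # hypothetical: one list of frame arrays per video
    # Start every sequence from a clean hidden state
    for state in infer_request.query_state():
        state.reset()
    for frame in sequence:
        # State persists between infer() calls, linking the frames together
        infer_request.infer([frame])
        output = infer_request.get_output_tensor().data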