How can I send http request to Openvino Model Container?


docker run -u $(id -u) -v ./models/:/models -p 9000:9000 openvino/model_server:latest --model_name tft --model_path /models/tft --port 9000

I run a container with this command, and I can send a gRPC request to it like this:

client = make_grpc_client("localhost:9000")
output = client.predict({"dense_3_input": data}, "tft", model_version=1)

Can anyone give a sample HTTP request for this API?

Answer from Wan_Intel:

The code to request a prediction over REST is as follows:

from ovmsclient import make_http_client

# connect to the REST endpoint (the server must be started with --rest_port 8000)
client = make_http_client("localhost:8000")

# read the raw image bytes; "input_name" and "my_model" are placeholders,
# so replace them with your model's actual input name and model name
with open("img.jpeg", "rb") as f:
    data = f.read()
inputs = {"input_name": data}
results = client.predict(inputs=inputs, model_name="my_model")
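Note that the docker command in the question only opens the gRPC port, so HTTP requests would be refused. A sketch of the same command extended with a REST port (8000 here is a choice, not required):

```shell
# Same command as in the question, plus --rest_port and the matching
# port mapping so the server also accepts HTTP/REST requests on 8000.
docker run -u $(id -u) -v ./models/:/models -p 9000:9000 -p 8000:8000 \
  openvino/model_server:latest \
  --model_name tft --model_path /models/tft \
  --port 9000 --rest_port 8000
```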

For more information, please refer to the TensorFlow Serving API documentation, which OpenVINO Model Server's REST API is compatible with.
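If you want to send a raw HTTP request without ovmsclient, the REST endpoint follows the TensorFlow Serving predict format. A minimal sketch, assuming the server was started with --rest_port 8000 and using the model and input names from the question (the input shape is illustrative):

```python
import json

# TensorFlow Serving-style predict endpoint exposed by OVMS over REST;
# "tft" is the model name from the question, 8000 is the assumed --rest_port
url = "http://localhost:8000/v1/models/tft:predict"

# illustrative batch of one sample; match your model's real "dense_3_input" shape
data = [[0.1, 0.2, 0.3, 0.4]]

# named-input form of the TFS JSON payload
payload = json.dumps({"inputs": {"dense_3_input": data}})

# to actually send it, use any HTTP client, e.g. the `requests` package:
# import requests
# response = requests.post(url, data=payload)
# print(response.json())
```

The same payload also works with curl: `curl -X POST <url> -d '<payload>'`.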