docker run -u $(id -u) -v ./models/:/models -p 9000:9000 openvino/model_server:latest --model_name tft --model_path /models/tft --port 9000
I run a container with the command above, and I can send a gRPC request to it like this:
client = make_grpc_client("localhost:9000")
output = client.predict({"dense_3_input": data}, "tft", model_version=1)
Could anyone give a sample HTTP request for this API?
The code to request a prediction over the REST API is as follows:
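A minimal sketch using only the Python standard library. Note an assumption: your docker run command only publishes the gRPC port (9000), so this assumes the container was also started with --rest_port 8000 and -p 8000:8000; the URL, the input name dense_3_input, and the payload shape mirror your gRPC call and the TensorFlow Serving REST conventions that OpenVINO Model Server follows.

```python
import json
import urllib.request

# Assumes the server was started with --rest_port 8000 and -p 8000:8000
REST_URL = "http://localhost:8000/v1/models/tft/versions/1:predict"


def build_payload(data):
    # TensorFlow Serving "columnar" format: named inputs under "inputs"
    return json.dumps({"inputs": {"dense_3_input": data}})


def predict_rest(data):
    # POST the JSON payload and return the named outputs
    req = urllib.request.Request(
        REST_URL,
        data=build_payload(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["outputs"]
```

An equivalent request can also be sent with curl by POSTing the same JSON body to the :predict endpoint.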
For more information, please refer to the TensorFlow Serving REST API documentation.