I converted a PyTorch model into two Core ML models, one with an ImageType input and one with a TensorType input.
image_input = ct.ImageType(name="input_1", shape=dummy_input.shape)
tensor_input = ct.TensorType(name="input_1", shape=dummy_input.shape)
mlmodel = ct.convert(traced_model, inputs=[image_input])
mlmodel2 = ct.convert(traced_model, inputs=[tensor_input])
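For reference, coremltools also lets an ImageType carry preprocessing of its own through its scale and bias parameters. Below is a minimal sketch, assuming the network expects inputs scaled to [0, 1] (as the /255.0 in the evaluation code further down suggests); the parameter values are assumptions, not something taken from the original model.

# Sketch only: Core ML applies (scale * pixel + bias) per channel
# before the image reaches the network.
image_input_scaled = ct.ImageType(
    name="input_1",
    shape=dummy_input.shape,
    scale=1 / 255.0,
    bias=[0.0, 0.0, 0.0],
)
mlmodel_scaled = ct.convert(traced_model, inputs=[image_input_scaled])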
But when I evaluated them in Python, they produced different results. Here is the code:
import numpy as np
import imageio
import torchvision

image = np.array(imageio.imread(self.__img_list[index], pilmode="RGB"))
image = image / 255.0  # scale to [0, 1], matching the PyTorch preprocessing
if model_input_type == 'imageType':
    my_tensor = image[0]
    image = torchvision.transforms.functional.to_pil_image(my_tensor)
    prediction = mlmodel.predict({input_name: image})[output_name]
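For a side-by-side check, here is a minimal sketch of feeding the same picture to both converted models. It assumes a 224x224 input size, "example.jpg" is a placeholder file name, and the output is read as the first entry of the prediction dictionary rather than by a known output name.

from PIL import Image
import numpy as np

pil_in = Image.open("example.jpg").convert("RGB").resize((224, 224))  # placeholder file
arr = np.asarray(pil_in).astype(np.float32) / 255.0   # HWC, scaled to [0, 1]
tensor_in = arr.transpose(2, 0, 1)[None, ...]          # NCHW, matching the traced shape

# TensorType model: receives the normalized NumPy array directly.
out_tensor = list(mlmodel2.predict({"input_1": tensor_in}).values())[0]

# ImageType model: receives the PIL image; unless scale/bias were set at
# conversion time, the network sees raw 0-255 pixel values here.
out_image = list(mlmodel.predict({"input_1": pil_in}).values())[0]

print(np.abs(np.array(out_tensor) - np.array(out_image)).max())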
I think the predictions should be the same. Did I do something wrong?