I need to train a custom object detector for iOS and Android that detects objects of 10 classes. I have a problem exporting a KerasCV model to CoreML. The training data is prepared and training goes well; on the Python side everything is recognised correctly. But after converting the model with coremltools I get weird results.
Creating the model:
import keras_cv

model = keras_cv.models.RetinaNet.from_preset(
    "mobilenet_v3_large_imagenet",
    num_classes=len(class_mapping),  # 10 classes
    # For more info on supported bounding box formats, visit
    # https://keras.io/api/keras_cv/bounding_box/
    bounding_box_format="xyxy",
)
model.compile(
    classification_loss="focal",
    box_loss="smoothl1",
    optimizer=optimizer,
    metrics=None,
)
model.fit(
    train_ds.ragged_batch(4),
    validation_data=eval_ds.ragged_batch(4),
    epochs=40,
    callbacks=[tensorboard_callback, VisualizeDetections(), model_checkpoint_callback],
)
I'm using MobileNetV3 as the backbone, and the decoded output should look like this:
boxes [num_detections, 4]
confidence [num_detections, 10]
classes [num_detections]
After converting the model to CoreML with this code:
import numpy as np
import coremltools as ct

outputs = [
    ct.TensorType(name="Identity", dtype=np.float32),
    ct.TensorType(name="Identity_1", dtype=np.float32),
]
converted_model = ct.convert(
    model,
    inputs=[ct.ImageType(shape=(1, 640, 640, 3))],
    outputs=outputs,
    convert_to="mlprogram",
)
print(converted_model.output_description)

# save the converted model
converted_model.save("converted.mlpackage")
I get two output arrays (boxes and confidence) of size 1 × 76725 × 4 and 1 × 76725 × 10. I know this output still has to be passed through NMS, but before doing that I tried to inspect the raw values, and every value in the confidence array is negative. Why? What can I do to get real confidence values out of the CoreML model?
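What I have figured out so far while trying to make sense of the numbers: the 76725 rows line up exactly with a standard RetinaNet anchor grid for a 640×640 input (5 FPN levels with strides 8 to 128, 9 anchors per location), and I suspect the raw confidence values are pre-sigmoid logits, which would explain why they are negative (with focal loss most anchors are background). A minimal numpy sketch of that reasoning (the example logit values are made up):

```python
import numpy as np

# A standard RetinaNet anchor grid for a 640x640 input: FPN levels with
# strides 8, 16, 32, 64, 128 and 9 anchors per grid cell.
num_anchors = sum((640 // s) ** 2 for s in (8, 16, 32, 64, 128)) * 9
print(num_anchors)  # 76725 -- matches the raw CoreML output shape

# If the confidence output holds logits, negative values are expected:
# most anchors are background, so most logits sit well below zero.
# Applying a sigmoid turns them into per-class probabilities in (0, 1):
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([-4.2, -1.0, 0.5])  # hypothetical raw values
probs = sigmoid(logits)
print(probs)  # all in (0, 1); a logit of -4.2 maps to roughly 0.015
```

If this is right, the remaining steps would be decoding the box regressions against the anchors and running NMS, either on-device or by appending an NMS stage to the converted model. Is that the correct way to interpret these outputs?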