I have an object detection model that finds eggs in an uploaded sample. When I run inference on my machine, I get the following result:
[Image: two eggs with their confidence levels]
However, when I deploy the same model on Roboflow and try it on the same image, the result is much worse. [The object on the top right is the actual egg; the one in the middle is an impurity.]

[Image: same image as before, but the confidence levels are flipped]
What's the cause of the inconsistency in the model's performance across the two platforms?
I used the yolov8s.pt model in this project.
The results were consistently correct on my local machine, where the egg was detected with the higher confidence in every iteration, but that's not the case when I deploy the same model to Roboflow.
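To quantify the disagreement between the two platforms rather than eyeballing it, the two sets of detections can be matched box-by-box. The sketch below is a hypothetical helper (the `(x1, y1, x2, y2, confidence)` box format and the function names are my assumptions, not part of either API) that pairs local and hosted detections by IoU and reports the confidence deltas:

```python
# Hypothetical helper to compare detections from two runs of the same model.
# Assumed box format: (x1, y1, x2, y2, confidence) in pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def compare_detections(local, hosted, iou_thresh=0.5):
    """Greedily match each local box to its best-overlapping hosted box
    and return (local_conf, hosted_conf, delta) for each match."""
    report = []
    for box_l in local:
        best = max(hosted, key=lambda box_h: iou(box_l[:4], box_h[:4]),
                   default=None)
        if best is not None and iou(box_l[:4], best[:4]) >= iou_thresh:
            report.append((box_l[4], best[4], box_l[4] - best[4]))
    return report
```

Running this on the exported predictions from both platforms would show whether the boxes themselves agree (same objects, large confidence deltas) or whether the hosted run is localizing different regions entirely; those two failure modes usually point at different causes (e.g. a different confidence/NMS threshold versus different preprocessing such as input size or letterboxing).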