I am developing a Flask API to perform inference with a TensorFlow Lite model that I trained on a 5-class Alzheimer's image dataset, where the classes are ['AD - Alzheimer Disease', 'CN - Cognitively Normal', 'EMCI - Early Mild Cognitive Impairment', 'LMCI - Late Mild Cognitive Impairment', 'MCI - Mild Cognitive Impairment'].
The model works well in my training environment, but when I deploy it behind a Flask API, there's an issue: the API consistently predicts the same class ("MCI - Mild Cognitive Impairment") for every image, while the same model in my Colab notebook predicts the various classes accurately. The API will later be integrated with a React Native app.
I've trained the model twice with different datasets, but the issue persists. I've hit a dead end and have no idea how to fix it.
Code for TFLite model: https://colab.research.google.com/drive/1xxW8v5ZBKLvlGrofL2fBy9WYk_Fn5Dj_?usp=sharing
Code for the Flask API:
from flask import Flask, request, jsonify
import tensorflow as tf
import cv2
import numpy as np
from PIL import Image
import io

app = Flask(__name__)

interpreter = tf.lite.Interpreter(model_path="latest_model.tflite")
interpreter.allocate_tensors()

class_names = ["CN - Cognitively Normal", "AD - Alzheimer Disease", "EMCI - Early Mild Cognitive Impairment", "MCI - Mild Cognitive Impairment", "LMCI - Late Mild Cognitive Impairment"]

def preprocess_image(image):
    image = cv2.resize(image, (150, 150))
    image = image.astype('float32') / 255.0
    image = np.expand_dims(image, axis=0)
    return image

@app.route('/predict', methods=['POST'])
def predict():
    try:
        file = request.files['file']
        image_file = Image.open(io.BytesIO(file.read()))
        image = cv2.cvtColor(np.array(image_file), cv2.COLOR_RGB2BGR)
        if not (image.shape[0] >= 150 and image.shape[1] >= 150 and image.shape[2] == 3):
            return jsonify({"error": "Invalid image shape"})
        image = image.astype('float32') / 255.0
        preprocessed_image = preprocess_image(image)
        interpreter.set_tensor(interpreter.get_input_details()[0]['index'], preprocessed_image)
        interpreter.invoke()
        output_tensor = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
        predicted_class_index = np.argmax(output_tensor, axis=1)[0]
        predicted_class_name = class_names[predicted_class_index]
        result = {"prediction": predicted_class_name, "output_tensor": output_tensor.tolist()}
        return jsonify(result)
    except Exception as e:
        return jsonify({"error": str(e)})

if __name__ == '__main__':
    app.run(debug=True)
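One thing I noticed while debugging: in the handler above, predict() runs image.astype('float32') / 255.0 and then preprocess_image() divides by 255 a second time. This self-contained sketch only shows the numeric effect of that double scaling (it doesn't touch the model), in case it's relevant:

```python
import numpy as np

# predict() scales the pixel once; preprocess_image() then scales it again.
pixel = np.float32(200.0)           # a bright input pixel
scaled_once = pixel / 255.0         # ~0.784 -- what training presumably saw
scaled_twice = scaled_once / 255.0  # ~0.0031 -- what the interpreter receives
print(scaled_once, scaled_twice)
```

If the model was trained on inputs in [0, 1], the interpreter would effectively see near-black images for every request, which could plausibly collapse all predictions onto one class.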
I tried logging the output tensor; below is what the Flask API returns. I know the tensor suggests a bias towards the MCI class, but if that's the case, why does the model work perfectly in the Colab environment and not behind the Flask API?
Also, do you recommend a better approach than a Flask API for integrating the model with my React Native app?
{
"output_tensor": [
[
0.0004518234636634588,
0.0004140451201237738,
0.002781340153887868,
0.7277416586875916,
0.2686111330986023
]
],
"prediction": "MCI - Mild Cognitive Impairment"
}
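Reading that tensor back in Python as a sanity check (the numbers are copied from the response above): the values sum to ~1, so it looks like softmax output, and the argmax is index 3. Which label index 3 maps to depends entirely on the class_names ordering — with the API's list it is MCI, but with an alphabetically sorted list (the order Keras' image_dataset_from_directory would assign, if the Colab notebook used it — I'm not certain it did) index 3 would be LMCI:

```python
import numpy as np

output = np.array([[0.0004518234636634588, 0.0004140451201237738,
                    0.002781340153887868, 0.7277416586875916,
                    0.2686111330986023]])
print(float(output.sum()))               # ~1.0 -> softmax probabilities
idx = int(np.argmax(output, axis=1)[0])
print(idx)                               # 3

api_order = ["CN - Cognitively Normal", "AD - Alzheimer Disease",
             "EMCI - Early Mild Cognitive Impairment",
             "MCI - Mild Cognitive Impairment",
             "LMCI - Late Mild Cognitive Impairment"]
# image_dataset_from_directory assigns labels in sorted folder-name order,
# so the training-time order may have been alphabetical:
training_order = sorted(api_order)
print(api_order[idx])        # what the API reports for index 3
print(training_order[idx])   # what index 3 would mean under alphabetical order
```

So the constant "MCI" answer might partly be a label-mapping question rather than purely a model question, which is why I'd like to rule that out.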