Model using different random seeds leads to different interpretation results

I'm currently using the shap library to interpret a hybrid model that combines a Cox proportional hazards model with a Multi-Layer Perceptron (MLP), built with the pycox library. Interestingly, the neural network version of the model outperforms the conventional linear Cox proportional hazards model. However, I'm facing an issue with shap.DeepExplainer: it produces different interpretations when the models are trained with different random seeds. This variability makes it hard to obtain a stable result and draw conclusions about the most and least contributing features.
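For context, the variability comes from the seeds drawn during training (weight initialization, shuffling), not from DeepExplainer itself. A minimal sketch of pinning those sources, assuming a PyTorch backend as pycox uses (the helper name `set_all_seeds` is my own, not a pycox API):

```python
import numpy as np
import torch

def set_all_seeds(seed: int) -> None:
    # Fix the generators that training typically draws from, so repeated
    # runs yield the same weights and hence the same SHAP attributions.
    np.random.seed(seed)
    torch.manual_seed(seed)

# Two draws after re-seeding are identical:
set_all_seeds(42)
a = torch.randn(3)
set_all_seeds(42)
b = torch.randn(3)
print(torch.equal(a, b))  # True
```

Pinning a single seed gives reproducibility, but of course it does not answer which of the seed-dependent interpretations to trust, which is the question below.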

I'm seeking guidance on how to address this inconsistency in interpretations. Any suggestions would be greatly appreciated.

I am not sure whether the approaches described below make sense, but these are the ways I can imagine to partially tackle the problem:

  1. Mostly naive: only use the model with the highest score (in my case, the concordance index). But how should I explain the other models, which have different random seeds and different interpretation results?
  2. Average multiple interpretation results: given multiple random seeds, calculate the average SHAP value of each feature across all the resulting models. But is this approach scientifically sound and commonly used?
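To make option 2 concrete, here is a sketch of aggregating per-seed SHAP attributions into one feature ranking. It assumes you have already collected one `(n_samples, n_features)` SHAP array per seed (e.g. from `explainer.shap_values(X)`); the helper name and the toy data are illustrative, not part of shap's API:

```python
import numpy as np

def aggregate_shap_across_seeds(shap_values_per_seed):
    """Average per-seed SHAP attributions into one feature ranking.

    shap_values_per_seed: list of (n_samples, n_features) arrays, one per
    model trained with a different random seed.
    Returns mean |SHAP| per feature and feature indices sorted by it.
    """
    # Stack to shape (n_seeds, n_samples, n_features).
    stacked = np.stack(shap_values_per_seed)
    # Mean absolute attribution per feature, across seeds and samples.
    mean_abs = np.abs(stacked).mean(axis=(0, 1))
    # Features ordered from most to least contributing.
    ranking = np.argsort(mean_abs)[::-1]
    return mean_abs, ranking

# Toy example: 3 seeds, 5 samples, 4 features; feature 2 dominates.
rng = np.random.default_rng(0)
per_seed = [rng.normal(size=(5, 4)) + np.array([0.0, 0.0, 5.0, 0.0])
            for _ in range(3)]
mean_abs, ranking = aggregate_shap_across_seeds(per_seed)
print(ranking[0])  # feature 2 ranks first
```

Reporting the standard deviation of `mean_abs` across seeds alongside the mean would also let you show which features are ranked stably and which are seed-sensitive.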