I am using the Fairlearn metric functions like this:
import fairlearn.metrics

eor = fairlearn.metrics.equalized_odds_ratio(y_true, y_pred, sensitive_features=sensitive_feature)
dpd = fairlearn.metrics.demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive_feature)
di = fairlearn.metrics.demographic_parity_ratio(y_true, y_pred, sensitive_features=sensitive_feature)
Here y_pred is a binary array of computed predictions, y_true is a binary array of ground-truth labels, and sensitive_feature is a dataframe with a single column of 1s and 0s. For example, if I measure the metrics for the groups young and old, 1 represents young and 0 represents old, so old is the protected group. What if young is the protected group instead? Do I then have to invert the column in my sensitive_feature dataframe and pass it to the Fairlearn functions again?
Fairlearn maintainer here!
No, you don't need to change anything: these metrics are symmetric in the groups, so which group you code as 1 does not affect the result. For example, demographic parity only looks at y_pred and ignores y_true. Say "young" has a selection rate (fraction of predicted 1s) of 0.8 and "old" has a selection rate of 0.6. The demographic parity difference is always max minus min, i.e. 0.8 - 0.6 = 0.2, no matter which group happens to be the max. The ratio is min over max, so 0.6 / 0.8 = 0.75.
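If you want to convince yourself, here is a minimal sketch on synthetic data (the array sizes, target selection rates, and variable names are made up for illustration) showing that flipping which group is coded as 1 leaves the results unchanged:

import numpy as np
from fairlearn.metrics import demographic_parity_difference, demographic_parity_ratio

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)    # ignored by demographic parity
group = rng.integers(0, 2, size=1000)     # 1 = "young", 0 = "old"
# give group 1 a higher selection rate on purpose (roughly 0.8 vs 0.6)
y_pred = np.where(group == 1, rng.random(1000) < 0.8, rng.random(1000) < 0.6).astype(int)

print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))      # ~0.2
print(demographic_parity_difference(y_true, y_pred, sensitive_features=1 - group))  # identical
print(demographic_parity_ratio(y_true, y_pred, sensitive_features=group))           # ~0.75
print(demographic_parity_ratio(y_true, y_pred, sensitive_features=1 - group))       # identical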
If you have more than two groups this still works, but the metric only considers the groups with the maximum and minimum values; any groups in between are not reflected in this particular measure.
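As a hypothetical multi-group illustration (the group names and predictions below are invented), you can inspect the per-group selection rates alongside the aggregated difference and ratio using MetricFrame; note that the intermediate group has no effect on either aggregate:

import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate

y_pred = np.array([1, 1, 1, 1, 0,  1, 1, 1, 0, 0,  1, 1, 0, 0, 0])
y_true = np.zeros_like(y_pred)    # not used by selection_rate
groups = np.array(["young"] * 5 + ["middle"] * 5 + ["old"] * 5)

mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred, sensitive_features=groups)
print(mf.by_group)      # young 0.8, middle 0.6, old 0.4
print(mf.difference())  # 0.8 - 0.4 = 0.4; "middle" plays no role
print(mf.ratio())       # 0.4 / 0.8 = 0.5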