Regularizing the constraints in fairlearn's ThresholdOptimizer


Is there a way to apply a lambda regularizer value to the constraints in the ThresholdOptimizer? For instance, if we want to create accuracy vs. SPD curves, I would like to enforce different weights on the SPD/accuracy constraints to indicate their relative importance (perhaps accuracy is more important initially, and SPD gradually gains importance).


Answer by Roman Lutz

Fairlearn maintainer here! [I can't comment on StackOverflow, so sadly these clarifying questions need to be in an "answer", but I'll update it once I understand your concern.]

What do you mean by SPD?

Can you describe a use case where it's clear what you mean by "initially accuracy is more important, then gradually SPD gains importance"? ThresholdOptimizer currently only supports satisfying your constraints exactly (100%). One could imagine extending it to allow some tolerance in constraint violation in exchange for better accuracy (or another performance metric).
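The lambda-weighted tradeoff the question describes is not part of ThresholdOptimizer's API, but it can be sketched by hand. The following is a minimal, self-contained illustration in plain NumPy (synthetic data, not fairlearn code), reading SPD as the statistical parity difference, which is an assumption on my part: for each lambda, pick per-group thresholds minimizing `(1 - accuracy) + lambda * |SPD|`. As lambda grows, the selected thresholds trade accuracy for lower disparity, tracing the accuracy-vs-SPD curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data, purely for illustration: scores for two sensitive groups,
# with a group-dependent shift so the groups have different selection rates.
n = 1000
group = rng.integers(0, 2, size=n)              # sensitive feature: 0 or 1
y = rng.integers(0, 2, size=n)                  # true labels
score = 0.5 * y + 0.2 * group + 0.3 * rng.random(n)

def evaluate(t0, t1):
    """Accuracy and statistical parity difference for per-group thresholds."""
    pred = np.where(group == 0, score >= t0, score >= t1)
    acc = np.mean(pred == y)
    spd = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, spd

grid = np.linspace(0, 1, 21)
for lam in [0.0, 0.5, 2.0]:                     # lambda: weight of the fairness term
    best = min(
        ((t0, t1) for t0 in grid for t1 in grid),
        key=lambda ts: (1 - evaluate(*ts)[0]) + lam * evaluate(*ts)[1],
    )
    acc, spd = evaluate(*best)
    print(f"lambda={lam:.1f}  thresholds={best}  accuracy={acc:.3f}  SPD={spd:.3f}")
```

A standard scalarization argument guarantees that, as lambda increases, the SPD of the chosen thresholds is non-increasing (and accuracy is non-increasing as well), so sweeping lambda from 0 upward reproduces the "accuracy first, fairness later" schedule the question asks about.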

You might have come across the built-in chart fairlearn provides for ThresholdOptimizer: https://fairlearn.org/v0.6.1/api_reference/fairlearn.postprocessing.html#fairlearn.postprocessing.plot_threshold_optimizer The chart depends on your constraint, of course, but it may help explain how the threshold(s) were chosen.

If you have a concrete feature request, feel free to open an issue directly in the repository as well! Thanks!