Can anyone suggest best practice guidelines on selecting thresholds for the disparity metrics to determine if a sensitive attribute is biased or not?
How does one use the Fairlearn metrics to make a decision on whether a feature is biased or not?
Asked by Sri At
This is a great question! To figure this out, it really helps to frame the impact of these analytical choices in terms of how they relate to real harms to real people.
If you're into research papers, Language (Technology) is Power: A Critical Survey of "Bias" in NLP (Blodgett et al. 2020) suggests some ways to approach fairness work that can lead to more productive conversations than focusing on bias alone.
More practically, framing the conversation in terms of real harms to real people then allows you to express the impact of the threshold choices for fairness metrics in accessible human terms. That can go a long way toward illustrating to various stakeholders why this work matters and is worth doing.
To sketch this out a bit more: false positives often lead to different harms than false negatives, and if there are human review processes in place, that influences how you might quantify those harms or risks. Upstream labeling noise affects how much trust you can put in a thresholding procedure to capture real harms. For decision support systems, downstream engagement, adoption, and trust often determine whether human decision makers actually make use of model predictions. A few lines of code can show stakeholders the impact of that kind of upstream or downstream noise on fairness metrics, and reveal other ways the technical system may be amplifying real harms.
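To illustrate the noise-sensitivity point, here is a minimal sketch in plain numpy of the quantities Fairlearn's `MetricFrame` and `demographic_parity_difference` report (per-group selection rates and their max difference), and how a small amount of upstream prediction noise shifts them. The group sizes, selection rates, and 5% flip probability are all assumptions for illustration, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: binary predictions and a binary sensitive attribute,
# with the classifier selecting group 1 slightly more often than group 0.
n = 10_000
group = rng.integers(0, 2, size=n)
y_pred = rng.random(n) < np.where(group == 1, 0.55, 0.45)

def selection_rates(y_pred, group):
    """Per-group selection rate: P(y_pred = 1 | group)."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def disparity(y_pred, group):
    """Max difference in selection rates across groups; this is the
    quantity fairlearn.metrics.demographic_parity_difference computes."""
    rates = list(selection_rates(y_pred, group).values())
    return max(rates) - min(rates)

print("selection rates:", selection_rates(y_pred, group))
print("disparity:", disparity(y_pred, group))

# Simulate upstream noise: flip 5% of predictions at random and see how
# much the measured disparity moves. If a candidate threshold is smaller
# than this movement, it is too tight for the noise level in your pipeline.
flip = rng.random(n) < 0.05
noisy_pred = np.where(flip, ~y_pred, y_pred)
print("disparity under 5% noise:", disparity(noisy_pred, group))
```

Running the same comparison with your own labels, predictions, and a noise model matched to your labeling process gives stakeholders a concrete sense of how much of a measured disparity could be an artifact of the pipeline rather than the model.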
If you want to chat more, or dig into specifics that would help you explore this or kick off those conversations with your team, feel free to ask in https://gitter.im/fairlearn/community as well. Like most software engineering work, it's easier to give actionable suggestions within a specific application, context, or set of constraints.