Fairness in machine learning is crucial for making better decisions, as illustrated by COMPAS, a recidivism risk assessment tool used in the US justice system. The tool was found to be biased against African American defendants, assigning them disproportionately high risk scores and contributing to unfair sentencing decisions.
Biases in machine learning algorithms can arise from the data used to train them. For example, if a dataset is biased towards a particular group, the algorithm will learn to make predictions based on those biases. This is what happened with COMPAS.
Machine learning algorithms can also perpetuate existing social inequalities, such as racial and gender disparities. This is evident in the case of facial recognition technology, which has been shown to be less accurate for people with darker skin tones.
Fairness in machine learning requires careful consideration of the data used to train algorithms, as well as the potential consequences of their use.
What Is Fairness in Machine Learning?
Fairness in machine learning is a concept rooted in philosophy, law, and mathematics, where it broadly describes a process that treats all involved parties justly and equally. This idea is fundamental to understanding fairness, but it is not straightforward to apply to complex machine learning models.
There are multiple definitions of fairness in machine learning, each with its own advantages and disadvantages. Most approaches focus on quantitative fairness criteria, which allow for direct quantification of a model's performance on a specific metric.
Quantitative measures enable objective comparison of different interventions and make it possible to treat fairness as part of the optimization problem itself.
The concept of fairness is, of course, not unique to machine learning and has long been studied in other fields.
Types of Harms and Metrics
Fairness in machine learning is not just a buzzword; it's crucial to ensuring that AI systems don't perpetuate biases or harm certain groups of people. There are many types of harms that can occur, and understanding these is key to building fairer systems.
Allocation harms can occur when AI systems extend or withhold opportunities, resources, or information, which is a major concern in areas like hiring, school admissions, and lending. This type of harm can have serious consequences, such as denying someone a job or loan based on biased assumptions.
Quality-of-service harms, on the other hand, occur when a system doesn't work as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld. This can happen in areas like face recognition, document search, or product recommendation, where accuracy can vary significantly between different groups.
To measure fairness, we use various metrics, such as equalized odds, predictive parity, counterfactual fairness, and demographic parity. In general, these criteria cannot all be satisfied at the same time, which is a challenge in itself.
Here are some common fairness metrics:
- equalized odds
- predictive parity
- counterfactual fairness
- demographic parity
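As an illustration of how such criteria are checked in practice, here is a minimal sketch, assuming the Fairlearn package and made-up labels, predictions, and group memberships, that computes the equalized odds gap:

```python
import numpy as np
from fairlearn.metrics import equalized_odds_difference

# Toy ground-truth labels, model predictions, and a sensitive attribute.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Equalized odds difference: the larger of the between-group gaps in
# true positive rate and false positive rate (0 means the criterion holds).
gap = equalized_odds_difference(y_true, y_pred, sensitive_features=group)
print(f"Equalized odds difference: {gap:.2f}")
```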
The Disparate Impact Ratio (DIR) is a specific metric used to assess fairness in decision-making processes or outcomes. It is the ratio of the probability of a positive outcome for one group (typically the unprivileged group) to the probability of a positive outcome for another group (typically the privileged group), making it a useful tool for identifying biases.
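A minimal sketch of the DIR calculation, using nothing but NumPy and hypothetical predictions and group labels:

```python
import numpy as np

# Hypothetical binary predictions (1 = positive outcome) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

# Selection rate (probability of a positive outcome) per group.
rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()

# Disparate Impact Ratio: ratio of the two selection rates.
# A common rule of thumb flags values below 0.8 (the "four-fifths rule").
dir_value = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, DIR={dir_value:.2f}")
```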
Fairness Metrics
Fairness metrics are a crucial aspect of ensuring fairness in machine learning models. They provide a mathematical definition of "fairness" that is measurable.
Some commonly used fairness metrics include equalized odds, predictive parity, counterfactual fairness, and demographic parity. In general, these metrics cannot all be satisfied simultaneously, so practitioners must choose the criterion that best matches their application.
Disparity metrics, on the other hand, evaluate how far a given predictor departs from satisfying a parity constraint. This can be done by comparing the behavior across different groups in terms of ratios or in terms of differences.
For example, the demographic parity difference is defined as the difference between the largest and smallest expected value of the predictor across groups: (max_a E[h(X) | A=a]) - (min_a E[h(X) | A=a]).
Here are some common disparity metrics used in machine learning:
- Demographic parity difference
- Demographic parity ratio
Both of these metrics can provide valuable insights into how fair a machine learning model is, and can be used to identify areas where the model may be biased.
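Both disparity metrics are available as ready-made functions in the Fairlearn package. A minimal sketch, assuming toy labels, predictions, and a sensitive attribute:

```python
import numpy as np
from fairlearn.metrics import (
    demographic_parity_difference,
    demographic_parity_ratio,
)

# Toy labels, predictions, and a sensitive attribute (hypothetical data).
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
sensitive = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

# max_a E[h(X) | A=a] - min_a E[h(X) | A=a]
diff = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
# min_a E[h(X) | A=a] / max_a E[h(X) | A=a]
ratio = demographic_parity_ratio(y_true, y_pred, sensitive_features=sensitive)
print(f"Demographic parity difference: {diff:.2f}, ratio: {ratio:.2f}")
```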
Fairness in Machine Learning Methods
Fairness in machine learning methods can be grouped into three main categories: pre-processing, in-processing, and post-processing methods. Pre-processing methods transform data to make fair predictions possible, while in-processing methods train a model that is fair by design. Post-processing methods transform model outputs to make predictions fair.
There are several types of group fairness metrics, including Statistical Parity (also known as Demographic Parity), Equal Opportunity, Equalized Odds, and Predictive Parity. These metrics assess the fairness of a decision-making process or outcome for different groups within a population. For example, Statistical Parity measures whether the proportion of positive outcomes is the same for all groups.
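One practical way to inspect such group fairness metrics is to compute each metric separately for every group. The sketch below assumes a recent version of the Fairlearn package and made-up data; selection rate per group relates to statistical parity, and true positive rate per group relates to equal opportunity:

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate, true_positive_rate

# Toy labels, predictions, and group membership (hypothetical data).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Report each metric per group and the largest between-group gap.
mf = MetricFrame(
    metrics={"selection_rate": selection_rate, "tpr": true_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=groups,
)
print(mf.by_group)      # metric value for each group
print(mf.difference())  # largest between-group gap per metric
```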
To mitigate unfairness in machine learning models, developers can use parity constraints, which require some aspects of the predictor's behavior to be comparable across groups. The Fairlearn package supports several types of parity constraints, including Demographic Parity, Equalized Odds, and Bounded Group Loss. These constraints can be used in combination with mitigation algorithms, such as Reduction and Post-processing, to reduce unfairness in machine learning models.
Here are some examples of parity constraints and their purposes:
- Demographic parity: the selection rate (fraction of positive predictions) should be similar across groups; typically used to mitigate allocation harms.
- Equalized odds: true positive and false positive rates should be similar across groups; used to diagnose and mitigate both allocation and quality-of-service harms.
- Equal opportunity: a relaxation of equalized odds that only constrains the true positive rate across groups.
- Bounded group loss: the prediction error for each group should stay below a given threshold; used in regression to mitigate quality-of-service harms.
Terminology
Fairness in machine learning is all about making sure our models don't discriminate against certain groups of people.
To understand fairness metrics, we need to start with some basic terminology. A binary classification problem is when we want to predict a label, like whether someone is qualified for a job or not. This is what we're dealing with in the example of an application process.
In a binary classification problem, we have a label Y that can be either 0 or 1, where 1 means qualified and 0 means not qualified. The model's prediction is denoted as ŷ, which can also be either 0 or 1.
Sensitive attributes, such as gender or age, are characteristics with respect to which we want the model's predictions to be fair. We represent a sensitive attribute as A, and the set of all attributes associated with a candidate is denoted as X.
The available data is represented as (X, Y), where X is the set of attributes and Y is the label. This is the data we use to train and evaluate our machine learning models.
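To make the notation concrete, here is a small, entirely hypothetical example of how X, A, and Y might look in code (the column names are made up for illustration):

```python
import pandas as pd

# Toy applicant data: X are the candidate attributes, A the sensitive
# attribute (here gender), and Y the ground-truth label (1 = qualified).
data = pd.DataFrame({
    "years_experience": [1, 5, 3, 7],          # part of X
    "test_score":       [62, 88, 75, 91],      # part of X
    "gender":           ["F", "M", "F", "M"],  # sensitive attribute A
    "qualified":        [0, 1, 1, 1],          # label Y
})

X = data[["years_experience", "test_score"]]
A = data["gender"]
Y = data["qualified"]
# A trained model would produce predictions ŷ from X, and its behavior
# should be fair with respect to A.
```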
Group Attribution
Group attribution bias is a common issue in machine learning where we assume that what's true for an individual is also true for everyone in that group. This can lead to biased conclusions if a non-representative sample is used for data collection.
This assumption can be exacerbated by convenience sampling, where a sample is chosen because it is easy to collect rather than because it is representative. The result is attributions that don't reflect reality.
In group attribution bias, we overlook the differences within a group and treat them as a single entity. For example, if we assume that all people from a certain region have the same behavior, we might be ignoring the fact that there are many variations within that region.
To avoid group attribution bias, it's essential to collect data from a representative sample and analyze the differences within each group. This can help us identify potential biases and make more accurate predictions.
Here's a summary of the two common forms of group attribution bias:
- In-group bias: showing preference for members of a group you also belong to, or for characteristics you share.
- Out-group homogeneity bias: assuming that members of a group you don't belong to are more alike than they really are.
By being aware of group attribution bias and taking steps to avoid it, we can create more accurate and fair machine learning models that benefit everyone.
Pre-processing Methods
Pre-processing methods aim to transform your data in a way that makes fair predictions possible in downstream tasks. This is a key approach to improving fairness in machine learning.
One way to achieve this is through data transformation, which can help reduce disparities across different groups. For example, applicants of a certain gender might be upweighted or downweighted to retrain models.
Pre-processing methods can also involve feature engineering, where you create new features that are more informative and less biased. This can help reduce the impact of sensitive features on the model's predictions.
Some pre-processing methods include reweighting the training data, which involves assigning different weights to data points so that disparities between groups shrink. Reweighting also underlies the reduction algorithms ExponentiatedGradient and GridSearch in the Fairlearn open-source package, which work by retraining a model on a sequence of reweighted versions of the training data.
Here are some pre-processing methods mentioned in the article:
- Reweighting the training data, e.g. upweighting or downweighting applicants from certain groups before retraining the model.
- Transforming the data to reduce disparities between groups.
- Feature engineering to create features that are informative but less dependent on sensitive attributes.
These pre-processing methods can be powerful tools for improving fairness in machine learning, but it's essential to keep in mind that they come with limitations.
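As a sketch of the reweighting idea, not tied to any particular library, the weights below make the sensitive attribute and the label look statistically independent in the training data, in the spirit of classic reweighing schemes:

```python
import pandas as pd

# Toy training data: sensitive attribute A and label Y (hypothetical values).
df = pd.DataFrame({
    "A": ["F", "F", "F", "M", "M", "M", "M", "F"],
    "Y": [0, 1, 0, 1, 1, 0, 1, 1],
})

n = len(df)
p_a = df["A"].value_counts() / n          # P(A = a)
p_y = df["Y"].value_counts() / n          # P(Y = y)
p_ay = df.groupby(["A", "Y"]).size() / n  # P(A = a, Y = y)

# Weight each example so that A and Y appear independent in the
# reweighted data: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).
df["weight"] = [
    p_a[a] * p_y[y] / p_ay[(a, y)] for a, y in zip(df["A"], df["Y"])
]
print(df)
# These weights can then be passed to most estimators via sample_weight.
```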
In-Processing Methods
In-processing methods improve fairness by making the model itself fair: disparity metrics are built into training as constraints or penalties, so the model is optimized to satisfy specific fairness criteria from the start.
Fairness-aware training is still an active area of research, and many different formulations exist.
Fairlearn provides tools both to assess the fairness of predictors and to mitigate unfairness in classification and regression tasks.
By building fairness into training, you reduce the risk that the model perpetuates biases, though fairness constraints often involve a trade-off with overall predictive accuracy.
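A minimal sketch of fairness-aware training with Fairlearn's reduction approach, assuming synthetic data and a scikit-learn estimator; ExponentiatedGradient wraps an ordinary classifier and enforces a demographic parity constraint during training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Toy data (hypothetical): two numeric features, binary label, binary group.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
A = rng.integers(0, 2, size=200)  # sensitive attribute
y = (X[:, 0] + 0.5 * A + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Wrap a standard estimator so that training enforces a demographic
# parity constraint across the groups defined by A.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=A)
y_pred = mitigator.predict(X)
```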
Improving ML Methods
Improving the fairness of machine learning models is an ongoing research topic, with many approaches being explored. They fall into three main categories: pre-processing methods transform the data so that fair predictions become possible, in-processing methods train a model that is fair according to specific metrics, and post-processing methods transform a trained model's outputs to make its predictions fair.
Post-processing methods, such as adjusting the output of a model after it has been run, can be used to enforce fairness constraints without modifying the model itself. For example, a binary classifier can be post-processed to maintain equality of opportunity for a certain attribute by ensuring that the true positive rate is the same for all values of that attribute.
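A minimal sketch of this idea using Fairlearn's ThresholdOptimizer, assuming synthetic data and a scikit-learn base model; it chooses group-specific decision thresholds for an already-trained classifier so that the equalized odds criterion (equal true and false positive rates across groups) holds:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer

# Toy data (hypothetical): features X, binary label y, sensitive attribute A.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
A = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * A + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Train an ordinary classifier first, then post-process its predictions.
base = LogisticRegression().fit(X, y)

# ThresholdOptimizer adjusts decision thresholds per group so that the
# equalized odds constraint holds, without retraining the underlying model.
postprocessed = ThresholdOptimizer(
    estimator=base,
    constraints="equalized_odds",
    prefit=True,
)
postprocessed.fit(X, y, sensitive_features=A)
y_fair = postprocessed.predict(X, sensitive_features=A)
```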
Some common types of group fairness metrics include statistical parity (also known as demographic parity), equal opportunity, equalized odds, and predictive parity. These metrics assess the fairness of a decision-making process or outcome for different groups within a population.
The Fairlearn open-source package provides tools to assess fairness of predictors for classification and regression, as well as tools to mitigate unfairness in these tasks. The package supports a set of constraints on the predictor's behavior called parity constraints or criteria, which can be used to mitigate observed fairness issues.
The parity constraints supported by the Fairlearn package include demographic parity, equalized odds, equal opportunity, and bounded group loss, as summarized in the list earlier in this article.
These parity constraints can be used to mitigate unfairness in machine learning models, but developers should consider other constraints and criteria for their specific use cases.
Automation
Automation plays a significant role in machine learning, but it can also lead to biases in decision-making.
Automation bias occurs when humans favor recommendations made by automated systems over information made without automation, even when the automated system makes errors.
This bias can be particularly problematic in high-stakes decision-making situations.
In fact, automation bias can lead to decisions that are not only incorrect but also unfair to certain groups of people.
For example, an automated system might make a decision based on a flawed algorithm, which a human would then blindly accept without questioning.
This can perpetuate existing biases and inequalities, rather than working to address them.
Sources
- https://developers.google.com/machine-learning/glossary/fairness
- https://dida.do/blog/fairness-in-ml
- https://ruivieira.dev/fairness-in-machine-learning.html
- https://learn.microsoft.com/en-us/azure/machine-learning/concept-fairness-ml
- https://fairlearn.org/v0.5.0/user_guide/fairness_in_machine_learning.html