Classification Fairness¶
Demographic fairness in binary classification.
The most common setting for fairness of a binary classifier, though not the only one, is the demographic one. It is assumed that there are one or more sensitive attributes representing one or more demographic groups (e.g., by gender, race or age), with respect to which a classifier should be fair.
Important
The terminology and functionality are aligned with the book Fairness and Machine Learning - Limitations and Opportunities by Solon Barocas, Moritz Hardt and Arvind Narayanan. It is therefore advised to get familiar with Chapter 2, as it summarizes the current core knowledge regarding fairness in classification.
Currently, the responsibly.fairness module has two components:
Metrics (responsibly.fairness.metrics) for measuring unfairness.
Algorithmic interventions (responsibly.fairness.interventions) for satisfying fairness criteria.
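For instance, the two components can be imported as follows (a minimal sketch; the specific function names are documented in the reference below):

    from responsibly.fairness.metrics import independence_binary, report_binary
    from responsibly.fairness.interventions.threshold import find_thresholds_by_attr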
The demos section contains two examples: measuring the fairness of a classifier and applying an intervention to adjust it.
Metrics¶
Demographic Classification Fairness Criteria.
The objective of the demographic classification fairness criteria is to measure unfairness towards sensitive attribute values.
The metrics have the same interface and behavior as the ones in sklearn.metrics (e.g., using y_true, y_pred and y_score).
One should keep in mind that the criteria are intended to measure unfairness, rather than to prove fairness, as stated in the paper Equality of Opportunity in Supervised Learning by Hardt et al. (2016):
… satisfying [the demographic criteria] should not be considered a conclusive proof of fairness. Similarly, violations of our condition are not meant to be a proof of unfairness. Rather we envision our framework as providing a reasonable way of discovering and measuring potential concerns that require further scrutiny. We believe that resolving fairness concerns is ultimately impossible without substantial domain-specific investigation.
The output of a binary classifier can come in two forms: either a binary outcome prediction for each input, or a real-number score, most commonly the probability of the positive (or negative) label (such as the predict_proba method of an estimator in sklearn).
Therefore, the criteria come in two flavors: one for binary output and one for score output.
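For instance, with a scikit-learn estimator the two kinds of output are typically obtained like this (an illustrative sketch on synthetic data; any classifier exposing predict and predict_proba would do):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y_true = make_classification(n_samples=1000, random_state=0)
    clf = LogisticRegression().fit(X, y_true)

    y_pred = clf.predict(X)                # binary outcome prediction
    y_score = clf.predict_proba(X)[:, 1]   # score: probability of the positive label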
The fundamental concept for defining the fairness criteria is conditional independence. Using the notation of the Fairness and Machine Learning book:
A - Sensitive attribute
Y - Binary ground truth (correct) target
R - Estimated binary target or score as returned by a classifier
There are three demographic fairness criteria for classification:
Independence - R⊥A
Separation - R⊥A∣Y
Sufficiency - Y⊥A∣R
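As a toy illustration of what each criterion conditions on (plain pandas, independent of responsibly, with made-up data):

    import pandas as pd

    df = pd.DataFrame({
        'A': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],  # sensitive attribute
        'Y': [1, 1, 0, 0, 1, 1, 0, 0],                  # ground truth target
        'R': [1, 0, 1, 0, 1, 1, 0, 0],                  # classifier prediction
    })

    print(df.groupby('A')['R'].mean())         # independence: P(R=1 | A)
    print(df.groupby(['Y', 'A'])['R'].mean())  # separation:   P(R=1 | Y, A)
    print(df.groupby(['R', 'A'])['Y'].mean())  # sufficiency:  P(Y=1 | R, A)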
Independence¶
responsibly.fairness.metrics.independence_binary(y_pred, x_sens, x_sens_privileged=None, labels=None, as_df=False)
Compute the independence criteria for binary prediction.
In classification terminology, it is the acceptance rate grouped by the sensitive attribute.
- Parameters
y_pred – Estimated targets as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each target.
x_sens_privileged – The privileged value in the sensitive attribute. Relevant only if there are only two values for the sensitive attribute.
labels – List of labels to choose the negative and positive target. This may be used to reorder or select a subset of labels. If none is given, those that appear at least once in y_pred are used in sorted order; the first is negative and the second is positive.
as_df – Whether to return the results as dict (if False) or as pandas.DataFrame (if True).
- Returns
Independence criteria and comparison if there are only two values for the sensitive attribute.
- Return type
dict or pandas.DataFrame
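A minimal usage sketch on made-up data (the exact layout of the returned object may differ slightly from what the comments describe):

    from responsibly.fairness.metrics import independence_binary

    y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']

    # acceptance rate per sensitive-attribute value, plus a comparison
    # between the two groups since x_sens_privileged is given
    result = independence_binary(y_pred, x_sens, x_sens_privileged='a', as_df=True)
    print(result)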
Separation¶
responsibly.fairness.metrics.separation_binary(y_true, y_pred, x_sens, x_sens_privileged=None, labels=None, as_df=False)
Compute the separation criteria for binary prediction.
In classification terminology, it is the TPR, FPR, TNR and FNR grouped by the sensitive attribute.
- Parameters
y_true – Binary ground truth (correct) target values.
y_pred – Estimated binary targets as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each target.
x_sens_privileged – The privileged value in the sensitive attribute. Relevant only if there are only two values for the sensitive attribute.
labels – List of labels to choose the negative and positive target. This may be used to reorder or select a subset of labels. If none is given, those that appear at least once in y_pred are used in sorted order; the first is negative and the second is positive.
as_df – Whether to return the results as dict (if False) or as pandas.DataFrame (if True).
- Returns
Separation criteria and comparison if there are only two values for the sensitive attribute.
- Return type
dict or pandas.DataFrame
responsibly.fairness.metrics.separation_score(y_true, y_score, x_sens, labels=None, as_df=False)
Compute the separation criteria for score prediction.
In classification terminology, it is the FPR and TPR grouped by the score and the sensitive attribute.
- Parameters
y_true – Binary ground truth (correct) target values.
y_score – Estimated target score as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each estimated target.
as_df – Whether to return the results as dict (if False) or as pandas.DataFrame (if True).
- Returns
Separation criteria.
- Return type
dict or pandas.DataFrame
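A minimal usage sketch for both flavors on made-up data:

    from responsibly.fairness.metrics import separation_binary, separation_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
    y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.7, 0.3]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']

    # TPR, FPR, TNR and FNR per sensitive-attribute value
    print(separation_binary(y_true, y_pred, x_sens, as_df=True))

    # TPR and FPR per group, computed from the score
    print(separation_score(y_true, y_score, x_sens, as_df=True))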
ROC¶
The separation criterion is strongly related to the ROC curve; therefore, these functions can generate the ROC and ROC-AUC per sensitive attribute value:
responsibly.fairness.metrics.roc_auc_score_by_attr(y_true, y_score, x_sens, sample_weight=None)
Compute Area Under the ROC (AUC) by attribute.
Based on sklearn.metrics.roc_auc_score()
- Parameters
y_true – Binary ground truth (correct) target values.
y_score – Estimated target score as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each estimated target.
sample_weight – Sample weights.
- Returns
ROC AUC grouped by the sensitive attribute.
- Return type
dict
responsibly.fairness.metrics.roc_curve_by_attr(y_true, y_score, x_sens, pos_label=None, sample_weight=None, drop_intermediate=False)
Compute Receiver operating characteristic (ROC) by attribute.
Based on sklearn.metrics.roc_curve()
- Parameters
y_true – Binary ground truth (correct) target values.
y_score – Estimated target score as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each estimated target.
pos_label – Label considered as positive and others are considered negative.
sample_weight – Sample weights.
drop_intermediate – Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves.
- Returns
For each value of the sensitive attribute:
- fpr - Increasing false positive rates such that element i is the false positive rate of predictions with score >= thresholds[i].
- tpr - Increasing true positive rates such that element i is the true positive rate of predictions with score >= thresholds[i].
- thresholds - Decreasing thresholds on the decision function used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(y_score) + 1.
- Return type
dict
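For example (made-up data; the sketch assumes the results are keyed by the sensitive-attribute values, as the return descriptions above suggest):

    from responsibly.fairness.metrics import roc_auc_score_by_attr, roc_curve_by_attr

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.7, 0.3]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']

    aucs = roc_auc_score_by_attr(y_true, y_score, x_sens)
    roc_curves = roc_curve_by_attr(y_true, y_score, x_sens)

    fpr, tpr, thresholds = roc_curves['a']  # ROC curve of one group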
Plotting¶
responsibly.fairness.metrics.plot_roc_by_attr(y_true, y_score, x_sens, title='ROC Curves by Attribute', ax=None, figsize=None, title_fontsize='large', text_fontsize='medium')
Generate the ROC curves by attribute from targets and scores.
Based on skplt.metrics.plot_roc()
- Parameters
y_true – Binary ground truth (correct) target values.
y_score – Estimated target score as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each estimated target.
title (str) – Title of the generated plot.
ax – The axes upon which to plot the curve. If None, the plot is drawn on a new set of axes.
figsize (tuple) – Tuple denoting figure size of the plot e.g. (6, 6).
title_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
text_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
- Returns
The axes on which the plot was drawn.
- Return type
responsibly.fairness.metrics.plot_roc_curves(roc_curves, aucs=None, title='ROC Curves by Attribute', ax=None, figsize=None, title_fontsize='large', text_fontsize='medium')
Generate the ROC curves by attribute from (fpr, tpr, thresholds).
Based on skplt.metrics.plot_roc()
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
aucs (dict) – Area Under the ROC (AUC) by attribute.
title (str) – Title of the generated plot.
ax – The axes upon which to plot the curve. If None, the plot is drawn on a new set of axes.
figsize (tuple) – Tuple denoting figure size of the plot e.g. (6, 6).
title_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
text_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
- Returns
The axes on which the plot was drawn.
- Return type
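A minimal plotting sketch (requires matplotlib; made-up data):

    import matplotlib.pyplot as plt
    from responsibly.fairness.metrics import plot_roc_by_attr

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.7, 0.3]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']

    ax = plot_roc_by_attr(y_true, y_score, x_sens, figsize=(6, 6))
    plt.show()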
Sufficiency¶
responsibly.fairness.metrics.sufficiency_binary(y_true, y_pred, x_sens, x_sens_privileged=None, labels=None, as_df=False)
Compute the sufficiency criteria for binary prediction.
In classification terminology, it is the PPV and NPV grouped by the sensitive attribute.
- Parameters
y_true – Binary ground truth (correct) target values.
y_pred – Binary estimated targets as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each target.
x_sens_privileged – The privileged value in the sensitive attribute. Relevant only if there are only two values for the sensitive attribute.
labels – List of labels to choose the negative and positive target. This may be used to reorder or select a subset of labels. If none is given, those that appear at least once in y_pred are used in sorted order; the first is negative and the second is positive.
as_df – Whether to return the results as dict (if False) or as pandas.DataFrame (if True).
- Returns
Sufficiency criteria and comparison if there are only two values for the sensitive attribute.
- Return type
dict or pandas.DataFrame
responsibly.fairness.metrics.sufficiency_score(y_true, y_score, x_sens, labels=None, within_score_percentile=False, as_df=False)
Compute the sufficiency criteria for score prediction.
In classification terminology, it is the PPV and the NPV grouped by the score and the sensitive attribute.
- Parameters
y_true – Binary ground truth (correct) target values.
y_score – Estimated target score as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each target.
as_df – Whether to return the results as dict (if False) or as pandas.DataFrame (if True).
- Returns
Sufficiency criteria.
- Return type
dict or pandas.DataFrame
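A minimal usage sketch for both flavors on made-up data:

    from responsibly.fairness.metrics import sufficiency_binary, sufficiency_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
    y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.7, 0.3]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']

    # PPV and NPV per sensitive-attribute value
    print(sufficiency_binary(y_true, y_pred, x_sens, as_df=True))

    # PPV and NPV grouped by the score and the sensitive attribute
    print(sufficiency_score(y_true, y_score, x_sens, as_df=True))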
Report¶
responsibly.fairness.metrics.report_binary(y_true, y_pred, x_sens, labels=None)
Generate a report of criteria for binary prediction.
In classification terminology, the statistics are grouped by the sensitive attribute:
- Number of observations per group
- Proportion of observations per group
- Base rate
- Acceptance rate
- FNR
- TPR
- PPV
- NPV
- Parameters
y_true – Binary ground truth (correct) target values.
y_pred – Binary estimated targets as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each target.
labels – List of labels to choose the negative and positive target. This may be used to reorder or select a subset of labels. If none is given, those that appear at least once in y_pred are used in sorted order; first is negative and the second is positive.
- Returns
Classification statistics grouped by the sensitive attribute.
- Return type
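A minimal usage sketch on made-up data:

    from responsibly.fairness.metrics import report_binary

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']

    # classification statistics (base rate, acceptance rate, FNR, TPR, PPV, NPV)
    # grouped by the sensitive attribute
    print(report_binary(y_true, y_pred, x_sens))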
A Dictionary of criteria¶
See the section A Dictionary of Criteria in the Fairness and Machine Learning book: https://fairmlbook.org/demographic.html#a-dictionary-of-criteria
Algorithmic Interventions¶
Algorithmic interventions for satisfying fairness criteria.
There are three families of techniques:
Pre-processing - Adjust features in the dataset.
In-processing - Adjust the learning algorithm.
Post-processing - Adjust the learned classifier.
Threshold - Post-processing¶
Post-processing fairness intervention by choosing thresholds.
There are multiple strategies for choosing the thresholds:
A single threshold for all the sensitive attribute values that minimizes cost.
A threshold for each sensitive attribute value that minimizes cost.
A threshold for each sensitive attribute value that achieves independence and minimizes cost.
A threshold for each sensitive attribute value that achieves equal FNR (equal opportunity) and minimizes cost.
A threshold for each sensitive attribute value that achieves separation (equalized odds) and minimizes cost.
The code is based on the fairmlbook repository.
- References:
Hardt, M., Price, E., & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems (pp. 3315-3323).
Attacking discrimination with smarter machine learning by Google.
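All of the functions below share the same cost-matrix layout, [[tn, fp], [fn, tp]] (see the parameter descriptions further down). A sketch of such a matrix with made-up values; whether the entries are read as costs or as utilities follows the fairmlbook code:

    # one entry per cell of the confusion matrix:
    # [[tn, fp],
    #  [fn, tp]]
    cost_matrix = [[0, -1],
                   [-1, 1]]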
responsibly.fairness.interventions.threshold.find_single_threshold(roc_curves, base_rates, proportions, cost_matrix)
Compute single threshold that minimizes cost.
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
base_rates (dict) – Base rate by attribute.
proportions (dict) – Proportion of each attribute value.
cost_matrix (sequence) – Cost matrix by [[tn, fp], [fn, tp]].
- Returns
Threshold, FPR and TPR by attribute and cost value.
- Return type
responsibly.fairness.interventions.threshold.find_min_cost_thresholds(roc_curves, base_rates, proportions, cost_matrix)
Compute thresholds by attribute values that minimize cost.
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
base_rates (dict) – Base rate by attribute.
proportions (dict) – Proportion of each attribute value.
cost_matrix (sequence) – Cost matrix by [[tn, fp], [fn, tp]].
- Returns
Thresholds, FPR and TPR by attribute and cost value.
- Return type
responsibly.fairness.interventions.threshold.find_independence_thresholds(roc_curves, base_rates, proportions, cost_matrix)
Compute thresholds that achieve independence and minimize cost.
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
base_rates (dict) – Base rate by attribute.
proportions (dict) – Proportion of each attribute value.
cost_matrix (sequence) – Cost matrix by [[tn, fp], [fn, tp]].
- Returns
Thresholds, FPR and TPR by attribute and cost value.
- Return type
responsibly.fairness.interventions.threshold.find_fnr_thresholds(roc_curves, base_rates, proportions, cost_matrix)
Compute thresholds that achieve equal FNRs and minimize cost.
Also known as equal opportunity.
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
base_rates (dict) – Base rate by attribute.
proportions (dict) – Proportion of each attribute value.
cost_matrix (sequence) – Cost matrix by [[tn, fp], [fn, tp]].
- Returns
Thresholds, FPR and TPR by attribute and cost value.
- Return type
responsibly.fairness.interventions.threshold.find_separation_thresholds(roc_curves, base_rate, cost_matrix)
Compute thresholds that achieve separation and minimize cost.
Also known as equalized odds.
responsibly.fairness.interventions.threshold.find_thresholds(roc_curves, proportions, base_rate, base_rates, cost_matrix, with_single=True, with_min_cost=True, with_independence=True, with_fnr=True, with_separation=True)
Compute thresholds that achieve various criteria and minimize cost.
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
proportions (dict) – Proportion of each attribute value.
base_rate (float) – Overall base rate.
base_rates (dict) – Base rate by attribute.
cost_matrix (sequence) – Cost matrix by [[tn, fp], [fn, tp]].
with_single (bool) – Compute single threshold.
with_min_cost (bool) – Compute minimum cost thresholds.
with_independence (bool) – Compute independence thresholds.
with_fnr (bool) – Compute FNR thresholds.
with_separation (bool) – Compute separation thresholds.
- Returns
Dictionary of threshold criteria, and for each criterion: thresholds, FPR and TPR by attribute and cost value.
- Return type
dict
responsibly.fairness.interventions.threshold.find_thresholds_by_attr(y_true, y_score, x_sens, cost_matrix, with_single=True, with_min_cost=True, with_independence=True, with_fnr=True, with_separation=True, pos_label=None, sample_weight=None, drop_intermediate=False)
Compute thresholds that achieve various criteria and minimize cost.
- Parameters
y_true – Binary ground truth (correct) target values.
y_score – Estimated target score as returned by a classifier.
x_sens – Sensitive attribute values corresponding to each estimated target.
cost_matrix (sequence) – Cost matrix by [[tn, fp], [fn, tp]].
pos_label – Label considered as positive and others are considered negative.
sample_weight – Sample weights.
drop_intermediate – Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves.
with_single (bool) – Compute single threshold.
with_min_cost (bool) – Compute minimum cost thresholds.
with_independence (bool) – Compute independence thresholds.
with_fnr (bool) – Compute FNR thresholds.
with_separation (bool) – Compute separation thresholds.
- Returns
Dictionary of threshold criteria, and for each criterion: thresholds, FPR and TPR by attribute and cost value.
- Return type
dict
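Putting it together, a minimal end-to-end sketch on made-up data (it assumes the thresholds_data returned here can be passed to the plotting helpers below, as their parameter descriptions suggest):

    from responsibly.fairness.metrics import roc_curve_by_attr
    from responsibly.fairness.interventions.threshold import (
        find_thresholds_by_attr, plot_roc_curves_thresholds,
        plot_costs, plot_thresholds)

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.7, 0.3]
    x_sens = ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b']
    cost_matrix = [[0, -1], [-1, 1]]  # [[tn, fp], [fn, tp]]

    # thresholds per strategy (single, min-cost, independence, FNR, separation)
    thresholds_data = find_thresholds_by_attr(y_true, y_score, x_sens, cost_matrix)

    roc_curves = roc_curve_by_attr(y_true, y_score, x_sens)
    plot_roc_curves_thresholds(roc_curves, thresholds_data)
    plot_costs(thresholds_data)
    plot_thresholds(thresholds_data)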
responsibly.fairness.interventions.threshold.plot_roc_curves_thresholds(roc_curves, thresholds_data, aucs=None, title='ROC Curves by Attribute', ax=None, figsize=None, title_fontsize='large', text_fontsize='medium')
Generate the ROC curves by attribute with thresholds.
Based on skplt.metrics.plot_roc()
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
thresholds_data (dict) – Thresholds by attribute from the function find_thresholds().
aucs (dict) – Area Under the ROC (AUC) by attribute.
title (str) – Title of the generated plot.
ax – The axes upon which to plot the curve. If None, the plot is drawn on a new set of axes.
figsize (tuple) – Tuple denoting figure size of the plot e.g. (6, 6).
title_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
text_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
- Returns
The axes on which the plot was drawn.
- Return type
responsibly.fairness.interventions.threshold.plot_fpt_tpr(roc_curves, title='FPR-TPR Curves by Attribute', ax=None, figsize=None, title_fontsize='large', text_fontsize='medium')
Generate FPR and TPR curves by thresholds and by attribute.
Based on skplt.metrics.plot_roc()
- Parameters
roc_curves (dict) – Receiver operating characteristic (ROC) by attribute.
title (str) – Title of the generated plot.
ax – The axes upon which to plot the curve. If None, the plot is drawn on a new set of axes.
figsize (tuple) – Tuple denoting figure size of the plot e.g. (6, 6).
title_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
text_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
- Returns
The axes on which the plot was drawn.
- Return type
responsibly.fairness.interventions.threshold.plot_costs(thresholds_data, title='Cost by Threshold Strategy', ax=None, figsize=None, title_fontsize='large', text_fontsize='medium')
Plot cost by threshold definition and by attribute.
Based on skplt.metrics.plot_roc()
- Parameters
thresholds_data (dict) – Thresholds by attribute from the function find_thresholds().
title (str) – Title of the generated plot.
ax – The axes upon which to plot the curve. If None, the plot is drawn on a new set of axes.
figsize (tuple) – Tuple denoting figure size of the plot e.g. (6, 6).
title_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
text_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
- Returns
The axes on which the plot was drawn.
- Return type
responsibly.fairness.interventions.threshold.plot_thresholds(thresholds_data, markersize=7, title='Thresholds by Strategy and Attribute', xlim=None, ax=None, figsize=None, title_fontsize='large', text_fontsize='medium')
Plot thresholds by strategy and by attribute.
Based on skplt.metrics.plot_roc()
- Parameters
thresholds_data (dict) – Thresholds by attribute from the function find_thresholds().
markersize (int) – Marker size.
title (str) – Title of the generated plot.
xlim (tuple) – Set the data limits for the x-axis.
ax – The axes upon which to plot the curve. If None, the plot is drawn on a new set of axes.
figsize (tuple) – Tuple denoting figure size of the plot e.g. (6, 6).
title_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
text_fontsize – Matplotlib-style fontsizes. Use e.g. ‘small’, ‘medium’, ‘large’ or integer-values.
- Returns
The axes on which the plot was drawn.
- Return type