AI Fairness 360 documentation¶
Algorithms¶
aif360.algorithms.preprocessing¶
algorithms.preprocessing.DisparateImpactRemover ([…]) | Disparate impact remover is a preprocessing technique that edits feature values to increase group fairness while preserving rank-ordering within groups [1]_.
algorithms.preprocessing.LFR (…[, k, Ax, …]) | Learning fair representations is a preprocessing technique that finds a latent representation which encodes the data well but obfuscates information about protected attributes [2]_.
algorithms.preprocessing.OptimPreproc (…[, …]) | Optimized preprocessing is a preprocessing technique that learns a probabilistic transformation that edits the features and labels in the data with group fairness, individual distortion, and data fidelity constraints and objectives [3]_.
algorithms.preprocessing.Reweighing (…) | Reweighing is a preprocessing technique that weights the examples in each (group, label) combination differently to ensure fairness before classification [4]_.
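As a quick illustration of the preprocessing API, the sketch below reweights a training set so that (group, label) combinations carry balanced weight. It follows the pattern from the AIF360 tutorials; the choice of GermanDataset, the age-based privileged group, and the 70/30 split are illustrative, and the loader assumes the raw German credit data files have been downloaded into aif360's data directory:

    from aif360.datasets import GermanDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Illustrative choices: 'age' as the protected attribute, age >= 25 as privileged.
    dataset = GermanDataset(protected_attribute_names=['age'],
                            privileged_classes=[lambda x: x >= 25],
                            features_to_drop=['personal_status', 'sex'])
    train, test = dataset.split([0.7], shuffle=True)

    privileged = [{'age': 1}]
    unprivileged = [{'age': 0}]

    # Fairness of the raw training data: difference in base rates between groups.
    metric = BinaryLabelDatasetMetric(train, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print(metric.mean_difference())

    # Reweighing assigns instance weights so the weighted base rates match across groups.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    train_transf = rw.fit_transform(train)

Recomputing the same metric on train_transf (using its instance weights) should give a value close to zero.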
aif360.algorithms.inprocessing¶
algorithms.inprocessing.AdversarialDebiasing (…) | Adversarial debiasing is an in-processing technique that learns a classifier to maximize prediction accuracy and simultaneously reduce an adversary’s ability to determine the protected attribute from the predictions [5]_.
algorithms.inprocessing.ARTClassifier (…) | Wraps an instance of an art.classifiers.Classifier to extend Transformer.
algorithms.inprocessing.GerryFairClassifier ([…]) | An algorithm for learning classifiers that are fair with respect to rich subgroups.
algorithms.inprocessing.MetaFairClassifier ([…]) | The meta algorithm takes the fairness metric as part of the input and returns a classifier optimized with respect to that metric.
algorithms.inprocessing.PrejudiceRemover ([…]) | Prejudice remover is an in-processing technique that adds a discrimination-aware regularization term to the learning objective [6]_.
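In-processors train directly on a BinaryLabelDataset and return their predictions as a new dataset. A minimal sketch with PrejudiceRemover, assuming train and test are the BinaryLabelDatasets from the preprocessing sketch above with protected attribute 'age'; the eta value is an arbitrary illustration of the fairness-penalty strength:

    from aif360.algorithms.inprocessing import PrejudiceRemover

    # eta controls the weight of the discrimination-aware regularizer (illustrative value).
    pr = PrejudiceRemover(eta=25.0, sensitive_attr='age')
    pr.fit(train)                 # train is a BinaryLabelDataset
    test_pred = pr.predict(test)  # new BinaryLabelDataset holding predicted labels and scores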
aif360.algorithms.postprocessing¶
algorithms.postprocessing.CalibratedEqOddsPostprocessing (…) | Calibrated equalized odds postprocessing is a post-processing technique that optimizes over calibrated classifier score outputs to find probabilities with which to change output labels with an equalized odds objective [7]_.
algorithms.postprocessing.EqOddsPostprocessing (…) | Equalized odds postprocessing is a post-processing technique that solves a linear program to find probabilities with which to change output labels to optimize equalized odds [8]_ [9]_.
algorithms.postprocessing.RejectOptionClassification (…) | Reject option classification is a post-processing technique that gives favorable outcomes to unprivileged groups and unfavorable outcomes to privileged groups in a confidence band around the decision boundary with the highest uncertainty [10]_.
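Post-processors are fit on a pair of datasets: one holding ground-truth labels and one holding a classifier's scores and predicted labels. A hedged sketch with CalibratedEqOddsPostprocessing, assuming test is a ground-truth BinaryLabelDataset and test_pred is a scored copy as produced above (or any classifier's output):

    from aif360.algorithms.postprocessing import CalibratedEqOddsPostprocessing

    # test_pred is assumed to be test.copy(deepcopy=True) with .scores set to predicted
    # probabilities and .labels set to thresholded predictions.
    cpp = CalibratedEqOddsPostprocessing(unprivileged_groups=[{'age': 0}],
                                         privileged_groups=[{'age': 1}],
                                         cost_constraint='fnr',  # equalize generalized FNRs
                                         seed=0)
    cpp.fit(test, test_pred)            # ground truth vs. scored predictions
    test_fair = cpp.predict(test_pred)  # dataset with adjusted output labels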
aif360.algorithms¶
algorithms.Transformer (**kwargs) | Abstract base class for transformers.
Datasets¶
aif360.datasets¶
Base classes¶
datasets.Dataset (**kwargs) | Abstract base class for datasets.
datasets.StructuredDataset (df, label_names, …) | Base class for all structured datasets.
datasets.BinaryLabelDataset ([…]) | Base class for all structured datasets with binary labels.
datasets.StandardDataset (df, label_name, …) | Base class for every BinaryLabelDataset provided out of the box by aif360.
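Custom data can be wrapped directly in a BinaryLabelDataset. A minimal sketch, assuming a pandas DataFrame whose columns include the binary label and a numerically encoded protected attribute (the column names and values here are illustrative):

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset

    df = pd.DataFrame({'feat1': [0.1, 0.5, 0.3, 0.9],
                       'sex':   [0, 1, 0, 1],      # protected attribute, already encoded
                       'label': [0, 1, 1, 1]})

    bld = BinaryLabelDataset(df=df,
                             label_names=['label'],
                             protected_attribute_names=['sex'],
                             favorable_label=1,
                             unfavorable_label=0)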
Common datasets¶
datasets.AdultDataset ([label_name, …]) | Adult Census Income Dataset.
datasets.BankDataset ([label_name, …]) | Bank Marketing Dataset.
datasets.CompasDataset ([label_name, …]) | ProPublica COMPAS Dataset.
datasets.GermanDataset ([label_name, …]) | German Credit Dataset.
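The common dataset classes share a constructor pattern for overriding the protected attributes, privileged classes, and feature selection, and they can be split like any StructuredDataset. They read raw data files that must first be downloaded into aif360's data directory (the package prints download instructions if the files are missing). An illustrative sketch:

    from aif360.datasets import AdultDataset

    # Keep only 'sex' as the protected attribute for this example.
    adult = AdultDataset(protected_attribute_names=['sex'],
                         privileged_classes=[['Male']],
                         features_to_drop=['fnlwgt'])
    train, test = adult.split([0.8], shuffle=True, seed=42)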
Explainers¶
aif360.explainers¶
explainers.MetricTextExplainer (metric) | Class for explaining metric values with text.
explainers.MetricJSONExplainer (metric) | Class for explaining metric values in JSON format.
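Explainers wrap a metric object and expose methods named after the metric's own methods, returning human-readable (or JSON) descriptions instead of bare numbers. A hedged sketch, assuming metric is the BinaryLabelDatasetMetric constructed in the preprocessing example above:

    from aif360.explainers import MetricTextExplainer

    explainer = MetricTextExplainer(metric)
    print(explainer.disparate_impact())   # sentence describing the disparate impact value
    print(explainer.mean_difference())    # sentence describing the mean difference value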
Fairness Metrics¶
aif360.metrics¶
metrics.DatasetMetric (dataset[, …]) | Class for computing metrics based on one StructuredDataset.
metrics.BinaryLabelDatasetMetric (dataset[, …]) | Class for computing metrics based on a single BinaryLabelDataset.
metrics.ClassificationMetric (dataset, …[, …]) | Class for computing metrics based on two BinaryLabelDatasets.
metrics.SampleDistortionMetric (dataset, …) | Class for computing metrics based on two StructuredDatasets.
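BinaryLabelDatasetMetric measures a single dataset, while ClassificationMetric compares ground truth against predictions. A sketch assuming test holds true labels and test_pred a model's predictions, as in the in-processing and post-processing examples above:

    from aif360.metrics import ClassificationMetric

    cm = ClassificationMetric(test, test_pred,
                              unprivileged_groups=[{'age': 0}],
                              privileged_groups=[{'age': 1}])
    print(cm.equal_opportunity_difference())  # difference in true positive rates
    print(cm.average_odds_difference())       # averaged TPR and FPR differences
    print(cm.disparate_impact())              # ratio of selection rates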
aif360.metrics.utils¶
Helper functions for implementing metrics.
metrics.utils.compute_boolean_conditioning_vector (X, …) | Compute the boolean conditioning vector.
metrics.utils.compute_num_instances (X, w, …) | Compute the number of instances, \(n\), conditioned on the protected attribute(s).
metrics.utils.compute_num_pos_neg (X, y, w, …) | Compute the number of positives, \(P\), or negatives, \(N\), optionally conditioned on protected attributes.
metrics.utils.compute_num_TF_PN (X, y_true, …) | Compute the number of true/false positives/negatives, optionally conditioned on protected attributes.
metrics.utils.compute_num_gen_TF_PN (X, …) | Compute the number of generalized true/false positives/negatives, optionally conditioned on protected attributes.
metrics.utils.compute_distance (X_orig, …) | Compute the distance element-wise for two sets of vectors.
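These helpers operate on a dataset's raw feature array. As a rough sketch of the conditioning-vector idea, where the condition argument mirrors the list-of-group-dicts format used for privileged/unprivileged groups above (treat the exact keyword usage as an assumption):

    import numpy as np
    from aif360.metrics.utils import (compute_boolean_conditioning_vector,
                                      compute_num_instances)

    X = np.array([[0, 1.0], [1, 2.0], [1, 3.0], [0, 4.0]])
    w = np.ones(len(X))
    feature_names = ['sex', 'feat1']

    # Boolean mask selecting rows where sex == 1 (the assumed privileged group).
    cond = compute_boolean_conditioning_vector(X, feature_names, condition=[{'sex': 1}])

    # Weighted count of instances in that group.
    n_priv = compute_num_instances(X, w, feature_names, condition=[{'sex': 1}])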
scikit-learn-Compatible API Reference¶
This is the class and function reference for the scikit-learn-compatible version of the AIF360 API. It is functionally equivalent to the normal API but it uses scikit-learn paradigms (where possible) and pandas.DataFrame for datasets. Not all functionality from AIF360 is supported yet. See Getting Started for a demo of the capabilities.
Note: This is under active development. Visit our GitHub page if you’d like to contribute!
aif360.sklearn.datasets: Dataset loading functions¶
The dataset format for aif360.sklearn is a pandas.DataFrame with protected attributes in the index.
Warning
Currently, while all scikit-learn classes will accept DataFrames as inputs, most classes will return a numpy.ndarray. Therefore, many pre-processing steps, when placed before an aif360.sklearn step in a Pipeline, will cause errors.
Utils¶
datasets.ColumnAlreadyDroppedWarning | Warning issued when an attempt is made to drop a column that has already been dropped.
datasets.check_already_dropped (labels, …) | Check if columns have already been dropped and return only those that haven't.
datasets.standardize_dataset (df, prot_attr, …) | Separate data, targets, and possibly sample weights and populate protected attributes as sample properties.
datasets.to_dataframe (data) | Format an OpenML dataset Bunch as a DataFrame with categorical features if needed.
Loaders¶
datasets.fetch_adult ([subset, data_home, …]) | Load the Adult Census Income Dataset.
datasets.fetch_german ([data_home, …]) | Load the German Credit Dataset.
datasets.fetch_bank ([data_home, percent10, …]) | Load the Bank Marketing Dataset.
datasets.fetch_compas ([data_home, …]) | Load the COMPAS Recidivism Risk Scores dataset.
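Each loader downloads (and caches) the data and returns it split into features, target, and, where applicable, sample weights, with the protected attributes encoded in the DataFrame's index. A brief sketch with the Adult loader's default options:

    from aif360.sklearn.datasets import fetch_adult

    # Returns a namedtuple of (X, y, sample_weight); the protected attributes
    # (e.g. 'race' and 'sex') live in the index of X and y.
    X, y, sample_weight = fetch_adult()
    print(X.index.names)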
aif360.sklearn.metrics: Fairness metrics¶
aif360.sklearn implements a number of fairness metrics for group fairness and individual fairness. For guidance on which metric to use for a given application, see our Guidance page.
Meta-metrics¶
metrics.difference (func, y, *args[, …]) | Compute the difference between unprivileged and privileged subsets for an arbitrary metric.
metrics.ratio (func, y, *args[, prot_attr, …]) | Compute the ratio between unprivileged and privileged subsets for an arbitrary metric.
Scorers¶
metrics.make_scorer (score_func[, is_ratio]) | Make a scorer from a ‘difference’ or ‘ratio’ fairness metric.
Generic metrics¶
metrics.specificity_score (y_true, y_pred[, …]) | Compute the specificity or true negative rate.
metrics.sensitivity_score (y_true, y_pred[, …]) | Alias of sklearn.metrics.recall_score() for binary classes only.
metrics.base_rate (y_true[, y_pred, …]) | Compute the base rate, \(Pr(Y = \text{pos\_label}) = \frac{P}{P+N}\).
metrics.selection_rate (y_true, y_pred[, …]) | Compute the selection rate, \(Pr(\hat{Y} = \text{pos\_label}) = \frac{TP + FP}{P + N}\).
metrics.generalized_fpr (y_true, probas_pred) | Return the ratio of generalized false positives to negative examples in the dataset, \(GFPR = \tfrac{GFP}{N}\).
metrics.generalized_fnr (y_true, probas_pred) | Return the ratio of generalized false negatives to positive examples in the dataset, \(GFNR = \tfrac{GFN}{P}\).
Group fairness metrics¶
metrics.statistical_parity_difference (*y[, …]) | Difference in selection rates.
metrics.mean_difference (*y[, prot_attr, …]) | Alias of statistical_parity_difference().
metrics.disparate_impact_ratio (*y[, …]) | Ratio of selection rates.
metrics.equal_opportunity_difference (y_true, …) | A relaxed version of equality of opportunity.
metrics.average_odds_difference (y_true, y_pred) | A relaxed version of equality of odds.
metrics.average_odds_error (y_true, y_pred[, …]) | A relaxed version of equality of odds.
metrics.between_group_generalized_entropy_error (…) | Compute the between-group generalized entropy.
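The group fairness functions read the protected attribute from the index of their inputs (or from an explicit prot_attr argument). A short sketch reusing X and y from the fetch_adult example, with y_pred assumed to come from any fitted classifier:

    from aif360.sklearn.metrics import (statistical_parity_difference,
                                        disparate_impact_ratio,
                                        average_odds_difference)

    # Dataset-level disparity: uses only the labels and the 'sex' index level.
    print(disparate_impact_ratio(y, prot_attr='sex'))
    print(statistical_parity_difference(y, prot_attr='sex'))

    # Classifier-level disparity: compares y_pred (assumed to exist) against y.
    print(average_odds_difference(y, y_pred, prot_attr='sex'))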
Individual fairness metrics¶
metrics.generalized_entropy_index (b[, alpha]) | Generalized entropy index measures inequality over a population.
metrics.generalized_entropy_error (y_true, y_pred) | Compute the generalized entropy.
metrics.theil_index (b) | The Theil index is the generalized_entropy_index() with \(\alpha = 1\).
metrics.coefficient_of_variation (b) | The coefficient of variation is two times the square root of the generalized_entropy_index() with \(\alpha = 2\).
metrics.consistency_score (X, y[, n_neighbors]) | Compute the consistency score.
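Individual fairness metrics look at per-sample benefit or local consistency rather than group averages. A brief sketch, again assuming X, y, and y_pred from above and that X contains only numeric features (encode categoricals first if necessary, since the consistency score uses a nearest-neighbor search):

    from aif360.sklearn.metrics import generalized_entropy_error, consistency_score

    # Inequality in per-sample benefit b_i = y_pred_i - y_true_i + 1.
    print(generalized_entropy_error(y, y_pred))

    # Agreement of each label with the labels of its k nearest neighbors in feature space.
    print(consistency_score(X, y, n_neighbors=5))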
aif360.sklearn.preprocessing: Pre-processing algorithms¶
Pre-processing algorithms modify a dataset to be more fair (data in, data out).
Pre-processors¶
preprocessing.Reweighing ([prot_attr]) | Sample reweighing.
Meta-Estimator¶
preprocessing.ReweighingMeta (estimator[, …]) | A meta-estimator which wraps a given estimator with a reweighing preprocessing step.
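Because scikit-learn pipelines cannot pass per-step sample weights, ReweighingMeta bundles the reweighing step and the final estimator together. A hedged sketch, assuming X and y carry a 'sex' protected attribute in their index; the LogisticRegression choice is illustrative:

    from sklearn.linear_model import LogisticRegression
    from aif360.sklearn.preprocessing import Reweighing, ReweighingMeta

    model = ReweighingMeta(estimator=LogisticRegression(max_iter=1000),
                           reweigher=Reweighing('sex'))
    model.fit(X, y)
    y_pred = model.predict(X)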
aif360.sklearn.inprocessing: In-processing algorithms¶
In-processing algorithms train a fair classifier (data in, predictions out).
In-processors¶
inprocessing.AdversarialDebiasing ([…]) | Debiasing with adversarial learning.
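The in-processor follows the usual scikit-learn estimator interface (it trains a TensorFlow model under the hood, so TensorFlow must be installed and X must be numeric). A minimal sketch, with the protected attribute name and hyperparameters as illustrative assumptions:

    from aif360.sklearn.inprocessing import AdversarialDebiasing

    clf = AdversarialDebiasing(prot_attr='sex', num_epochs=20, random_state=0)
    clf.fit(X, y)
    y_pred = clf.predict(X)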
aif360.sklearn.postprocessing: Post-processing algorithms¶
Post-processing algorithms modify predictions to be more fair (predictions in, predictions out).
Post-processors¶
postprocessing.CalibratedEqualizedOdds ([…]) | Calibrated equalized odds post-processor.
Meta-Estimator¶
postprocessing.PostProcessingMeta (estimator) | A meta-estimator which wraps a given estimator with a post-processing step.
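PostProcessingMeta handles the internal train/validation split needed to fit the post-processor on held-out scores. A hedged sketch pairing it with CalibratedEqualizedOdds; the estimator choice, protected attribute, and cost constraint are illustrative:

    from sklearn.linear_model import LogisticRegression
    from aif360.sklearn.postprocessing import CalibratedEqualizedOdds, PostProcessingMeta

    model = PostProcessingMeta(estimator=LogisticRegression(max_iter=1000),
                               postprocessor=CalibratedEqualizedOdds('sex', cost_constraint='fnr'))
    model.fit(X, y)
    y_pred = model.predict(X)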
aif360.sklearn.utils: Utility functions¶
Validation¶
utils.check_inputs (X, y[, sample_weight, …]) | Input validation for debiasing algorithms.
utils.check_groups (arr, prot_attr[, …]) | Get groups from the index of arr.
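These validators are mainly used inside the estimators, but they can help when writing a custom debiasing step. A rough sketch; the return values follow the summaries above, and the exact keyword arguments should be treated as assumptions:

    from aif360.sklearn.utils import check_inputs, check_groups

    # Validate feature/label alignment; a sample_weight array is returned even if none was given.
    X_checked, y_checked, sw = check_inputs(X, y)

    # Extract each sample's group label from the 'sex' level of the index.
    groups, prot_attr = check_groups(y, prot_attr='sex')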