aif360.sklearn.inprocessing.AdversarialDebiasing

class aif360.sklearn.inprocessing.AdversarialDebiasing(prot_attr=None, scope_name='classifier', adversary_loss_weight=0.1, num_epochs=50, batch_size=128, classifier_num_hidden_units=200, debias=True, verbose=False, random_state=None)[source]

Debiasing with adversarial learning.

Adversarial debiasing is an in-processing technique that learns a classifier to maximize prediction accuracy and simultaneously reduce an adversary’s ability to determine the protected attribute from the predictions [1]. This approach leads to a fair classifier as the predictions cannot carry any group discrimination information that the adversary can exploit.

References

[1] B. H. Zhang, B. Lemoine, and M. Mitchell, “Mitigating Unwanted Biases with Adversarial Learning,” AAAI/ACM Conference on AI, Ethics, and Society, 2018.
Variables:
  • prot_attr_ (str or list(str)) – Protected attribute(s) used for debiasing.
  • groups_ (array, shape (n_groups,)) – A list of group labels known to the classifier.
  • classes_ (array, shape (n_classes,)) – A list of class labels known to the classifier.
  • sess_ (tensorflow.Session) – The TensorFlow Session used for the computations. Note: this can be manually closed to free up resources with self.sess_.close().
  • classifier_logits_ (tensorflow.Tensor) – Tensor containing output logits from the classifier.
  • adversary_logits_ (tensorflow.Tensor) – Tensor containing output logits from the adversary.
Parameters:
  • prot_attr (single label or list-like, optional) – Protected attribute(s) to use in the debiasing process. If more than one attribute, all combinations of values (intersections) are considered. Default is None meaning all protected attributes from the dataset are used.
  • scope_name (str, optional) – TensorFlow “variable_scope” name for the entire model (classifier and adversary).
  • adversary_loss_weight (float or None, optional) – If None, this will use the suggestion from the paper: \(\alpha = \sqrt{\text{global\_step}}\) with inverse time decay on the learning rate. Otherwise, it uses the provided coefficient with exponential learning rate decay.
  • num_epochs (int, optional) – Number of epochs for which to train.
  • batch_size (int, optional) – Size of mini-batch for training.
  • classifier_num_hidden_units (int, optional) – Number of hidden units in the classifier.
  • debias (bool, optional) – If False, learn a classifier without an adversary.
  • verbose (bool, optional) – If True, print losses every 200 steps.
  • random_state (int or numpy.RandomState, optional) – Seed of pseudo-random number generator for shuffling data and seeding weights.
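
Example

A minimal usage sketch follows, assuming TensorFlow is installed. The toy data, column names ('sex', 'feat1', 'feat2'), and the small num_epochs are hypothetical and for illustration only. In the aif360.sklearn API, the protected attribute is expected to be a level of the DataFrame's index, and prot_attr names that level.

    import numpy as np
    import pandas as pd
    from aif360.sklearn.inprocessing import AdversarialDebiasing

    # Toy data: 'sex' is kept both as a feature column and as the index
    # level that `prot_attr` refers to.
    n = 200
    rng = np.random.default_rng(0)
    X = pd.DataFrame({'sex': rng.integers(0, 2, n),
                      'feat1': rng.normal(size=n),
                      'feat2': rng.normal(size=n)})
    X = X.set_index('sex', drop=False)
    y = pd.Series(rng.integers(0, 2, n), index=X.index, name='label')

    clf = AdversarialDebiasing(prot_attr='sex', num_epochs=5, random_state=1234)
    clf.fit(X, y)
    y_pred = clf.predict(X)
    acc = clf.score(X, y)

    # When finished with the model, the TensorFlow session can be freed:
    # clf.sess_.close()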

Methods

decision_function – Soft prediction scores.
fit – Train the classifier and adversary (if debias == True) with the given training data.
get_params – Get parameters for this estimator.
predict – Predict class labels for the given samples.
predict_proba – Probability estimates.
score – Return the mean accuracy on the given test data and labels.
set_params – Set the parameters of this estimator.
__init__(prot_attr=None, scope_name='classifier', adversary_loss_weight=0.1, num_epochs=50, batch_size=128, classifier_num_hidden_units=200, debias=True, verbose=False, random_state=None)[source]
Parameters:
  • prot_attr (single label or list-like, optional) – Protected attribute(s) to use in the debiasing process. If more than one attribute, all combinations of values (intersections) are considered. Default is None meaning all protected attributes from the dataset are used.
  • scope_name (str, optional) – TensorFlow “variable_scope” name for the entire model (classifier and adversary).
  • adversary_loss_weight (float or None, optional) – If None, this will use the suggestion from the paper: \(\alpha = \sqrt{\text{global\_step}}\) with inverse time decay on the learning rate. Otherwise, it uses the provided coefficient with exponential learning rate decay.
  • num_epochs (int, optional) – Number of epochs for which to train.
  • batch_size (int, optional) – Size of mini-batch for training.
  • classifier_num_hidden_units (int, optional) – Number of hidden units in the classifier.
  • debias (bool, optional) – If False, learn a classifier without an adversary.
  • verbose (bool, optional) – If True, print losses every 200 steps.
  • random_state (int or numpy.RandomState, optional) – Seed of pseudo-random number generator for shuffling data and seeding weights.
decision_function(X)[source]

Soft prediction scores.

Parameters: X (pandas.DataFrame) – Test samples.
Returns: numpy.ndarray – Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
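
As a small illustrative check, continuing with the fitted clf and toy X from the example above, the sign of the binary decision scores lines up with the predicted labels as described:

    scores = clf.decision_function(X)   # 1-D confidence scores in the binary case
    labels = clf.predict(X)
    # Per the description above, a score > 0 corresponds to clf.classes_[1]:
    print(((scores > 0) == (labels == clf.classes_[1])).all())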
fit(X, y)[source]

Train the classifier and adversary (if debias == True) with the given training data.

Parameters:
  • X (pandas.DataFrame) – Training samples.
  • y (array-like) – Training labels.
Returns: self

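Because fit returns the estimator itself, training and prediction can be chained. The sketch below reuses the toy X and y from the earlier example; the distinct scope_name is a precaution that keeps this model's TensorFlow variables separate from any previously fitted instance in the same process.

    from sklearn.model_selection import train_test_split

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    est = AdversarialDebiasing(prot_attr='sex', scope_name='classifier2',
                               num_epochs=5, random_state=1234)
    y_pred = est.fit(X_tr, y_tr).predict(X_te)
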
predict(X)[source]

Predict class labels for the given samples.

Parameters: X (pandas.DataFrame) – Test samples.
Returns: numpy.ndarray – Predicted class label per sample.
predict_proba(X)[source]

Probability estimates.

The returned estimates for all classes are ordered by the label of classes.

Parameters: X (pandas.DataFrame) – Test samples.
Returns: numpy.ndarray – The probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
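
A short sketch of the output shape and column ordering, continuing with the fitted clf and toy X from the first example:

    proba = clf.predict_proba(X)    # shape (n_samples, n_classes)
    print(clf.classes_)             # column order of `proba`
    print(proba.sum(axis=1))        # each row sums to ~1.0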