aif360.sklearn.inprocessing.ExponentiatedGradientReduction
- class aif360.sklearn.inprocessing.ExponentiatedGradientReduction(prot_attr, estimator, constraints, eps=0.01, max_iter=50, nu=None, eta0=2.0, run_linprog_step=True, drop_prot_attr=True)[source]
Exponentiated gradient reduction for fair classification.
Exponentiated gradient reduction is an in-processing technique that reduces fair classification to a sequence of cost-sensitive classification problems, returning a randomized classifier with the lowest empirical error subject to fair classification constraints [1].
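The central update behind the method is multiplicative: the Lagrange multiplier on each fairness constraint grows exponentially with its measured violation and is projected back onto a bounded set. A minimal NumPy sketch of that single step (the `eg_update` helper, learning rate, and bound here are illustrative, not the library's API — the real update lives in fairlearn.reductions):

```python
import numpy as np

def eg_update(lmbda, violations, eta=2.0, bound=1.0):
    """One multiplicative-weights ("exponentiated gradient") step on the
    Lagrange multipliers: multipliers on violated constraints grow, then
    the vector is projected back onto the L1 ball of radius `bound`."""
    lmbda = lmbda * np.exp(eta * violations)
    total = lmbda.sum()
    if total > bound:
        lmbda = bound * lmbda / total
    return lmbda

lmbda = np.array([0.1, 0.1])
# positive entry = constraint violated, negative = satisfied with slack
violations = np.array([0.3, -0.2])
lmbda = eg_update(lmbda, violations)
print(lmbda)  # the violated constraint's multiplier grows relative to the other
```

Raising the multiplier on a violated constraint makes violating it costlier in the next cost-sensitive classification round, which is what steers the sequence of classifiers toward the constraint set.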
References

[1] A. Agarwal, A. Beygelzimer, M. Dudík, J. Langford, and H. Wallach, "A Reductions Approach to Fair Classification," ICML, 2018.
- Parameters:
prot_attr – String or array-like column indices or column names of protected attributes.
estimator – An estimator implementing the methods fit(X, y, sample_weight) and predict(X), where X is the matrix of features, y is the vector of labels, and sample_weight is a vector of weights; labels y and predictions returned by predict(X) are either 0 or 1 – e.g. scikit-learn classifiers.
constraints (str or fairlearn.reductions.Moment) – If a string, a keyword denoting the fairlearn.reductions.Moment object defining the disparity constraints – e.g., "DemographicParity" or "EqualizedOdds". For a full list of possible options see self.model.moments. Otherwise, provide the desired Moment object defining the disparity constraints.
eps – Allowed fairness constraint violation; the solution is guaranteed to have error within 2*best_gap of the best error under constraint eps; the constraint violation is at most 2*(eps + best_gap).
max_iter – Maximum number of iterations.
nu – Convergence threshold for the duality gap, corresponding to a conservative automatic setting based on the statistical uncertainty in measuring classification error.
eta0 – Initial setting of the learning rate.
run_linprog_step – If True, each step of exponentiated gradient is followed by the saddle point optimization over the convex hull of classifiers returned so far.
drop_prot_attr – Boolean flag indicating whether to drop protected attributes from training data.
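To make the reduction concrete, below is a self-contained sketch using scikit-learn only: each round converts the demographic-parity Lagrangian into a cost-sensitive problem by relabeling and reweighting examples, then adjusts the multiplier from the observed parity gap (a plain gradient step stands in for the exponentiated update, for brevity). All names and constants are illustrative; the class itself delegates this loop to fairlearn.reductions.ExponentiatedGradient.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
A = rng.integers(0, 2, n)                          # binary protected attribute
X = np.c_[rng.normal(size=n) + A, rng.normal(size=n)]
y = (X[:, 0] > 0.5).astype(int)                    # labels correlated with A

n1, n0 = (A == 1).sum(), (A == 0).sum()
lmbda = 0.0                                        # demographic-parity multiplier
for t in range(20):
    # Signed cost of predicting 1 on each example: the classification-error
    # term (1 - 2y) plus the parity term scaled per group.
    mu = lmbda * np.where(A == 1, n / n1, -n / n0)
    delta = (1.0 - 2.0 * y) + mu
    z = (delta < 0).astype(int)                    # cost-sensitive relabeling
    w = np.abs(delta) + 1e-12                      # cost magnitude as weight
    clf = LogisticRegression().fit(X, z, sample_weight=w)
    h = clf.predict(X)
    gap = h[A == 1].mean() - h[A == 0].mean()      # parity violation
    lmbda += 0.5 / np.sqrt(t + 1) * gap            # ascend on the violation

print(f"final demographic-parity gap: {gap:+.3f}")
```

A positive gap raises the cost of predicting 1 in the advantaged group (and lowers it in the other), so subsequent rounds shrink the disparity; the library additionally averages the rounds into a randomized classifier rather than returning only the last one.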
Methods

fit – Learns a randomized model with less bias.
get_metadata_routing – Get metadata routing of this object.
get_params – Get parameters for this estimator.
predict – Predict class labels for the given samples.
predict_proba – Probability estimates.
score – Return the mean accuracy on the given test data and labels.
set_params – Set the parameters of this estimator.
set_score_request – Request metadata passed to the score method.

- __init__(prot_attr, estimator, constraints, eps=0.01, max_iter=50, nu=None, eta0=2.0, run_linprog_step=True, drop_prot_attr=True)[source]
- Parameters:
prot_attr – String or array-like column indices or column names of protected attributes.
estimator – An estimator implementing the methods fit(X, y, sample_weight) and predict(X), where X is the matrix of features, y is the vector of labels, and sample_weight is a vector of weights; labels y and predictions returned by predict(X) are either 0 or 1 – e.g. scikit-learn classifiers.
constraints (str or fairlearn.reductions.Moment) – If a string, a keyword denoting the fairlearn.reductions.Moment object defining the disparity constraints – e.g., "DemographicParity" or "EqualizedOdds". For a full list of possible options see self.model.moments. Otherwise, provide the desired Moment object defining the disparity constraints.
eps – Allowed fairness constraint violation; the solution is guaranteed to have error within 2*best_gap of the best error under constraint eps; the constraint violation is at most 2*(eps + best_gap).
max_iter – Maximum number of iterations.
nu – Convergence threshold for the duality gap, corresponding to a conservative automatic setting based on the statistical uncertainty in measuring classification error.
eta0 – Initial setting of the learning rate.
run_linprog_step – If True, each step of exponentiated gradient is followed by the saddle point optimization over the convex hull of classifiers returned so far.
drop_prot_attr – Boolean flag indicating whether to drop protected attributes from training data.
- fit(X, y)[source]
Learns a randomized model with less bias.
- Parameters:
X (pandas.DataFrame) – Training samples.
y (array-like) – Training labels.
- Returns:
self
- predict(X)[source]
Predict class labels for the given samples.
- Parameters:
X (pandas.DataFrame) – Test samples.
- Returns:
numpy.ndarray – Predicted class label per sample.
- predict_proba(X)[source]
Probability estimates.
The returned estimates for all classes are ordered by the label of classes.
- Parameters:
X (pandas.DataFrame) – Test samples.
- Returns:
numpy.ndarray – Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
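The ordering works as for any scikit-learn classifier: columns of predict_proba follow the sorted classes_ attribute, not the order in which labels first appear in the training data. A quick demonstration with a plain scikit-learn estimator (the kind this reduction wraps):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1, 1, 0, 0])          # label 1 appears first in y
clf = LogisticRegression().fit(X, y)

print(clf.classes_)                 # [0 1] — sorted, so column 0 is P(y=0)
proba = clf.predict_proba(X)
print(proba.shape)                  # (4, 2): one column per entry of classes_
```

So when reading off the probability of the positive class, index the column by position in classes_ (e.g. `proba[:, list(clf.classes_).index(1)]`) rather than assuming column order matches the training labels.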
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → ExponentiatedGradientReduction[source]
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the sample_weight parameter in score.
- Returns:
self (object) – The updated object.