Base Measure for Fairness
This measure extends
mlr3::Measure() with statistical group fairness:
a common approach to quantifying a model's fairness is to compute the difference between a
protected and an unprotected group with respect to some performance metric, e.g.
classification error (mlr_measures_classif.ce) or
false positive rate (mlr_measures_classif.fpr).
The operation used for the comparison (e.g., difference or quotient) can be specified using the operation argument.
Composite measures encompassing multiple fairness metrics can be built using MeasureFairnessComposite.
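As a sketch of how the comparison operation can be chosen (this assumes the mlr3fairness package, which exports groupdiff_absdiff and groupdiff_tau as ready-made operations):

```r
library(mlr3)
library(mlr3fairness)

# Difference-based comparison (the default): absolute difference
# of the base metric between groups.
m_diff = msr("fairness", base_measure = msr("classif.fpr"),
             operation = groupdiff_absdiff)

# Quotient-based comparison: ratio of the group-wise metrics.
m_ratio = msr("fairness", base_measure = msr("classif.fpr"),
              operation = groupdiff_tau)
```

Both measures score a Prediction in the same way; only the aggregation across subgroups differs.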
The protected attribute is specified as a
col_role in the corresponding Task:
<Task>$col_roles$pta = "name_of_attribute"
This also allows specifying more than one protected attribute, in which case fairness is considered on the level of intersecting groups defined by all columns selected as protected attributes.
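The col_role assignment above can be sketched as follows (assumes the mlr3fairness package; "sex" and "race" are columns of the bundled adult data and serve here only as illustration):

```r
library(mlr3)
library(mlr3fairness)

task = tsk("adult_train")

# Declare "sex" as the protected attribute via the "pta" column role.
task$col_roles$pta = "sex"

# More than one column can be declared protected; fairness is then
# evaluated over the intersecting groups, e.g. sex x race.
# task$col_roles$pta = c("sex", "race")
```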
Creates a new instance of this R6 class.
id: The measure's id. Set to 'fairness.<base_measure_id>' if omitted.
base_measure: The base metric evaluated within each subgroup.
operation: The operation used to compare the metric across subgroups. A function that returns a single value given as input the computed metric for each subgroup. Defaults to groupdiff_absdiff.
minimize: Should the measure be minimized? Defaults to TRUE.
range: Range of the resulting measure. Defaults to c(-Inf, Inf).
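A custom operation can also be supplied; a minimal sketch, assuming (as the built-in groupdiff_* helpers suggest) that the operation receives the numeric vector of per-group metric values and must return a single number:

```r
library(mlr3)
library(mlr3fairness)

# Hypothetical custom operation: spread of the metric across groups.
range_diff = function(x) max(x) - min(x)

m = msr("fairness", base_measure = msr("classif.tpr"),
        operation = range_diff)
```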
library("mlr3")
library("mlr3fairness")  # provides tsk("adult_train") and msr("fairness")

# Create MeasureFairness to measure the Predictive Parity.
t = tsk("adult_train")
learner = lrn("classif.rpart", cp = .01)
learner$train(t)
measure = msr("fairness", base_measure = msr("classif.ppv"))
predictions = learner$predict(t)
predictions$score(measure, task = t)
#> fairness.ppv
#>    0.1202326