This measure extends `mlr3::Measure()` with statistical group fairness:
a common approach to quantifying a model's fairness is to compute the difference between a
protected and an unprotected group w.r.t. some performance metric, e.g.
`classification error` (`mlr_measures_classif.ce`) or `false positive rate` (`mlr_measures_classif.fpr`).
The operation for comparison (e.g., difference or quotient) can be specified using the
`operation` parameter, e.g. `groupdiff_absdiff()` or `groupdiff_tau()`.

Composite measures encompassing multiple fairness metrics can be built using `MeasureFairnessComposite`.

Some popular predefined measures can be found in the dictionary mlr_measures.
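As a sketch (assuming `mlr3` and `mlr3fairness` are installed, and that `msr()` forwards extra arguments to the measure's constructor):

```r
library("mlr3")
library("mlr3fairness")

# Absolute difference in false positive rates between protected groups
# (the default comparison, groupdiff_absdiff):
m_diff = msr("fairness", base_measure = msr("classif.fpr"))

# The same base metric, but compared by quotient instead of difference:
m_ratio = msr("fairness",
  base_measure = msr("classif.fpr"),
  operation = groupdiff_tau
)
```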

## Protected Attributes

The protected attribute is specified as a `col_role` in the corresponding `Task()`:
`<Task>$col_roles$pta = "name_of_attribute"`.
This also allows specifying more than one protected attribute,
in which case fairness will be considered on the level of intersecting groups defined by all columns
selected as a protected attribute.
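For example, with the `adult_train` task shipped with `mlr3fairness` (the column names `"sex"` and `"race"` come from that dataset; a sketch, assuming both packages are installed):

```r
library("mlr3")
library("mlr3fairness")

task = tsk("adult_train")

# Declare a single protected attribute:
task$col_roles$pta = "sex"

# Declaring several columns makes fairness be evaluated on the
# intersecting groups, e.g. sex x race:
task$col_roles$pta = c("sex", "race")
```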

## Super class

`mlr3::Measure`

-> `MeasureFairness`

## Public fields

`base_measure` (`Measure()`)
The base measure to be used by the fairness measure, e.g. `mlr_measures_classif.fpr` for the false positive rate.

`operation` (`function()`)
The operation used to compute the difference. A function with args 'x' and 'y' that returns a single value. Defaults to `abs(x - y)`.

## Methods

### Method `new()`

Creates a new instance of this R6 class.

#### Usage

```
MeasureFairness$new(
id = NULL,
base_measure,
operation = groupdiff_absdiff,
minimize = TRUE,
range = c(-Inf, Inf)
)
```

#### Arguments

`id` (`character`)
The measure's id. Set to 'fairness.<base_measure_id>' if omitted.

`base_measure` (`Measure()`)
The base metric evaluated within each subgroup.

`operation` (`function`)
The operation used to compute the difference. A function that returns a single value given the metric computed for each subgroup. Defaults to `groupdiff_absdiff`.

`minimize` (`logical()`)
Should the measure be minimized? Defaults to `TRUE`.

`range` (`numeric(2)`)
Range of the resulting measure. Defaults to `c(-Inf, Inf)`.
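
Putting the constructor arguments together, a minimal sketch (the `range` of `c(0, Inf)` is an assumption that fits an absolute difference; `id` is omitted, so it defaults to 'fairness.<base_measure_id>'):

```r
library("mlr3")
library("mlr3fairness")

# Fairness measure: absolute difference in false positive rates,
# constructed directly via the R6 generator.
m = MeasureFairness$new(
  base_measure = msr("classif.fpr"),
  operation = groupdiff_absdiff,
  minimize = TRUE,
  range = c(0, Inf) # absolute differences are non-negative (assumption)
)
```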

## Examples

```
library("mlr3")
library("mlr3fairness")
# Create a MeasureFairness that measures predictive parity.
t = tsk("adult_train")
learner = lrn("classif.rpart", cp = .01)
learner$train(t)
measure = msr("fairness", base_measure = msr("classif.ppv"))
predictions = learner$predict(t)
predictions$score(measure, task = t)
#> fairness.ppv
#> 0.1202326
```