Audit Report: Summary

This document describes how to use mlr3fairness to create audit reports for the different tasks that arise during the fairness exploration process.

The document has three main sections, covering the details of the task, the model, and model interpretability.

Jump to section:

- Task details
- Model details
- Interpretability

Task details

In this fairness report, we investigate the fairness of the following task:

#> 
#> ── <TaskClassif> (700x5) ───────────────────────────────────────────────────────
#> • Target: target
#> • Target classes: <=50K (positive class, 77%), >50K (23%)
#> • Properties: twoclass
#> • Features (4):
#>   • fct (4): education, marital_status, race, sex
#> • Protected attribute: sex
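
A task like the one printed above can be loaded directly, since mlr3fairness ships the Adult dataset splits as predefined tasks. The following is a minimal sketch; the protected attribute is stored in the task's `pt` column role:

```r
library(mlr3)
library(mlr3fairness)  # registers the adult_train task and the fairness measures

# Load the bundled training split of the Adult dataset
task <- tsk("adult_train")

# The protected attribute is a column role; for this task it is already "sex",
# but it could be changed, e.g. task$col_roles$pt <- "race"
print(task$col_roles$pt)
print(task)
```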

Task Documentation:

Here we list the basic details of the task.

Audit Date: 2025-06-24
Task Name: adult_train
Number of observations: 700
Number of features: 5
Target Name: target
Feature Names: education, marital_status, race, sex
Protected Attribute(s): sex

Exploratory Data Analysis:

We also report the type, the factor levels, and the number of missing values for each feature:

id  name            type    levels  label  fix_factor_levels  missings
5   education       factor  10th, 11th, 12th, 1st-4th, 5th-6th, 7th-8th, 9th, Assoc-acdm, Assoc-voc, Bachelors, Doctorate, HS-grad, Masters, Preschool, Prof-school, Some-college  NA  FALSE  0
8   marital_status  factor  Divorced, Married-AF-spouse, Married-civ-spouse, Married-spouse-absent, Never-married, Separated, Widowed  NA  FALSE  0
10  race            factor  Amer-Indian-Eskimo, Asian-Pac-Islander, Black, Other, White  NA  FALSE  0
12  sex             factor  Female, Male  NA  FALSE  0
13  target          factor  <=50K, >50K  NA  FALSE  0
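
A summary like the table above can be assembled with base R. This is only a sketch, not the exact code the report generator uses:

```r
library(mlr3)
library(mlr3fairness)

task <- tsk("adult_train")
data <- as.data.frame(task$data())

# One row per column: storage type, factor levels, and missing-value count
eda <- data.frame(
  name     = names(data),
  type     = vapply(data, function(x) class(x)[1], character(1)),
  levels   = vapply(data, function(x) {
    if (is.factor(x)) paste(levels(x), collapse = ", ") else NA_character_
  }, character(1)),
  missings = vapply(data, function(x) sum(is.na(x)), integer(1))
)
print(eda)
```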

We first look at the label distribution:
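
The label distribution plot (not reproduced here) can be recreated with mlr3viz, assuming that package is installed:

```r
library(mlr3)
library(mlr3viz)
library(mlr3fairness)
library(ggplot2)

task <- tsk("adult_train")

# For a classification task, autoplot() shows the target class distribution
autoplot(task)
```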

Model details

Below we show the model that was used in resample_result:

#> 
#> ── <LearnerClassifRpart> (classif.rpart): Classification Tree ──────────────────
#> • Model: -
#> • Parameters: xval=0
#> • Packages: mlr3 and rpart
#> • Predict Types: response and [prob]
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: importance, missings, multiclass, selected_features, twoclass,
#> and weights
#> • Other settings: use_weights = 'use'
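
A resample_result like the one used in this report can be produced as follows. The 3-fold cross-validation here is an assumption; the actual resampling scheme may differ:

```r
library(mlr3)
library(mlr3fairness)

task    <- tsk("adult_train")
learner <- lrn("classif.rpart", xval = 0, predict_type = "prob")

# store_models = TRUE keeps the fitted models for the interpretability section
resample_result <- resample(task, learner, rsmp("cv", folds = 3),
                            store_models = TRUE)
```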

Fairness Metrics

We furthermore report several fairness metrics. The values below are means across all resampling iterations.

Metric                    Value
fairness.acc              0.0850618
fairness.equalized_odds   0.1629949
fairness.fnr              0.0972116
fairness.fpr              0.2287782
fairness.ppv              0.0883333
fairness.pp               0.1692584
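
Values like these can be obtained by aggregating fairness measures over the resample result. A sketch using measures registered by mlr3fairness (the metric selection is illustrative):

```r
library(mlr3)
library(mlr3fairness)

task <- tsk("adult_train")
rr   <- resample(task, lrn("classif.rpart", xval = 0), rsmp("cv", folds = 3))

# Mean of each fairness metric across all resampling iterations
metrics <- msrs(c("fairness.acc", "fairness.equalized_odds",
                  "fairness.fnr", "fairness.fpr", "fairness.ppv"))
print(rr$aggregate(metrics))
```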

We can furthermore employ several visualizations to report on fairness, for example the fairness-accuracy tradeoff plot, the metric comparison plot, and the fairness prediction density plot of our model. For more detailed usage and examples, see the visualization vignette.
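
Sketches of these three visualizations, using the plotting helpers exported by mlr3fairness (exact arguments may differ between package versions):

```r
library(mlr3)
library(mlr3fairness)
library(ggplot2)

task <- tsk("adult_train")
rr   <- resample(task, lrn("classif.rpart", predict_type = "prob"),
                 rsmp("cv", folds = 3))

# Fairness vs. accuracy trade-off for one fairness measure
fairness_accuracy_tradeoff(rr, msr("fairness.fpr"))

# Compare several fairness metrics side by side
compare_metrics(rr, msrs(c("fairness.fpr", "fairness.fnr", "fairness.acc")))

# Predicted-probability density per protected group
fairness_prediction_density(rr)
```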

Interpretability

Finally, we use an external package to gain further insight into our model; here we demonstrate with the iml package. We first need to extract the learner from resample_result and wrap it in a Predictor object.
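
A sketch of the wrapping step; note that resample() must have been called with store_models = TRUE for the fitted learners to be available:

```r
library(mlr3)
library(mlr3fairness)
library(iml)

task <- tsk("adult_train")
rr   <- resample(task, lrn("classif.rpart", predict_type = "prob"),
                 rsmp("cv", folds = 3), store_models = TRUE)

# Take the learner fitted in the first resampling iteration
learner <- rr$learners[[1]]

# iml's Predictor wraps a model together with the data it should explain
data <- as.data.frame(task$data())
predictor <- Predictor$new(learner,
                           data = data[, task$feature_names],
                           y    = data[[task$target_names]])
```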

You can generate the variable importance plot like this:

Or generate the feature effects plot:
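
Sketches for both plots, repeating the Predictor setup so the snippet is self-contained; the choice of classification error ("ce") as the importance loss is an assumption:

```r
library(mlr3)
library(mlr3fairness)
library(iml)

task <- tsk("adult_train")
rr   <- resample(task, lrn("classif.rpart", predict_type = "prob"),
                 rsmp("cv", folds = 3), store_models = TRUE)

data <- as.data.frame(task$data())
predictor <- Predictor$new(rr$learners[[1]],
                           data = data[, task$feature_names],
                           y    = data[[task$target_names]])

# Permutation feature importance ("ce" = classification error)
importance <- FeatureImp$new(predictor, loss = "ce")
plot(importance)

# Per-feature effect curves
effects <- FeatureEffects$new(predictor)
plot(effects)
```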

For more details on interpretability, check the documentation of the iml package.