Getting Fairness Right: Towards a Toolbox for Practitioners
The risk that AI systems unintentionally embed and reproduce bias has attracted the attention of machine learning practitioners and society at large. As policy makers move to set standards for algorithms and AI techniques, the question of how to refine existing regulation so that decisions made by automated systems are fair and non-discriminatory has become critical. Meanwhile, researchers have demonstrated that the various existing fairness metrics are statistically mutually exclusive, and that the right choice depends largely on the use case and the definition of fairness.
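The mutual exclusivity of fairness metrics can be illustrated with a minimal sketch, not taken from the paper: when two groups have different base rates of the positive outcome, even a perfectly accurate classifier cannot satisfy demographic parity (equal selection rates) and equal opportunity (equal true positive rates) at the same time. The group sizes and base rates below are illustrative assumptions.

```python
# Toy illustration (assumed numbers): with unequal base rates across
# groups, demographic parity and equal opportunity conflict.

def selection_rate(preds):
    """Fraction of individuals receiving the positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of truly positive individuals who are selected."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# Group A: base rate 0.8 (8 of 10 individuals are truly positive).
labels_a = [1] * 8 + [0] * 2
# Group B: base rate 0.2 (2 of 10 individuals are truly positive).
labels_b = [1] * 2 + [0] * 8

# A perfectly accurate classifier predicts exactly the true labels.
preds_a, preds_b = list(labels_a), list(labels_b)

# Equal opportunity holds: the true positive rate is 1.0 in both groups.
assert true_positive_rate(preds_a, labels_a) == 1.0
assert true_positive_rate(preds_b, labels_b) == 1.0

# But demographic parity fails: selection rates are 0.8 vs. 0.2.
print(selection_rate(preds_a), selection_rate(preds_b))  # 0.8 0.2
```

Forcing equal selection rates here would require either denying qualified members of group A or selecting unqualified members of group B, breaking equal opportunity or accuracy; this is why the choice of metric must come from the use case rather than from the mathematics alone.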
Recognizing that the solutions for implementing fair AI are not purely mathematical but require stakeholder commitment to define the desired nature of fairness, this paper proposes a toolbox that helps practitioners ensure fair AI practices. Based on the nature of the application and the available training data, as well as on legal requirements and ethical, philosophical and cultural dimensions, the toolbox aims to identify the most appropriate fairness objective. This approach structures the complex landscape of fairness metrics and thereby makes the available options more accessible to non-technical people. In the proven absence of a silver-bullet solution for fair AI, the toolbox intends to produce the fairest AI systems possible with respect to their local context.
Ruf, B., Boutharouite, C., & Detyniecki, M. Getting Fairness Right: Towards a Toolbox for Practitioners.