How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness

Ana Valdivia*, Javier Sánchez-Monedero, Jorge Casillas

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

25 Citations (Scopus)

Abstract

Fair machine learning has focused on the development of equitable algorithms that address discrimination. Yet many of these fairness-aware approaches aim to obtain a unique solution to the problem, which leads to a poor understanding of the statistical limits of bias mitigation interventions. In this study, a novel methodology is presented to explore the tradeoff between accuracy and fairness in terms of a Pareto front. To this end, we propose a multiobjective framework that seeks to optimize both measures. The experimental framework is focused on logistic regression and decision tree classifiers, since they are well known in the machine learning community. We conclude experimentally that our method can optimize classifiers, making them fairer at a small cost in classification accuracy. We believe that our contribution will help stakeholders of sociotechnical systems to assess how far they can go in being fair and accurate, thus supporting enhanced decision making where machine learning is used.
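The abstract's core idea is a Pareto front over two objectives: classification accuracy and a group-fairness measure. As an illustrative sketch only (not the paper's actual multiobjective implementation), the following shows how the non-dominated set can be extracted from a pool of candidate classifiers; the fairness gap here is a hypothetical demographic-parity-style difference, and the candidate values are made up for the example.

```python
# Illustrative sketch: extracting the Pareto front between accuracy and a
# fairness gap (lower gap = fairer). Not the paper's actual method.
def pareto_front(points):
    """Return the Pareto-optimal subset of (accuracy, fairness_gap) pairs:
    a point is kept if no other point has accuracy at least as high AND a
    gap at least as low, with strict improvement in at least one."""
    front = []
    for acc, gap in points:
        dominated = any(
            a >= acc and g <= gap and (a > acc or g < gap)
            for a, g in points
        )
        if not dominated:
            front.append((acc, gap))
    return sorted(front)

# Hypothetical candidate classifiers, each scored as
# (accuracy, demographic-parity gap between groups):
candidates = [(0.90, 0.20), (0.88, 0.10), (0.85, 0.05),
              (0.84, 0.12), (0.80, 0.03)]
print(pareto_front(candidates))
# → [(0.8, 0.03), (0.85, 0.05), (0.88, 0.1), (0.9, 0.2)]
```

The point (0.84, 0.12) is dropped because (0.88, 0.10) is both more accurate and fairer; the surviving set traces exactly the accuracy/fairness boundary the paper sets out to characterize.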

Original language: English
Pages (from-to): 1619-1643
Number of pages: 25
Journal: International Journal of Intelligent Systems
Volume: 36
Issue number: 4
DOIs
Publication status: Published - Apr 2021

Keywords

  • algorithmic fairness
  • group fairness
  • multiobjective optimization

