Abstract
Fair machine learning has focused on developing equitable algorithms that address discrimination. Yet many of these fairness-aware approaches aim to obtain a single solution to the problem, which leads to a poor understanding of the statistical limits of bias-mitigation interventions. In this study, a novel methodology is presented for exploring the tradeoff between accuracy and fairness in terms of a Pareto front. To this end, we propose a multiobjective framework that optimizes both measures jointly. The experimental framework focuses on logistic regression and decision tree classifiers, since both are well known to the machine learning community. We conclude experimentally that our method can make classifiers fairer at a small cost in classification accuracy. We believe that our contribution will help stakeholders of sociotechnical systems assess how far they can go in being both fair and accurate, thereby supporting better-informed decision making where machine learning is used.
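To make the accuracy-fairness tradeoff concrete, the sketch below is a minimal illustration (not the paper's algorithm): it sweeps a hypothetical group-reweighting knob on a scikit-learn logistic regression over synthetic data, measures accuracy alongside a demographic parity gap (a common group-fairness measure), and keeps the non-dominated points, i.e., an approximate Pareto front. All data, the weight range, and the `demographic_parity_gap` helper are illustrative assumptions.

```python
# Illustrative sketch only: approximate an accuracy-fairness Pareto front
# by sweeping a reweighting parameter on a logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data (assumption): features x, binary protected attribute a,
# and labels y that are correlated with group membership.
n = 2000
a = rng.integers(0, 2, n)                       # protected group membership
x = rng.normal(size=(n, 3)) + 0.5 * a[:, None]  # features shifted by group
y = (x.sum(axis=1) + 0.8 * a + rng.normal(size=n) > 0).astype(int)

def demographic_parity_gap(y_pred, a):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

points = []
for w in np.linspace(0.2, 5.0, 25):  # tradeoff knob: upweight group 0
    weights = np.where(a == 0, w, 1.0)
    clf = LogisticRegression(max_iter=1000).fit(x, y, sample_weight=weights)
    y_hat = clf.predict(x)
    points.append((np.mean(y_hat == y), demographic_parity_gap(y_hat, a)))

# Keep non-dominated points: no other point is at least as accurate
# AND at least as fair (smaller gap).
pareto = [p for p in points
          if not any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)]
for acc, gap in sorted(pareto):
    print(f"accuracy={acc:.3f}  demographic-parity gap={gap:.3f}")
```

Printing the front rather than a single "best" model is the point: a stakeholder can inspect how much accuracy each increment of fairness costs and choose an operating point, which is the kind of assessment the paper's multiobjective framework supports.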
| Original language | English |
|---|---|
| Pages (from-to) | 1619-1643 |
| Number of pages | 25 |
| Journal | INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS |
| Volume | 36 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Apr 2021 |
Keywords
- algorithmic fairness
- group fairness
- multiobjective optimization