Intersectional Experiences of Unfair Treatment Caused by Automated Computational Systems

Research output: Contribution to journal › Article › peer-review


Abstract

This paper reports on empirical work conducted to study perceptions of unfair treatment caused by automated computational systems. While the pervasiveness of algorithmic bias has been widely acknowledged, and perceptions of fairness are commonly studied in Human-Computer Interaction, there is little research on how unfair treatment by automated computational systems is experienced by users from disadvantaged and marginalised backgrounds. There is a need for more diversification in terms of the investigated users, domains, and tasks, and regarding the strategies that users employ to reduce harm. To unpack these issues, we ran a prescreened survey of 663 participants, oversampling those with at-risk characteristics. We collected the occurrences and types of conflicts involving unfair and discriminatory treatment by such systems, as well as the actions taken to resolve these situations. Drawing on intersectional research, we combine qualitative and quantitative approaches to highlight the nuances around power and privilege in perceptions of automated computational systems. Among our participants, we discuss experiences of computational essentialism, attribute-based exclusion, and expected harm. We derive suggestions to address these perceptions of unfairness as they occur.
Original language: English
Journal: Proceedings of the ACM on Human-Computer Interaction - CSCW
Publication status: Accepted/In press - 22 May 2022

Keywords

  • Algorithmic fairness
  • Conflicts
  • Artificial intelligence
  • Explainability
  • Trust
  • Governance
  • Intersectionality

