Manifestations of xenophobia in AI systems

Nenad Tomasev*, Jonathan Leader Maynard, Iason Gabriel

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate the safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the potential interplay between AI and xenophobia in social media and recommendation systems, healthcare, immigration, and employment, as well as biases in large pre-trained models. These analyses inform our recommendations towards an inclusive, xenophilic design of future AI systems.

Original language: English
Journal: AI and Society
DOIs
Publication status: Published - 21 Mar 2024

Keywords

  • Algorithmic fairness
  • Artificial intelligence
  • Ethics
  • Machine learning
  • Marginalised groups
  • Xenophobia
