Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization

Rita Borgo, Peta Masters, Emily Wall, Laura Matzen, Mennatallah El-Assady, Helia Hosseinpour, Alex Endert, Polo Chau, Adam Perer, Harald Schupp, Hendrik Strobelt, Lace Padilla

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Many papers make claims about specific visualization techniques that are said to enhance or calibrate trust in AI systems. But a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with “knobs”. Turning a knob too far in one direction may result in under-trust; too far in the other, in over-trust; or, turned up further still, in a confusing distortion. While the designs, or so-called “knobs”, are not inherently evil, they can be misused or used in an adversarial context and thereby manipulated to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as “trust junk.” From a review of 65 papers, we identify nine commonly made claims about trust calibration. We synthesize them into a framework of knobs that can be used for good or “evil,” and distill our findings into observed pitfalls for the responsible design of human-AI systems.
Original language: English
Title of host publication: IEEE PacificVis conference proceedings
Publisher: IEEE
Publication status: Accepted/In press - 2024

Publication series

Name: PacificVis conference proceedings

Keywords

  • Artificial Intelligence (AI)
  • Human Centred AI
  • HCAI
  • Human computer interaction (HCI)
  • Human Centred Computing
  • Computing Methodologies
