Abstract
Many papers make claims about specific visualization techniques that
are said to enhance or calibrate trust in AI systems. But a design
choice that enhances trust in some cases appears to damage it in others.
In this paper, we explore this inherent duality through an analogy
with “knobs”. Turning a knob too far in one direction may result in
under-trust; too far in the other, in over-trust; and, turned up further
still, in a confusing distortion. While these designs, or “knobs,” are not
inherently evil, they can be misused, or deployed in an adversarial
context, to mislead users or promote unwarranted
levels of trust in AI systems. When a visualization that has no
meaningful connection with the underlying model or data is employed to
enhance trust, we refer to the result as “trust junk.” From a review
of 65 papers, we identify nine commonly made claims about trust
calibration. We synthesize them into a framework of knobs that can be
used for good or “evil,” and distill our findings into observed pitfalls
for the responsible design of human-AI systems.
Original language | English
---|---
Title of host publication | IEEE PacificVis conference proceedings
Publisher | IEEE
Publication status | Accepted/In press - 2024
Publication series | PacificVis conference proceedings
Keywords
- Artificial Intelligence (AI)
- Human Centred AI
- HCAI
- Human-computer interaction (HCI)
- Human Centred Computing
- Computing Methodologies