Abstract
People are increasingly interacting with machines embedded with intelligent decision aids, sometimes in high-stakes environments. When a human user comes into contact with a decision-making agent for the first time, it is likely that the agent's behaviour or decisions do not precisely align with the human user's goals. This challenge, known as the goal alignment problem, has been recognised as a critical concern for human-machine teams. Prior work has focused on the effect of automation's behavioural properties, such as predictability and reliability, on trust in human-machine interaction scenarios. However, little is known about situations where automation's capabilities are misaligned with humans' expectations, or about the impact of such misalignment on trust. Even less is known about the effect of environmental factors on trust. We study the relationship between intervention behaviours and trust in a simulated navigation task in which the human user collaborates with an agent whose goals are misaligned with their own. We evaluate trust quantitatively, using intervention frequency as a behavioural measure, and qualitatively, using self-reports. By advancing the understanding and measurement of trust in collaborative settings, this research contributes to the development of trustworthy and symbiotic human-AI systems.
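The paper itself includes no code; purely as an illustrative sketch, the Python below shows one way intervention frequency, the behavioural trust measure named in the abstract, could be computed from a log of interaction steps. Every name here (`InteractionStep`, `intervention_frequency`, the action strings) is a hypothetical assumption for this sketch, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class InteractionStep:
    """One step of a simulated navigation task (hypothetical schema)."""
    agent_action: str
    human_override: bool  # True if the user intervened on this step

def intervention_frequency(steps: list[InteractionStep]) -> float:
    """Fraction of steps on which the human overrode the agent.

    Under the abstract's framing, a higher intervention frequency is
    read as lower behavioural trust in the agent.
    """
    if not steps:
        return 0.0
    overrides = sum(step.human_override for step in steps)
    return overrides / len(steps)

# Example: a user who intervenes on 2 of 5 steps.
log = [
    InteractionStep("move_north", False),
    InteractionStep("move_east", True),
    InteractionStep("move_east", False),
    InteractionStep("move_south", True),
    InteractionStep("move_west", False),
]
print(intervention_frequency(log))  # 0.4
```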
Original language | English |
---|---|
Number of pages | 12 |
Publication status | Accepted/In press - 2024 |
Event | The third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden. Duration: 10 Jun 2024 → 14 Jun 2024. https://hhai-conference.org/2024/ |
Conference
Conference | The third International Conference on Hybrid Human-Artificial Intelligence |
---|---|
Country/Territory | Sweden |
Period | 10/06/2024 → 14/06/2024 |
Internet address | https://hhai-conference.org/2024/ |
Keywords
- interventions
- goal alignment
- trust
- human-agent interaction
- uncertainty