A Survey of Evaluation Methods and Metrics for Explanations in Human–Robot Interaction (HRI)

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

The crucial role of explanations in making AI safe and trustworthy has been recognized not only by the machine learning community but also by roboticists and human–robot interaction researchers. A robot that can explain its actions is expected to be perceived more positively by users, to be more reliable, and to appear more trustworthy. In collaborative scenarios, explanations are often expected even to improve the team's performance. To test whether a developed explanation-related capability delivers on these promises, it must be evaluated rigorously. Because many aspects of explanations can be evaluated, and their importance varies across circumstances, a plethora of evaluation methods is available. In this survey, we provide a comprehensive overview of such methods while discussing features and considerations unique to explanations given during human–robot interactions.
Original language: English
Title of host publication: Explainable Robotics Workshop at IEEE International Conference on Robotics and Automation (ICRA) 2023
Number of pages: 7
Publication status: Published - 9 May 2023

Keywords

  • Robotics
  • Explainable Agents
  • Explainable AI
  • XAI
  • Evaluation
  • Survey
