Explaining Reputation Assessments

Ingrid Nunes, Phillip Taylor, Lina Barakat, Nathan Griffiths, Simon Miles

Research output: Contribution to journal › Article › peer-review


Abstract

Reputation is crucial to enabling human or software agents to select among alternative providers. Although several effective reputation assessment methods exist, they typically distil reputation into a numerical representation, with no accompanying explanation of the rationale behind the assessment. Such explanations would allow users or clients to make a richer assessment of providers, and to tailor selection to their preferences and current context. In this paper, we propose an approach to explaining the rationale behind assessments from quantitative reputation models, by generating arguments that are combined to form explanations. Our approach adapts, extends and combines existing approaches for explaining decisions made using multi-attribute decision models in the context of reputation. We present example argument templates, and describe how to select their parameters using explanation algorithms. We evaluated our proposal by means of a user study that followed an existing protocol. Our results provide evidence that, although explanations convey only a subset of the information contained in trust scores, they allow users to evaluate recommended providers as effectively as the scores themselves. Moreover, when explanation arguments reveal information that is implicit in the model, they are less persuasive than scores.
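
To make the kind of pipeline the abstract describes more concrete, below is a minimal, hypothetical Python sketch: it aggregates per-attribute ratings into a weighted trust score and instantiates a simple argument template for the top-ranked provider. The attribute names, weights, and template wording are illustrative assumptions, not the models or templates from the paper.

    # Hypothetical sketch of template-based explanation for a weighted
    # multi-attribute reputation score. All names, weights, and templates
    # here are illustrative assumptions, not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        ratings: dict  # attribute -> average rating in [0, 1]

    # Assumed client preference weights over attributes.
    WEIGHTS = {"quality": 0.5, "timeliness": 0.3, "cost": 0.2}

    def trust_score(p: Provider) -> float:
        """Aggregate attribute ratings into a single weighted score."""
        return sum(WEIGHTS[a] * p.ratings[a] for a in WEIGHTS)

    def explain(best: Provider, others: list) -> str:
        """Fill a simple argument template for the top-ranked provider."""
        # The attribute contributing most to the winner's score.
        decisive = max(WEIGHTS, key=lambda a: WEIGHTS[a] * best.ratings[a])
        if all(best.ratings[decisive] >= o.ratings[decisive] for o in others):
            return (f"{best.name} is recommended because it is rated highest "
                    f"on {decisive}, the attribute that contributes most to "
                    f"its trust score.")
        return f"{best.name} is recommended because it has the best overall score."

    providers = [
        Provider("A", {"quality": 0.9, "timeliness": 0.7, "cost": 0.6}),
        Provider("B", {"quality": 0.6, "timeliness": 0.9, "cost": 0.8}),
    ]
    best = max(providers, key=trust_score)
    print(explain(best, [p for p in providers if p is not best]))

Running this prints a dominance-style argument for provider A, whose weighted score (0.78) exceeds B's (0.73); the point is only to show how a numeric assessment can be paired with a template-generated rationale.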
Original language: English
Journal: International Journal of Human-Computer Studies
Early online date: 3 Nov 2018
Publication status: E-pub ahead of print - 3 Nov 2018

Keywords

  • Reputation
  • Trust
  • Explanation
  • Arguments
  • User study
