Value-Aligned and Explainable Agents for Collective Decision Making: Privacy Application: Doctoral Consortium

Research output: Conference paper in conference proceedings (peer-reviewed)


Abstract

Multiuser privacy is reported to cause concern among users of online services, such as social networks, that do not support collective privacy management. In this research, informed by previous work and empirical studies in privacy, artificial intelligence, and social science, we model a new multi-agent architecture that supports users in the resolution of multiuser privacy conflicts. We design agents that are value-aligned, i.e., able to behave according to their users' moral preferences, and explainable, i.e., able to justify their outputs. We will validate the efficacy of our model through user studies, which will also gather further insights into the usability of automated explanations.
Original language: English
Title of host publication: Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020)
Publication status: Accepted/In press - Mar 2020

Keywords

  • Multiuser Privacy
  • Explainable Agents
  • Morally-aligned Agents
