Abstract
Multiuser privacy is reported to cause concern among users of online services, such as social networks, that do not support collective privacy management. In this research, informed by previous work and empirical studies in privacy, artificial intelligence and social science, we model a new multi-agent architecture that will support users in resolving multiuser privacy conflicts. We design agents that are value-aligned, i.e. able to behave according to their users' moral preferences, and explainable, i.e. able to justify their outputs. We will validate the efficacy of our model through user studies, which will also gather further insights into the usability of automated explanations.
Original language | English
---|---
Title of host publication | Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020)
Publication status | Accepted/In press - Mar 2020
Keywords
- Multiuser Privacy
- Explainable Agents
- Morally-aligned Agents