
Abstract

Agreement, cooperation and trust would be straightforward if deception never occurred in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another, or indeed humans? If they do, how can we detect it? Detecting machine deception arguably requires a model of how machines may deceive and of how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty, and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how these concepts can be applied to build a model of computational deception that takes ToM into account. To show how agents use ToM to deceive, we define an epistemic agent mechanism based on BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets, and we explain the steps by which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT to analyse deceptive interactions and (ii) implements such a model.
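The abstract's idea of a BDI-like agent that uses Theory of Mind to deceive can be illustrated with a small sketch. The following Python fragment is purely hypothetical: the agent, belief-set and target names are invented for illustration, and the paper's actual model (grounded in IMT2, IDT and an AOPL implementation) is considerably richer. It only shows the core pattern of nested beliefs: a deceiver holds beliefs about the world and separate beliefs about what its target believes, and asserts something it believes to be false when it expects the target to accept it.

```python
# Hypothetical, minimal sketch of a deceiver with a simple Theory of Mind.
# Not the authors' implementation; names and logic are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)           # first-order beliefs about the world
    beliefs_about: dict = field(default_factory=dict)   # ToM: target name -> beliefs ascribed to that target

    def believes(self, prop: str) -> bool:
        return prop in self.beliefs

    def thinks_target_believes(self, target: str, prop: str) -> bool:
        return prop in self.beliefs_about.get(target, set())


def choose_deceptive_assertion(deceiver: Agent, target: str, goal_prop: str):
    """Return a proposition the deceiver believes false but expects the
    target not to already reject, i.e. a candidate deceptive assertion."""
    negation = "not " + goal_prop
    if (not deceiver.believes(goal_prop)                          # deceiver does not hold it true
            and not deceiver.thinks_target_believes(target, negation)):
        return goal_prop                                          # safe to assert deceptively
    return None                                                   # no viable deception found


if __name__ == "__main__":
    seller = Agent("seller",
                   beliefs={"not car_is_reliable"},
                   beliefs_about={"buyer": {"car_is_red"}})
    print(choose_deceptive_assertion(seller, "buyer", "car_is_reliable"))
    # -> "car_is_reliable": the seller asserts what it believes to be false,
    #    because its model of the buyer contains no contrary belief.
```

In an AOPL such as an AgentSpeak-style language, the same pattern would typically be expressed as plans whose contexts query nested belief annotations rather than Python sets; the sketch above only conveys the reasoning structure.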
Original language: English
Pages (from-to): 287-302
Number of pages: 16
Journal: AI Communications
Volume: 32
Issue number: 4
Early online date: 11 Oct 2019
DOIs
Publication status: Published - 2019

Keywords

  • agent-oriented programming languages
  • deception
  • machine deception
  • theory of mind

