Lies, Bullshit, and Deception in Agent-Oriented Programming Languages

Alison R. Panisson, Stefan Sarkadi, Peter John McBurney, Simon Dominic Parsons, Rafael H. Bordini

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

14 Citations (Scopus)
114 Downloads (Pure)

Abstract

It is reasonable to assume that in the next few decades, intelligent
machines might become much more proficient at socialising. This
implies that the AI community will face the challenges of identifying,
understanding, and dealing with the different types of social behaviours
these intelligent machines could exhibit. Given these potential challenges,
in this paper we aim to model three of the most studied strategic social
behaviours that could be adopted by autonomous and malicious software
agents. These are dishonest behaviours such as lying, bullshitting, and
deceiving, which autonomous agents might exhibit by taking advantage of
their own reasoning and communicative capabilities. In contrast to other
studies on the dishonest behaviours of autonomous agents, we use an
agent-oriented programming language to model dishonest agents' attitudes
and to simulate social interactions between agents. Through simulation, we
are able to study and propose mechanisms to identify, and later to deal
with, such dishonest behaviours in software agents.
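The abstract distinguishes three dishonest speech acts that differ in the speaker's epistemic state and intent. A minimal sketch of that distinction, in Python rather than the agent-oriented language the paper actually uses, and following the standard philosophical definitions (a liar asserts what it believes false; a bullshitter asserts with no regard for truth; a deceiver intends the hearer to adopt a belief the speaker holds false) — the `Agent` class and its `tell` method are illustrative assumptions, not the authors' model:

```python
# Hypothetical sketch, NOT the paper's implementation: classifying a
# communicative act by the speaker's belief state and intent.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # proposition -> believed truth value; absence = no belief either way
    beliefs: dict = field(default_factory=dict)

    def tell(self, hearer, proposition, claimed_value, intends_false_belief=False):
        """Communicate a claim to a credulous hearer; return a label for the act."""
        believed = self.beliefs.get(proposition)
        if believed is None:
            act = "bullshit"          # asserting with no regard for the truth
        elif claimed_value != believed:
            act = "lie"               # asserting the opposite of one's belief
        else:
            act = "sincere"
        # Deception concerns the intended effect on the hearer, so it can
        # occur even alongside a literally sincere assertion.
        if intends_false_belief:
            act = "deception" if act == "sincere" else act + "+deception"
        hearer.beliefs[proposition] = claimed_value  # credulous belief update
        return act


seller = Agent("seller", {"price_is_fair": False})
buyer = Agent("buyer")
print(seller.tell(buyer, "price_is_fair", True))  # -> lie
```

The point of the sketch is that the three behaviours are not interchangeable: they require different detection mechanisms, since a bullshitter's claims correlate with nothing in its belief base, while a liar's claims anti-correlate with it.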
Original language: English
Title of host publication: Proceedings of the 20th International Trust Workshop
Subtitle of host publication: co-located with AAMAS/IJCAI/ECAI/ICML (AAMAS/IJCAI/ECAI/ICML 2018)
Place of publication: Stockholm, Sweden, July 14, 2018
Publisher: CEUR-WS
Pages: 50-61
Number of pages: 12
Volume: 2154
Publication status: Published - 2018

Keywords

  • Deception
  • Lying
  • Agent-based modelling

