Embedding ethical principles in collective decision support systems

Joshua Greene, Francesca Rossi, John Tasioulas, Kristen Brent Venable, Brian Williams

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

51 Citations (Scopus)

Abstract

The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus, hybrid collective decision-making systems will be in great demand. In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to where they act, but always aligned with humans'), as well as safety constraints. In fact, humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. These principles would also make it easier for machines to determine their actions and to explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, whether by consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles.

Original language: English
Title of host publication: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Publisher: AAAI Press
Pages: 4147-4151
Number of pages: 5
ISBN (Print): 9781577357605
Publication status: Published - 2016
Event: 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix, United States
Duration: 12 Feb 2016 - 17 Feb 2016

