Deceptive AI Ecosystems: The Case of ChatGPT

Nicole Zhan, Yifan Xu, Stefan Sarkadi*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

ChatGPT, an AI chatbot, has gained popularity for its capability to generate human-like responses. However, this capability carries several risks, most notably deceptive behaviour such as offering users misleading or fabricated information, which in turn raises ethical issues. To better understand the impact of ChatGPT on our social, cultural, economic, and political interactions, it is crucial to investigate how ChatGPT operates in the real world, where various societal pressures influence its development and deployment. This paper emphasizes the need to study ChatGPT "in the wild", as part of the ecosystem it is embedded in, with a strong focus on user involvement. We examine the ethical challenges stemming from ChatGPT's deceptive human-like interactions and propose a roadmap for developing more transparent and trustworthy chatbots. Central to our approach is the importance of proactive risk assessment and user participation in shaping the future of chatbot technology.
Original language: English
Title of host publication: ACM conference on Conversational User Interfaces
Publisher: ACM
Number of pages: 6
Publication status: Published - 19 Jul 2023

Keywords

  • artificial intelligence
  • conversational agents
  • deceptive AI
  • ChatGPT
