Should Agentic Conversational AI Change How We Think About Ethics? Characterising an Interactional Ethics Centred on Respect

Abstract

With the growing popularity of conversational agents based on large language models (LLMs), we need to ensure their behaviour is ethical and appropriate. Work in this area largely centres on the ‘HHH’ criteria: making outputs more helpful and honest, and avoiding harmful (biased, toxic, or inaccurate) statements. Whilst this semantic focus is useful when viewing LLM agents as mere mediums or output-generating systems, it fails to account for pragmatic factors that can make the same speech act seem more or less tactless or inconsiderate in different social situations. With the push towards agentic AI, wherein systems become increasingly proactive in pursuing goals and performing actions in the world, considering the pragmatics of interaction becomes essential. We propose an interactional approach to ethics that is centred on relational and situational factors. We explore what it means for a system, as a social actor, to treat an individual respectfully in a (series of) interaction(s). Our work anticipates a set of largely unexplored risks at the level of situated social interaction, and offers practical suggestions to help agentic LLM technologies treat people well.

Publication
arXiv:2401.09082v2 [cs.CL]
Lize Alberts
Doctoral Candidate | Research Fellow

DPhil candidate in Computer Science at the University of Oxford | Research Assistant at the Leverhulme Centre for the Future of Intelligence | Research Fellow at Stellenbosch University’s Unit for the Ethics of Technology