‘I finally felt I had the tools to control these urges’: Empowering Students to Achieve Their Device Use Goals With the Reduce Digital Distraction Workshop

Abstract

With the growing popularity of dialogue agents based on large language models (LLMs), urgent attention has turned to ways of ensuring their behaviour is ethical and appropriate. These concerns are largely interpreted in terms of the ‘HHH’ criteria: making outputs more helpful and honest, and avoiding harmful (biased, toxic, or inaccurate) statements. Whilst this semantic focus is useful when viewing LLM agents as mere mediums for information, it fails to account for pragmatic factors that can make the same utterance seem more or less offensive or tactless in different social situations. We propose an approach to ethics that is more centred on relational and situational factors, exploring what it means for a system, as a social actor, to treat an individual respectfully in a (series of) interaction(s). Our work anticipates a set of largely unexplored risks at the level of situated interaction and offers practical suggestions to help LLM technologies behave as good social actors and treat people respectfully.

Publication
To appear in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems. Honolulu, Hawai’i, USA, 11–16 May 2024.
Lize Alberts
Doctoral Candidate, Research Fellow

DPhil candidate in Computer Science at the University of Oxford and Research Fellow at Stellenbosch University’s Unit for the Ethics of Technology.