Critiquing the harmless, honest, helpful ('HHH') framework for LLM alignment, we propose an interactional ethics that is more attentive to pragmatic factors, exploring what it means for a system, as a social actor, to treat an individual respectfully across a single interaction or a series of them. Our work anticipates a set of largely unexplored risks arising at the level of situated interaction, and offers practical suggestions to help LLM technologies behave as good social actors and treat people respectfully.