Until now, our understanding of what it means for generative AI such as large language models (LLMs) to behave ethically has mainly considered semantics (e.g., ensuring outputs do not contain biased, inaccurate, harmful, offensive or toxic language). However, as AI systems start behaving more like social actors, speaking directly to people in natural language and becoming more proactive in doing so, we believe that the pragmatics of situated social interaction deserves more attention. That is, beyond asking what makes for helpful or harmful language in the abstract, we need to consider what it actually means to treat a person well in an interaction or ongoing relationship. Rather than merely avoiding universal ‘harms’ such as being sexist or misleading, we propose an interactional ethics centred on duties of respect, one that considers how situational, relational and individual factors can make the same speech act seem more or less rude or inconsiderate in different contexts.