This paper explores the moral limits of relationships between users and advanced AI assistants, specifically which features of such relationships render them appropriate or inappropriate. We first consider a series of values, including benefit, flourishing, autonomy and care, that are characteristic of appropriate human interpersonal relationships. We then use these values to guide an analysis of which features of user–AI assistant relationships are liable to give rise to harm, and we discuss a series of risks and mitigations for such relationships. The risks we explore are: (1) causing direct emotional and physical harm to users; (2) limiting opportunities for users' personal development; (3) exploiting users' emotional dependence; and (4) generating material dependencies.