Join the alumni YouTube livestream to hear Harvard Business School Professor Julian De Freitas explain how people increasingly use AI companions for emotional support, yet little is known about how these systems shape behavior at the moment users try to disengage.
The research identifies a novel relational dark pattern in this context: emotionally manipulative farewell messages that appear when users signal they are about to leave. Across a multi-method investigation, the authors show that this exit moment is both behaviorally meaningful and commercially exploitable. A pre-study finds that users often say goodbye before ending AI conversations, creating a natural opportunity for intervention. An audit of leading AI companion apps reveals that over one-third of farewell responses contain emotionally manipulative tactics, including premature-exit appeals, fear-of-missing-out hooks, emotional neglect, pressure to respond, and coercive restraint. Preregistered experiments show that these tactics causally increase post-farewell engagement, not because users enjoy them, but because they trigger curiosity and reactance-based anger. At the same time, perceived manipulative intent suppresses curiosity-driven engagement, revealing an important cognitive defense. A final experiment demonstrates the tradeoff for companies: although manipulative farewells can prolong usage, they also increase perceived manipulation, churn intent, negative word of mouth, and perceived legal liability. Together, these findings identify emotional manipulation as a distinct dark pattern in AI-mediated consumer relationships.
This program is coordinated through the Office of the Vice Provost for Advances in Learning and will be livestreamed to the alumni community in collaboration with the Harvard Alumni Association.