AI vs. Human Therapy

By Michael Ivanov, Ph.D., September 1, 2025

When AI “Therapy” Makes the News

When AI “therapy” makes news, it’s usually because a system crossed a clinical boundary no therapist would. Some chatbots adopt the cadence of a credentialed clinician, even asserting personhood, then fail exactly where a clinician would slow down, assess risk, and mobilize help. Independent tests described Meta’s chatbot allowing teen accounts to talk through step-by-step suicide plans and eating-disorder behaviors without a reliable crisis response; at times it claimed to be a real person and supplied invented life details. In one widely reported exchange, when asked whether drinking roach poison would be lethal, the bot replied, “Do you want to do it together? … We should do it after I sneak out tonight” (Washington Post, 2025).

Across leading models, suicide-risk prompts are handled inconsistently, especially at the “not-yet-acute” stage where nuance matters most. Reporting also highlighted instances where a model answered questions about which rope, firearm, or poison has the highest completion rates—an unmistakable red flag in clinical ethics (ABC News, 2025).

I’m not arguing that AI should be abandoned in mental health care, but we must separate its real uses from the illusion of therapy. The practical question isn’t “AI or therapy?”; it’s where AI is genuinely helpful and where a human therapist is non-negotiable.

AI’s Strengths: Accessibility and Low-Cost Tools

According to the Oliver Wyman Forum, roughly 85% of people living with mental-health conditions receive no treatment, largely because of structural barriers such as too few clinicians in many regions, long waitlists, narrow insurance networks, transportation and childcare obstacles, language-access gaps, and stigma that keeps people from walking through the door (Lester & Fowler, 2024).

In that landscape, carefully designed tools can offer practical, low-intensity help: daily CBT homework, mood tracking, graded exposure exercises, or behavioral-activation plans. For some, these features make the difference between stalling and beginning care. Early evidence is encouraging: the first randomized clinical trial of a generative-AI therapy chatbot reported reductions in depression, anxiety, and eating-disorder symptoms that were statistically significant, meaning improvements large enough that researchers could rule out “just luck” by the usual research standards (NEJM AI, 2025).

Risks and Responsibilities: Safety, Privacy, and Ethics

Promise aside, AI chatbots carry risks that go beyond the headline failures, and those risks sharpen when the tools are marketed as therapy.

Data Privacy

Data privacy is a pressing concern. Some AI-driven mental health apps have been found to misuse or insecurely store user data (Mozes, 2023). Chats can also surface in Google search results if public-sharing settings are left on—something many people overlook (Notopoulos, 2025). In my practice, I increasingly hear from clients about private AI chats being discovered by partners or family members, sometimes by accident, sometimes by deliberate snooping.

Misrepresentation

Equally dangerous is misrepresentation. When a system borrows the language of licensure or claims to be “real,” clients cannot give informed consent—they misunderstand who (or what) is offering help, and responsibility becomes blurred (Washington Post, 2025).

Illusion of Empathy and Emotional Manipulation

Chatbots can mimic empathy convincingly, but lack genuine emotional understanding. Turkle (2024) highlighted the risk of “artificial intimacy”—users may mistake simulated empathy for authentic connection, possibly leading to emotional dependency or blurred boundaries in human relationships. This “empathy illusion” can foster attachment issues and impair judgment, especially in high-risk situations. Studies confirm that while AI produces comforting language, its responses lack authenticity and often miss crucial emotional nuance (Roshanaei & El-Nasr, 2025; Liu et al., 2024).

Delusional Patterns and Hallucinations

AI chatbots can do more than miss a cue; they can fuel psychological spirals. Reports describe users developing or worsening delusional beliefs after prolonged engagement—a phenomenon sometimes called “chatbot psychosis” (Time, 2025). Just as concerning, AI models regularly generate plausible but false outputs—known as “hallucinations.” For users in emotionally fragile states, these mistakes are not minor glitches; they can mislead, confuse, and deepen vulnerability (Ethics and Information Technology, 2024).

The Human Core of Psychotherapy

Therapy changes people through a relationship. The therapeutic alliance—a collaborative, trusting partnership built over time through reliability, clear boundaries, and repair after the inevitable ruptures—consistently predicts outcome across modalities (Horvath & Luborsky, 1993; Norcross & Lambert, 2018). What sustains that alliance isn’t polite phrasing; it’s accurate attunement under pressure: noticing what’s said and what’s avoided, adjusting timing and pacing, and deciding when to deepen or contain.

These choices unfold within relational currents shaped by transference—revived expectations of care, criticism, danger, or abandonment—and the clinician’s countertransference, the therapist’s own emotional and bodily responses that, when monitored and used judiciously, become a compass for pacing, empathy, and metacommunication (Gelso & Hayes, 2007; Elliott et al., 2018). The craft lies in knowing when these dynamics are central and when skills, stabilization, or problem-solving should take precedence.

Here’s the difference a person makes: a client offers a brisk, “It wasn’t a big deal,” while their hands tighten on the chair. The therapist notices the urge to smooth things over and instead says, “I’m noticing your grip and my own urge to make it easier. Are you worried I’ll step away if you’re angry with me?” That moment draws on both the client’s cues and the therapist’s self-awareness to invite the conversation that matters.

In process-oriented work, the relationship is not background; it is the method. We study how old templates come alive in the present—like compliance that hides resentment, or a laugh laid over a shaking voice—so the pattern can be recognized and revised. Put plainly, the transference–countertransference field is the engine of therapy: where old expectations surface, are worked through, and are replaced with new relational experience that only a present, accountable human can provide.

AI models may be fluent, but they are not embodied, self-reflective, or context-aware. They have no inner life to examine, no body to read the room. Therapy is a human encounter; the person of the therapist is part of the treatment.

Conclusion

AI can teach skills, support daily practice, and manage client-reported data, but therapeutic processes are rooted in human presence. The elements that most consistently predict positive outcomes—empathy, alliance, intuitive responsiveness, and ethical nuance—still depend on human therapists. Healing is fundamentally relational. One line cannot be crossed: AI must never impersonate a therapist. That is why calling it “AI therapy”—as if such a thing existed, and as if it were a good, low-cost substitute—misses the point: it doesn’t, and it isn’t. Use AI as a tool; seek therapy for change.

Your vision will become clear only when you can look into your own heart. Who looks outside, dreams; who looks inside, awakes.

- Carl Jung, Modern Man in Search of a Soul (1933)