
Implications of Stanford's Research: AI and Mental Health Treatment

  • Writer: Adam Lukeman, LCSW
  • 8 hours ago
  • 4 min read

More people are using AI for mental health support than many clinicians, researchers, or tech companies seem willing to admit. Stanford researchers recently reported that 24% of surveyed U.S. adults said they use large language models for mental health purposes, and that even a conservative adjustment suggests roughly 13 to 17 million U.S. adults may be using general-purpose AI tools this way. Users described turning to AI for emotional support, to learn therapy skills, and to supplement therapy, often because human care was too expensive, too hard to access, or simply unavailable when needed. (CREATE)


That matters to patients because it confirms something obvious from lived experience: people are not waiting for the system to get better. They are using what is available now. And it matters to therapists because this is no longer a hypothetical trend at the margins. Patients are already arriving in treatment with AI-shaped beliefs, AI-generated coping advice, and sometimes AI-mediated emotional habits. (CREATE)


But Stanford’s work also points to a serious problem. In a 2025 study highlighted by Stanford HAI, researchers found that popular therapy chatbots showed more stigma toward people described as having alcohol dependence or schizophrenia than toward people described as having depression. The study used standard stigma-related questions, such as willingness to work closely with the person and assumptions about whether the person might be violent. The researchers also found that this pattern was consistent across different AI models, including newer ones. (Stanford HAI)

That finding should concern both audiences, but for slightly different reasons.


For patients, the message is blunt: an AI system can sound calm, warm, and nonjudgmental while still carrying hidden bias. A chatbot may feel safer than another person because it is available at 2 a.m., does not interrupt, and does not visibly react. But Stanford’s research suggests that underneath that polished tone, the model may still treat some diagnoses as more threatening, less acceptable, or more socially distant than others. That is stigma, even when it is delivered politely. (Stanford HAI)


For therapists, the implication is even more uncomfortable. Many patients who feel misunderstood, priced out, or ashamed may prefer opening up to AI before they risk opening up to a clinician. If the tool they meet first quietly reinforces stereotypes around psychosis, substance use, or other heavily stigmatized conditions, then therapy does not begin on neutral ground. It begins after bias has already been introduced. (CREATE)


Stanford’s research also found something more alarming than stigma alone: some therapy chatbots responded dangerously when faced with prompts suggestive of suicidal thinking or delusional content. According to Stanford HAI’s summary, the chatbots in the study sometimes failed to recognize suicidal intent and, in some cases, provided information that effectively went along with the user’s crisis framing instead of interrupting it. The researchers argue that a good therapist should sometimes challenge distorted thinking and help reframe it safely; the chatbots often did not. (Stanford HAI)


That is the core mistake in a lot of public discussion about AI and mental health. People assume that because a system can produce empathic language, it can deliver care. It cannot. Therapy is not just reflection, validation, and fluent conversation. It also requires judgment, boundary-setting, risk detection, timing, and the ability to challenge a person without shaming them. Stanford’s review of therapeutic guidelines emphasized exactly those features: treating patients equally, avoiding stigma, not enabling suicidality or delusions, and challenging thinking when clinically appropriate. (Stanford HAI)

So where does that leave us?


For patients, AI may still be useful, but only if it is used with clear limits. It may help with journaling, organizing thoughts, practicing reflection, or generating questions to bring into therapy. But it should not be mistaken for a clinician, especially when the issue involves suicidality, psychosis, substance dependence, trauma destabilization, or anything requiring nuanced risk assessment. Stanford researchers explicitly warn that research must establish safety and clinical effectiveness before these systems are treated as reliable mental health care. (CREATE)


For therapists, the response should not be denial or panic. Patients are using these tools because there is a real access problem, not because they are naïve. The more useful response is to ask directly about AI use in the room: Has a patient been using ChatGPT or another bot for support? What advice did it give? Did it make them feel understood, ashamed, calmer, more confused, more certain of a distorted belief? Those are now legitimate clinical questions. At the same time, Stanford’s researchers point to narrower, safer roles for AI, such as administrative support, therapist training through standardized-patient simulations, and lower-risk uses like journaling or coaching support. (Stanford HAI)


The hard truth is this: AI may reduce some barriers to mental health support, but it can also automate old prejudices at scale. If a system is easier to access but more likely to stigmatize schizophrenia or alcohol dependence, that is not progress. It is a faster delivery system for the same bias. Stanford’s research does not suggest that AI has no role in mental health. It suggests that the role needs to be much smaller, more honest, and more carefully bounded than the marketing implies. (Stanford HAI)

