AI, Chatbots and Mental Health: What Families and Communities Need to Know
As artificial intelligence integrates into mental healthcare, AFT members should consider how to use it and why guardrails and human connection matter.
February 20, 2026
As artificial intelligence tools move from a novelty to part of our everyday routine, many families, educators and healthcare professionals are beginning to think about a new use case: What role should AI play in mental health?
AI chatbots have become an increasingly popular option. These large language models simulate human conversation by generating responses to text or voice prompts. Many people are already familiar with chatbots as tools for academic or workplace tasks, such as drafting emails, summarizing articles or assisting with research.
However, AI tools are not confined to classrooms and offices. They are already being used across healthcare—from documentation support and symptom screening to digital therapeutic platforms. In recent years, mental health has become one of the fastest-growing areas of experimentation, driven in part by significant unmet needs and persistent gaps in access to care.
As AI systems increasingly enter emotionally sensitive spaces, the conversation shifts from efficiency and convenience to questions about privacy, safety, effectiveness and the appropriate boundaries of technology.
At a recent AFT Vital Lessons webinar, public health leader Dr. Vin Gupta, AFT President Randi Weingarten and Omer Golan, founder and CEO of MyWhatIf Foundation, a nonprofit developing a self-help AI tool, explored what responsible use of AI in mental health could look like and where caution is essential.
The message was clear: AI may be a tool, but it is not a substitute for human care.
Many Americans live in areas with limited access to mental health services and often wait months for appointments. Others never seek care at all. In that environment, technology is increasingly seen as a way to expand access, reduce cost barriers, provide support between appointments, and encourage early engagement before crises escalate.
Dr. Gupta emphasized that AI cannot replace clinicians, but it can sometimes serve as an amplifier in helping people gather information, reflect or prepare for next steps they might otherwise postpone.
The key question is not whether AI will be used but rather how it can be used effectively and responsibly.
One of the central themes of the conversation was governance. AI systems are developed by private companies with varying levels of transparency, oversight and accountability. While some states have introduced or enacted AI-related regulations, federal guardrails remain fragmented and inconsistent, fueling skepticism about the use of AI.
Not all AI mental health tools are built the same way. A general-purpose chatbot that can draft emails and summarize articles is fundamentally different from a tool intentionally designed for trauma recovery or a specific psychiatric condition. As Golan put it, using a general chatbot for highly specific mental health support can be like “using a bread knife for surgery.”
Families, educators and clinicians evaluating these tools should weigh several factors.

Privacy concerns deserve particular attention. Widely available AI platforms may not offer the protections users assume, especially when sensitive mental health information is involved. Some tools rely on large foundation models that store or process data in ways that are not always clear to users. Before sharing personal information, users should review the platform’s privacy policy and avoid sharing personally identifiable or highly sensitive details unless protections are explicitly outlined.
Importantly, guardrails are not anti-technology—they are pro-human.
The webinar highlighted MyWhatIf as one example of a more focused approach to AI-supported mental health. Developed as a nonprofit initiative, MyWhatIf was created to address trauma and post-traumatic stress disorder through a structured, six-week, self-guided digital experience.
Rather than asking users to relive painful memories, the program centers on narrative imagination. Drawing on neuroscience and trauma research, it is built around the idea that trauma can disrupt the brain’s ability to imagine positive future outcomes, or what Golan describes as turning off the “hope switch.” When individuals feel stuck in survival mode, imagining a better future can become neurologically difficult.
Through guided “what if” prompts, the program encourages participants to explore alternative future narratives. The goal is not to erase trauma but to gradually restore future-oriented thinking and a sense of agency. Early pilot data shared during the webinar indicated that many participants reported increased hope and reductions in trauma-related symptoms following the program.
Golan emphasized that tools like MyWhatIf are not therapy delivered by a machine. They are structured psychological experiences shaped through AI and intended to complement licensed mental health professionals. For individuals already in therapy, such tools may serve as a bridge between sessions. For others who have never sought care, they may lower the threshold to begin.
At a time when many communities are navigating prolonged stress and uncertainty, scalable tools may provide supplemental support when traditional systems are strained. Even so, technology must remain a bridge to human care and not a substitute for it.
Throughout the discussion, one principle remained constant: Human touch is irreplaceable.
AI may simulate empathy and may respond without judgment, but research shows that conversational agents lack genuine emotional understanding, ethical judgment, and the ability to form the deep interpersonal connections that human therapists provide. Some studies also note that while AI can feel “supportive,” it can encourage emotional dependency or produce responses that lack nuanced understanding of individual contexts.
When machines attempt to position themselves as lifelong companions or substitutes for human care, warning signs may emerge. Mental health support depends on trust, boundaries, professional training and ethical obligations that no algorithm can fully replicate.
For educators and healthcare providers, this means remaining vigilant. If AI tools enter classrooms or care settings, they should do so with clear boundaries and human oversight. Monitoring for overuse, reinforcing referral pathways and maintaining strong professional relationships are essential.
AI in mental health is neither inherently harmful nor inherently transformative. Its impact depends on how it is designed, regulated and integrated into systems of care.
Families and professionals can approach these tools with both curiosity and caution. Ask how the system works, what data it collects, what outcomes it measures and who is accountable if something goes wrong. Look for evidence of effectiveness, privacy protections and clear boundaries. Most important, keep licensed professionals and trusted relationships at the center of care decisions.
Innovation may open new pathways to support, particularly for people who face barriers to care. But the foundation of mental health will always rest on human dignity, agency and connection.
Join Dr. Vin Gupta—pulmonologist, public health expert, and professor—for a yearlong series offering expert-led webinars, blogs, resources, and Q&A sessions on pressing health issues to help AFT members and communities stay informed and healthy. Access all on-demand town halls and register for the next one.
Want to see more stories like this one? Subscribe to the SML e-newsletter!