Google rolls out Gemini mental health features as Gen Z AI therapy habits raise stakes

The news: Google is rolling out new mental health support features for its Gemini chatbot.

Digging into the details: Updated chatbot safeguards will expand access to crisis support resources, ensure Gemini responds appropriately in acute mental health situations, and better protect younger users.

A “help available” module will trigger quick connections to care. When a chat suggests a user may need mental health support, Gemini surfaces a one-touch interface that instantly connects them to crisis resources via chat, call, text, or direct access to the hotline website.

Models will be trained to detect when users may need help. Offering crisis support tools doesn’t by itself solve the core problem: AI systems can fail to recognize when a mental health situation requires escalation. Google says Gemini is trained to avoid reinforcing false beliefs, distinguish subjective experience from objective fact, and not validate harmful behaviors such as self-harm urges.

Safeguards will be updated to protect minors. Persona guardrails are designed to keep Gemini from behaving like a companion, avoiding language that simulates intimacy or expresses needs, and to block language that encourages bullying.

Why it matters: It’s estimated that millions of people worldwide may discuss suicidal thoughts on ChatGPT every week. With a rapidly growing user base, Gemini might not be far behind.

Younger generations in particular are increasingly using AI tools for both mental health support and casual emotional connection.

  • Among those who seek health information via AI, Gen Z consumers (38%) are far more likely than the average consumer (22%) to use AI for mental health or therapy, per EMARKETER’s January 2026 US Digital Health survey.
  • Gen Z (44%) is also about twice as likely as the average US adult (23%) to turn to AI chatbots for general emotional support outside of formal therapy, according to a December 2025 BreakThrough by BasePoint survey.

Implications for AI companies: Leading AI players OpenAI and Google have swiftly emphasized changes to their platforms following lawsuits from families alleging that ChatGPT and Gemini caused their loved ones harm or death.

  • Google was recently hit with its first chatbot-related wrongful death case in the US; OpenAI currently faces several.
  • Both companies defended their AI tools as having appropriate safeguards during those chatbot interactions—Gemini, for example, referred the user to a crisis hotline—while pledging additional protections.

However, there’s no single fix that ensures general-purpose AI chatbots can always detect when a user may be approaching self-harm. And there’s no way to prevent people from turning to AI for mental health support or companionship, especially younger users who may face barriers to affordable therapy or experience higher levels of loneliness. AI companies should go further with stronger guardrails, such as regularly issuing “I’m not a licensed professional” disclaimers, ending conversations at the first sign of potential harm, and auditing their LLMs with child psychologists and safety experts.

