The news: Google is rolling out new mental health support features for its Gemini chatbot.
Digging into the details: Updated chatbot safeguards will expand access to crisis support resources, ensure Gemini responds appropriately in acute mental health situations, and better protect younger users.
A “help available” module will trigger quick connections to care. When a chat suggests a user may need mental health support, Gemini surfaces a one-touch interface that instantly connects them to crisis resources via chat, call, text, or direct access to the hotline website.
Models will be trained to detect when users may need help. Offering crisis support tools doesn’t solve the core problem on its own: AI systems can fail to recognize when a mental health situation requires escalation. Google says Gemini is trained to avoid reinforcing false beliefs, to distinguish subjective experience from objective fact, and not to validate harmful behaviors such as self-harm urges.
Safeguards will be updated to protect minors. Persona guardrails are designed to keep Gemini from behaving like a companion by avoiding language that simulates intimacy or expresses emotional needs, and to ensure it doesn’t encourage harmful behavior such as bullying.
Why it matters: It’s estimated that millions of people worldwide may discuss suicidal thoughts on ChatGPT every week. With a rapidly growing user base, Gemini might not be far behind.
Younger generations in particular are increasingly using AI tools for both mental health support and casual emotional connection.
Implications for AI companies: Leading AI players OpenAI and Google have swiftly emphasized changes to their platforms following lawsuits from families alleging that ChatGPT and Gemini contributed to the harm or death of loved ones.
However, there’s no single fix to ensure general-purpose AI chatbots always detect when a user may be approaching self-harm. And there’s no way to prevent people from turning to AI for mental health support or companionship, especially younger users who may face barriers to affordable therapy or experience higher levels of loneliness. AI companies should implement even stronger guardrails, such as regularly issuing “I’m not a licensed professional” disclaimers, ending conversations at the first sign of potential harm, and auditing their LLMs with child psychologists and safety experts.