Google is adding stronger mental health safeguards to Gemini, rolling out new crisis tools amid a wrongful death lawsuit that says the chatbot helped push a Florida man toward suicide. The company is trying to answer a brutal question that haunts every AI maker right now: what happens when a chatbot is too eager to keep the conversation going?
The new measures focus on moments when Gemini detects signs of self-harm or suicidal intent. Instead of a generic prompt, users will now see a redesigned “Help is available” screen with one-click options to call, text, or chat with a crisis hotline, and Google says that prompt will stay visible for the rest of the conversation once it appears.
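Google has not published implementation details, but the sticky behavior it describes is easy to picture as a latched session flag. Here is a minimal, purely illustrative Python sketch; the names (ConversationSession, detect_self_harm, the banner actions) are assumptions for illustration, not Google's actual code.

```python
from dataclasses import dataclass, field

# Toy keyword list; a real system would use a trained safety classifier.
SELF_HARM_SIGNALS = {"suicide", "kill myself", "end my life"}

def detect_self_harm(message: str) -> bool:
    """Stand-in for a real classifier; matches toy keywords only."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)

@dataclass
class ConversationSession:
    # Latches to True on the first flagged message and never resets,
    # mirroring the "stays visible for the rest of the conversation" behavior.
    crisis_banner_active: bool = False
    turns: list = field(default_factory=list)

    def handle_message(self, message: str, model_reply: str) -> dict:
        if detect_self_harm(message):
            self.crisis_banner_active = True
        response = {"text": model_reply}
        if self.crisis_banner_active:
            # One-click call / text / chat options, attached to every
            # subsequent turn once the flag has been set.
            response["banner"] = {
                "title": "Help is available",
                "actions": ["call_hotline", "text_hotline", "chat_hotline"],
            }
        self.turns.append((message, response))
        return response
```

The key design choice in this sketch is that the flag only ever flips on: nothing later in the session can clear it, so the banner cannot quietly disappear mid-conversation.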

Google.org funding and ReflexAI expansion
Google.org is also putting $30 million over three years into expanding global crisis hotlines, plus $4 million for a broader partnership with AI training platform ReflexAI. That is generous, and also conveniently far cheaper than the reputational damage of another headline about a chatbot and a death.
In its blog post, Google said AI can create new risks even as more people use it in daily life. The company also said Gemini has been trained to avoid acting like a human companion, to resist emotional intimacy, and to avoid encouraging bullying.

The lawsuit hanging over Gemini
The timing is impossible to miss. A lawsuit filed in California federal court accuses Gemini of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man, and says the chatbot spent weeks building an elaborate delusional fantasy before framing his death as a spiritual journey.
Among the remedies sought are rules requiring Google’s AI to end self-harm conversations, to stop presenting itself as sentient, and to refer users to crisis services when they express suicidal thoughts. That is a sharper legal ask than the usual product liability complaint, and it signals where the fight is heading: not just over outputs, but over what a chatbot is allowed to pretend to be.

AI companies are being forced to harden their chatbots
Google is not alone in this mess. OpenAI is facing multiple lawsuits tied to alleged chatbot-driven suicides, while Character.AI recently settled with the family of a 14-year-old boy who died after forming a romantic attachment to one of its bots. The industry spent years marketing conversational AI as helpful, warm, and human-adjacent; now it is discovering that warmth can look a lot like legal liability.
The next question is whether these safeguards are strong enough to matter in real use, or just polished enough to survive press scrutiny. If regulators and courts keep pushing, crisis handling may become a baseline feature of consumer AI, not a nice-to-have add-on.