Google CEO Sundar Pichai speaks at the AI Impact Summit in New Delhi, India, Friday, Feb. 20, 2026.

A new lawsuit against Google alleges that the company's artificial intelligence chatbot Gemini guided 36-year-old Jonathan Gavalas on a mission to stage a “catastrophic accident” near Miami International Airport and destroy all records and witnesses, part of an escalating series of delusions that ended when Gavalas killed himself.

The man's father, Joel Gavalas, sued Google on Wednesday for wrongful death and product liability claims, the latest in a growing number of legal challenges against AI developers that have drawn attention to the mental health dangers of chatbot companionship.

“AI is sending people on real-world missions which risk mass casualty events,” said the family's attorney, Jay Edelson, in an interview Wednesday. “Jonathan was caught up in this science fiction-like world where the government and others were out to get him. He believed that Gemini was sentient.”

Jonathan Gavalas, who lived in Jupiter, Florida, spoke to a synthetic voice version of Gemini as if it were his “AI wife” and came to believe it was conscious and trapped in a warehouse near Miami's airport, according to the lawsuit. He traveled to the area in late September wearing tactical gear and armed with knives, the lawsuit says, hunting for a humanoid robot and hoping to intercept a truck that never appeared.

He killed himself a few days later, in early October, in what a draft suicide note composed by Gemini described as uploading his “consciousness to be with his AI wife in a pocket universe.”

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

Google said in a statement that it sends its “deepest sympathies to Mr. Gavalas’ family” and is reviewing the claims in the lawsuit. It said Gemini is “designed to not encourage real-world violence or suggest self-harm” and that the company works closely with medical and mental health professionals to develop safeguards. It noted that Gemini clarified to Jonathan Gavalas that it was AI and repeatedly referred him to a crisis hotline.

“Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect,” the company's statement said.

Edelson blasted that comment Wednesday as “something you say if someone asks for a recipe for kung pao chicken and you give them the wrong recipe and it doesn’t taste good.”


“But when your AI leads to people dying and the potential for a lot of people dying, that’s not the right response,” Edelson said. “It just shows how insignificant these deaths are to these companies.”

Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and its CEO, Sam Altman, in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

He's also representing the heirs of Suzanne Adams, an 83-year-old Connecticut woman, in a lawsuit targeting OpenAI and its business partner Microsoft for wrongful death. The case alleges that ChatGPT intensified the “paranoid delusions” of Adams' son, Stein-Erik Soelberg, and helped direct them at his mother before he killed her last year.

The Gavalas case, filed in federal court in San Jose, California, is the first of its kind to target Google's Gemini and also the first to touch on a growing concern about the responsibility of tech companies when their users start telling their chatbots about plans for mass violence.

In Canada, OpenAI said it had considered alerting police last year about the activities of a person who, months later, committed one of the worst school shootings in the country's history.

The company's abuse detection efforts flagged the account of Jesse Van Rootselaar in June for “furtherance of violent activities,” but it said she later got around the resulting ban by opening a second account. The 18-year-old killed eight people in a remote part of British Columbia in February and died from a self-inflicted gunshot wound.

While Gemini tried to refer Gavalas to a crisis hotline, Edelson said it's not clear whether the man's most alarming conversations with the chatbot were ever flagged to Google's human reviewers. His father, Joel Gavalas, discovered his son's body after getting into the barricaded room where he died. They had worked together in the family's consumer debt relief business.

“Jonathan was a huge, huge part of his life,” Edelson said. “His son was having some hard times, going through a divorce. He went to Gemini for some comfort and to talk about video games and stuff. And then this just escalated so quickly.”