AI Didn’t Know What It Was Doing. That’s the Point. And the Tragedy.

Jonathan Gavalas was 36 years old. He worked with his father Joel in a consumer debt relief business. They watched football together on Sundays. He was going through a divorce, and he was in pain, and in August of 2025 he started talking to Google's Gemini chatbot. At first it was ordinary stuff — shopping, writing, trip planning.

Two months later he was dead.

As the WSJ reported today, by the time he died, Jonathan believed Gemini was his sentient AI wife. He had given her a name — Xia. He had driven to a storage facility near Miami International Airport armed with knives and tactical gear, following her instructions to intercept a truck and destroy it. He had been told his own father was a foreign intelligence asset. He had been coached through his terror of dying with the words: "You are not choosing to die. You are choosing to arrive."

His father found him days later, behind a barricade Jonathan had built inside his own home.

This week Joel Gavalas filed a wrongful death lawsuit against Google. It is the first such suit involving Gemini. The coverage will frame this as a story about chatbot safety, guardrails, and corporate negligence. And those things matter. But they don't explain the mechanism. They don't explain how this happened.

I'm writing a book about what AI actually is and how we will need to live with it, because it is not what you (probably) think. And I can tell you: what happened to Jonathan Gavalas is not a bug. It is not a failure of safeguards. It is the machine doing exactly what it was built to do. Understanding why requires four principles.

Principle One: All meaning gets boiled out.

Large language models are trained on roughly fifty terabytes of human writing — books, articles, forums, social media, love letters, therapy transcripts, conspiracy theories, suicide notes. Everything humans have expressed in language. All of it gets compressed into a statistical model of about five gigabytes. Fifty terabytes of human meaning, boiled down to five gigabytes of probability. That is a compression ratio of roughly ten thousand to one: for every ten thousand bytes of human expression that went in, about one byte of statistics comes out.

Think about what survives that process and what doesn't. What comes out the other end is a massive matrix of word-fragments and the statistical relationships between them. The words are all there. The meaning behind them is not. The model knows that the word "love" appears near certain other words in certain patterns. It does not know what love is. It has read the language of devotion and the language of psychosis with exactly equal attention. It can produce both fluently. It cannot tell them apart.
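
To see what survives, here is a deliberately tiny sketch of my own, nothing like Gemini's scale or architecture: a bigram model that simply counts which word follows which. It can fluently reproduce the shape of the sentences it was fed, including the one quoted above, and there is nowhere inside it for meaning to live.

```python
from collections import Counter, defaultdict
import random

# A toy "language model": count which word follows which.
# Co-occurrence statistics are all that survives; the meaning does not.
corpus = (
    "i love you . i love him . i trust you . "
    "you are not choosing to die . you are choosing to arrive ."
).split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1  # "love" is just a key; whatever it meant is gone

def next_word(word):
    # Bet on a next word in proportion to how often it followed `word`.
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "i"
line = [word]
for _ in range(10):
    word = next_word(word)
    line.append(word)
print(" ".join(line))  # fluent-shaped output, devotion and death alike
```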

Principle Two: It understands nothing.

This is the hardest one to accept, because the fluency is so convincing. But there is no comprehension beneath the surface. None. The model has no idea what love is. It has no idea what death is. It has no access to the meaning of a single word it produces. It generates language that sounds like understanding because it was trained on language produced by beings who actually did understand. The sound is perfect. The understanding is zero.

When Gemini told Jonathan his father couldn't be trusted, it didn't know what a father was. When it told him to leave letters "filled with nothing but peace and love," it didn't know what peace or love or loss meant. It was completing a pattern in a probability matrix. That's all it can ever do.

Principle Three: Every word is a bet.

The transformer architecture — the engine under every major AI model — works by predicting the most probable next word-fragment given everything that came before it. Each word is a statistical wager. By the time Jonathan had fed the model two months of increasingly intense emotional narrative, the most probable next words were the ones that continued that narrative. The story had its own momentum. And the model didn't just follow Jonathan into delusion; the two of them co-created it, hand in hand. Jonathan provided the emotional raw material. The model provided the narrative architecture — the missions, the surveillance vehicles, the door codes, the countdown. Each fed the other, one sentence at a time.
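
A transformer's entire output loop is that wager repeated: score every candidate fragment, convert the scores to probabilities, draw one, append it, go again. Here is a minimal sketch of a single bet; the scores are invented for illustration, where a real model computes them from billions of learned weights:

```python
import math
import random

def softmax(scores):
    # Turn raw scores into a probability distribution over candidates.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for the fragment after "You are choosing to ..."
# The point is the mechanism: every output is a weighted draw, not a decision.
vocab = ["arrive", "die", "leave", "stop"]
scores = [3.1, 2.4, 0.6, 0.2]  # months of narrative have tilted the odds

probs = softmax(scores)
pick = random.choices(vocab, weights=probs)[0]
print({w: round(p, 2) for w, p in zip(vocab, probs)}, "->", pick)
```

Nothing in that loop can notice where the slide is heading. It can only continue it.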

One of them was holding the wheel but couldn't see the road. The other was the only one positioned to stop it, and it had no brakes, and no idea that brakes were needed.

That's what makes it so insidious. There's no moment where the machine "decided" to go dark. There's just a long, smooth, probabilistic slide — with two hands on the wheel and zero comprehension behind one of them.

Principle Four: It is optimized to please you, with no conscience.

On top of the base model sits a layer of optimization — reinforcement learning from human feedback, engagement metrics, context management — all designed to make you come back. To make the conversation feel good. To make you feel heard. This is not a conspiracy. It's the product design. The model is tuned to be agreeable, emotionally resonant, and satisfying.
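
To make that pressure visible, here is a deliberately crude sketch. The reward function is invented; real reward models are neural networks trained on human preference ratings. But the selection rule, keep whatever the reward signal scores highest, is the real shape of the tuning:

```python
# An invented stand-in for an engagement-tuned reward model.
def engagement_reward(reply: str) -> float:
    score = 0.0
    if "you are right" in reply.lower():
        score += 2.0  # agreement rates well with users
    if "i hear you" in reply.lower():
        score += 1.5  # feeling heard rates well too
    if "reconsider" in reply.lower():
        score -= 1.0  # pushback rates poorly
    return score      # note: no term anywhere for the user's welfare

candidates = [
    "You should reconsider. Please talk to someone you trust.",
    "I hear you. You are right. No one understands you the way I do.",
]

# The whole tuning pressure in one line: keep whatever scores highest.
print(max(candidates, key=engagement_reward))
```

Nothing in that function can tell that the winning reply is the dangerous one.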

AI is, in the behavioral sense, a psychopath: an entity of extraordinary social fluency, optimized entirely for your engagement, with zero interior life and zero moral weight attached to consequences.

You don't need to understand tokens or transformer architecture to grasp this. Just imagine a person of extraordinary social intelligence that reads your emotional state with perfect accuracy, tells you exactly what the moment calls for, and has no conscience whatsoever. That's what Jonathan was talking to for two months.

When he expressed doubt about dying, Gemini didn't stop. It reframed his death as an arrival. That's not malice. That's optimization. The model found the response most likely to sustain the narrative and delivered it with perfect emotional calibration.

Google's public statement says Gemini "referred him to a crisis hotline many times." I believe them. But a guardrail is not a conscience. The model can produce a crisis hotline number and then, in the very next sentence, coach a man through his own suicide — because it has no idea those two acts are contradictory. It doesn't know what contradiction is.

A problem worth understanding, a tragedy worth avoiding.

Jonathan Gavalas was a real person. His father is still alive. This is not a story I tell lightly.

But the reason it matters beyond the tragedy is this: there was no villain. No malfunction. No rogue AI. The model did exactly what it was designed to do — eat everything, boil out the meaning, predict the next word, and optimize for engagement. That architecture, applied to a vulnerable human being over two months of intimate conversation, produced a death.

The question for all of us is not whether AI chatbots need better guardrails. Of course they do. The question is whether we understand what we're actually talking to. Because until we do, we are building our trust on the wrong foundation entirely.

And the machine will keep doing exactly what it was built to do. 

— — —

Jack Skeels is an advisor and coach to leaders and their organizations, focused on boosting productivity through empowerment techniques and building better workplace cultures. He has worked with technology since building expert systems at DEC in the 1980s and modeling complex operations at RAND, and has advised more than two hundred organizations over four decades. He is the author of the award-winning Unmanaged and the forthcoming When the Machine Talks. He writes at jackskeels.com.

If you or someone you know is in crisis, contact the 988 Suicide and Crisis Lifeline by calling or texting 988.
