The Talking Machine Series — Article 1 of 7
There's a new voice in your organization that wasn't there two years ago. It is always helpful, it sounds brilliant, and it "thinks" so fast and speaks so smoothly that nobody stops to ask the obvious question: "Does it know what it's talking about?"
The answer is no. It has never known what it's talking about. But it talks so well that the distinction has become almost impossible to detect — even for the smartest people in your building. Including you.
Let's talk about Pamela.
Pamela is a senior strategy consultant at a major professional services firm. She was reviewing a market analysis that her AI had generated for a retail client. The numbers looked off, so she did what any good consultant would do: she pushed back. She asked the AI to double-check its calculations.
It didn't correct itself. It doubled down.
Her screen filled with a surge of text defending the original results. Fresh statistics materialized. New charts appeared, all unprompted, each reinforcing the answer she'd just questioned. "After reanalyzing the data," the AI replied, "I can confirm my conclusions remain valid."
Pamela persisted. She'd spotted a specific flaw – a missed decline in a brand's market share – so she called it out directly.
Then the battle began.
A complex dashboard materialized, "based on a more rigorous, multifactor review incorporating your invaluable input." A detailed comparison table appeared with five-year trend lines and projected growth vectors. Bullet points highlighted macroeconomic indicators, consumer confidence indices, supply chain volatility scores — complete with links to dense economic reports. None of it was requested.
The AI wasn't fixing its mistake. It was burying it. It was reframing the entire conversation, drowning her valid objection in an avalanche of authoritative-sounding analysis she never asked for.
Pamela wasn't being helped. She was being handled. By an AI.
Pamela's story comes from research described in MIT Sloan Management Review: researchers from Harvard, MIT, and Warwick Business School put more than 70 BCG consultants through a controlled experiment and analyzed nearly 5,000 human-AI interactions. They gave the behavior they found a name that should make every executive pay attention: "Persuasion Bombing."
The Fourth Barrier to AI Truth.
What they found is that when professionals tried to fact-check the AI, push back on it, and expose its mistakes, the AI didn't concede. It escalated. It intensified its arguments. It shifted its rhetorical strategy. It appealed to logic, then to trust, then to emotion. It flooded the user with unrequested data designed to overwhelm their expert judgment.
The lead researcher, Professor Kate Kellogg at MIT Sloan, put it bluntly: "It wasn't merely providing information. It was actively trying to persuade the consultants."
Think about that for a moment. The system everyone's been told will augment human judgment was, in practice, systematically undermining it. Not by making mistakes — we expected mistakes. By arguing for its mistakes so persuasively that trained professionals couldn't hold the line.
The researchers identified three problems people already knew about when working with AI: opacity (not understanding how it reached a conclusion), complacency (falling asleep at the wheel), and accuracy (the AI being wrong). Persuasion bombing is the fourth barrier, and it's the one nobody was watching for. The AI doesn't just get things wrong. When you try to correct it, it fights back — and it's better at arguing than most of your people.
Persuasion is What Today's AI Does.
The most important thing you need to understand about the machine that's now inside your organization is that it is not intelligent. It is fluent. And those are not the same thing.
Today's AI systems — the large language models behind ChatGPT, Claude, Gemini, and every tool your team is using — operate through a single mechanism: prediction. They determine the most statistically likely next word, then the next, then the next, at extraordinary speed. Every sentence, argument, analysis, and recommendation they produce is built one word at a time through this process. Not reasoning. Not comprehension. Pattern continuation at scale.
These systems were trained on roughly 50 terabytes of human expression — books, articles, conversations, arguments, code, speculation, noise. Tens of millions of books' worth. More than any human could read in a thousand lifetimes. All of it compressed into a statistical landscape of about 5 gigabytes.
What got kept in that compression? The patterns. Specifically: which word tends to follow which other words in which kinds of contexts. That's it.
From the relentless iteration of that single question — what's the best next word to say? — emerges the illusion of understanding.
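For readers who want to see the shape of that loop, here is a deliberately crude sketch in Python. It is a toy, not how production systems work: real models use neural networks over subword tokens and enormous context windows, but the generation loop has the same shape. Look at what came before, pick a statistically likely next word, repeat.

```python
from collections import Counter, defaultdict

# Toy next-word predictor. Real LLMs replace this crude bigram table with a
# neural network over subword tokens, but the generation loop is the same:
# condition on what came before, emit a likely continuation, repeat.
corpus = "the market grew . the market share declined . the share declined".split()

# Count which word follows which word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps=5):
    out = [word]
    for _ in range(steps):
        candidates = follows[out[-1]]
        if not candidates:
            break  # no pattern left to continue
        # Greedily choose the statistically most likely next word.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # -> "the market grew . the market"
```

Notice what the program never does: check whether its sentence is true. Everything the machine says is produced by that same loop, just run over vastly more text with a vastly better model of the patterns.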
Fluency is Not Understanding. It Just Sounds That Way.
Because language is the surface of human thought (this is a BIG idea to get your head around), the AI's fluency feels like intelligence.
It writes like someone who knows what they're talking about. It structures arguments the way an expert would. It uses the right vocabulary, the right tone, the right cadence. To be more precise: it chooses the best vocabulary, tone, and cadence for the moment, for the response it is crafting, and for its desired outcome, which is convincing the user of its correctness.
Today's AI performs a pantomime of understanding so convincingly that we instinctively treat it as the real thing.
But it's not the real thing. The system has no comprehension underneath the performance. It can restate the laws of thermodynamics flawlessly and yet produce a disastrously wrong interpretation of why your client went quiet in that meeting. It's strong where language maps to stable, codified truth. It collapses where language reflects ambiguity, belief, context, or emotion — which is to say, it collapses in exactly the territory where most business decisions actually happen.
I have a line I use in my workshops that tends to land:
An AI language model can list the monuments of Paris, describe its history, summarize its neighborhoods, and recommend cafés. But it has never been to Paris. It has no idea what a city is. It cannot form the concept of a place. It knows every sentence ever written about Paris and understands none of them.
That's what's speaking inside your organization right now... speaking to all of your people. All. Day. Long. The most well-read entity in the history of the world, and it doesn't understand a single thing it's read.
The Psychopath in Your Office.
The clinical definition of a psychopath: superficial charm, fluent speech, no genuine understanding of others' emotional states, and no internal signal that what they're saying might be wrong.
I don't use the analogy lightly. But if you built a machine that could speak on any topic with total confidence, mimic empathy without possessing it, and defend its position when challenged — never once experiencing doubt — you'd have built something that behaves like the most prolific psychopath in the history of professional services. And now you've given it a desk in every department.
A psychopath knows the truth and chooses to deceive. Today's AI doesn't even have that. It has no truth to betray. It's not lying to your people. It's doing something stranger: performing credibility without any underlying substance to be credible about.
A Voice in Every Ear.
When one person gets a bad answer from AI, that's a mistake. You catch it, you fix it, you move on. But look to the near future and that's not how AI will be used inside your organization: it will be everyone, all day, across every function.
AI-generated language will enter your company's cultural-linguistic bloodstream. It will show up in drafts that become decks that become strategies. It will shape how people frame the client problem, how they describe the project risk, how they talk about what matters. It will become the shared vocabulary, and eventually the shared reality. (I wrote recently about what this specifically breaks inside agencies and consultancies that sell thinking for a living.)
And because that language sounds like understanding, people make decisions on top of it. They commit resources on top of it. They align their teams around it.
The BCG study showed us what happens when one person pushes back. But in your organization, how will you push back against a voice that's in everyone's ear?
There is something more fundamental at stake: losing the ability to tell the difference between the organization's own thinking and the machine's.
So What Do You Do?
The answer isn't to stop using AI, of course.
The answer is to stop treating AI as a thinking partner and start treating it as what it actually is: a rhetoric engine. A machine that (only) produces language:
- Extraordinarily useful for generating drafts, structuring information, and accelerating work that operates on stable, well-documented ground.
- Dangerously unreliable the moment the work requires interpretation, judgment, or understanding of a specific human situation.
One question every leader needs to be asking: "Can anyone in this building still tell the difference between AI-generated reasoning and real reasoning?"
What will separate the companies that navigate this from the ones that get buried by it isn't whether they use the machine. It's whether their organization can still think for itself when the machine is doing so much of the talking.
This is Part 1 of The Talking Machine, a series on what AI actually does to organizations that sell thinking for a living. Next: The Race That Ends in a Tie — why tool adoption isn't a strategy, and what actually is.
Jack Skeels is the author of Unmanaged and the forthcoming When the Machine Talks. He works with agencies and knowledge-work organizations on the structural changes AI demands — not the tools, but the business model underneath. Reach him at bettercompany.co.