No, It’s Not Really AI Yet

One of the most common misconceptions right now is the belief that the systems we have today — GPTs and their cousins — are on a glidepath to becoming actual artificial intelligence. That if we just scale them, or bolt on enough retrieval, or give them more compute, they will inevitably cross the threshold into understanding, agency, or general reasoning.

They won’t. And that misunderstanding creates more confusion than any of the old science-fiction tropes ever did.

What we have today is a very specific kind of system. I call it Rhetorical AI — models that operate entirely in the domain of language, prediction, and output shaping. They perform astonishingly well at appearing intelligent because language is the surface of intelligence. Give a system enough patterns, context windows, and inference tricks, and it can mimic the shape of thought with remarkable fidelity.

But mimicry isn’t understanding. And trajectory isn’t destiny.

A recent WSJ article highlighted comments and interviews from Yann LeCun. LeCun — certainly not a Luddite, and arguably one of the intellectual architects of modern deep learning — has been explicit: scaling LLMs will not produce AGI (artificial general intelligence). In his view, and in the view of many researchers working on cognitive architectures, something structurally different is required. A real intelligence needs a world model. It needs grounding. It needs mechanisms for understanding causality, physicality, agency, and planning. It needs to operate beyond the domain of text.

In other words: language models do language. They do not “become” minds as you make them bigger.

This doesn’t diminish their power. In fact, it’s precisely because they’re constrained to rhetoric that they are so explosively useful right now. The world runs on language — briefs, emails, slides, drafts, analysis, proposals, conversation. This Rhetorical AI accelerates all of that. It collapses cycle time. It strips friction from knowledge work. It amplifies the productive capacity of well-structured teams.

But using a thing well requires naming it accurately.

Calling today’s systems “AI” — as in, intelligence — blurs the boundary between what exists and what does not. It encourages executives and practitioners to assume capability where there is none, and inevitability where the future is still undecided. It also leads to organizational missteps: assuming the tool can replace judgment, assuming it can reason across contexts, assuming that its fluency signals reliability.

What we have is real. It’s powerful. It’s transformative. But it is not on autopilot toward AGI.

If AGI emerges, it will come from architectural breakthroughs closer to what LeCun is describing — systems that learn from the world, not just from text. Systems that build models, not just tokens. Systems built around perception, exploration, and self-supervised understanding.

Until then, we are in the era of Rhetorical AI: beautifully articulate, incredibly useful, profoundly limited.

And if we want to use these tools well, the first step is simple: stop pretending they’re something they’re not.
