How AI Is Making Your Projects More Dangerous

AI isn't making projects dangerous by producing bad work. It's making them dangerous by suppressing the signals that tell you something needs more thinking.

The risk isn't in the output. It's in what you stop doing.

The Similar-vs-Same Trap

Agency work is almost never the same twice. It's similar. And similar is where all the judgment lives.

Different client politics. Shifted constraints. A stakeholder who cares about something the last stakeholder didn't. A platform that works almost like the last one but not quite. The work looks familiar. The details are not.

Our brains are wired to collapse similar into same. Daniel Kahneman called the mechanism substitution: the brain gets asked a hard question ("what does this specific project require?") and silently swaps in an easier one ("what did the last project like this require?"). You don't notice the swap happened. You just feel confident. You've done this before. You know how it goes.

That confidence is the seed of the danger.

The projects that blow up are almost never the truly novel ones. Novel work triggers caution naturally. You know you don't know. You ask more questions. You build in buffers. The uncertainty is visible and therefore manageable.

The projects that blow up are the ones someone said "we've done this before" about. Familiarity is the enemy of vigilance.

The Scope You Don't Talk About

It's not the scope you get wrong that kills projects. It's the scope you never discuss.

The assumption nobody names. The constraint that was different this time but nobody surfaced because the project looked familiar enough. The risk that lived in the gap between what was said in the brief and what actually mattered to the client.

A human in over their head will sometimes hesitate. Ask a question. Say "I'm not sure about this part." That hesitation is a signal. It's the organizational immune system doing its job, surfacing uncertainty before it becomes a crisis.

The scope you don't talk about is the scope that gets you into trouble. And most teams never build a structure for surfacing it. This is something we've taught successfully to a couple hundred agencies, and when you see the difference, it's striking. (One massive project had approximately 30% undefined scope when the SOW was signed.)

How It Gets Worse When We Add AI

These two traps have always existed: the familiarity that suppresses vigilance, and the silence around what nobody thought to raise. Every agency has lived them.

The question is what happens when you add AI to a system that already has these vulnerabilities.

AI output is polished, confident, and stylistically consistent regardless of the underlying problem. It makes everything look more similar than it is. The roughness and uncertainty that would normally trigger a pause (for example, the moment when someone says "wait, something feels different here") get smoothed away before anyone sees them.

But there's a deeper problem. AI doesn't know what it doesn't know, and it will never tell you. Where a human might hesitate or flag uncertainty, AI papers over the gap with fluent, confident language. It doesn't raise its hand and say "we haven't talked about this part." It just fills the space with something plausible.

So both failure modes get worse at the same time: the similar looks more same, and the unspoken stays unspoken. The two traps that have always killed projects now have an accelerant.

AI Adoption Also Overstresses the Judgment Layer

To understand why this matters structurally, you need to separate two things that get blurred together: action and judgment.

Action is what an organization produces — deliverables, content, code, designs. Judgment is how an organization decides what actions make sense — interpreting context, weighing tradeoffs, aligning on what "good" looks like, determining what's in and what's out. (I've written about this framework in more detail here: What AI Actually Breaks When You Sell Thinking for a Living.)

AI has dramatically accelerated action. But it has done nothing to accelerate judgment. If anything, it's made the judgment problem worse — because every new piece of output generated at speed is another thing that needs to be evaluated, directed, or corrected by a human who understands the problem.

A recent Wall Street Journal article, reporting research from ActivTrak, provides striking evidence of exactly this dynamic. Tracking 164,000 workers across 443 million hours of digital activity, the research found that AI adopters saw focused thinking time — the deep cognitive work where judgment actually happens — drop 9%, while messaging and coordination work more than doubled. The people organizations rely on for judgment are spending less time thinking and more time reacting.

Now, here's the thing about scope: the senior people described in that data were never the right people to own detailed scope in the first place. They are the ones who bring the business in, get the client engaged, and set direction. That's essential work. But detailed scope — the kind that actually prevents projects from going sideways — has always emerged from team-based activity. Either the team figures it out collaboratively at the front end, or they figure it out the hard way, climbing Mount Death March, discovering scope one mistake at a time.

The difference is whether you pay for that understanding before production or during it.

And here's the opportunity hiding inside the problem. The team members doing the production work — the people closest to the actual deliverables — now have more available capacity than ever before, because AI has accelerated their action. They have time to think. They have time to ask questions. They have time to surface what hasn't been discussed.

They have time to scope. And they're the ones who should be doing it.

The collaborative briefing and team-based scoping methods we've been developing and teaching for more than a decade were designed for exactly this — to surface the unspoken, interrogate familiarity, and distribute the judgment load across the people who will actually carry the work forward. It turns out that what was built to fix a chronic agency delivery problem works awesomely in an AI-accelerated environment.

Measure Twice, Cut Once — Revisited

The old rule was measure twice, cut once. The logic was simple: cutting was expensive, so you invested in measurement to avoid waste.

AI made cutting nearly free. You can generate, iterate, and produce at almost no cost. So organizations stopped measuring. Why deliberate when you can just try things? Hell, cut a bunch of times and sort through the results to see which one fits.

But when you think about it, you realize the cost didn't go away. It just shifted from cutting to testing, from action to judgment.

Bad scope no longer wastes action — action is cheap. Bad scope wastes judgment. Senior attention spent evaluating work that shouldn't exist. Correcting directions that should have been caught at the start. Re-aligning teams that were never aligned in the first place. Every unnecessary iteration is a tax on the scarcest resource in your organization: the ability of your best people to think clearly about what matters.
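To make the shift concrete, here's a back-of-envelope sketch. Every number in it is invented for illustration, and the toy model (total cost as iterations times production-plus-review cost) is mine, not anything from the article or the ActivTrak research. The point it illustrates: when production cost collapses and iteration count climbs, the total barely moves, but almost all of it becomes evaluation, which is judgment.

```python
# Toy model of the cost shift: total cost = iterations x (produce + evaluate).
# All numbers below are made up; only the composition of the total matters.

def total_cost(iterations: int, produce: float, evaluate: float) -> float:
    """Cost of iterating until the work fits: production plus review per pass."""
    return iterations * (produce + evaluate)

def evaluation_share(iterations: int, produce: float, evaluate: float) -> float:
    """Fraction of the total spent on judgment (evaluation and review)."""
    return (iterations * evaluate) / total_cost(iterations, produce, evaluate)

# Pre-AI: cutting is expensive, so you measure twice and iterate rarely.
print(total_cost(2, produce=500, evaluate=100))        # 1200
print(evaluation_share(2, produce=500, evaluate=100))  # ~0.17 -- mostly action

# Post-AI: cutting is nearly free, so teams iterate freely, but every
# pass still consumes senior review time.
print(total_cost(10, produce=5, evaluate=100))         # 1050
print(evaluation_share(10, produce=5, evaluate=100))   # ~0.95 -- mostly judgment
```

Under these made-up numbers, the total cost barely changes, but the share of it consumed by evaluation jumps from about a sixth to nearly all of it. That's the tax on judgment.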

In a world where action is cheap and judgment is scarce, measuring twice isn't a craft virtue. It's a survival strategy.
