
Why Scope Fails in Your Agency, and How AI Will Make It Worse

Scope isn’t just about client requests. It rests on something much more complex: shared mental models, the internal, team-level understanding of what work actually is and how it should be done. When those aren’t aligned, teams pay the cost in burnout, rework, and eroding trust. And that cost is far higher than most leaders realize.

Why Shared Understanding Matters

Every project starts with a transfer moment. A client describes what they need. An account person translates that into a proposal. The leads polish it in their own ways. A project manager turns the proposal into a plan. A team receives that plan and begins work.

At each handoff, understanding decays. The original intent grows more vague and diffuse, assumptions go unspoken, and context that was obvious to one person becomes invisible to the next. By the time work begins, the team is often operating on a fundamentally different model of the project than the one the client described, or the one your agency thought it had sold.

This isn’t a communication failure. It’s a structural one. Information was transferred, but understanding wasn’t created, and in fact decayed, taking alignment with it.

The research on team cognition is clear: teams with shared mental models — where members hold aligned representations of the task, the strategy, and each other’s roles — coordinate more effectively, make fewer errors, and produce better outcomes. A meta-analysis of 23 studies found that this shared cognitive foundation is positively linked to both team process and team performance (DeChurch & Mesmer-Magnus, 2010). The mechanism isn’t mysterious: when people share a model, they can anticipate each other’s needs and adapt without constant re-coordination.

When Mental Models Are Misaligned

Here’s the trap: teams almost always think they’re aligned. They sat in the same meeting. They heard the same words. They nodded at the same slides. But hearing words and building the same internal representation of work are very different things.

Here's what happens in your (and most) agencies: a creative director hears “bold campaign” and imagines visual risk-taking; a strategist hears the same phrase and imagines a provocative positioning statement; a developer hears it and starts thinking about which CMS features can handle whatever they’re about to be asked to build. Everyone leaves the room believing they’re aligned. Four weeks later, the rework cycle begins, and it continues for months after that.

My own research across 200+ engagements tells the same story: requirements documents lose 50–80% of their information between the person who writes them and the person who reads them. And that’s not because people aren’t trying. It’s because understanding doesn’t transfer through documents. It forms through inquiry.

Empirical work on shared mental models confirms that explicit, deliberate alignment correlates with better team process and performance outcomes — not just “being in the same room” but actively building a shared representation of the work (Gamble et al., 2025). Passive exposure doesn’t produce alignment. Structure does.

The Cognitive Cost of Misalignment

Misalignment isn’t a soft problem. It has hard, measurable consequences.

When a team starts work without a shared model, every surprise becomes a negotiation. Every assumption that surfaces mid-project creates cognitive load, the mental effort of reprocessing, recalibrating, and rebuilding context that should have been established before execution began. That load compounds. Surprises create rework. Rework creates time pressure. Time pressure creates shortcuts. Shortcuts create more surprises. The cycle is self-reinforcing, and it ends in burnout.

I call it the Agency Hamster Wheel.

Research on cognitive load and team performance shows exactly this pattern: when information isn’t structured for shared comprehension before work begins, the downstream cognitive cost can overwhelm a team’s capacity to perform effectively (DelNero et al., 2025). Structured pre-briefing (that is, deliberately building shared understanding before action) reduces that load and improves both team dynamics and outcomes.

This is why the “scope creep” conversation in agencies is a futile pursuit of “the usual suspects.” The naive managerial reaction to work expanding beyond the original (hoped-for) plan is to look for someone to hold accountable: the client changed their mind, the team didn’t follow the brief, the timeline was too aggressive. But the root cause is almost always simpler than that: the team never actually shared a model of what the work was. The “creep” was there from day one, hiding in the gap between what people assumed and what they actually understood.

Why AI Is Accelerating This Problem

AI makes both sides of this dynamic worse.

On the front end, AI short-circuits the formulation feedback loop. When you assemble something by hand, say a layout, a strategy document, or a campaign architecture, the very act of building it forces you to reflect on whether you understood the brief in the first place. You hit friction points that send you back to the scope: “Wait, does the client actually want X or Y?” That friction is where misunderstanding surfaces early, while it’s still cheap to fix. (I explore this dynamic more fully in What AI Actually Breaks When You Sell Thinking for a Living.)

AI tools skip that friction. You get a polished first draft in minutes, but you never had the moment where assembly revealed misunderstanding. The work looks more done but is actually less correct. The early rework signal that used to come from the maker’s own process now stays hidden — until it surfaces mid-project, when it’s more expensive.

On the back end, AI overwhelms the judgment layer. AI output is fluent. It reads well, it sounds confident, and it arrives fast. That fluency makes it harder to evaluate. The judgment work of deciding whether something is actually right becomes more cognitively demanding, not less. And because AI increases output velocity, the volume of things requiring judgment goes up at the same time that each individual judgment call gets harder. The leads, strategists, and reviewers responsible for scope quality get squeezed from both directions: more to evaluate, and each evaluation is harder to do well. (For a deeper look at this structural dynamic, see The Bottleneck AI Can’t Fix.)

The result is a faster version of the same burnout cycle. More work produced before alignment exists. More fluent-looking output that masks misunderstanding. More pressure on the people whose job is to catch the gap. AI doesn’t create the misalignment problem — but it removes the natural friction that used to make the problem visible early enough to fix.

Structural Solutions for Real Alignment

The fix isn’t better project plans or more detailed SOWs. The fix is building understanding before work begins — deliberately, structurally, and with the right people leading the conversation.

The Briefing and Scoping Model I’ve developed works on a simple principle: understanding doesn’t form through explanation. It forms through inquiry. The people who will carry the work forward (the learners) must lead the conversation. The people who hold the context (the knowers) respond to questions rather than presenting answers.

This inverts the typical kickoff. Instead of a senior person presenting slides while the team listens politely, the team asks questions. Silence is protected, and speaking order is explicit, biased toward junior team members going first. Knowers are not allowed to volunteer explanations or summarize unprompted.

The method operates in two modes: the context briefing, which clears the fog around the project and the client itself, and the roadmap, a declarative model of how we are creating value for the client and where the key value points sit within every single piece of work. Both modes build shared mental models across seven domains: business context, project purpose and goals, critical behaviors, risks, open issues, platform and approach, and doneness.

Artifacts come last, not first. They capture understanding once it exists. They don’t create it. The results are consistent: fewer interruptions, decisions that stick, reduced rework, faster onboarding, shared language across roles, fewer client surprises, and increased team autonomy. Not because people tried harder, but because the structure and process enabled deeper understanding and alignment across the team.

Go Deeper

I’m running two free webinars on March 12 (and March 19) that unpack the scope and burnout dynamics behind misalignment:

“Who Owns Scope? And What Happens When AI Makes It Worse” (9:00 AM PDT) explores how scope ownership breaks down in multi-client environments and why AI tools are accelerating the problem by making it easier to produce work nobody aligned on.

“How Poor Scoping & Handoffs Fuel Agency Burnout — And Why AI Spins the Wheel Faster” (9:00 PM PDT) traces the direct line from misaligned scope to the burnout cycle — and shows how structural fixes at the briefing stage can break it.

If you’ve ever watched a team work hard on the wrong thing — and wondered how it happened — these sessions will give you new insight into what is really going wrong.


I work with leadership teams at agencies and project-driven organizations on exactly these kinds of structural challenges — the ones that make growth feel harder than it should. If what I'm describing here sounds familiar, I'd enjoy hearing about it.

bettercompany.co

— Jack Skeels

Jack Skeels is the author of Unmanaged: Master the Magic of Creating Empowered and Happy Organizations and founder of Better Company, an organizational consultancy focused on agencies and project-driven firms. He has led 200+ organizational transformations and writes regularly about structure, AI, and the future of collaborative work.
