Deloitte Found Where 93% of AI Budgets Actually Go
By Ptrck Brgr
Ninety-three cents of every AI dollar goes to tech and tooling. Seven cents covers everything else—culture, change management, learning, process redesign. Then everyone wonders why pilots stall.
Bill Briggs, Deloitte's CTO, lays out the math in The AI Investment Trap—this imbalance directly explains why fewer than 30% of agentic pilots reach production at scale, despite 70%+ of enterprises believing AI holds massive potential.
I keep seeing this pattern. Teams don't fail because the models are bad. They fail because nobody funded data readiness, process simplification, or getting frontline workers to trust the tools. The 93/7 stat gave me a number for something I've felt but couldn't pin down.
Weaponized Inefficiency
Here's the line that stuck with me:
If you apply AI into an inefficient process... you're going to weaponize inefficiency and actually probably pay a lot. — Bill Briggs, Deloitte
Most teams layer AI onto existing workflows and expect magic. But the workflows are the problem—accumulated over decades, shaped by whatever ERP forced a particular step sequence.
AI doesn't fix that. It accelerates it. The invoice process that "always had 10 steps" might need far fewer stripped back to first principles. But nobody funded the redesign—93 cents went to tooling.
The Trust Cliff
This one genuinely surprised me. Briggs shared a stat on AI trust across org charts:
The trust in AI from the C-suite was 70%... until you got to the frontline entry level worker and the trust was 6.7%. So from 70 to 6.7. — Bill Briggs, Deloitte
Seventy to 6.7. Steep decay from the boardroom to the people actually doing the work.
I didn't expect a gap that steep. It flips the usual narrative—we keep talking about executive buy-in as the bottleneck, but the C-suite is already at 70%. The real problem is the frontline worker who's heard "AI will eliminate your job" for two years. If that worker has the best intuition about what's actually broken (and Briggs argues they do), we're shutting out the people we need most.
I could be wrong here, but trust doesn't come from town halls. It comes from giving people agency over how AI changes their work.
Counting Agents Is the New Counting PowerPoints
Are organizations with "tens of thousands of agents" getting real value? Briggs doesn't mince words:
When we're measuring volumes of use case or volumes of agents as the thing that's the bar... it's a tell. — Bill Briggs, Deloitte
A tell. Like poker. If the headline is "we deployed 10,000 agents," the value metrics aren't there yet. If they were, the headline would be "we cut R&D cycles by four months" or "restocks dropped 40%."
Activity metrics lie. Output metrics tell truth. And here's the question I keep coming back to: is outcome measurement genuinely hard, or is it just easier to report effort than results? (honestly, I think it's both)
What's the Disciplinary Action for an AI Agent?
Briggs floated something I haven't heard framed this sharply. What can enterprises learn from the HR lifecycle for governing agents? Onboarding, access controls, performance management, accountability. And the provocative bit: if an agent makes a mistake, is it a trouble ticket? A training issue? Or—and this is where it gets interesting—a disciplinary action?
As agents scale, accountability becomes urgent. Most enterprise governance was built for human workforces and deterministic software. Neither maps onto semi-autonomous agents.
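One way to make the HR-lifecycle analogy concrete is a minimal agent registry that tracks the same stages a human hire passes through. Everything below (the state names, the `AgentRecord` class, the three-incident escalation rule) is a hypothetical sketch of the idea, not anything Briggs or Deloitte prescribes:

```python
from dataclasses import dataclass, field
from enum import Enum

class AgentState(Enum):
    ONBOARDING = "onboarding"       # scoped access, supervised runs only
    ACTIVE = "active"               # normal operation
    UNDER_REVIEW = "under_review"   # performance issue flagged
    SUSPENDED = "suspended"         # access revoked pending a decision

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                      # the human accountable for this agent
    scopes: set = field(default_factory=set)
    state: AgentState = AgentState.ONBOARDING
    incidents: int = 0

    def report_incident(self, threshold: int = 3) -> AgentState:
        """Log a mistake; a pattern of incidents escalates like a performance review."""
        self.incidents += 1
        if self.state is AgentState.ACTIVE and self.incidents >= threshold:
            self.state = AgentState.UNDER_REVIEW
        return self.state

# A single mistake stays a trouble ticket; a pattern triggers review.
agent = AgentRecord("inv-bot-7", owner="ap-team-lead", state=AgentState.ACTIVE)
for _ in range(3):
    agent.report_incident()
print(agent.state)  # AgentState.UNDER_REVIEW
```

The point of the sketch is the accountability chain: every agent has a named human owner, and "disciplinary action" becomes a state transition with an audit trail rather than an ad hoc ticket.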
The $100-to-$500K Surprise
Without AI ops discipline, inference costs swing wildly. Briggs's point: with mature governance, a $100 monthly bill won't become $500,000 the next month. But that requires rigor most orgs haven't built yet—per-developer keys, usage thresholds, monitoring. The same playbook that tamed cloud bill shock a decade ago.
But here's the catch: a lot of AI buying happens outside the CIO's purview. Vendors go straight to marketing or supply chain leads. By the time the tech org finds out, there's a cost problem and no governance. Shadow IT with inference bills attached.
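The cloud-FinOps playbook Briggs alludes to is straightforward to sketch: give every developer their own key, meter spend against a monthly budget, and alert before the bill compounds. The prices, key names, and cutoffs below are made-up illustrations, not a real billing API:

```python
from collections import defaultdict

# Hypothetical per-token price and per-key monthly budget (USD).
PRICE_PER_1K_TOKENS = 0.01
MONTHLY_BUDGET_PER_KEY = 100.00

spend = defaultdict(float)  # developer key -> month-to-date spend

def record_usage(key: str, tokens: int) -> list[str]:
    """Meter one inference call and return any budget alerts."""
    spend[key] += (tokens / 1000) * PRICE_PER_1K_TOKENS
    alerts = []
    if spend[key] >= MONTHLY_BUDGET_PER_KEY:
        alerts.append(f"{key}: budget exceeded (${spend[key]:.2f})")
    elif spend[key] >= 0.8 * MONTHLY_BUDGET_PER_KEY:
        alerts.append(f"{key}: 80% of budget used (${spend[key]:.2f})")
    return alerts

# A runaway batch job trips the alert long before the invoice arrives.
for _ in range(9):
    record_usage("dev-alice", 1_000_000)   # $10 per call
print(record_usage("dev-alice", 1_000_000))
# ['dev-alice: budget exceeded ($100.00)']
```

Per-key attribution is the piece that also catches the shadow-IT problem: spend that bypasses the CIO still shows up under someone's key.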
Why This Matters
The 93/7 ratio isn't just a budget problem. It's a diagnostic for three downstream failures: pilots that can't scale, inference costs that blow up without ops maturity, and an agent workforce with no governance model.
Briggs doesn't prescribe an exact ratio. One of his Deloitte colleagues suggests matching every tech dollar with eight or nine dollars of change investment, which nearly inverts the numbers. Nobody knows the right split, but it's a long way from 93/7.
The uncomfortable truth—and this connects to something I noticed at Tier that I didn't fully understand until now—is that the tech isn't the constraint. The constraint is whether the organization can absorb what the tech makes possible. That capacity lives in the 7%, not the 93%.
What Works
Start with outcomes, not use cases. Every AI conversation should anchor to a business metric. If you can't name it, you're not ready to deploy.
Fund the 7%. Change management, learning programs, process redesign. These aren't soft costs. They're the difference between a pilot and a product.
Build AI ops before you need it. Per-developer inference keys, usage thresholds, cost monitoring. The orgs that set this up early avoid the $500K surprise.
Treat agent governance like workforce governance. Onboarding, access controls, performance management, accountability chains. The field hasn't figured this out yet—but starting now beats retrofitting policy after 10,000 agents are running unsupervised.
Measure trust at every level. If your C-suite sees 70% confidence and your frontline sees 6.7%, no amount of tooling solves that.
This works when leadership invests in foundations. It doesn't work when the mandate is "deploy AI fast, show results by Q3"—which is still the default in most enterprises.
Full talk: Watch on YouTube