From Agile to AI-Native: Rethinking Software Delivery
By Ptrck Brgr
Most teams are bolting AI onto agile workflows and getting marginal gains—according to McKinsey, maybe 20-40% faster initially. Then they plateau. AI speeds up coding, but review queues balloon, work allocation gets chaotic, and the bottlenecks just shift.
Martin Harrysson and Natasha Maniar at McKinsey make the case for AI-native workflows in "Moving away from Agile: What's Next": continuous planning instead of quarterly sprints, 3-5 person pods instead of two-pizza teams, and deliberate task allocation between humans and agents.
The lesson from enterprise deployments: orchestration, not tools, separates winners from losers. Teams that break through don't add AI to existing processes. They redesign workflows and roles around what AI can and can't do.
Main Story
The core problem: uneven productivity. AI makes coding faster, but everything else stays slow—work allocation, reviews, legacy team structures. The bottleneck just shifts. Faster code generation creates review backlogs and technical debt.
Most large companies today are stuck a little bit in a world of relatively marginal gains. — Martin Harrysson, McKinsey & Company
Work allocation is now a critical rate limiter. AI excels at some tasks but struggles with others, and developer skill with agents varies widely. Without deliberate matching of tasks to human and agent strengths, inefficiencies grow.
In practice, this manifests as a coordination crisis. I've observed teams where junior engineers get complex architectural tasks while senior engineers spend hours reviewing trivial boilerplate that AI generated perfectly. Effective AI-native teams develop explicit decision frameworks: agents own routine implementation, humans own ambiguous requirements and architectural trade-offs.
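One way to make such a decision framework explicit is to encode it as a routing rule. A minimal sketch follows, assuming hypothetical task attributes and a policy invented for illustration; none of it comes from the McKinsey discussion itself:

```python
from dataclasses import dataclass
from enum import Enum


class Owner(Enum):
    AGENT = "agent"  # routine, well-specified implementation
    HUMAN = "human"  # ambiguity, architecture, trade-offs


@dataclass
class Task:
    name: str
    has_precise_spec: bool        # acceptance criteria are explicit and testable
    is_architectural: bool        # involves cross-cutting design decisions
    requirements_ambiguous: bool  # needs stakeholder judgment to pin down


def route(task: Task) -> Owner:
    """Assign a task to an agent or a human using an explicit, inspectable policy."""
    if task.is_architectural or task.requirements_ambiguous:
        return Owner.HUMAN
    if task.has_precise_spec:
        return Owner.AGENT
    # Not ambiguous, but under-specified: a human tightens the spec
    # first, then delegates to an agent.
    return Owner.HUMAN


if __name__ == "__main__":
    backlog = [
        Task("add CRUD endpoints for invoices", True, False, False),
        Task("choose event bus vs. polling", False, True, False),
    ]
    for task in backlog:
        print(f"{task.name} -> {route(task).value}")
```

The value isn't these specific rules; it's that the allocation policy is written down, reviewable, and adjustable as the team learns where agents actually succeed.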
Review processes are often mismatched to AI output. Agents working from loose acceptance criteria produce code that needs heavy manual correction, eroding the automation gains.
Top performers move from quarterly planning to continuous cycles, shift from story-driven to spec-driven development, and integrate AI across the full SDLC. Pods shrink to 3–5 people, consolidating roles, with product builders orchestrating agents end-to-end.
AI native roles essentially means that we're moving away from the two pizza structure to one pizza pods of three to five individuals. — Natasha Maniar, McKinsey & Company
Roles need to change. Engineers orchestrate agents instead of writing every line. PMs prototype in code instead of writing specs. QA merges into the pod. But most enterprises keep the old org chart—and wonder why the gains don't materialize.
Scaling AI-native practices demands disciplined change management—clear role expectations, hands-on upskilling, and robust measurement systems that track inputs, outputs, and economic outcomes.
Technical Considerations
- Workflow integration: Agents must be embedded across multiple SDLC stages to avoid isolated gains
- Task matching: Use delivery history and velocity data to align work with human/agent strengths
- Spec-driven dev: Replace story-driven sprints with precise specifications to guide agent output
- Review automation: Enhance acceptance criteria and automate validation to reduce manual rework (see the sketch after this list)
- Pod tooling: Equip small pods with unified tools for orchestration, testing, and deployment
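To make "spec-driven" and "review automation" concrete: a minimal sketch, assuming a hypothetical spec format and a sample agent-generated function (none of this is from the source). Acceptance criteria become executable validators, so agent output is machine-checked before a human ever looks at it:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Criterion:
    """One acceptance criterion: human-readable text paired with an executable check."""
    description: str
    check: Callable[[], bool]


def validate(spec: list[Criterion]) -> list[str]:
    """Run every criterion; return the descriptions of any that fail."""
    return [c.description for c in spec if not c.check()]


# Hypothetical agent-generated function under review.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())


# The spec: precise enough for an agent to target, and therefore
# precise enough to check automatically.
spec = [
    Criterion("lowercases input", lambda: slugify("Hello World") == "hello-world"),
    Criterion("collapses repeated spaces", lambda: slugify("a   b") == "a-b"),
    Criterion("empty input yields empty slug", lambda: slugify("") == ""),
]

if __name__ == "__main__":
    failures = validate(spec)
    print("PASS" if not failures else f"FAIL: {failures}")
```

The design point is circular on purpose: criteria precise enough to guide an agent are, by construction, precise enough to automate, and that is what shrinks the review backlog.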
Business Impact & Strategy
- Velocity gains: Continuous planning cycles shorten delivery timelines
- Cost efficiency: Smaller pods reduce overhead while covering full-stack responsibilities
- Quality control: Better specs and review alignment cut technical debt
- Role clarity: Redefined responsibilities improve collaboration and agent use
- Outcome tracking: MECI framework links AI investment to economic impact
Key Insights
- Marginal gains dominate when AI is layered onto legacy agile workflows
- Work allocation is now a primary bottleneck in AI-assisted development
- Loose acceptance criteria undermine automation benefits
- AI-native workflows use continuous planning and spec-driven development
- Smaller pods consolidate roles and orchestrate agents end-to-end
- Scaling AI-native practices requires coordinated change management and measurement
Why It Matters
This shift is structural—comparable to moving from waterfall to agile. AI-native teams operating in 3-5 person pods can deliver what traditional 8-12 person agile teams produce. But reaching that efficiency takes 3-6 months of workflow redesign, role redefinition, and intensive upskilling.
For technical teams, the work changes—less coding from scratch, more orchestrating agents and refining specifications. For business leaders, AI-native delivery directly impacts time-to-revenue, cost per feature, and customer responsiveness. The gains come from systemic changes in structure, roles, and governance—not from tools alone.
Actionable Playbook
- Redesign workflows: Move from story-driven sprints to spec-driven continuous planning; integrate agents across multiple SDLC stages
- Restructure teams: Form 3–5 person pods with full-stack capabilities and agent orchestration skills
- Refine work allocation: Use analytics to assign tasks based on human/agent strengths and past delivery patterns
- Embed change management: Provide hands-on labs, clear role expectations, and coach-led sprints during adoption
- Measure holistically: Apply MECI to track investments, delivery speed, quality, and economic outcomes (a generic measurement sketch follows this list)
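The source doesn't spell out MECI's metric definitions, so the following is a generic stand-in rather than the framework itself; the fields and the loaded hourly rate are assumptions. What it illustrates is the playbook's last item: inputs, outputs, and economic outcomes tracked together per pod.

```python
from dataclasses import dataclass

LOADED_HOURLY_RATE_USD = 120  # assumed fully loaded engineering cost


@dataclass
class PodDeliveryMetrics:
    """One reporting period for one pod. Field choices are illustrative."""
    # Inputs
    ai_spend_usd: float           # licenses, inference, tooling
    engineer_hours: float
    # Outputs
    features_shipped: int
    cycle_time_days: float        # spec approved -> running in production
    rework_rate: float            # share of agent output needing manual fixes
    # Economic outcome
    revenue_attributed_usd: float

    def cost_per_feature(self) -> float:
        total = self.ai_spend_usd + self.engineer_hours * LOADED_HOURLY_RATE_USD
        return total / max(self.features_shipped, 1)


quarter = PodDeliveryMetrics(8_000, 1_600, 12, 9.5, 0.18, 240_000)
print(f"cost/feature: ${quarter.cost_per_feature():,.0f}, rework: {quarter.rework_rate:.0%}")
```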
Conclusion
AI in software delivery needs more than tools—it needs a structural shift. The biggest gains come when workflows, roles, and measurement systems are redesigned for AI-native operation.
Questions or feedback? Reach out, or dive deeper in the full discussion: https://www.youtube.com/watch?v=SZStlIhyTCY.