Scaling AI in Mature Engineering Orgs
By Ptrck Brgr
AI wins on prototypes mean nothing. Bloomberg deployed AI across 9,000+ engineers and watched the gains evaporate when teams tried scaling to real systems. The work that looked easy—greenfield code, simple scripts—plateaued fast. Value stuck where complexity met repetition: the maintenance nobody wants to do.
Lei Zhang at Bloomberg breaks down what actually worked in "What We Learned Deploying AI within Bloomberg's Engineering Organization": platform design and culture investment outweigh model selection by a mile. Watch: https://www.youtube.com/watch?v=Q81AzlA-VE8.
Platform and culture co-evolution separates teams that sustain AI gains from teams that plateau after pilots. Organizations that standardize enablement and embed training into onboarding scale adoption. Those that skip infrastructure or cultural investment watch enthusiasm die after initial wins: fragmentation and inconsistent quality kill momentum.
Where AI Actually Works
Small tasks? Prototypes? Scripts? Strong gains initially. Then nothing. Large mature systems showed the opposite pattern—slow start, sustained value. The difference: targeting work developers avoid. Complex, repetitive, low-status maintenance.
The Verification Trap
Uplift agents scan code, generate patches automatically. Sounds great. Reality check: without robust tests, you're blindly merging AI changes into production. Poor test coverage makes automated patching dangerous, not helpful.
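The remedy is mechanical: gate automated patches on deterministic signals. A minimal sketch in Python, assuming a hypothetical PatchCheckResult that your CI already produces; the names and thresholds are illustrative, not Bloomberg's implementation:

```python
from dataclasses import dataclass

@dataclass
class PatchCheckResult:
    tests_passed: bool    # did the existing test suite pass on the patched code?
    coverage_pct: float   # line coverage of the files the patch touches
    lint_clean: bool      # deterministic static checks

def can_auto_merge(result: PatchCheckResult, min_coverage: float = 80.0) -> bool:
    """Let an AI-generated patch skip human review only when deterministic
    checks give a real signal; thin coverage routes it to a reviewer instead."""
    return result.tests_passed and result.lint_clean and result.coverage_pct >= min_coverage

# A passing suite over barely covered code is not evidence the patch is safe.
print(can_auto_merge(PatchCheckResult(tests_passed=True, coverage_pct=35.0, lint_clean=True)))  # False
```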
The other problem—volume. AI generates code faster than humans review it. PR queues balloon. Merge times stretch.
Average open pull requests increased and time to merge also increased because you spin a lot of new code… then still we have to review the code and merge the code. — Lei Zhang, Bloomberg
Speed without orchestration creates bottlenecks downstream. AI output isn't the constraint—review capacity is.
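One way to make that constraint explicit is a saturation check that agents consult before opening new PRs. A sketch with illustrative thresholds; the function name and limits are assumptions, not from the talk:

```python
from statistics import median

def review_queue_saturated(open_pr_ages_hours: list[float],
                           max_open: int = 50,
                           max_median_age_hours: float = 48.0) -> bool:
    """True when reviewers can't keep up: too many open PRs, or PRs
    sitting too long before merge. Thresholds are illustrative."""
    if not open_pr_ages_hours:
        return False
    return (len(open_pr_ages_hours) > max_open
            or median(open_pr_ages_hours) > max_median_age_hours)

# An agent checks this gate before opening another PR, throttling generation
# speed to what the review pipeline can actually absorb.
if review_queue_saturated([12.0, 70.0, 96.0]):
    print("hold new AI-generated PRs")
```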
Incident Response: The Unbiased Investigator
Troubleshooting has a bias problem. Engineers form theories about what's broken, then cherry-pick evidence that confirms those theories. Standard debugging pattern. Also wrong half the time.
AI agents traverse telemetry, feature flags, traces, dependency graphs without preconceptions. They check everything. No pet theories to defend.
It's very fast and it's also unbiased… in troubleshooting sometimes we have this biased views—it must be this. It turns out to be not the case. — Lei Zhang, Bloomberg
Speed matters. Lack of bias matters more. An agent doesn't need to prove it was right last time.
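A toy sketch of that evidence-first loop in Python; the source names and findings are hypothetical stand-ins for real telemetry, feature-flag, trace, and dependency-graph APIs:

```python
from typing import Callable

# Hypothetical fetchers; in practice each would query the incident window
# in your telemetry, feature-flag, tracing, and dependency-graph systems.
SOURCES: dict[str, Callable[[str], list[str]]] = {
    "telemetry":        lambda incident: ["error rate up on service-b"],
    "feature_flags":    lambda incident: ["flag fast-cache enabled on service-b 10m before alert"],
    "traces":           lambda incident: ["p99 latency spike on calls into service-b"],
    "dependency_graph": lambda incident: ["service-a depends on service-b"],
}

def investigate(incident_id: str) -> list[tuple[str, int]]:
    # Sweep every source before forming any hypothesis, so no pet theory
    # decides which evidence gets collected.
    evidence = {name: fetch(incident_id) for name, fetch in SOURCES.items()}
    # Rank candidate culprits by how many independent sources implicate them.
    counts: dict[str, int] = {}
    for findings in evidence.values():
        for finding in findings:
            for candidate in ("service-a", "service-b"):
                if candidate in finding:
                    counts[candidate] = counts.get(candidate, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

print(investigate("INC-1234"))  # service-b surfaces first, whatever the on-call's hunch was
```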
The Paved Path: Platform Over Chaos
Without platform constraints, every team builds different AI workflows. Different models. Different tools. Different standards. Knowledge doesn't transfer. Components don't compose. Debugging spans tool boundaries.
Bloomberg built guardrails: model gateway, component discovery, managed deployment with auth and runtime. Teams stay creative within boundaries. Platform handles reliability.
The alternative—hundreds of teams reinventing AI infrastructure independently—doesn't scale.
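As a rough sketch, a model gateway can be as thin as a shared client every team calls instead of a provider SDK. The endpoint, payload shape, and token handling below are invented for illustration, not Bloomberg's API:

```python
import json
import urllib.request

# Hypothetical internal endpoint; the point is one gateway handling auth,
# quotas, and model routing instead of every team integrating providers alone.
GATEWAY_URL = "https://ai-gateway.internal.example.com/v1/complete"

def complete(prompt: str, model: str = "default", token: str = "SERVICE_TOKEN") -> str:
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    # The gateway decides which backing model serves the request and logs usage,
    # so swapping models later is a platform change, not a per-team migration.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]
```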
Culture: Onboarding and Champions
New hires learn AI tools day one. No legacy habits to unlearn. They come back from training ready to challenge old processes.
Champions and guilds spread what works. Kill duplicate efforts. Boost inner source contributions. Organic diffusion beats top-down mandates.
Leadership adoption lagged initially; individual contributors moved faster. Workshops closed the gap: executives needed hands-on exposure, not briefing decks.
Technical Considerations
- Target AI at high-complexity, repetitive work where human effort is costly
- Ensure robust verification pipelines—tests, linters, deterministic checks—for safe automation
- Use shared platform components to reduce duplication and enforce guardrails
- Manage AI-generated code volume to avoid merge backlogs
- Coordinate agent development to prevent conflicting or redundant implementations
Business Impact & Strategy
- Increased automation in maintenance and refactoring reduces operational drag
- Faster incident response improves service reliability and customer trust
- Standardized enablement platforms lower time-to-value for AI projects
- Embedding AI in onboarding drives early cultural adoption
- Leadership readiness determines sustained organizational alignment
Key Insights
- Quick wins in greenfield coding don’t translate to mature systems
- AI impact is highest in complex, low-preference engineering work
- Verification infrastructure is a prerequisite for safe automation
- Platform guardrails prevent fragmentation and enforce quality
- Cultural adoption accelerates with embedded training and communities
- Leadership must understand AI’s capabilities and limits to guide teams
Why This Matters
The engineering cost function flips. Work that was too expensive becomes viable. Work that was cheap gets expensive from review overhead. This demands ruthless prioritization—point AI at what's prohibitively costly manually, skip everything else.
Platform investment separates sustained wins from pilot theater; enthusiasm without infrastructure dies fast. Bloomberg proved the pattern: standardized tools, embedded training, targeted deployment. Skip one leg and the whole thing collapses.
Actionable Playbook
- Automate High-Complexity Maintenance: Deploy agents for refactoring and migration; track reduction in manual effort hours
- Build a Standardized AI Platform: Implement gateway, discovery hub, and managed deployment; measure reuse of components
- Embed AI in Onboarding: Train new hires on AI tools; track adoption rate within the first 90 days (see the measurement sketch after this list)
- Foster Cross-Team Communities: Use champion/guild structures; monitor reduction in redundant builds
- Equip Leadership: Run AI capability workshops; assess changes in project guidance quality
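A small sketch of that 90-day adoption metric, assuming a hypothetical record per hire; the field names are illustrative:

```python
from datetime import date

def adoption_rate_90d(hires: list[dict], as_of: date) -> float:
    """Share of engineers hired in the last 90 days who have used the AI tools
    at least once. Assumed record shape: {"start": date, "used_ai_tools": bool}."""
    cohort = [h for h in hires if 0 <= (as_of - h["start"]).days <= 90]
    if not cohort:
        return 0.0
    return sum(h["used_ai_tools"] for h in cohort) / len(cohort)

print(adoption_rate_90d(
    [{"start": date(2024, 5, 1), "used_ai_tools": True},
     {"start": date(2024, 5, 20), "used_ai_tools": False}],
    as_of=date(2024, 7, 1),
))  # 0.5
```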
What Actually Works
Target complex, repetitive work developers avoid. Simple tasks plateau. Maintenance scales.
Build verification before automation. Test infrastructure determines whether automated patches are safe or dangerous. No shortcuts.
Create platform guardrails early. Every team building custom AI workflows independently guarantees fragmentation. Centralize what needs standardizing, distribute what needs flexibility.
Embed training in onboarding. New hires without legacy habits adopt fastest. Champions and guilds spread practices organically—better than mandates.
Train leadership hands-on. Briefing decks don't work. Executives need direct exposure to understand capabilities and limits.
The pattern works at Bloomberg's scale—9,000 engineers, dedicated platform teams, training infrastructure. Smaller orgs face harder tradeoffs. Platform investment upfront or accept inconsistent quality. There's no middle path.
Full discussion: https://www.youtube.com/watch?v=Q81AzlA-VE8.