What Bloomberg Learned Deploying AI to 9,000 Engineers

Authors
  • Ptrck Brgr

Pull request count up. Time to merge up. Effective output? Flat.

In What We Learned Deploying AI within Bloomberg's Engineering Organization, Bloomberg's Lei Zhang explains why AI productivity gains don't scale: quick wins in prototypes and greenfield projects disappear when you hit organizational complexity. Bloomberg deployed AI across 9,000+ engineers and watched the productivity curve flatten almost immediately.

From enterprise AI deployments, I've seen this pattern repeat across business units. The teams that break through the scale ceiling don't solve it with better models—they solve it with platform thinking. The bottleneck isn't technical capacity, it's organizational infrastructure.

The Productivity Plateau

Bloomberg started like everyone else. Survey the landscape. Pick tools. Deploy. Measure.

Early results looked promising. Faster proof-of-concepts. More one-time scripts. Better test coverage. The usual suspects.

the measurements dropped actually pretty quickly when you go beyond all the green field type of thing — Lei Zhang, Bloomberg

AI coding tools excel at isolated tasks with clear contexts. Throw them at a hundred-million-line codebase with decades of dependencies? Performance collapses.

The math is brutal. System complexity grows polynomially with codebase size. AI context windows don't.
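A back-of-the-envelope sketch makes the mismatch concrete. All numbers here are my own illustrative assumptions, not Bloomberg's measurements: even if only a small, fixed fraction of a codebase is relevant to any one change, that slice eventually outgrows a fixed context window.

```python
# Illustrative arithmetic: why a fixed context window loses to a growing
# codebase. All constants are assumptions for the sketch, not measurements.

TOKENS_PER_LINE = 10          # rough average for source code
CONTEXT_WINDOW = 200_000      # tokens a large model can attend to

def relevant_tokens(codebase_lines: int, coupling: float = 0.001) -> int:
    """Tokens of code plausibly relevant to one change.

    Assume the slice of the system a change can touch grows with
    codebase size (cross-cutting dependencies), modeled here as a
    fixed fraction `coupling` of all lines.
    """
    return int(codebase_lines * coupling * TOKENS_PER_LINE)

for lines in (50_000, 1_000_000, 100_000_000):
    needed = relevant_tokens(lines)
    fits = needed <= CONTEXT_WINDOW
    print(f"{lines:>11,} lines -> ~{needed:>9,} relevant tokens, fits window: {fits}")
```

At fifty thousand lines the relevant slice fits comfortably; at a hundred million lines it no longer does, no matter how good the model is.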

Where ROI Lives

Bloomberg's insight: stop thinking about AI as coding assistance. Start thinking about it as software engineering automation.

They shifted focus to work developers actively avoid—maintenance, migrations, incident response. Tasks with clear inputs and outputs. Deterministic verification paths.

the average open pull requests increased and time to merge also increased because you spin a lot of new code and then still we have to review the code and merge the code — Lei Zhang, Bloomberg

Here's the catch: AI accelerates code creation but not code integration. More PRs mean more review burden. More review burden means slower cycle times. You optimize the wrong bottleneck.
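Little's law makes the dynamic concrete: average time to merge is open PRs divided by merge throughput, so doubling PR creation while review capacity stays flat roughly doubles cycle time once the backlog settles. The numbers below are illustrative, not Bloomberg's.

```python
# Little's law sketch: accelerating PR creation without accelerating
# review lengthens time-to-merge. Numbers are illustrative assumptions.

def time_to_merge_days(open_prs: float, merges_per_day: float) -> float:
    """Little's law: W = L / lambda (average time a PR spends open)."""
    return open_prs / merges_per_day

# Before AI: 40 open PRs, the team merges 10 per day.
print(time_to_merge_days(40, 10))   # 4.0 days

# After AI: PR creation doubles, review capacity unchanged, so the
# open-PR backlog grows toward 80 while throughput stays ~10 per day.
print(time_to_merge_days(80, 10))   # 8.0 days
```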

This pattern echoes my PhD work in autonomous systems: optimizing individual agent performance often degrades system performance. Same dynamic here.

The Platform Answer

Bloomberg's solution wasn't better prompts or smarter models. It was platform infrastructure.

They built what they call a "golden path"—a gateway for model selection, an MCP server directory for tool discovery, standardized deployment for custom tooling. Most importantly, they made the right patterns easy and the wrong patterns hard.

We want to make easy things extremely easy to do. Sorry, the right thing is extremely easy to do and we want to make sure the wrong thing is ridiculous hard to do — Lei Zhang, Bloomberg
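In practice, a golden-path gateway can be as simple as a routing layer that makes the approved choice the only reachable one. This is a hypothetical sketch; the task names, model names, and policy are my assumptions, not Bloomberg's actual gateway.

```python
# Hypothetical "golden path" gateway: one entry point routes every
# request to an approved model, so the right pattern is the default
# and unapproved models are simply unreachable. Names are invented.

APPROVED_MODELS = {
    "code-completion": "internal/code-model-v2",
    "incident-summary": "internal/general-model-v1",
}

def route(task: str) -> str:
    """Return the approved model for a task; refuse anything else."""
    try:
        return APPROVED_MODELS[task]
    except KeyError:
        raise ValueError(
            f"No approved model for task '{task}'. "
            "Register the task with the platform team first."
        )

print(route("code-completion"))  # internal/code-model-v2
```

The design choice is the point: the easy call site does the right thing, and doing the wrong thing requires going around the platform entirely.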

This matches what I observed at ENVAIO when we were scaling edge ML deployments. Teams that build infrastructure for their AI initiatives outperform teams that build AI applications directly. The constraint becomes the accelerant.

But here's where it gets interesting. Bloomberg didn't just build technical infrastructure—they built organizational infrastructure. Training programs. Champion networks. Leadership enablement workshops.

The Change Function

The most fascinating insight from Lei's talk isn't technical—it's economic.

with a lot of creativity and innovation in the [Gen]AI space, it actually changes the cost function of software engineering — Lei Zhang, Bloomberg

Some work becomes dramatically cheaper (uplift agents for code patching). Some work becomes more expensive (code review and integration). The strategic question isn't "how do we use AI tools?" It's "which work should we automate first?"

Bloomberg's approach: automate the work developers don't want to do. Manual maintenance. Legacy system integration. Incident response correlation. Tasks with high cognitive overhead but low creative value.

Why This Matters

The enterprise AI adoption curve follows a predictable pattern. Initial excitement. Quick wins in demos. Productivity plateau at scale. Executive skepticism.

Most organizations get stuck at the plateau because they're solving the wrong problem. They optimize individual productivity when the constraint is system productivity. They focus on better tools when the constraint is better processes.

Bloomberg's data shows something I didn't expect: individual contributors adopt AI tools faster than leadership teams. The organizational bottleneck isn't technical competence—it's managerial competence. Leaders learned software engineering in a pre-AI world. Their mental models don't account for AI's cost function changes.

The teams that break through invest in three areas: technical platform infrastructure, organizational change management, and leadership enablement. Most teams invest in one. Few invest in all three.

What Works

Start with work developers avoid. Maintenance, migrations, incident correlation. High-volume, low-creativity tasks with deterministic verification paths.

Build platform infrastructure before optimizing individual tools. Gateway for model selection. Directory for tool discovery. Standardized deployment for custom workflows. Make the right patterns obvious.

Integrate AI training into existing onboarding programs. New hires learn AI-native workflows from day one. They become change agents when they join established teams.

Create cross-functional communities for knowledge sharing. Champion programs. Guild structures. Organic adoption beats top-down mandates in complex organizations.

Invest in leadership enablement. The constraint isn't developer capability—it's management understanding of AI's economic implications.

This works for organizations with platform thinking and change management discipline. Most enterprises have neither. Without both, you get expensive pilots that don't scale.

Full talk: Watch on YouTube