The Context Problem
By Ptrck Brgr
Context engineering is the #1 lever for agent performance. Not model upgrades. Not better orchestration. Context.
Mahesh Murag of Anthropic makes the point in his talk Building Agents with Model Context Protocol: agents fail when they can't access the right data at runtime, not when they lack reasoning power.
Most teams chase new models when they should fix their context problems. From enterprise deployments, I've seen this pattern repeat—the teams that solve data access first build agents that actually ship. The rest get stuck in integration hell.
The Copy-Paste Ceiling
Before MCP, agents got context the hard way: copy-paste, manual uploads, static prompts. Works fine for demos. Breaks immediately in production.
Models are only as good as the context we provide to them. — Mahesh Murag, Anthropic
The problem isn't technical. It's organizational. Your sales team needs CRM data. Your support team needs ticket history. Your analysts need database access. Building custom integrations for each use case costs $500K minimum and takes months.
But here's the catch—model capability doesn't matter if the agent can't reach your systems. GPT-4o with bad context loses to Claude 3 Haiku with good context. Every time.
This is where I'm skeptical of the "just use a better model" approach. In enterprise deployments, I've seen teams burn quarters chasing model upgrades while their agents fail on basic data retrieval. The math doesn't work.
Protocol Over Plumbing
MCP flips the script. Instead of building custom integrations, you build MCP servers—lightweight interfaces that expose resources, tools, and prompts.
MCP enables seamless integration between AI apps and agents and your tools and data sources. — Mahesh Murag, Anthropic
Think APIs for agents. Your database becomes an MCP server. Your file system becomes an MCP server. Your Slack workspace becomes an MCP server. Agents connect once and get access to everything.
The pattern is simple:
- Resources: Data your agent can read (files, database records, API responses)
- Tools: Actions your agent can take (send email, create tickets, run queries)
- Prompts: Templates for common tasks (meeting summaries, code reviews, analysis workflows)
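The three primitives above can be sketched in plain Python. This is a conceptual model only: the `ContextServer` class and its decorator names are hypothetical stand-ins, not the official MCP SDK, which also handles transport, schemas, and capability negotiation.

```python
# Conceptual sketch of the three MCP primitives: resources (readable data),
# tools (callable actions), and prompts (reusable templates).
# ContextServer and its methods are illustrative, not the real MCP API.

class ContextServer:
    def __init__(self, name):
        self.name = name
        self.resources = {}  # URI pattern -> reader function
        self.tools = {}      # tool name -> callable action
        self.prompts = {}    # prompt name -> template string

    def resource(self, uri):
        # Decorator: register a reader under a URI pattern.
        def register(fn):
            self.resources[uri] = fn
            return fn
        return register

    def tool(self, fn):
        # Decorator: register an action under its function name.
        self.tools[fn.__name__] = fn
        return fn

    def prompt(self, name, template):
        self.prompts[name] = template

server = ContextServer("crm")

@server.resource("crm://accounts/{id}")
def read_account(account_id):
    # Stand-in for a real CRM lookup.
    return {"id": account_id, "name": "Acme", "tier": "enterprise"}

@server.tool
def create_ticket(subject):
    # Stand-in for a real ticketing API call.
    return f"ticket created: {subject}"

server.prompt("summary", "Summarize account {id} for a sales call.")

# An agent discovers capabilities instead of hard-coding integrations:
print(sorted(server.tools))  # ['create_ticket']
print(server.resources["crm://accounts/{id}"]("42")["name"])  # Acme
```

The point of the shape is discoverability: the agent inspects what a server offers at runtime, so adding a new data source means registering it once, not writing a new integration per agent.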
Three deployment patterns emerge:
- Client-server: agents connect to multiple MCP servers.
- Sub-agent: agents control their own MCP servers for specialized tasks.
- Skills: modular expertise packages that agents can load dynamically.
The sub-agent pattern gets interesting: agents spawn specialized sub-agents, each with its own context server. It matches what I observe when building skill-based architectures. Modular expertise beats monolithic orchestration, obvious as that sounds in retrospect.
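A minimal sketch of that routing structure, under stated assumptions: `ParentAgent`, `SubAgent`, and the dict-as-context-server are all hypothetical simplifications; a real sub-agent would call a model with its focused context rather than format a string.

```python
# Illustrative sketch of the sub-agent pattern: a parent agent routes
# tasks to specialized sub-agents, each holding only its own narrow
# context. Names are hypothetical, not an MCP-defined API.

class SubAgent:
    def __init__(self, domain, context_server):
        self.domain = domain
        # A dict of URI -> description stands in for a focused MCP server.
        self.context = context_server

    def handle(self, task):
        # A real sub-agent would prompt a model with this context.
        sources = ", ".join(sorted(self.context))
        return f"[{self.domain}] handled '{task}' using {sources}"

class ParentAgent:
    def __init__(self):
        self.subagents = {}

    def spawn(self, domain, context_server):
        self.subagents[domain] = SubAgent(domain, context_server)

    def route(self, domain, task):
        return self.subagents[domain].handle(task)

parent = ParentAgent()
parent.spawn("billing", {"db://invoices": "invoice table",
                         "db://plans": "plan table"})
parent.spawn("support", {"api://tickets": "ticket history"})

print(parent.route("billing", "refund check"))
```

The design choice worth noting: each sub-agent sees only its own servers, so context stays small and relevant instead of one agent dragging every data source into every task.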
Where ROI Lives
The economics shift fast. Instead of $500K custom builds per domain, you build MCP servers once and reuse them across agents. Database access server works for finance, sales, support, and analytics. File system server works for legal, HR, engineering, and operations.
I could be wrong here, but early MCP adoption looks like early API adoption—scattered experiments, then sudden ecosystem growth. Anthropic reports rapid server development across databases, productivity tools, and enterprise systems.
The skills pattern deserves attention. Rather than hard-coding domain expertise into agent logic, you package knowledge into portable skills that agents load at runtime. This is where the 60-80% reduction in onboarding time comes from: new domains reuse existing skill libraries instead of commissioning custom development.
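One way to picture "expertise as a loadable package" is skills as versioned data rather than code. The `Skill` and `SkillLibrary` names below are my illustration of the idea, not a defined MCP concept.

```python
# Hedged sketch of the skills pattern: domain knowledge packaged as
# versioned data that any agent loads at runtime, instead of being
# hard-coded into agent logic. Skill/SkillLibrary are illustrative names.

from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    version: str
    instructions: str  # domain knowledge the agent consumes as context
    tools: list = field(default_factory=list)  # tool names the skill expects

class SkillLibrary:
    def __init__(self):
        self._skills = {}

    def publish(self, skill):
        # Keyed by (name, version) so skills can be version-controlled.
        self._skills[(skill.name, skill.version)] = skill

    def load(self, name, version):
        return self._skills[(name, version)]

library = SkillLibrary()
library.publish(Skill(
    "contract-review", "1.0",
    "Check termination and liability clauses first.",
    tools=["read_document"],
))

# Any agent in any domain reuses the same packaged expertise:
skill = library.load("contract-review", "1.0")
print(skill.instructions)
```

Because a skill is data, non-technical teams can author and revise it without touching agent code, which is what makes the reuse economics work.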
At Tier, we hit this exact problem with edge deployment agents. Custom integrations for each data source meant weeks of engineering time per new vehicle type. A protocol approach would have compressed that to hours.
The Integration Trap
Most agent platforms fail because they optimize for features, not protocols. They build 200 integrations instead of one extensible interface. This creates vendor lock-in and fragmentation.
MCP inverts this. The protocol is open. Servers are lightweight. Switching costs stay low. The ecosystem can grow without Anthropic's permission—crucial for enterprise adoption where compliance and control matter more than convenience.
The pattern repeats: platforms that enable ecosystems win over platforms that control them. MCP follows the Unix philosophy—small, composable tools that do one thing well.
Why This Matters
Context access determines which agents ship and which ones spiral. Manual context gathering doesn't scale. Custom integrations fragment teams. Static prompts degrade over time.
MCP solves the infrastructure problem so teams can focus on the domain problem. Instead of spending months building data plumbing, you spend weeks encoding business logic. The bottleneck moves from scarce integration engineering to abundant domain expertise.
This matters because agent deployment is currently an engineering problem pretending to be an AI problem. Most failures trace to data access, not reasoning quality. Fix context and average models outperform frontier models with bad data.
What Works
Build MCP servers for your core data systems. Start with databases and file systems—highest usage, clearest ROI.
Package domain expertise into skills, not prompts. Version control them. Let non-technical teams contribute. Measure skill reuse across agents and domains.
Focus on the sub-agent pattern for complex workflows. Let agents spawn specialized sub-agents with focused context servers. This scales better than monolithic orchestration.
Test with lightweight servers first. Database connectors and API wrappers require minimal code but unlock major workflows. Complex tools come later.
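To make "minimal code, major workflows" concrete, here is a sketch of a lightweight database connector: a few lines wrapping SQLite behind a read-only query method. The structure is illustrative; a real MCP server would expose this over the protocol, and the SELECT-prefix check is a naive guard, not production access control.

```python
# Minimal sketch of a lightweight data connector: SQLite behind a
# read-only query interface. Illustrative only; a real deployment
# would enforce permissions properly and speak the MCP protocol.

import sqlite3

class ReadOnlyDB:
    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        # Naive guard: reject anything that isn't a SELECT.
        if not sql.lstrip().lower().startswith("select"):
            raise ValueError("read-only: SELECT statements only")
        return self.conn.execute(sql, params).fetchall()

db = ReadOnlyDB(":memory:")
db.conn.executescript(
    "CREATE TABLE accounts (id INTEGER, name TEXT);"
    "INSERT INTO accounts VALUES (1, 'Acme'), (2, 'Initech');"
)

print(db.query("SELECT name FROM accounts WHERE id = ?", (1,)))  # [('Acme',)]
```

A wrapper this small already unlocks every read-oriented workflow against that database, which is why database connectors and API wrappers make good first servers.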
This works when your data is already structured and accessible. Messy databases and unstructured files still need cleanup before MCP helps. Don't skip data hygiene for protocol adoption.
Full talk: Watch on YouTube