Published on

Every MCP Tool Is a Door. Most Teams Leave Them Wide Open

Authors
  • avatar
    Name
    Ptrck Brgr
    Twitter

Twenty simultaneous connections. Twenty out of twenty-two requests failed. That's how fast a standard IO MCP server collapses under real concurrency.

Tun Shwe and Jeremy Frenay at Lenses explain why in Your Insecure MCP Server Won't Survive Production—and the core argument isn't what I expected. Not "add OAuth." It's "fix your tool design first, because no auth layer rescues a badly shaped interface."

At ENVAIO, we designed IoT surfaces for devices with severe resource constraints—curate what the constrained client needs, not everything the backend can do. Agents are the same kind of constrained client. Limited context window, no intuition, no memory between sessions. Most MCP servers treat them like human developers browsing docs. That mismatch is where security problems start.

The Discovery Tax

Every time an agent connects to an MCP server, it enumerates every tool and reads every description. Humans scan docs once, find three endpoints, move on. Agents can't. That difference matters more than most teams realize.

Every time it connects to an MCP server, it enumerates every single tool and reads every single description. — Tun Shwe, Lenses

That's not just expensive in tokens. Each description is a surface for tool poisoning—attackers embedding hidden instructions the model follows without question. The OWASP MCP Top 10 lists tool poisoning at number three and context injection at number ten. Discovery itself becomes a vulnerability.

The fix is ruthless curation. Coarse-grained operations that produce outcomes, not fine-grained CRUD mirroring the underlying API. Don't give the agent access to delete users when it just needs to check an order.

Design Is Security

If you get the design wrong, no amount of OAuth will save you. — Tun Shwe, Lenses

Shwe lays out five principles that amount to one insight: good MCP design and good MCP security are the same discipline. Shrink the surface by consolidating operations. Constrain inputs—enums, Pydantic, no free-form nested payloads. Write complete documentation so poisoned neighboring servers can't shadow yours. Return only what the agent needs. Scope permissions per tool, not per session.

None of this requires OAuth. All of it reduces your threat surface before a single line of auth code.

I'm skeptical most teams do this. The pattern is consistent—teams jump to identity plumbing because it feels like "real security." But the interface is where you decide what the agent can reach, and that decision matters more than how you verify who's reaching.

The Cliff Between Local and Production

You can't do a little bit of production. You're either behind the wall or you're standing out in the open. — Tun Shwe, Lenses

Standard IO mode is comfortable. Single user, local process, no network exposure. But the jump to streamable HTTP is a cliff, not a ramp. OAuth, token management, CORS, TLS, rate limiting—all at once.

The numbers are brutal: 20 out of 22 requests failed with just 20 simultaneous connections on standard IO.

This maps to a pattern that keeps repeating. Teams prototype locally, demo well, then stall because security requirements hit all at once. A great prototype means nothing if you can't cross that gap.

The OAuth Maze

Jeremy Frenay picks up the auth side. An MCP authorization server requires more than 10 specifications—core OAuth flow, client discovery, metadata, token lifecycle. And traditional OAuth assumes you know your clients up front. With MCP, that breaks. Any client can discover and connect to any server at runtime.

Dynamic Client Registration solves self-registration but creates new problems: non-portable registrations, phishing risk, weak identity verification. A malicious client can claim to be Claude. The server can't tell.

CIMD—client ID metadata documents at URLs controlled by the client owner—is the preferred direction since late 2025. But (and this is the part nobody wants to hear) even CIMD handles only one layer. Enterprise production still needs per-tool permissions, data masking, audit logging, and end-to-end tracing. More than 50% of MCP servers still run on long-lived, unscoped API keys.

The Organizational Gap

Here's the question I don't have a clean answer for: does better protocol design actually make MCP enterprise-ready, or does it just make the protocol layer less embarrassing while the harder problems stay unsolved?

Shwe and Frenay present a strong technical case. But the ceiling for agent systems is rarely the protocol—it's ownership. Who approves which agents to connect? Who's accountable when a tool misbehaves at 2 AM? Frenay notes that tracing for agentic AI follows distributed-systems observability principles. Most organizations haven't solved that for microservices yet.

What's missing from the talk: a gateway layer. Per-server design principles don't scale to fifty servers across ten teams. An MCP gateway—rate limiting, token scoping, tool-level access control, audit logging—applied once at the proxy is the right pattern. API management learned this a decade ago: you don't enforce policy at every endpoint. You enforce it at the gateway.

My read: reduce tool count, add a gateway, then worry about per-server OAuth. That sequence matters.

Why This Matters

Getting MCP design wrong compounds fast. Every extra tool is a door. Every unmasked field is data one injection away from exfiltration. Every retry broadcasts your conversation history—sensitive data included.

Shwe's five design principles cost nothing to apply today. The full OAuth + CIMD + token exchange stack requires serious engineering. Most teams skip design and jump to auth—ending up with a well-authenticated server that's still badly designed.

The protocol can be perfect. If your tools are bloated and your data exposure is careless, agents fail expensively.

What Works

Shrink your tool surface first. Consolidate fine-grained operations into outcome-oriented tools. Strip everything the agent doesn't need for its immediate task.

Constrain inputs. Enums, flat structures, validated types. Free-form string arguments are where injection lives.

Treat documentation as defense. Complete tool descriptions crowd out space for poisoning from neighboring servers.

Move past API keys. Short-lived, scoped tokens via OAuth 2.1. CIMD over DCR if you can. But auth is step two, not step one.

Build observability before you scale. Which agent called which tool, what parameters, what data came back. If you can't trace it, you can't govern it.

These principles assume you control the MCP server. For third-party servers you can't redesign, the gateway becomes your enforcement layer—tool filtering, credential injection, response masking, audit logging all happen at the proxy. What a gateway can't do: verify that a tool actually does what its description claims. You still need to curate which servers you connect to. The gateway controls the pipe. Trust in the endpoint is a different problem.

Full talk: Watch on YouTube