Replit's Bet: AI Agents Without Training Wheels
- Author: Ptrck Brgr
Runtime length isn't autonomy. Decision scope is.
Most coding agents run for hours but still need expert supervision. They handle complex orchestration, manage multi-step workflows, and process massive codebases—yet fail the moment a non-technical user tries to delegate a task and walk away.
In Autonomy Is All You Need, Michele Catasta points to a missing dimension in how we think about agent capability: everyone maps agents on latency versus runtime, but the real divide is supervised versus unsupervised autonomy.
From enterprise deployments, I see this pattern constantly. Teams deploy sophisticated agents that impress technical users but fail when non-experts try to use them. The bottleneck isn't orchestration quality—it's the assumption that someone with a CS degree will stay in the loop.
The Tesla Problem
Replit faces a unique challenge: building agents for people who can't code.
So at Replit, we're building a coding agent for nontechnical users. It's a very peculiar challenge I would say compared to many people in this room. — Michele Catasta, Replit
Most agent builders optimize for developers. You prompt precisely, review outputs carefully, and iterate until the agent delivers. This works when users understand the problem space and can spot errors quickly.
But here's the catch: Non-technical users can't supervise what they don't understand.
Tesla's Full Self-Driving illustrates this perfectly. You still need a license. You sit behind the wheel. The car handles 99% of driving, but you're expected to intervene when edge cases appear. Supervised autonomy works because drivers understand driving.
Coding agents for non-technical users need unsupervised autonomy. No expert in the loop. No quick corrections when the agent drifts. The system succeeds or fails on its own.
The Missing Dimension
The agent capability landscape maps latency against runtime. Low latency keeps experts in flow state. Long runtime handles complex tasks. Replit started in the "uncanny valley"—too slow for flow, too unreliable for delegation.
there is an additional dimension like a third dimension to this plot that you know it hasn't been covered here and namely the fact is how do we build autonomous agents for nontechnical users. — Michele Catasta, Replit
This third dimension changes everything. Expert supervision vs zero supervision isn't a minor tweak—it's a fundamental architecture shift.
Consider the failure modes. Expert users catch hallucinations, guide context switches, and recover from dead ends. Non-technical users can't. They describe what they want, delegate the task, and expect it done correctly. No debugging. No iteration. No "let me adjust your prompt."
The pattern repeats across domains. Enterprise AI projects assume technical oversight. Consumer AI tools assume user expertise. The gap between supervised and unsupervised autonomy kills most attempts to democratize complex workflows.
Where Control Lives
Unsupervised autonomy requires different control mechanisms.
Replit evolved from interactive coding assistance to multi-hour autonomous sessions. Not just longer runtimes—fundamentally different decision-making. The agent must handle ambiguity, recover from errors, and deliver complete solutions without expert intervention.
we managed to go all the way on the right and now we have agents that runs for several hours in a row. — Michele Catasta, Replit
This works when the agent controls its own context. Instead of relying on user prompts for course corrections, it maintains goal state, evaluates progress, and adapts strategies autonomously. No human in the loop to catch drift or provide clarifications.
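That control shift can be sketched as a self-correcting loop. This is a minimal sketch, not Replit's implementation: `GoalState`, `plan_step`, `execute`, and `evaluate` are hypothetical names standing in for the agent's planner, executor, and internal checks.

```python
from dataclasses import dataclass, field

@dataclass
class GoalState:
    """Hypothetical goal tracking for an unsupervised agent run."""
    objective: str
    completed_steps: list = field(default_factory=list)
    attempts: int = 0
    max_attempts: int = 5

def autonomous_loop(goal, plan_step, execute, evaluate):
    """One way to structure the loop: the agent, not the user,
    decides when to re-plan and when the goal is met."""
    state = GoalState(objective=goal)
    while state.attempts < state.max_attempts:
        state.attempts += 1
        step = plan_step(state)          # choose next action from goal state
        result = execute(step)           # run it (edit code, run a build, ...)
        ok, feedback = evaluate(result)  # internal check, no human review
        if ok:
            state.completed_steps.append(step)
            if feedback == "goal_met":
                return state
        # otherwise loop: the evaluation feeds the next planning pass
    return state  # budget exhausted; report a complete status, not a question
```

The key design choice is the last line: when the loop exhausts its budget, it returns a finished status rather than pausing to ask a question, because there is no expert on the other end to answer one.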
My work at Tier taught me that edge constraints force better design. Limited compute on scooters meant every algorithm had to be bulletproof. No human driver to take over when perception failed. Unsupervised autonomy demands the same discipline—every failure mode must be handled internally.
The Verification Challenge
Non-technical users can't verify code quality. They can only test end results.
This changes how agents must validate their work. Expert users review code line-by-line, spot architectural issues, and provide technical feedback. Non-technical users run the app, check if it works, and report bugs.
The agent needs internal verification loops. Code reviews against standards, automated testing, security scans—all without human oversight. I'm skeptical of agents that skip these steps and rely on users to catch problems downstream.
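One way such a verification loop might look, assuming the agent shells out to ordinary tooling. The specific gates here (`ruff`, `pytest`, `bandit`) are illustrative stand-ins, not what Replit actually runs.

```python
import subprocess

# Hypothetical gate names; a real agent would wire these to its own
# linters, test runners, and security scanners.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("tests", ["pytest", "-q"]),
    ("security", ["bandit", "-r", "src"]),
]

def verify(checks=CHECKS, run=subprocess.run):
    """Run every gate and collect failures instead of stopping at the
    first one, so the agent can attempt repairs before shipping."""
    failures = []
    for name, cmd in checks:
        result = run(cmd, capture_output=True)
        if result.returncode != 0:
            failures.append(name)
    return failures  # empty list == safe to ship without human review
```

Collecting all failures in one pass, rather than bailing on the first, matters here: with no human to triage, the agent needs the full picture before deciding what to fix.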
The cost multiplies in enterprise settings. Bad code from supervised agents gets fixed quickly by the experts reviewing it. Bad code from unsupervised agents ships to production and becomes technical debt. Here's the part most teams miss: that debt is expensive to unwind later.
Why This Matters
The supervision gap determines which AI capabilities actually democratize versus which just serve existing experts better.
Most agent investments optimize for technical users who can supervise effectively. This improves developer productivity but doesn't expand the market beyond people who already understand the domain. The real prize is enabling non-experts to accomplish complex tasks independently.
But unsupervised autonomy is harder to build and riskier to deploy. Failure modes are more severe when no expert is watching. Quality control must be built into the agent rather than relying on user oversight. The engineering complexity jumps significantly.
I could be wrong here—maybe supervised autonomy is sufficient for most use cases. But watching Replit's evolution suggests there's real demand for agents that work without training wheels. The teams that crack unsupervised autonomy will unlock markets that supervised agents can't touch.
What Works
Build for zero supervision from day one. Test with non-technical users who can't debug your agent's mistakes. If it breaks without expert intervention, redesign the control loops.
Invest in internal verification. Code quality checks, automated testing, security scanning—all embedded in the agent workflow. Don't rely on users to catch what they can't understand.
Start with constrained domains. Full autonomy across all coding tasks is impossibly hard. Pick specific workflows where success criteria are clear and failure modes are manageable. Expand gradually.
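A constrained domain works better when it is made explicit rather than left implicit in prompts. A minimal sketch, with invented action and criteria names for illustration:

```python
# Hypothetical scoping of an agent to one constrained workflow.
WORKFLOW = {
    "name": "crud_web_app",
    "allowed_actions": ["scaffold", "add_route", "add_model", "run_tests"],
    "success_criteria": ["tests_pass", "app_boots", "no_security_findings"],
}

def in_scope(action, workflow=WORKFLOW):
    """Refuse actions outside the constrained domain rather than
    improvising; out-of-scope requests fail loudly and early."""
    return action in workflow["allowed_actions"]
```

Expanding gradually then means growing `allowed_actions` one entry at a time, each with its own success criteria, instead of lifting the constraint wholesale.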
This works when your target users genuinely can't supervise the domain. If your users understand the technical details, supervised autonomy is cheaper and often better. But for true democratization—bringing complex capabilities to non-experts—unsupervised autonomy is the only path.
Full talk: Watch on YouTube