AI Tools Multiply Whatever You Already Are
By Ptrck Brgr
Speed without judgment is chaos at 10x velocity.
In From Vibe Coding To Vibe Engineering, Kitze argues that the difference between vibe coding and vibe engineering isn't how much you use AI. It's whether you know when to stop steering and ship.
That distinction landed hard for me. At Tier, we had engineers across a wide experience spectrum working with early ML tooling, and the pattern was always the same. The people who shipped well weren't the fastest coders. They were the ones who knew which corners you could cut and which ones would collapse the floor. AI coding tools just made that gap wider. Way wider.
The Abstraction Addiction
Here's a thing Kitze nails that I keep coming back to: LLMs don't care about repetitive code. We do. We're the ones who see three similar functions and lose sleep until they're abstracted into one elegant utility that nobody can read six months later.
LLMs don't care about repetitive code. And I've been seeing this since 2017 that we care too much about repetitive code and we abstract too early. — Kitze, Sizzy
I didn't expect to agree this strongly, but he's right. With Composer One and similar real-time steering tools, you can reach the right abstraction faster. You can also reach the wrong one faster. And the wrong abstraction, shipped confidently, is worse than no abstraction at all, obvious as that sounds in retrospect.
The instinct to abstract is a human compulsion, not an engineering requirement. LLMs just write the code. We're the ones adding the complexity tax.
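A minimal sketch of what this looks like in practice. The validators and the mode-flag utility below are hypothetical, not from the talk: three boring, repetitive functions an LLM will happily emit as-is, followed by the kind of premature abstraction a human reaches for.

```python
# Three similar validators. Repetitive, readable, easy to change independently.

def validate_email(value: str) -> bool:
    return "@" in value and "." in value.split("@")[-1]

def validate_username(value: str) -> bool:
    return 3 <= len(value) <= 32 and value.isalnum()

def validate_slug(value: str) -> bool:
    return bool(value) and all(c.isalnum() or c == "-" for c in value)

# The "elegant" utility nobody can read six months later: one function,
# driven by a mode flag and knobs that only apply to some modes.
def validate(value: str, mode: str, min_len: int = 0, max_len: int = 255,
             extra_chars: str = "") -> bool:
    if mode == "email":
        return "@" in value and "." in value.split("@")[-1]
    if not (min_len <= len(value) <= max_len):
        return False
    return bool(value) and all(c.isalnum() or c in extra_chars for c in value)
```

The first version is three small facts. The second is one configuration puzzle: every new rule adds a parameter, and every caller has to know which knobs matter for which mode.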
Who Gets the Keys
This is where Kitze gets blunt, and I think he's earned it:
Do not give AI tools to your interns and juniors. That's the dumbest idea. But if you take your skeptical senior and you convince them to do vibe engineering, you're going to get 10x results. — Kitze, Sizzy
Pull request count up. Code quality unmeasured. Juniors shipping features that work in demo and break in production. I've watched this pattern play out in enterprise deployments—and here's the part most teams miss—the damage isn't in the bad code itself. It's in the confidence. A junior with an AI tool thinks they've shipped production-ready code. They haven't. But nobody catches it until the incident.
The hype says give everyone AI tools and watch productivity soar. My experience says otherwise. Give AI tools to people who already know what good looks like, and you get a genuine multiplier. Give them to people who can't judge quality, and you get technical debt at a rate no team can service.
I could be wrong about the absolute split. My sample size is limited to enterprise contexts, not indie dev or startup culture. But the direction feels right.
The "Good Enough" Skill
Here's the question I keep coming back to: what exactly makes someone good at vibe engineering?
Kitze's answer surprised me. It's not prompt engineering. It's not knowing which model to use. It's the ability to look at generated code and say "this is good enough for the job it's doing" and move on. That's it.
Cleanish. Good enough for agents to keep building on. Not perfect, but functional and maintainable enough that the next iteration doesn't hit a wall.
This connects to something I noticed in my own work that I didn't fully understand until now. The best engineers I've worked with weren't the ones who wrote the most elegant code. They were the ones who knew where elegance mattered and where it was wasted effort. AI doesn't change that skill. It makes it the only skill that counts.
The Bottom Thins Out
Kitze makes a prediction that's uncomfortable but hard to argue with: AI is thinning the workforce from the bottom. Juniors and interns are the first roles where companies say "we can just use an agent for that."
Shopify apparently runs vibe coding leaderboards where employees who burn the most tokens rank higher. I'm still thinking about what that actually incentivizes. Token burn isn't output. Activity metrics lie. We know this. And yet the leaderboard exists.
The counterargument is that juniors still need to learn somehow. If AI replaces the entry-level reps, where do tomorrow's seniors come from? Nobody has a clean answer to that yet.
Why This Matters
The real cost function here isn't about tools or models. It's about judgment distribution across your team. An organization where judgment is concentrated in a few seniors and everyone else is pressing "accept" is fragile. One resignation, one reorg, and your quality floor collapses.
Kitze frames this as vibe coding versus vibe engineering, but the deeper pattern is older than AI. It's the gap between doing things fast and knowing what to do. AI just compressed the feedback loop from weeks to minutes—which means mistakes compound faster too.
From enterprise AI deployments, I've seen teams invest in context systems, rules files, and structured prompts that encode their senior engineers' judgment into the tooling itself. That's the real unlock. Not making individuals faster, but making the team's collective judgment available to every agent session.
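What "encode judgment into the tooling" can mean concretely, as a minimal sketch. The rule texts and the `build_context` helper are hypothetical, not from any specific deployment: the point is only that team standards live in shared data, not in each engineer's prompting habits.

```python
# Hypothetical context package: team standards as data, prepended to
# every agent session instead of retyped by whoever is prompting.

TEAM_RULES = [
    "Prefer duplication over premature abstraction; wait for a third real use case.",
    "Every external call gets a timeout and an explicit error path.",
    "No new dependency without a one-line justification in the PR description.",
]

def build_context(task: str, rules: list[str] = TEAM_RULES) -> str:
    """Prepend the team's encoded standards to a task prompt."""
    header = "\n".join(f"- {r}" for r in rules)
    return f"Engineering rules:\n{header}\n\nTask: {task}"
```

Usage is one call per session, e.g. `build_context("add retries to the billing client")`, so the same constraints reach every agent run regardless of who started it.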
What Works
Treat AI coding tools as amplifiers, not replacements. They multiply whatever judgment already exists on the team. If your seniors are strong, the gains are real. If they're not, the problems accelerate.
Build context packages—rules, patterns, constraints—that encode your engineering standards into every agent session. Don't rely on individual prompt skill.
Use real-time steering over batch generation. Kitze's point about Composer One resonates: watching the agent work and catching mistakes mid-generation is fundamentally different from reviewing a finished output. Back in the driver's seat.
Don't hand AI tools to people who can't judge the output. Train them first. The skill isn't prompting. It's knowing where the "good enough" line sits for each piece of code.
This works when your team has enough senior judgment to go around. Most teams don't. And there's no shortcut for building that judgment—with or without AI.
Full talk: Watch on YouTube