From Vibe Coding to Vibe Engineering

Author: Ptrck Brgr

Speed without guardrails is technical debt at 10x velocity. AI generates code fast—maintainable code requires discipline. The difference between scaling and collapsing isn't the model. It's constraints.

Kitze of Sizzy lived this transition, and his talk From Vibe Coding To Vibe Engineering maps it out: raw generation speed (vibe coding) versus managed quality (vibe engineering). Context control and real-time steering separate the outcomes. Watch: https://www.youtube.com/watch?v=JV-wY5pxXLo.

Constraints are the difference between sustainable speed and technical debt acceleration. Teams that define where AI applies—and where it doesn't—maintain quality as velocity increases. Those that give juniors unchecked AI access or skip architectural oversight inherit fragile systems that break under load.

Vibe Coding: Fast and Fragile

Rapid generation. Vague prompts. Minimal review. Works for throwaway prototypes. Breaks for production systems.

The problem: developers who can't assess AI output quality. They accept whatever the model generates. Poor abstractions spread through the codebase. Technical debt compounds.

Do not give AI tools to your interns and juniors… That's the dumbest idea. — Kitze, Sizzy

Junior developers lack the pattern recognition to catch bad code. AI makes plausible-looking mistakes. Without experienced oversight, those mistakes ship.

Vibe Engineering: Speed With Structure

Targeted prompts. Reusable context packages. Live steering during generation. Outputs that match architectural standards.

Less about generating syntax. More about shaping the AI's direction while it works. Mid-generation correction beats post-generation cleanup.

The Premature Abstraction Problem

LLMs write concrete code. No premature abstractions. This avoids over-engineering—a win. It also locks in repetitive patterns—a loss.

LLMs don't care about repetitive code and we abstract too early. — Kitze, Sizzy

Humans abstract too soon. LLMs never abstract. The sweet spot: concrete code first, abstract when patterns emerge, not before.

Knowing when "good enough" is actually enough? That's judgment AI doesn't have. Yet.
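
A minimal sketch of that sweet spot in Python (the exporter functions are hypothetical, not from the talk): keep the duplicated, concrete versions until a repeated pattern proves itself, then extract the abstraction.

```python
# Concrete first: two similar functions, written independently.
def export_users_csv(users):
    header = "id,name,email"
    rows = [f"{u['id']},{u['name']},{u['email']}" for u in users]
    return "\n".join([header, *rows])

def export_orders_csv(orders):
    header = "id,total,status"
    rows = [f"{o['id']},{o['total']},{o['status']}" for o in orders]
    return "\n".join([header, *rows])

# Abstract later: only when a third exporter confirms the pattern
# does a shared helper earn its place.
def export_csv(records, fields):
    header = ",".join(fields)
    rows = [",".join(str(r[f]) for f in fields) for r in records]
    return "\n".join([header, *rows])
```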

Context: The Productivity Multiplier

Precise rules, documentation, domain constraints in prompts? Maintainable output. Vague instructions? Inconsistent quality.

The gap isn't small. Context-rich prompts produce code that fits existing patterns. Context-poor prompts produce code that "works" but doesn't belong.
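
As a sketch of that gap (the rules and prompt text below are illustrative, not from the talk): the context-rich version front-loads the conventions the output must fit.

```python
# Context-poor: the model fills every gap with its own defaults.
poor_prompt = "Write a function to fetch a user from the database."

# Context-rich: architecture rules, naming conventions, and domain
# constraints travel with the request. (Rules are illustrative.)
rules = """
- Use the repository pattern; never call the ORM from handlers.
- All public functions are fully type-annotated.
- Raise domain errors (UserNotFound), never return None.
"""
rich_prompt = f"""You are contributing to an existing codebase.
Follow these rules exactly:
{rules}
Task: implement get_user(user_id) on the UserRepository class.
"""
```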

Real-Time Steering Changes Everything

Batch generation: write prompt, wait, review output, iterate. Slow. Disengaged.

Real-time steering: guide the AI while it generates. See problems forming, correct immediately. Fast. Engaged.

AI stops being a code printer. Becomes a collaborative tool you direct in real-time.
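
One way to make that concrete, as a hedged sketch (the token stream and banned patterns are stand-ins, not any specific tool's API): watch the output as it streams and abort the moment a known anti-pattern forms, instead of reviewing the finished file.

```python
from typing import Iterable

# Patterns we never want in generated code (illustrative list).
BANNED = ["eval(", "except: pass", "SELECT * FROM"]

def steer(token_stream: Iterable[str]) -> str:
    """Accumulate streamed tokens, stopping as soon as a problem forms."""
    buffer = ""
    for token in token_stream:
        buffer += token
        for pattern in BANNED:
            if pattern in buffer:
                # Correct mid-generation: stop, adjust the prompt, retry.
                raise RuntimeError(f"steering stop: saw {pattern!r}")
    return buffer

# Usage with any iterable of tokens, e.g. a streaming API response:
# code = steer(client.stream(prompt))
```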

Technical Considerations

  • Segment tasks by risk to decide between vibe coding and vibe engineering (see the sketch after this list)
  • Build prompt templates with domain rules for consistent agent output
  • Use live steering tools to adjust generation midstream
  • Track model capability changes to adapt workflows quickly
  • Avoid over-reliance on AI for core logic without human validation
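
A minimal sketch of that first item, with hypothetical risk tiers and workflow descriptions: route each task to a workflow based on its blast radius.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # throwaway scripts, prototypes, spikes
    MEDIUM = "medium"  # internal tools, non-critical features
    HIGH = "high"      # core systems, auth, payments, data integrity

def workflow_for(risk: Risk) -> str:
    """Map a task's risk tier to how much structure the AI work gets."""
    return {
        Risk.LOW: "vibe coding: generate fast, minimal review",
        Risk.MEDIUM: "vibe engineering: context package + standard review",
        Risk.HIGH: "vibe engineering: context package, live steering, strict review",
    }[risk]
```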

Business Impact & Strategy

  • Shorten delivery cycles by applying vibe engineering to high-value features
  • Reduce rework costs by catching poor abstractions early
  • Shift senior engineers into AI oversight roles for higher leverage
  • Expect reduced need for repetitive coding roles as agents take over
  • Use token burn metrics carefully—encourage experimentation without waste

Key Insights

  • Speed without oversight leads to fragile code
  • Context-rich prompts improve accuracy and maintainability
  • LLMs avoid premature abstraction but can cement bad patterns
  • Senior engineers can be 10x more productive with disciplined AI use
  • Real-time steering transforms AI from passive to collaborative
  • Clean code is evolving toward “clean enough” for agents to build upon

Why This Matters

Quality shifted from typing to judgment. Experienced engineers who shape AI output well multiply their impact. Junior engineers who blindly accept AI output multiply their mistakes.

The role changes. Senior engineers stop writing boilerplate and start directing AI systems. Junior engineers without oversight become a liability orders of magnitude worse than before.

Vibe coding for throwaway work? Fine. Core systems? Oversight is non-negotiable. Speed without structure ships technical debt at scale.

Actionable Playbook

  • Segment Work by Risk Profile: Assign low-risk tasks to vibe coding; reserve core systems for vibe engineering with strict review
  • Develop Context Packages: Bundle rules, libraries, and constraints into prompts for consistent, aligned output
  • Adopt Real-Time Steering Tools: Intervene during generation to correct course early
  • Upskill Senior Staff in AI Oversight: Train on evaluating and integrating AI outputs for maximum leverage
  • Monitor Model Changes: Adjust workflows when provider updates shift capabilities

What Works

Segment work by risk. Low-risk tasks? Vibe coding with minimal review. Core systems? Vibe engineering with strict oversight and architectural constraints.

Build context packages. Reusable prompts with rules, patterns, constraints. Consistent output that fits your architecture.
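
A minimal sketch of such a package, assuming nothing beyond the standard library (the field names and example rules are illustrative): bundle the constraints once, render them into every prompt.

```python
from dataclasses import dataclass, field

@dataclass
class ContextPackage:
    """Reusable bundle of constraints prepended to every generation task."""
    rules: list[str] = field(default_factory=list)
    patterns: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)

    def render(self, task: str) -> str:
        sections = [
            "Rules:\n" + "\n".join(f"- {r}" for r in self.rules),
            "Patterns to follow:\n" + "\n".join(f"- {p}" for p in self.patterns),
            "Hard constraints:\n" + "\n".join(f"- {c}" for c in self.constraints),
        ]
        return "\n\n".join(sections) + f"\n\nTask: {task}"

backend = ContextPackage(
    rules=["Repository pattern only", "Type annotations everywhere"],
    patterns=["Follow services/payments.py for service layout"],
    constraints=["No new dependencies without review"],
)
prompt = backend.render("Add a refund endpoint to PaymentService.")
```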

Adopt real-time steering. Guide AI during generation, not after. Catch problems as they form.

Train senior staff on AI oversight. Evaluating and integrating AI output is a different skill than writing code from scratch. Both matter.

Monitor model updates. Provider changes shift capabilities. Your workflows need to adapt or break.
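
One lightweight way to notice those shifts, as a sketch (the generate callable and test cases are placeholders for whatever client and prompts you actually use): rerun a few golden prompts after every model update and check the outputs still satisfy your invariants.

```python
from typing import Callable

# Golden prompts paired with cheap invariant checks (illustrative).
GOLDEN = [
    ("Implement get_user in UserRepository.", lambda out: "def get_user" in out),
    ("Write a SQL query for active users.", lambda out: "SELECT *" not in out),
]

def check_model(generate: Callable[[str], str]) -> list[str]:
    """Return the prompts whose outputs regressed after a model update."""
    return [prompt for prompt, ok in GOLDEN if not ok(generate(prompt))]

# failures = check_model(my_client.generate)  # empty list == no drift detected
```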

The discipline matters more than the technology. Experienced engineers who can enforce constraints? They scale with AI. Teams without that expertise ship fast and break things—the wrong kind of fast.

Full discussion: https://www.youtube.com/watch?v=JV-wY5pxXLo.