The Transistor Analogy

By Ptrck Brgr

Most Fortune 500 companies won't adapt fast enough. That's not a prediction from an analyst. That's Sam Altman betting on it.

Altman and Vinod Khosla discuss where AI is heading at the Khosla Ventures Summit—from the ChatGPT origin story to why the next trillion-dollar company won't be an AI lab.

The conversation covers a lot of ground, but one thread keeps surfacing: the gap between what AI can do and what organizations are ready to absorb. I've seen this exact dynamic in enterprise deployments. The technology arrives faster than the org can rewire. Most don't even start until it's too late.

The Software Collapse

Altman's near-term prediction is specific. Software gets disrupted first. Not gradually. Not in a decade. Soon.

If you want to do something you can just like type something into an AI chatbot and get a great piece of software built. — Sam Altman, OpenAI

Instead of buying SaaS products, you describe what you need and it gets built on demand. No procurement cycles. No vendor negotiations. Just runtime. That kills the current SaaS model.

The physical world lags—supply chains, manufacturing, logistics move slower than bits. But 2035 to 2050 is long enough for atoms to catch up too.

The ChatGPT Accident

Here's the part most people miss. ChatGPT almost didn't happen.

OpenAI spent four and a half years as a research lab with no products. When they finally needed revenue, they tried GPT-3 as an API. The entire world found exactly one use case: copywriting apps. That was it.

But the Playground—a simple prompt-testing tool—showed something unexpected. A small number of users just chatted with it all day. Retention was atrocious, but the users who stuck around used it more over time.

An important learning is if you have a product that has any retention at all, you're actually in really good shape. The default is almost always all the way down straight line to zero. — Sam Altman, OpenAI

They almost held back the launch because retention was so bad. Most product teams would have killed it. Instead, they shipped a chat interface and found latent demand that nobody predicted—not even them.

My PhD work in autonomous systems showed a similar pattern. Breakthroughs didn't come from better algorithms. They came from changing the interface between system and user. Different inputs unlock behaviors that look nothing like what you designed for.

Stop Chasing the Last Winner

Altman's advice to investors is blunt: spend zero percent of your time trying to back another AI research lab. Zero.

The next multi-trillion-dollar company will not be another AGI research lab. It will probably be the thing that got built because AGI now existed as a new technology. — Sam Altman, OpenAI

He draws the transistor analogy. The companies that built transistors mostly disappeared. What survived and thrived were the companies that used transistors to build something new—personal computers, the internet, mobile phones. We don't call an iPhone a "transistor device."

Same pattern applies. OpenAI, Anthropic, Google—they're the transistor companies. The massive value creation comes from whoever figures out what AGI enables that doesn't exist yet.

I'm skeptical of one piece though. Altman says current founders are the best he's ever seen. Maybe. But most enterprise AI projects still fail—not because founders are bad, but because organizations deploying the technology aren't ready. The gap between startup demo and enterprise absorption remains enormous.

The 10x Assumption

For builders, Altman offers one planning heuristic: assume models get 10x better every year across every dimension. Better, cheaper, faster. Don't predict which dimension improves. Assume all of them.

Don't try to outsmart yourself, not try to say, well, is it going to get better at this little thing or that one. — Sam Altman, OpenAI

The compounding matters. Better algorithms, bigger computers, more data, and AI systems that improve AI research itself. Altman calls it "messy joint acceleration"—humans and AI both contribute, and net output keeps climbing regardless of who deserves credit.
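The heuristic compounds quickly. A back-of-envelope sketch (the 10x figure is Altman's heuristic; the $1.00 baseline is an illustrative assumption, not a real benchmark):

```python
# Back-of-envelope: what "10x better every year" compounds to.
# The baseline cost is a made-up illustration, not measured data.

def project(baseline: float, factor: float, years: int) -> float:
    """Compound an annual change factor over a planning horizon."""
    return baseline * factor ** years

# Hypothetical cost of one task today: $1.00. At 10x cheaper per year:
for year in range(1, 4):
    cost = project(1.00, 0.1, year)
    print(f"year {year}: ${cost:.4f} per task")
# year 1: $0.1000
# year 2: $0.0100
# year 3: $0.0010
```

Three years at that rate is a 1000x shift in the cost structure—any architecture tuned to today's price point is obsolete well before then.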

Enterprise teams should take this seriously. Planning for current model limits means your architecture is outdated by the time you ship. At ENVAIO, I learned this the hard way—by the time we optimized for one hardware generation, the next one made our trade-offs irrelevant. AI capability curves are steeper.

The Deflationary Promise

Vinod Khosla pushes on the economics. He expects a hugely deflationary economy in the 2030s. Altman agrees.

Free AGI for billions of users. Medical advice, adaptive education, on-demand software—all at zero cost. Altman frames this as technology doing what it's always done: making scarce things abundant.

The optimism is genuine, but I'd add a caveat. Deflation in AI-generated goods doesn't mean deflation everywhere. Compute could become the scarce resource—Altman admits this. And deciding which problems to solve with massive compute clusters is a governance challenge nobody has answered.

The dot-com comparison holds. Real infrastructure got built. Real value was created. But the timing between investment and return was brutal for companies that showed up too early.

Why This Matters

The strategic window is narrowing. Altman sees the 2030s as a period where change exceeds most organizations' ability to adapt. Fortune 500 incumbents that don't rewire now won't get a second chance.

Software disruption comes first. AI engineers, automated customer support, AI-driven outbound sales—these aren't future capabilities. They're shipping now. The real opportunity isn't another foundation model. It's figuring out what AGI makes possible that wasn't before.

What Works

Plan architectures for 10x annual capability jumps. Don't optimize for today's limits. Build for portability across model generations.
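One way to make that portability concrete is to isolate model access behind a thin interface, so moving to a new model generation is a config change rather than a rewrite. A minimal sketch—the names (`Model`, `complete`, `gen-1`) are hypothetical stand-ins, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical provider-agnostic interface: application code depends
# only on this signature, never on a specific vendor SDK or model version.
@dataclass
class Model:
    name: str
    complete: Callable[[str], str]  # prompt -> completion

def make_registry() -> dict[str, Model]:
    # Stub backends stand in for real SDK calls.
    return {
        "gen-1": Model("gen-1", lambda p: f"[gen-1] {p}"),
        "gen-2": Model("gen-2", lambda p: f"[gen-2] {p}"),
    }

def answer(registry: dict[str, Model], model_name: str, prompt: str) -> str:
    # Swapping model generations is a one-line config change here,
    # not a code change at every call site.
    return registry[model_name].complete(prompt)

registry = make_registry()
print(answer(registry, "gen-2", "summarize the launch notes"))
```

The point is the seam, not the stub: call sites never know which generation serves them, so a 10x-better model slots in without touching application logic.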

Watch interface signals, not just capability signals. ChatGPT's breakthrough was a chat box, not a better model. The next unlock might be equally unintuitive.

Invest in what AGI enables, not AGI itself. The transistor analogy holds. The platform companies get big. The companies that use the platform change the world.

Redesign workflows before deploying AI. Altman and Satya Nadella agree here—process change matters more than tool deployment. The tech works. The org usually doesn't.

Start enterprise AI adoption now. The teams deploying AI software engineers and automating support today are building compounding advantages. Waiting for "better models" is the wrong optimization.

This works for teams that can move fast and tolerate uncertainty. Large enterprises with 18-month planning cycles face a structural disadvantage—and no amount of AI budget fixes that.

Full talk: Watch on YouTube