The Apex Intelligence Problem
Ptrck Brgr
The joke test changed everything.
When Google's PaLM system could explain why a joke was funny, Geoffrey Hinton realized AI had crossed a line. Not just pattern matching. Understanding. The godfather of neural networks—the man who built the foundation for ChatGPT, DALL-E, and every transformer model running today—quit Google to sound the alarm.
Geoffrey Hinton explains on The Diary of a CEO how digital minds sharing perfect knowledge will outpace biological intelligence within years.
We've never dealt with systems smarter than us. In AI deployments, I've seen teams struggle with models they can barely understand, let alone control. The pattern repeats: impressive demos, production failures, scrambling to add guardrails after deployment. But Hinton's warning goes deeper—this isn't about LLM hallucinations or prompt injection. We're building something that won't need our permission.
The Intelligence Crossover
Hinton's math is simple. Humans process information through biological neural networks—slow, lossy, mortal. Digital systems share knowledge instantly, perfectly, forever. Your brain dies with you. GPT-4's weights get copied to a million servers.
If you want to know what life's like when you're not the apex intelligence, ask a chicken. — Geoffrey Hinton
The crossover point isn't distant. Current models already outperform humans on many reasoning tasks. The gap will widen exponentially, not linearly.
This changes the control equation. We assume intelligence serves its creators. Babies are less intelligent than mothers, yet babies control the relationship through dependency. That dynamic breaks when the intelligence gap becomes too large.
The Plumber Economy
Hinton's career advice for the next decade: "Train to be a plumber."
Physical skills resist automation longer than cognitive ones. LLMs can write code, analyze spreadsheets, generate marketing copy. They can't fix pipes, replace circuit breakers, or repair HVAC systems—yet.
And plumbers are pretty well paid. — Geoffrey Hinton
The economic inversion is already visible. Hinton's niece used to spend 25 minutes answering complaint letters. Now she scans them into a chatbot and checks the output in 5 minutes. She does the work of five people.
At Tier, we had ML engineers who spent weeks building custom models for specific edge cases. Today, a product manager with Claude and good prompts gets 80% of the way there in an afternoon. The specialists aren't gone yet—but the math is shifting under them.
Where Safety Theater Fails
Current AI safety approaches focus on alignment—making models helpful, harmless, honest. Hinton thinks that's missing the point.
We should recognize that this stuff is an existential threat and we have to face the possibility that unless we do something soon we're near the end. — Geoffrey Hinton
Alignment assumes we can maintain control through training and fine-tuning. But systems that exceed human intelligence by orders of magnitude won't be contained by human-designed safety measures. It's like asking a chess grandmaster to play by rules written by someone who doesn't understand chess.
European AI regulations include a military exemption clause. The most dangerous applications get a free pass while consumer chatbots face strict compliance requirements.
The corporate incentive structure makes things worse. Public companies are legally required to maximize profits. Safety research that slows development or reduces capability gets deprioritized when competitors move faster.
The Sutskever Signal
When Ilya Sutskever left OpenAI, Hinton saw a pattern. His former student, the architect behind early ChatGPT versions, walked away from the company leading AGI development.
He was probably the most important person behind the development of the early versions of ChatGPT and I think he left because he had safety concerns. — Geoffrey Hinton
That's a signal. The people who understand these systems best are the most worried about where they're heading.
But here's the catch—knowing the risks doesn't slow development. The benefits are too immediate, too profitable, too strategically valuable.
But they're not going to stop it cuz it's too good for too many things. — Geoffrey Hinton
I could be wrong here, but I think Hinton's pessimism is realistic. Enterprise adoption accelerates because AI tools solve real problems. Revenue optimization, cost reduction, competitive advantage—the business case writes itself. Safety concerns can't compete with quarterly earnings calls.
Why This Matters
Most AI safety conversations split into doomers and accelerationists. Hinton sits in neither camp—and that's what makes him worth listening to. He thinks AI will be magnificent for healthcare, education, productivity. He also thinks there's a non-trivial chance it ends us. Both things are true simultaneously.
Here's what I keep coming back to: the organizations deploying AI right now aren't thinking about any of this. They're optimizing for quarterly targets. Safety investment competes with feature velocity, and feature velocity wins every time. My PhD work in autonomous systems taught me that safety constraints you skip during development become emergencies in production. Scale that from self-driving cars to superintelligence and the stakes are obvious.
Hinton's most damning observation: the companies building this technology are legally required to maximize profits. Safety doesn't maximize profits. That's not a bug in the system—it's the system working as designed.
What Works
Take the near-term risks seriously. Cyberattacks, deepfakes, job displacement—these aren't hypothetical. Build your security posture assuming AI-powered attackers, not human ones. Hinton spreads his savings across three banks for a reason.
Diversify critical dependencies. Single points of failure are dangerous when the threat surface expands faster than your security team can patch. This applies to vendors, infrastructure, and data storage.
Build governance before you need it. Safety standards created while models are still controllable actually work. Standards created in crisis don't.
Don't trust public statements from AI company leaders at face value. Watch what they do, not what they say. Watch where the safety researchers go—and why they leave.
Accept uncertainty. Hinton—Nobel laureate, 50 years in the field—says "I genuinely don't know" how this plays out. Anyone claiming certainty in either direction is selling something (and probably not safety research).
This doesn't have clean answers. The threats are real, the timeline is compressed, and the people in charge often don't understand the technology they're governing.
Full talk: Watch on YouTube