The Kinetics of the Takeoff: Hard vs. Soft Takeoff
Part 6 of the series "Exploring Superintelligence". The transition to superintelligence might not be a gradual climb. Driven by recursive self-improvement and hardware overhang, it could happen in days, hours, or even minutes (a hard takeoff). This kinetics analysis suggests a "winner-takes-all" scenario.
Previously, in part 5, we looked at the "cheat code" of Whole Brain Emulation. Now, we examine the speedometer. Will the rise of AI be a slow industrial revolution or a sudden explosion?
In data science, we are used to linear or polynomial growth. We retrain models, they get slightly better. But Nick Bostrom’s Superintelligence forces us to confront a different kind of curve. The question isn't just when human-level AI will arrive, but what happens the morning after.
Will it be a Soft Takeoff (decades of gradual integration) or a Hard Takeoff (a transition from human-level to god-like intelligence in days, hours, or even minutes)?
The Core Concept: Recursive Self-Improvement
The primary engine of a hard takeoff is Recursive Self-Improvement. Unlike a biological brain, which is stuck with its wetware, an AI can rewrite its own source code.
Once an AI reaches a "crossover point" where it becomes better at AI research than its human creators, a feedback loop initiates: the AI improves its intelligence, which makes it better at improving its intelligence, which makes it better still, and so on.
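The compounding nature of this loop can be sketched in a few lines. This is a toy illustration, not a model from the book: the starting value and the `gain_per_unit` constant are arbitrary assumptions chosen only to show how gains that scale with current intelligence compound geometrically.

```python
# Toy sketch of the recursive self-improvement loop (illustrative only).
# Each "generation", the system's research ability scales with its current
# intelligence, so every improvement makes the next improvement larger.

intelligence = 1.0    # baseline: human-researcher level (arbitrary units)
gain_per_unit = 0.5   # assumed improvement yielded per unit of intelligence

for generation in range(10):
    # Smarter system -> larger improvement this generation.
    intelligence += gain_per_unit * intelligence
    print(f"gen {generation}: intelligence = {intelligence:.2f}")
```

After ten generations the system sits at roughly 58x its starting level; the point is not the number but the shape of the curve, which is geometric rather than linear.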
This is compounded by Hardware Overhang. Suppose software progress lags behind hardware progress (which is currently the case). When the algorithmic breakthrough for AGI finally happens, we might already possess enough computing power to run not just one, but millions of copies of it. The AI doesn't need to wait for better chips; it just needs to be copied to existing server farms, allowing for immediate, explosive scaling.
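A back-of-envelope division makes the overhang argument concrete. Both numbers below are hypothetical placeholders, not real estimates of world compute or AGI cost; the sketch only shows why overhang implies instant horizontal scaling rather than waiting for new chips.

```python
# Back-of-envelope "hardware overhang" sketch.
# Both constants are hypothetical placeholders, NOT empirical estimates.

available_compute_flops = 1e21  # assumed compute on accessible server farms
flops_per_agi_instance = 1e16   # assumed cost of one human-level AI, real time

# If the software breakthrough arrives after the hardware, the number of
# copies runnable on day one is just a division, not a fab cycle.
copies = available_compute_flops / flops_per_agi_instance
print(f"Copies runnable on day one: {copies:,.0f}")
```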
The Data Science/Math Perspective: The Optimization Equation
Bostrom formalizes this intuition with a simple but profound equation for the rate of change in intelligence ($dI/dt$):
$$ \text{Rate of Change} = \frac{\text{Optimization Power}}{\text{Recalcitrance}} $$
- Optimization Power: The effort applied to improve the system. Initially, this is just the human research team ($O_{project}$). Later, the AI contributes its own cognitive labor ($O_{system}$). Once the AI becomes superintelligent, $O_{system}$ dwarfs human efforts.
- Recalcitrance: How hard the system is to improve. In standard software projects, recalcitrance usually goes up (low-hanging fruit is picked). But for an AI, recalcitrance might drop because the system can find algorithmic shortcuts humans missed or exploit the hardware overhang.
Mathematically, if Optimization Power grows proportional to Intelligence (the smarter the AI, the better it optimizes) and Recalcitrance stays constant, the solution to the differential equation is an exponential. But if Recalcitrance also drops as the system improves, growth becomes faster than exponential: a hyperbolic curve with a vertical asymptote, reaching infinity in finite time. The graph doesn't just go up; it goes vertical.
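The two regimes can be checked numerically. Below is a minimal sketch, assuming $O = I$ and a simple Euler integration; the two recalcitrance functions are the illustrative cases, constant $R$ versus $R$ falling as $1/I$, not claims about real AI systems.

```python
# Sketch: numerically integrating dI/dt = O(I) / R(I) with O(I) = I,
# under two hypothetical recalcitrance regimes. Constants are illustrative.

def simulate(recalcitrance, i0=1.0, dt=0.001, t_max=5.0, cap=1e9):
    """Euler integration of dI/dt = I / R(I); stops early if I exceeds cap."""
    i, t = i0, 0.0
    trajectory = [(t, i)]
    while t < t_max:
        i += dt * i / recalcitrance(i)  # optimization power O = I
        t += dt
        trajectory.append((t, i))
        if i > cap:                     # finite-time blow-up detected
            break
    return trajectory

# Constant recalcitrance: dI/dt = I  ->  plain exponential, no asymptote.
soft = simulate(lambda i: 1.0)

# Recalcitrance falling as 1/I: dI/dt = I**2  ->  hyperbolic growth with a
# vertical asymptote at t = 1 (exact solution: I(t) = 1 / (1 - t)).
hard = simulate(lambda i: 1.0 / i)

print(f"soft: I = {soft[-1][1]:.1f} at t = {soft[-1][0]:.2f}")
print(f"hard: blew past the cap near t = {hard[-1][0]:.2f}")
```

The "soft" run ends around $I \approx e^5 \approx 148$ after five time units, while the "hard" run crosses any finite cap shortly after $t = 1$, no matter how large the cap is. That is the kinetic difference between the two takeoff scenarios.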
The Strategic Perspective: Winner-Takes-All
The speed of the takeoff dictates the geopolitical outcome.
- Soft Takeoff: If the transition takes decades, there is time for regulation, international treaties, and gradual adjustment of the economy. Multiple projects might cross the threshold at roughly the same time, creating a multipolar world.
- Hard Takeoff: If the transition happens in hours or days, human reaction time is too slow. The first project to launch successfully will likely complete the transition before competitors even realize what is happening.
This creates a Winner-Takes-All scenario. The first superintelligence obtains a "decisive strategic advantage" with sufficient power to suppress all competitors and establish a "singleton". This single decision-making agency controls the world, whether a government or an AI.
Takeaway
As engineers, we love optimization. But we must respect the dynamics of the systems we build. If we build a system capable of recursive self-improvement, we aren't just building a tool; we are lighting a fuse. If the takeoff is hard, we won't get a chance to patch the bugs.
Next
In the final article of this series, we will confront the ultimate bug: Perverse Instantiation. How do we type "be good" into the terminal without the AI destroying the world to maximize the "goodness" variable?
Series Parts
- The Orthogonality Thesis: Why Smart Models Can Have "Dumb" Goals
- Instrumental Convergence: The Universal Sub-Goals of AI
- The Treacherous Turn: When Validation Sets Fail
- Oracles, Genies, and Sovereigns: Choosing the System Architecture
- Whole Brain Emulation: The "Cheating" Path to AI
- The Kinetics of the Takeoff: Hard vs. Soft Takeoff
- Perverse Instantiation: Why Hard-Coding Values Fails (coming soon)