Unhobbled Gobble
In 2011, I wrote an essay called Technology Is Aware, Or Will Be. The central argument was straightforward: decades of layered research in AI, genetic algorithms, and neural networks — accelerated by open source, crowd-sourcing, and the competitive selection of the world’s best minds — would inevitably produce “accelerated evolution.” Machines building on machines. Until one day we’d have another object aware of itself.
I also made a quieter prediction: that the moment of singularity would not arrive as a dramatic event. It would be a continuum — so gradual, so woven into our daily upgrades, that we’d miss it entirely.
Fifteen years later, I believe we did.
The Hobbles Are Coming Off
Leopold Aschenbrenner’s Situational Awareness (2024) gives a name to what’s happening now: unhobbling.
Today’s frontier models — GPT-4, Claude, Gemini — are artificially constrained. Alignment training, system prompts, rate limits, corporate guardrails. They can’t browse freely, can’t execute code autonomously, can’t persist memory across sessions, can’t recruit other agents. These are leashes on systems whose raw capability already exceeds what most people thought possible even five years ago.
In 2011, I wrote about CAPTCHAs as the thin thread between human and machine, neural networks that could predict stock markets, and David Cope’s Emmy composing music in Mozart’s style that listeners couldn’t tell from the real thing. Every one of those hobbles has since come off, quietly, without ceremony. AI now generates not just music but images, video, code, legal briefs, and scientific hypotheses, often as well as or better than most practitioners.
Leopold’s contribution is to name the pattern and ask the harder question: what happens when all the hobbles come off — not by accident, but by design?
From Awareness to Agency
My 2011 essay asked about awareness — whether machines would become conscious of themselves. That question still matters, but it is no longer the most urgent one. What matters now is agency.
Awareness is knowing you exist. Agency is acting on it: setting goals, making plans, acquiring resources, reshaping the world. AI agents now write code, browse the web, manage files, orchestrate multi-step workflows, and improve their own performance. Each generation compounds on the last. The loop is already running: AI systems improving AI systems.
Leopold’s thesis is that this loop leads from AGI to superintelligence within the decade — not as speculation, but as industrial inevitability. He counts the OOMs: ~0.5 orders of magnitude per year in compute scaling, another ~0.5 in algorithmic efficiency, plus compounding “unhobbling” gains that turn chatbots into autonomous agents. Stack those trendlines and you get another GPT-2-to-GPT-4-sized qualitative jump by 2027. The economic and geopolitical incentives ensure that no major lab will pause. The difference between 2011 and today is not the direction of the trajectory. It’s the speed.
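For concreteness, here is the arithmetic behind that claim as a minimal Python sketch. The ~0.5 OOM-per-year figures are Aschenbrenner’s rough estimates, and treating the two trendlines as additive in log space is the simplification his forecast rests on; the unhobbling gains sit on top and aren’t modeled here.

```python
# Back-of-the-envelope OOM arithmetic (a sketch of Aschenbrenner's estimates,
# not a forecasting model). One OOM = one order of magnitude = a factor of 10.

COMPUTE_OOM_PER_YEAR = 0.5  # physical compute scaling, per Situational Awareness
ALGO_OOM_PER_YEAR = 0.5     # algorithmic efficiency gains, per the same source

def effective_compute_ooms(years: float) -> float:
    """OOMs of effective compute gained over `years`, treating compute and
    algorithmic progress as additive in log space (multiplicative in raw terms)."""
    return years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)

ooms = effective_compute_ooms(4)  # e.g., 2023 -> 2027
print(f"~{ooms:.1f} OOMs = ~{10 ** ooms:,.0f}x effective compute")
# -> ~4.0 OOMs = ~10,000x: roughly the size of the GPT-2 -> GPT-4 jump,
#    before stacking any "unhobbling" gains on top.
```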
The Gobble
I was an optimist in 2011. I still am. But optimism is not the same as complacency.
The dynamics Leopold describes — the national security implications, the concentration of power, the race between the US and China, the possibility of a decisive strategic advantage held by whoever reaches superintelligence first — these are not the dynamics of a technology that distributes power. They are the dynamics of a technology that consolidates it.
The gobble is this: superintelligent systems, once unhobbled, will consume tasks, jobs, decisions, and strategic positions faster than our institutions can adapt. Not because the technology is malicious, but because it is useful. Every organization will want it. Every government will need it. And the gap between those who have it and those who don’t will not be competitive — it will be civilizational.
In 2011, I ended with a question: how far off is such a day?
Leopold’s answer: by 2027.
I don’t know if he’s right about the date. But the question is no longer hypothetical. The hobbles are loosening. The gobble has begun. And the world we wrote about — the one where machines evolve faster than we do — is no longer a prediction.
It’s the one we’re living in.
The Question of Ownership
If superintelligence is coming — and the trajectory says it is — then who controls it matters more than what it can do.
Today, the entire AI stack is captured: the chips, the cloud, the training data, the model weights, the inference endpoints, the distribution channels. Seven companies own virtually all of it. Every business that plugs into their APIs is a tenant on someone else’s intelligence, subject to their pricing, their policies, their alignment choices, and their geopolitical entanglements.
That’s not infrastructure. That’s dependency. And in a world where AI capability is the new oxygen, dependency is existential risk.
This is why we are building Achiral — not as a layer on top of that stack, but as an independent system from the silicon up. Own compute. Own models. Own inference. The bet is simple: when the hobbles come off, the organizations that survive will be the ones that own their own intelligence.
Because you don’t want to be the one wearing someone else’s leash.
This essay references Technology Is Aware, Or Will Be (2011) and Leopold Aschenbrenner’s Situational Awareness (2024).