Here's what's wild. We are arguably near the end of the most consequential exponential in human history — the scaling of AI capabilities — and most of the world is still arguing about the same tired political issues from a decade ago.
That's the view from Dario Amodei, CEO of Anthropic. Not some fringe take from a Twitter doomer or a starry-eyed accelerationist. This is the person running one of the three labs actually building frontier AI systems, staring at the training curves every day, and placing multi-billion-dollar bets on what comes next.
His core claim: we are one to three years away from what he calls a "country of geniuses in a data center." Not a metaphor. A literal collection of AI systems with intellectual capabilities matching or exceeding Nobel Prize winners across domains.
Let's unpack what that actually means, why he believes it, and where the cracks in the thesis might be.
The Big Blob of Compute
Amodei's worldview goes back to a document he wrote in 2017 or 2018, before GPT-1 was even a thing. He called it "The Big Blob of Compute Hypothesis." The idea is deceptively simple: all the clever techniques, all the architectural innovations, all the "we need a new method" arguments — they don't matter very much.
What matters is a short list of things.
The Short List That Actually Matters
Raw compute. Quantity of data. Quality and distribution of data. Training duration. An objective function that scales to the moon (pre-training loss, RL rewards). And the numerical plumbing — normalization, conditioning — that keeps the giant blob flowing in a laminar way instead of exploding.
Rich Sutton published "The Bitter Lesson" a couple of years later, making essentially the same point. The lesson is bitter because researchers keep rediscovering it: general methods that leverage computation always win in the end. Hand-engineered features, domain-specific architectures, clever heuristics — they all eventually get crushed by scale.
Amodei says he hasn't seen anything that contradicts this hypothesis since he wrote it. Pre-training scaling laws were one instance. And now RL scaling is showing the same log-linear improvements across a wide variety of tasks — not just math contests, but coding, reasoning, and beyond.
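To see what "log-linear" means concretely, here's a minimal sketch of a scaling-law fit. The numbers are entirely synthetic, standing in for real training runs: the point is that loss falling as a power law in compute shows up as a straight line on log-log axes, with each extra order of magnitude buying a roughly constant improvement.

```python
# Minimal sketch of a log-linear scaling law with synthetic data:
# loss falls roughly as a power law in training compute, which is a
# straight line on log-log axes.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])   # training FLOPs (made up)
loss    = np.array([3.10, 2.75, 2.45, 2.20, 1.98])   # eval loss (made up)

# Fit log(loss) = slope * log(compute) + intercept, i.e. loss ≈ A * compute**slope
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)

extrapolated = np.exp(intercept) * (10 * compute[-1]) ** slope
print(f"fitted exponent: {slope:.3f}")                 # a small negative number
print(f"extrapolated loss at 10x more compute: {extrapolated:.2f}")
```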
RL Is Not Different From Pre-Training
There's a common narrative that RL represents some fundamentally new paradigm. That we're "teaching models skills" through reinforcement learning in a way that's categorically different from the next-token prediction of pre-training.
Amodei thinks this is a red herring.
Look at the history. Pre-training started narrow. GPT-1 trained on fanfiction — a tiny slice of text that didn't generalize well. It was only when you trained on broad internet scrapes that generalization kicked in. The transition from GPT-1 to GPT-2 was exactly this: go wide enough and the model starts doing things it was never explicitly trained on. Give it a list of house prices and square footages, and it completes the pattern. Linear regression, basically. Not great, but it does it — and it's never seen that exact task before.
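To make the house-price example concrete, here's a toy illustration of my own, not something from the interview: the kind of few-shot pattern a model might see in its prompt, next to the ordinary least-squares fit it's loosely approximating when it completes that pattern.

```python
# Toy version of the "house prices and square footages" example.
# The data pairs are invented; completing the pattern well amounts to
# something like a linear regression done in context.
import numpy as np

sqft  = np.array([1000, 1500, 2000, 2500])
price = np.array([200_000, 290_000, 410_000, 500_000])

# What the few-shot prompt might look like to a language model:
prompt = "\n".join(f"{s} sqft -> ${p:,}" for s, p in zip(sqft, price))
prompt += "\n3000 sqft -> $"
print(prompt)

# The explicit baseline the model is loosely approximating in context:
slope, intercept = np.polyfit(sqft, price, 1)
print(f"OLS estimate for 3000 sqft: ${slope * 3000 + intercept:,.0f}")
```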
RL is following the same trajectory. We started with narrow tasks — math competitions, specific coding challenges. Now we're broadening to many different RL tasks. And generalization is starting to emerge, just like it did with pre-training five years ago.
The goal of RL training isn't to teach the model every possible skill, just like pre-training didn't try to expose the model to every possible arrangement of words. The goal is to get enough breadth that generalization happens on its own. We're watching the same movie again, just on a different screen.
The Sample Efficiency Puzzle
There is a genuine puzzle here, and it's worth sitting with.
Humans don't see trillions of tokens. Our brains aren't blank slates — we come pre-loaded with priors from millions of years of evolution. Entire brain regions are wired up to specific inputs and outputs before we ever open our eyes. Language models, by contrast, start as random weights. They need orders of magnitude more data than a human child to reach competence.
But here's Amodei's reframe: pre-training isn't analogous to human learning. It sits somewhere between human evolution and human learning. We get many of our priors from evolution — pattern recognition, intuitive physics, social cognition. The models have to learn all of that from scratch. So of course it takes more data.
(Figure: the phases of LLM development map onto, but don't perfectly align with, human cognitive timescales.)
The interesting thing is that once trained, these models are remarkably good at in-context learning. Give a model a million tokens of context and it can adapt, learn patterns, pick up on structure. A million words is weeks of human reading compressed into seconds. So the system starts data-hungry but becomes sample-efficient once the pre-training foundation is laid.
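The back-of-envelope behind "weeks of reading" checks out under ordinary assumptions about reading speed and daily reading time (both assumed here, not taken from the interview).

```python
# Rough arithmetic behind "a million words is weeks of human reading".
# Reading speed and hours per day are illustrative assumptions.
words            = 1_000_000
words_per_minute = 250            # typical adult reading speed
hours_per_day    = 4              # a generous daily reading habit

total_hours = words / words_per_minute / 60
print(f"~{total_hours:.0f} hours of reading, "
      f"~{total_hours / hours_per_day:.0f} days at {hours_per_day}h/day")
```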
Whether we need true "continual learning" — a single model persistently updating its weights on the job — is an open question. Amodei's bet is that we probably don't. Pre-training generalization plus long context windows might just be enough.
The Spectrum of Software Engineering
Let's talk about something concrete: code.
About eight or nine months ago, Amodei predicted that AI models would be writing 90% of the lines of code within three to six months. That happened. At Anthropic and across many teams using their models, most code is now AI-generated.
But here's the thing everyone misunderstood: 90% of lines written is not 90% of software engineers replaced. These are worlds apart. There's a whole spectrum between them.
The spectrum runs from 90% of code written by AI, to 100% of code written by AI, to 90% of end-to-end SWE tasks handled by AI (compiling, setting up environments, testing, writing documentation), to 100% of today's SWE tasks done by AI. And even at that point, new higher-level tasks emerge that humans can do: managing and directing the AI systems.
Amodei's claim: we're traversing this spectrum very quickly. Not instantly, but fast. The current productivity improvement from AI coding tools is something like 15-20% total factor speedup, up from maybe 5% six months ago. That number is accelerating.
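One way to square "90% of lines" with a 15-20% total speedup is an Amdahl's-law-style calculation: only part of an engineer's job is typing code, so even a large speedup on that part moves the total modestly. The fractions below are illustrative assumptions, not figures from the interview.

```python
# Amdahl's-law-style sketch: only the coding portion of the job gets faster.
# The coding fraction and per-task speedup are illustrative assumptions.

def total_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Overall speedup when only the coding fraction of work is accelerated."""
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

# Say coding is 40% of an engineer's time and AI makes that part 1.6x faster:
print(f"{total_speedup(0.40, 1.6):.2f}x")   # lands in the ~15-20% range
# Even an infinite speedup on that 40% caps out well below 2x:
print(f"{total_speedup(0.40, 1e9):.2f}x")
```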
Inside Anthropic, the productivity gains are unambiguous. They're under extreme commercial pressure, trying to maintain 10x annual revenue growth while also doing more safety work than competitors. There is, as Amodei puts it, "zero time for bullshit." If the tools were secretly reducing productivity, they'd know. They see the end output every few months in the form of model launches.
There's an interesting counterpoint, though. A major study last year had experienced developers try to close pull requests in repositories they were familiar with. The developers reported feeling more productive with AI assistance. But their measured output, what actually got merged, was roughly 20% lower. They were less productive, not more.
How do you square this with Anthropic's internal experience? Probably selection effects and the pace of improvement. The models are getting better fast enough that results from six months ago are already outdated. And within an AI lab, the feedback loop between model capability and usage is tighter than anywhere else.
Fast, But Not Infinitely Fast
This is Amodei's central theme, and it's worth understanding precisely because it sits in an uncomfortable middle ground that neither doomers nor accelerationists fully occupy.
There are two poles. One says AI progress is slow, diffusion will take forever, nothing really changes. The other says we'll get recursive self-improvement, exponential takeoff, Dyson spheres around the sun within nanoseconds of AGI. Both are caricatures, but both have their adherents.
The actual data sits in between, and it's striking.
(Chart: Anthropic's revenue, 10x growth per year, three years running. "Obviously that curve can't go on forever.")
Anthropic went from zero to $100 million in 2023, $100 million to $1 billion in 2024, and $1 billion to $9-10 billion in 2025. In January 2026 alone, they added several billion more to annualized revenue. That's a 10x-per-year curve, sustained over three years. It's one of the fastest revenue ramps in the history of business.
But diffusion isn't instant. Large enterprises move slower than individual developers. There's legal review, security compliance, procurement processes, change management. A developer at a Series A startup might adopt Claude Code months before the same tool gets rolled out to 3,000 developers at a pharmaceutical company. Even when enterprises are moving faster than they normally would — and they are — there's a lag.
Amodei's projection: AI will diffuse much faster than any previous technology, but not infinitely fast. Think 10x a year revenue growth, not overnight transformation of the global economy. When someone says "diffusion is cope," he pushes back: the lag is real, not as an argument that AI doesn't matter, but as a constraint on how quickly trillions of dollars in value get captured.
The Compute Buying Problem
Now here's where it gets really interesting. If you genuinely believe a country of geniuses is one to three years away, you should be buying as much compute as humanly possible. The TAM of a system that can actually do everything a Nobel Prize winner can do is measured in trillions. So why isn't Anthropic buying $10 trillion worth of GPUs?
The answer is that being off by even one year can be fatal.
Data centers take a year or two to build out. You have to commit capital now for capacity that comes online in 2027 or 2028. If you assume revenue will be $1 trillion by then and it's actually $800 billion, there is — in Amodei's words — "no force on earth, no hedge on earth, that could stop you from going bankrupt."
So you end up in this bizarre situation where you're 90% confident that transformative AI is coming within a decade, you have a 50/50 hunch it's one to two years away, and yet you can't bet the farm on it because the difference between being right by a year and wrong by a year is the difference between generational wealth and bankruptcy.
The industry is building roughly 10-15 gigawatts of compute this year. That grows about 3x per year. Each gigawatt costs on the order of $10-15 billion per year to operate. By 2028-2029, the industry as a whole will be spending multiple trillions annually on compute. The individual companies are each making their own bets on where demand will land within that trajectory.
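To make the orders of magnitude concrete, here's the back-of-envelope implied by those figures: roughly 12 GW this year, about 3x growth per year, and something like $12 billion per gigawatt-year to operate. These are the article's rough numbers plugged in directly, not a forecast.

```python
# Back-of-envelope on the compute buildout described above, using the
# article's rough figures (midpoints of the stated ranges).
gw_now         = 12         # ~10-15 GW being built this year
growth_per_yr  = 3          # ~3x per year
cost_per_gw_yr = 12e9       # ~$10-15B per gigawatt-year to operate

for years_out in range(4):  # this year through roughly 2028-2029
    gw   = gw_now * growth_per_yr ** years_out
    cost = gw * cost_per_gw_yr
    print(f"+{years_out}y: ~{gw:,.0f} GW, ~${cost / 1e12:.1f}T per year")
```

Compounded three years out, that's a few hundred gigawatts and spending on the order of several trillion dollars a year, which is where the "multiple trillions annually" figure comes from.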
The Profitability Illusion
Here's a counterintuitive take on AI lab profitability that I haven't seen articulated this clearly elsewhere.
Amodei's model: in the AI industry, profitability isn't really a measure of choosing to spend less. It's what happens when you underestimated demand. Losses happen when you overestimated demand. That's it.
Think about the toy model. You buy $100 billion in compute. Roughly half goes to training next-generation models, half to serving inference. The inference side has gross margins above 50%. If you predicted demand correctly, you're profitable — maybe $50 billion in profit on that spend. If demand comes in lower than expected, you have excess training compute and you're unprofitable. If demand comes in higher, research gets squeezed but you're very profitable.
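Here's that toy model as a few lines of arithmetic. The $100 billion compute commitment comes from the passage above; the 55% gross margin and the specific revenue scenarios are illustrative assumptions. Whatever capacity inference doesn't consume is treated as the training budget, which is how research gets squeezed in the high-demand case.

```python
# Toy P&L for "profitability is what happens when you underestimated demand".
# Compute is committed before demand is known; inference runs at an assumed
# 55% gross margin; capacity not consumed by inference becomes training budget.
def annual_pnl(revenue, compute_commit=100e9, gross_margin=0.55):
    serving_cost   = min(revenue * (1 - gross_margin), compute_commit)
    training_spend = compute_commit - serving_cost      # research absorbs the rest
    profit         = revenue - serving_cost - training_spend
    return profit, training_spend

for revenue in (60e9, 110e9, 150e9):   # demand under, near, and over the plan
    profit, training = annual_pnl(revenue)
    print(f"revenue ${revenue / 1e9:.0f}B -> profit ${profit / 1e9:+.0f}B, "
          f"training budget ${training / 1e9:.0f}B")
```

Losses show up only in the low-demand row; in the high-demand row the training budget shrinks but profit is large, which is exactly the asymmetry the model describes.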
The fundamental insight is that with log-linear returns to scale in training, there's a natural equilibrium where you spend an order-one fraction of revenue on research: not 5%, not 95%. Diminishing returns kick in; each additional dollar of training compute past a certain point gives you less than it costs. And the gross margins on inference are high enough to support profitable operations.
So the AI business model might actually be fine. The chaos is in the demand prediction, not in the underlying economics.
Moats and Cournot Equilibria
If AI research gets automated by AI, doesn't everything become a commodity? Doesn't every moat dissolve?
Maybe eventually. But Amodei draws an analogy to cloud computing. There are three, maybe four major cloud players. Not because of network effects (that's how you get monopolies like Meta), but because the cost of entry is enormous. You need massive capital, deep expertise, and years of operational knowledge. Nobody walks in with $100 billion and says "I'm disrupting AWS" — the effect of entering is just that margins go down, while the incumbents keep their structural advantages.
AI models are even more differentiated than cloud. Everyone knows Claude is good at different things than GPT, which is good at different things than Gemini. It's not just crude category differences — models have different styles, different strengths within coding, different approaches to reasoning. That differentiation supports pricing power.
What Amodei describes is basically a Cournot equilibrium: a small number of firms, each behaving rationally, where profits don't get competed to zero. Not astronomical margins, but not zero either. Something like cloud, but with more differentiation.
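For the textbook version, here's a minimal symmetric Cournot calculation. The linear demand curve and cost numbers are arbitrary; the point is just that with a handful of rational firms, per-firm profit stays positive rather than being competed to zero, and it shrinks as more firms enter.

```python
# Symmetric Cournot equilibrium with linear demand P = a - b*Q and constant
# marginal cost c. Each of n firms produces q* = (a - c) / (b * (n + 1)),
# and per-firm profit equals b * q*^2: positive for any finite n.
def cournot_profit_per_firm(n, a=100.0, b=1.0, c=20.0):
    q_star = (a - c) / (b * (n + 1))
    price  = a - b * n * q_star
    return (price - c) * q_star

for n in (1, 3, 10):
    print(f"{n} firm(s): per-firm profit = {cournot_profit_per_firm(n):.1f}")
```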
The one scenario that breaks this is if AI models themselves learn to build AI models, and that capability diffuses broadly. Then you're not just commoditizing AI — you're commoditizing the entire economy at once. Amodei acknowledges this as a possibility, but suggests it's "kind of far post-country of geniuses in the data center."
Governance at the Speed of Exponentials
Now let's talk about the part that keeps Amodei up at night.
If AI capability is an exponential curve, and economic diffusion is a second exponential (steep but lagging), then governance is running on a third curve that's currently flat. Legislatures operate on year-long cycles. International agreements take decades. The technology doesn't care about any of that.
Amodei's worry isn't that regulations will stop AI progress. Markets are too powerful for that — when there's real money to be made, regulation rarely holds. His worry is the opposite: that we won't put safeguards in place fast enough for the dangerous stuff while simultaneously allowing ridiculous restrictions on beneficial stuff.
Exhibit A: the Tennessee legislature introduced a bill making it an offense to train AI to provide emotional support through open-ended conversation. Amodei calls it dumb, the work of legislators who clearly had no idea what AI models can and can't do, reacting to vibes rather than substance.
But here's the twist. Anthropic opposed a federal moratorium on state AI laws — even though state laws like Tennessee's are clearly harmful. Why? Because the moratorium would have banned all state regulation for 10 years with no federal plan to replace it. Given the biosecurity and autonomy risks Amodei writes about in his "Adolescence of Technology" essay, a decade of zero regulation is far more dangerous than a patchwork of dumb state laws.
If we had 100 years for this to happen all very slowly, we'd get used to it. We've gotten used to the presence of explosives in society. We would develop governance mechanisms. My worry is just that this is happening all so fast. — Dario Amodei
His preferred approach: start with transparency standards. Monitor the risks. As specific dangers become more concrete — bio-terrorism, AI autonomy, offensive cyber — act fast with targeted regulation. Don't try to regulate everything at once. But don't pretend the risks don't exist either.
The Geopolitical Knife-Edge
This is where things get genuinely uncomfortable.
Amodei believes there will be distinguished points on the AI capability curve: moments where one country or coalition achieves a decisive advantage. Not a permanent one; the other side is always catching up. But enough of a window to set the "rules of the road" for a post-AI world order.
He wants democratic nations holding the stronger hand when that negotiation happens.
This means export controls on chips to China. Not selling data centers to authoritarian states. Preserving a lead that gives democracies leverage. It's a position that has obvious costs — less trade, less diffusion of benefits to Chinese citizens, potential instability from the power imbalance itself.
But Amodei is genuinely worried about a specific failure mode: an authoritarian government with powerful AI creating a surveillance state so comprehensive and self-reinforcing that it becomes essentially permanent. A digital totalitarianism with no off-ramp.
There's a more radical hope buried in his thinking, though. What if AI technology has properties — or could be built to have properties — that inherently dissolve authoritarian structures? What if you could give every citizen in an authoritarian country their own AI model that defends them from surveillance, and there's no way for the state to crack down without losing the technology's benefits entirely?
We hoped the internet would do this. It didn't. Social media was supposed to be the great democratizer and turned out to be a tool for both liberation and oppression. But Amodei wonders if we could try again with AI, armed with everything we learned from the internet's failures.
It's speculative. He acknowledges that. But in a world where feudalism became obsolete with industrialization, maybe authoritarianism becomes morally and practically obsolete with AI. Maybe the crisis itself forces new thinking about how to protect freedom with the new technology.
Or maybe not. The honest answer is we don't know.
The Thing They'll Miss
When someone eventually writes the history of this era, what will be hardest to reconstruct from the record?
Amodei's answer is revealing. Three things.
First: the sheer disconnect between the people inside the exponential and everyone else. When you're one to two years from something this consequential and the average person on the street has no idea — that's a historically strange situation. Anything that actually happened looks inevitable in retrospect, but the people making bets on it right now are operating under genuine uncertainty.
Second: the speed. Everything happening at once. Decisions you might assume were carefully calculated actually get made when someone walks into your office and says, "you have two minutes, A or B?" You pick one. It turns out to be the most consequential decision ever. You didn't even know it at the time.
Third: the insularity. A small group of people at a small number of companies, making decisions that affect the entire species, while most of the species is busy with other things. That's not unprecedented in history — it's happened with nuclear weapons, with the early internet — but the scale of what's at stake makes it feel different this time.
Some very critical decision will be some decision where someone just comes into my office and says, "Dario, you have two minutes. Should we do thing A or thing B?" Someone gives me this random half-page memo. I'm like, "I don't know. I have to eat lunch. Let's do B." That ends up being the most consequential thing ever. — Dario Amodei
Where Does This Leave Us?
Amodei is at 90% confidence that we'll have "a country of geniuses in a data center" within 10 years. His hunch — more like 50/50 — is that it's one to two years away, maybe three. He's at near-certainty on verifiable tasks like coding. The residual uncertainty is on things that are hard to verify: planning a Mars mission, making a fundamental scientific discovery, writing a great novel.
His thesis is soft takeoff. Not instantaneous transformation, not a slow crawl, but steep exponentials in both capability and economic adoption that play out over years rather than days or decades.
The obvious question: is this the view from inside the bubble? Every generation of technologists has believed their thing was the most important thing. And yet, the revenue numbers are real. The capability improvements are measurable. The coding productivity gains are felt by millions of developers today, not in theory.
Maybe the most honest summary of where we are is this: the exponential looks real. The economic adoption looks real but lagging. The governance structures are nowhere close to ready. And we're running out of time to figure out whether that last part matters.
Whether you find that exciting or terrifying probably depends on how much you trust that a small number of people making fast decisions under uncertainty will get the important ones right. History suggests they'll get some right and some wrong. The question is which ones fall into which category.
We're about to find out.