AI · Economics · Career

Sand Into Thought

Why AI arrived exactly when the world needed it, the Mexican standoff between PMs, engineers, and designers, and why the superpowered individual is the future.

There are three dots about AI that most people are looking at separately. I can't stop thinking about what happens when you connect them.

Dot one: AI is working. Like, actually working. Not the "maybe someday" version. The version where the world's best programmers are saying, over the 2024 holiday break, that AI is now coding better than they can.

Dot two: the world is depopulating. Birth rates are collapsing across the West, across China, across most of the developed world. If you project forward a century, many countries will have half their current population.

Dot three: we've actually had almost no real technological progress in the economy for 50 years. It feels like we have. We haven't. The data says otherwise.

Put those together and something wild emerges. We're not heading into a dystopia where AI takes all the jobs. We're heading into a world where AI arrives precisely when we need it to prevent the economy from eating itself. The timing has worked out miraculously well.

Here's what stuck with me.

The Philosopher's Stone

Here's a frame I haven't been able to shake. Isaac Newton spent decades obsessed with alchemy. The transmutation of lead into gold. Common into rare. He never figured it out. Nobody did.

AI is the philosopher's stone.

We now have a technology that transforms the most common thing in the world — sand — into the most rare thing in the world — thought.

Sand. Silicon. Chips. Compute. Intelligence. That's the pipeline. And the thing at the end of that pipeline — thought, reasoning, problem-solving — has been the most scarce resource in the history of human civilization. Every royal family knew it. Every aristocratic class hoarded it. Every university gatekept it.

Now it's being manufactured from sand.

This isn't just a nice metaphor. It's a structural observation about what's happening to the economics of intelligence itself. When something goes from scarce to abundant, everything downstream changes. Pricing collapses. Access democratizes. New fields emerge that weren't economically feasible before.

We're watching this happen in real time with coding, with legal analysis, with medical diagnosis. The question isn't whether it'll happen in other domains. It's how fast.

The Stagnation Nobody Talks About

Here's the thing that genuinely surprised me. We all walk around feeling like we're living through rapid technological change. Smartphones, social media, cloud computing. It feels fast.

It's not. Not in the way that matters economically.

Economists measure the rate of technological change through productivity growth. And productivity growth for the last 50 years has been running at roughly half the pace of the 1940-1970 era, and a third the pace of 1870-1940.

1870–1940: Highest productivity growth. Electrification, automobiles, radio, assembly lines.
1940–1970: Post-war boom. Jet engines, nuclear power, interstate highways, television.
1970–today: The era we think was fast. Internet, mobile, cloud. Statistically? Slow.

Look around. The bridge you drive over was built in 1930. The dam was built in 1910. The city was founded in 1880. Where are the new cities? Where's the California high-speed rail? What have we actually built?

This is essentially Peter Thiel's argument, and I think he's been proven right. We've had lots of progress in bits. Very little in atoms. The built world is not that different from 50 years ago. Compare that to the 1870-1930 delta and it's embarrassing.

Why does this matter for AI? Because AI isn't entering a world that's already moving fast. It's entering a world that's been economically stagnant for half a century. The headroom is enormous. Even if AI only triples productivity growth, it just takes us back to normal levels of economic dynamism. The level where people felt the world was "awash with opportunity."
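To make that headroom concrete, here's a back-of-the-envelope compounding sketch. The 1% and 3% figures are illustrative stand-ins for "roughly a third the pace," not official statistics:

```python
# Back-of-the-envelope compounding with illustrative growth rates, not official data.
# Assume ~1% annual productivity growth for the recent era, ~3% for the 1870-1940-style era.

def compound(rate, years):
    """Total growth factor after compounding an annual rate for a number of years."""
    return (1 + rate) ** years

slow = compound(0.01, 50)   # roughly the post-1970 pace
fast = compound(0.03, 50)   # roughly the earlier, faster pace

print(f"50 years at 1%/yr: output per worker grows {slow:.2f}x")
print(f"50 years at 3%/yr: output per worker grows {fast:.2f}x")
# ~1.64x vs ~4.38x: the gap between the economy we got and the one the earlier pace implied.
```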

The Demographic Cliff

Here's the second piece that most AI discourse ignores entirely.

Birth rates are collapsing. Not gradually. Rapidly. The US is below replacement rate. China is below replacement rate. Most of Europe is below replacement rate. Many countries will literally depopulate over the next century.

Here's the thing nobody talks about: if we didn't have AI, we'd be in a panic right now. Because depopulation without new technology means the economy shrinks. Fewer workers, fewer consumers, fewer taxpayers, fewer innovators. You get a civilization that's slowly euthanizing itself.

The timing argument

AI and robots are arriving precisely when we actually need them. The remaining human workers are going to be at a premium, not at a discount. Combine a declining population, reduced immigration, and faster productivity growth, and the "dystopian no-jobs" scenario probably never materializes.

This completely reframes the job loss conversation. Everyone's worried about AI taking jobs. The actual macro picture suggests we're going to have a labor shortage, not a surplus. AI isn't replacing workers who would otherwise be abundant. It's filling gaps that demographics are creating.

I'm not saying individual displacement doesn't happen. It does. But at the macro level, the math points in a direction most people aren't looking.
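Here's a toy version of that math. The rates are assumptions (a population that halves over a century, 2% annual productivity growth from AI and automation), but they show why a shrinking workforce and a growing economy can coexist:

```python
# Toy macro arithmetic with assumed rates; a sketch, not a forecast.

years = 100
pop_growth = 0.5 ** (1 / years) - 1   # population halves over a century: about -0.69% per year
productivity_growth = 0.02            # assumed 2% per year from AI and automation

gdp_factor = ((1 + pop_growth) * (1 + productivity_growth)) ** years
per_capita_factor = (1 + productivity_growth) ** years

print(f"Population over a century: {(1 + pop_growth) ** years:.2f}x")   # 0.50x
print(f"Total GDP: {gdp_factor:.2f}x")                                  # ~3.6x
print(f"GDP per person: {per_capita_factor:.2f}x")                      # ~7.2x
```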

Task Loss, Not Job Loss

This distinction matters more than anything else for people thinking about their careers.

The atomic unit of work isn't the job. It's the task. A job is a bundle of tasks. And tasks change much faster than jobs do.

The Task Rotation

The Executive and the Secretary

In 1970, executives never typed. They dictated memos to secretaries. When email arrived, secretaries would print emails, bring them to the executive, take the executive's handwritten reply, and type it back into the computer. Today, executives do all their own email. The secretary job still exists, but the tasks shifted to travel planning, event coordination, and operational management. Both roles survived. The tasks rotated completely.

This is exactly what's happening with coding right now. The job of "programmer" isn't disappearing. But the task of "writing code by hand" is being abstracted away — just like scripting languages abstracted away memory management, which abstracted away assembly, which abstracted away machine code, which abstracted away rooms full of human calculators doing math by hand.

AI coding is just the next layer of abstraction. The best programmers today aren't writing code. They're orchestrating ten coding bots running in parallel, arguing with them, debugging their output, shifting between terminals. The job title hasn't changed. Every single task inside it has.

The people who get crushed aren't the ones whose jobs disappear. They're the ones who refuse to update their task bundle.

The Mexican Standoff

Here's maybe the most visceral way to describe what's happening right now. Think of what's going on between product managers, engineers, and designers as a John Woo three-way Mexican standoff. Guns in both hands. Each one pointing at the other two.

The engineer now believes: "I don't need a PM or designer. AI can do product strategy and design. I'll build the whole thing myself."

The product manager now believes: "I don't need an engineer or designer. AI can write code and generate designs. I'll ship the product myself."

The designer now believes: "I don't need a PM or engineer. AI can code and do strategy. I'll design and build the whole experience."

The punchline? They're all kind of correct.

AI is now genuinely good at all three of those jobs, or at least at a large percentage of the tasks within each job. Which means the silos between these roles are dissolving faster than anyone expected.

And then there's the real irony: all three of them are eventually going to realize that AI can also be a better manager. So they'll end up aiming the guns up the org chart. But that's the next phase.

This isn't a problem. It's an opportunity. And it leads directly to the most important idea here.

The Superpowered Individual

AI takes people who are good at something and makes them very good. That's the average case. Useful but not world-changing.

The interesting case is the really great people becoming spectacularly great. The best coders I know are saying it: "I'm not twice as good as I used to be. I'm 10 times as good."

The mechanism is what Scott Adams always talked about: stacking skills compounds. Being good at two things makes you more than twice as valuable. Being good at three, more than three times. You become a hyper-relevant specialist at the intersection.

Adams himself was the proof. Not the world's best cartoonist. Not the world's best business thinker. But a cartoonist who understood business? That's Dilbert. One of the most successful comics in history. Nobody else could have made it because nobody else had that exact combination.
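A crude way to see the math: assume, purely for illustration, that "good" means top 10% and that the skills are independent. The intersection gets rare fast:

```python
# Rough talent-stack arithmetic with invented numbers, assuming independent skills.
top_fraction = 0.10   # assume "good" means top 10% in each individual skill

for n_skills in (1, 2, 3):
    intersection = top_fraction ** n_skills
    print(f"Top 10% in {n_skills} skill(s): roughly 1 in {round(1 / intersection):,}")
# 1 in 10, 1 in 100, 1 in 1,000.
```

The independence assumption is generous, but the direction holds: the scarcity lives in the combination, not in any single skill.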

The E-shaped career

Forget T-shaped. The new model is E-shaped — deep expertise in one domain with functional capability across two or three others. A coder who can also do design and product management. A designer who can also ship code and run strategy. The silos dissolve. The combinations compound. And AI is what makes the lateral expansion possible.

Larry Summers' version of this is simpler: "Don't be fungible." Don't be a cog that can be swapped in and out. If you're "just" a designer or "just" a PM, you're replaceable. If you've got the E-shape going, you're one of the only people in the world who can do that combination. And your value goes through the roof.

Here's the part I think most people are sleeping on: AI is the greatest career training tool ever built. You can literally say "teach me product management" and it will. It'll give you problems, evaluate your answers, adjust difficulty. The Bloom two-sigma effect — where one-on-one tutoring moves a student from the 50th percentile to roughly the 98th — used to be reserved for aristocrats. Now it's available to anyone with an internet connection.
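For reference, "two sigma" converts to a percentile like this, assuming normally distributed outcomes:

```python
# Converting Bloom's "two sigma" into a percentile under a normal-distribution assumption.
from math import erf, sqrt

def normal_cdf(z):
    """Share of a standard normal distribution that falls below z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

print(f"Average student (0 sigma): {normal_cdf(0.0):.0%}")   # 50%
print(f"Tutored student (+2 sigma): {normal_cdf(2.0):.1%}")  # ~97.7%, roughly the 98th percentile
```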

People are spending all their time asking AI to do work for them. Not enough people are spending time asking AI to teach them how to do things themselves. That second use case might be more valuable in the long run.

Why You Still Need to Understand the Stack

Here's what I think is an underrated take. If you just let AI write the code and accept whatever it gives you, you'll be a mediocre coder. AI will happily generate infinite amounts of mediocre code. No problem.

But if you want to be one of the best software people in the world? You still need to understand the full stack. All the way down to assembly and machine code. Not because you'll be writing assembly by hand. But because when the AI gives you something broken, you need to understand why.

The analogy is sharp: scripting-language developers still needed to understand how microprocessors work. Not because they were doing memory management, but because when their Python code was slow, knowing what was happening at the hardware level was what separated good from great.

Same principle applies now. AI coding is the next layer of abstraction. The people who understand every layer below it will be the ones who can actually make the AI coding tools sing. Everyone else is just pressing buttons and hoping.
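A tiny, machine-dependent illustration of why the layers matter. Both halves compute the same sum; the difference is everything underneath the abstraction (interpreter dispatch and boxed integers versus a tight compiled loop over contiguous memory). This assumes NumPy is installed, and the exact numbers will vary:

```python
import time
import numpy as np

data = list(range(10_000_000))
arr = np.arange(10_000_000)

t0 = time.perf_counter()
total_loop = 0
for x in data:               # one interpreter dispatch and one boxed integer per element
    total_loop += x
t1 = time.perf_counter()

total_np = int(arr.sum())    # one call into a compiled loop over contiguous memory
t2 = time.perf_counter()

assert total_loop == total_np
print(f"Python loop: {t1 - t0:.3f}s   NumPy: {t2 - t1:.3f}s   (ratio varies by machine)")
```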

The same lesson is worth teaching kids. A 10-year-old on Replit doing vibe coding, building Star Trek simulators, arguing with Claude for two hours at dinner, is off to a great start. But the message is: learn to understand what the AI is giving you. Don't just consume the output. Understand the machine.

Nobody Knows Where the Moats Are

Here's maybe the most honest thing you can say about AI right now: nobody knows where the moats are. Not the VCs, not the founders, not the big labs.

The case for moats in AI models seems strong: billions to build, rare talent, massive compute requirements. Should be an oligopoly or monopoly.

The case against? Within a year of GPT-3, there were open-source equivalents running on a fraction of the hardware. Within three years of ChatGPT, five American companies, five Chinese companies, and open source all had roughly equivalent products. DeepSeek came out of a hedge fund in China and basically replicated American frontier lab research.

Even at the app layer, the evidence cuts both ways. Claude Code was spectacular. Then Anthropic built Co-Work with Claude Code in a week and a half. That's impressive. But it also took a week and a half. How defensible can something be that was built in 10 days?

The smartest people in the field, when you get them off the record, will tell you there really aren't any secrets among the big labs. They all have the same information. They lap each other on a regular basis, but there's not a lot of proprietary anything at this point.

The conclusion: this is a complex adaptive system. The technology, the regulation, the entrepreneurial decisions, the economics, the availability of capital — they all interact in ways we can't predict. Anyone claiming to know the industry structure five years out is probably wrong. The history of the internet proves it. Almost every confident prediction made between 1993 and 2005 turned out to be quite badly wrong.

The right posture isn't prediction. It's flexibility.

IQ 200 and Beyond

Here's a subtle take on AGI worth sitting with. The "cosmic" definition — the singularity, the self-improvement loop, machines making decisions beyond human comprehension — probably isn't where we're headed. And the "prosaic" definition — AI can do every economically valuable task as well as a human — understates what's actually going to happen.

Because here's the thing: why would human-level be the ceiling?

Human IQ tops out around 160. That's Einstein. That's Feynman. Biological hardware imposes hard constraints. There's only so much you can fit in a skull.

[Chart: the human IQ range runs from roughly 105 (accountant) through 110 (manager), 130 (lawyer), and 140 (scientist) up to 160 (Einstein), the biological cap. The AI range keeps going: 180, 250, 300+, with no ceiling. Current AI models sit around 131-140.]

Human intelligence caps at ~160 (biological constraint). AI has no theoretical ceiling.

Current AI models test around 131-140 on equivalent benchmarks. They'll hit 160 soon. Then 180. Then 200. Then numbers that have no human equivalent because no human has ever been that intelligent.
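To put numbers on the biological ceiling: under the conventional IQ model (mean 100, standard deviation 15, roughly normal), high scores become vanishingly rare very quickly. The model is an approximation, especially in the tail, but that's exactly the point:

```python
# How rare are high IQs under the conventional model (mean 100, SD 15, normal)?
from math import erf, sqrt

def rarity(iq, mean=100, sd=15):
    """Roughly 1-in-how-many people score at or above this IQ under a normal model."""
    z = (iq - mean) / sd
    tail = 0.5 * (1 - erf(z / sqrt(2)))
    return 1 / tail

for iq in (130, 145, 160):
    print(f"IQ {iq}: about 1 in {rarity(iq):,.0f} people")
# IQ 160 is four standard deviations out: on the order of 1 in 30,000.
# There's no meaningful "1 in N" for IQ 200; the human distribution has effectively ended.
```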

Think about it: would the world be better or worse with more Einsteins? Obviously better. So why wouldn't we want machines that exceed Einstein? The "human-level AGI" milestone is going to be a footnote. Just Tuesday in 2026. The more interesting question is: what do we get to do with intelligence that exceeds anything biology has ever produced?

That's genuinely exciting. Not in an abstract utopian way. In a practical, what-does-physics-look-like-when-you-have-IQ-300-working-on-it way.

Why It Won't Happen Overnight

Lest this all sound too utopian, let's talk about the brakes. The Thiel critique applies here too: the real world is wrapped in bureaucratic process, cartels, monopolies, regulations, and political structures that actively resist change.

Healthcare is the perfect example. AI is almost certainly a better diagnostician than your doctor today. But AI can't get a medical license. Can't prescribe medications. Can't perform procedures. The medical system — doctors, nurses, hospitals — functions as a set of interlocking cartels. And cartels don't welcome disruption.

This is why the "everything changes overnight" narrative doesn't hold up — from either the utopians or the dystopians. The technology is moving fast. The institutional structures it needs to penetrate are moving at the speed of bureaucracy. Which is to say, barely.

The optimistic view: maybe AI is compelling enough that we actually revisit decades-old assumptions about how industries should be structured. Maybe we finally ask whether the current arrangements are serving anyone other than the incumbents. But that's a political question as much as a technological one.

Indeterminate Optimism

There's an investment philosophy here that doubles as a philosophy of how to operate in uncertain times.

Thiel's framework: there are determinate optimists (I will build the electric car, I will go to Mars) and indeterminate optimists (something good will happen, I just can't tell you exactly what). Thiel has historically been critical of indeterminate optimism as wishful thinking.

But here's the counter: indeterminate optimism isn't wishful. It's structural. The great thing about the capitalist system, about Silicon Valley, is you don't need one Elon. You need thousands of determinate optimists running thousands of experiments. The indeterminate optimist's job is to fund and enable as many of those as possible. Not because any single one is guaranteed. But because the portfolio of experiments, in aggregate, produces the future.
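A toy Monte Carlo makes the "structural, not wishful" point. The parameters are invented (a 0.5% chance that any one experiment becomes a platform, an ecosystem running a couple thousand of them), but the asymmetry between the single bet and the portfolio is the whole argument:

```python
# Toy Monte Carlo of portfolio-style optimism with invented parameters.
import random

random.seed(42)
p_breakthrough = 0.005   # assumed: 0.5% chance any single experiment becomes a platform
n_experiments = 2000     # assumed: size of the ecosystem's portfolio
n_worlds = 1000          # simulated alternative histories

worlds_with_a_hit = 0
for _ in range(n_worlds):
    hits = sum(random.random() < p_breakthrough for _ in range(n_experiments))
    if hits >= 1:
        worlds_with_a_hit += 1

print(f"Any single experiment succeeds: {p_breakthrough:.1%} of the time")
print(f"Portfolio produces at least one platform: {worlds_with_a_hit / n_worlds:.1%} of worlds")
# Expected winners per world: ~10. Individually wishful, collectively near-certain.
```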

Nine major technology platforms have emerged from Silicon Valley. Not because someone planned them. Because the ecosystem was flexible enough to morph from semiconductors to PCs to internet to mobile to cloud to AI. Nobody sat down in 1990 and said "in the 2020s we'll do AI." It emerged from the system.

That's the lesson for individuals too. You don't need to predict the future perfectly. You need to be flexible enough to adapt as it reveals itself. Deep skills, lateral capability, willingness to update your task bundle as the tools change.

What I'm Taking Away

A few things I'm going to sit with.

The timing argument is the most compelling macro frame I've heard for AI optimism. Not "AI will be great because AI is cool" but "AI arrives at the exact moment when demographic collapse would otherwise cause economic contraction." That's not cheerleading. That's structural analysis.

The Mexican standoff is real and I'm seeing it play out in every product team I talk to. The silos between PM, engineering, and design are dissolving. The people who lean into the dissolution — who expand laterally instead of defending their stovepipe — will compound their value. The ones who cling to role definitions will get compressed.

And the biggest one: AI as a learning tool is still dramatically underrated. Everyone's asking it to do work. Not enough people are asking it to teach them. The Bloom two-sigma effect — elite tutoring available to everyone — is maybe the most consequential thing happening in AI right now, and it gets almost no attention.

We're building AI wearables at NeoSapien. This framing reinforced something I already believed but now feel more strongly: the future isn't AI replacing human intelligence. It's AI augmenting it. Making the good people great and the great people spectacularly great. The philosopher's stone works. The question is who's going to pick it up.

What are you using AI to learn right now? Not to do. To learn.