The Purchasing Power Math That Makes AI Doomers Look Foolish

A speculative fiction memo crashed $100bn in market cap. The real story isn't the panic — it's the $7,000 per household nobody's modelling.

37 min read

Published 26 February 2026

A Fictional Recession That Crashed Real Markets

On Monday, a piece of speculative fiction wiped over $100 billion in market capitalisation off the stock market. IBM cratered 13% — its worst single day in 25 years — because Anthropic published a blog post about COBOL. Not because COBOL suddenly stopped working. Not because banks started migrating their core systems overnight. Because Anthropic mentioned it.

But the real catalyst was a Substack post by investment research firm Satrini: a fictional macro memo set in 2028. The scenario was clean: AI capabilities keep compounding, companies rationally cut white-collar headcount to protect margins, displaced workers spend less, the consumption hit cascades through mortgages and credit, and ultimately contaminates the entire financial system. In the scenario, the S&P drops 38% from its 2026 highs, unemployment hits 10.2%, and the world is very, very bad.

Here is what I found most interesting about the past week: the market didn't crash because something happened. It crashed because someone wrote a compelling story about what might happen. And the difference between those two things is the entire argument.

Steel-Manning the Bear Case (Because It Deserves It)

Credit where it's due — the Satrini memo is well-constructed. The mechanism it describes is internally consistent and, frankly, pretty easy to follow even without an economics degree.

White-collar workers make up roughly half of US employment and drive three-quarters of discretionary consumer spending. The top 20% of earners account for about 65% of consumer spending — the people buying second homes, cars, holidays, private school tuition. If AI structurally impairs their earning power, the consumption maths get ugly fast. A 2% decline in white-collar employment could translate into double that — a 4% hit on discretionary spending.
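A quick back-of-envelope shows how the memo gets from 2% to 4%, using the shares quoted above. The precautionary amplification factor is my own assumption, picked to reproduce the memo's doubling; everything else comes from the text:

```python
# Back-of-envelope for the memo's consumption arithmetic.
white_collar_employment_share = 0.50  # roughly half of US employment
discretionary_spend_share = 0.75      # they drive ~3/4 of discretionary spending

employment_decline = 0.02             # the memo's 2% fall in white-collar jobs

# Direct hit: displaced workers take their outsized spending share with them.
direct_hit = employment_decline * (discretionary_spend_share / white_collar_employment_share)

# Assumed amplification: still-employed peers cut back precautionarily too.
amplification = 4 / 3
total_hit = direct_hit * amplification

print(f"direct hit: {direct_hit:.1%}, total hit: {total_hit:.1%}")
```

The point of the sketch is that the doubling is not magic: a group that spends disproportionately, plus a modest precautionary-savings response, is enough.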

Satrini describes what they call an "intelligence displacement spiral" — essentially a negative feedback loop. AI improves, companies cut payroll, savings flow into more AI, AI improves further, and there is no natural brake on the cycle. The financial contagion chain is plausible too, particularly through private credit, which grew from $1 trillion in 2015 to over $2.5 trillion by 2026. Private equity rolled up SaaS companies at valuations that assumed perpetual revenue growth. Those assumptions are dying in real time.

The most haunting line in the piece: "In 2008, the loans were bad on day one. In 2028, the loans were good on day one. The world just changed after the loans were written."

I get why it went viral. The scenario is vivid, simple, well-argued, emotionally resonant, and plausible. It activates the same dread that made The Big Short a cultural touchstone — the feeling that the system is fragile, that nobody in charge sees what's coming, and that the smart money is already headed for the exits.

But here's the thing about doom narratives: they are dramatically more viral than their counterfactuals. Not because of their analytical rigour, but because of one of the most robust findings in human psychology — negativity bias. A headline reading "AI could crash the economy" generates 10 to 50 times more engagement than "AI-driven deflation could raise real purchasing power for the median household." Both describe potential futures. Only one gets millions of views.

The $7,000 Per Household Nobody Is Modelling

Alex Mas, an economist at the University of Chicago Booth School of Business, read the same intuitive arguments about AI-driven demand collapse that Satrini formalised into fiction. Then he did something the Substack crowd rarely bothers with: he built a model.

When you model the conditions Satrini describes — where labour's share of the economy declines rapidly, where there's no consumption recovery after prices fall, where wealthy capital owners don't increase spending, where interest rates hit the floor, and where there's zero policy response — yes, you get something resembling Satrini's scenario. But Mas argues that the conjunction of all those conditions existing simultaneously, with no policy response, is somewhere between implausible and laughable.

As someone who lived through 2008 with a divided government where everyone was fighting tooth and nail, I can confirm: when things get bad enough, governments do respond. Not because they're altruistic, but because they want votes. The mechanism is entirely selfish and entirely reliable.

But the policy argument is the boring one. The interesting argument is about services.

Most consumer spending is in services — mortgage services, tax preparation, insurance brokerage, travel booking. These are all tasks that AI agents plausibly make dramatically cheaper today, because they're fundamentally functions of complexity, not legacy infrastructure.

If AI agents compress costs in these service categories by 40–70%, that plausibly returns $4,000 to $7,000 in annual gain per median household. Tax-free. No legislation required. People simply keep more of their money because the margin structures of middleman services collapse.
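The range is easy to reproduce. The baseline figure below is an assumption: an annual per-household spend on intermediated services chosen so that the 40–70% compression range maps onto the $4,000–$7,000 claim:

```python
# Per-household savings from middleman-cost compression.
# The $10,000 baseline is an ASSUMED annual spend on intermediated
# services (estate agency, tax prep, brokerage, booking fees).
baseline_middleman_spend = 10_000

savings = {c: baseline_middleman_spend * c for c in (0.40, 0.70)}
for compression, saving in savings.items():
    print(f"{compression:.0%} compression keeps ${saving:,.0f} per household per year")
```

If the true baseline is higher or lower, the savings scale linearly with it; the compression percentages do the rest of the work.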

The immediate doomer response is: "But that money just evaporates because fewer people are employed in those services." Does it, though? If you pay £8,000 less in estate agent commissions because an AI handled the complexity, that £8,000 goes into furniture, renovations, or savings. It doesn't disappear. It re-enters the economy through different channels. The Satrini memo treats displaced spending as a loss. In practice, it's a transfer.

The US Census Bureau reported 532,000 new business applications in January 2026 alone — up over 7% from December, continuing an accelerating trend since 2021. One-person businesses have more power than they've ever had. Lower overhead, broader reach, and tools that give a single person the output capacity of a small team. This is not theoretical. I know more people than I can count on two hands who have gone from not coding at all to running real businesses, making real money, within the past 18 months.

Of course, the bears have a response: "This time it's different because AGI replaces everything at once." Fine. That brings us to the part of this conversation that nobody is having.

The Capability-Dissipation Gap: Why Both Sides Are Wrong About Speed

Whether or not AI displaces labour the way the bears describe depends entirely on one variable: whether the speed of labour displacement outpaces the speed of societal adaptation. And that variable is catastrophically underrepresented in every analysis I've read — bear or bull.

Both narratives make the same hidden assumption: that AI capabilities translate rapidly into economic impact. The doomers assume everyone gets fired quickly. The boomers assume society reorganises around AI quickly. Both assume the conversion rate from "AI can technically do this" to "the economy has restructured" is extremely fast.

It is not. And the reason it isn't is the most underappreciated force in the entire AI discourse: social inertia.

Let me be specific, because "inertia" on its own is a hand-wave. There are at least four distinct forces at work:

Regulatory inertia. Financial services firms that want to use AI for compliance need approval from regulators who haven't finished writing the rules. Healthcare organisations navigate HIPAA, FDA clearance, and institutional review boards. Government agencies run procurement cycles measured in years. The COBOL systems that Anthropic is talking about modernising run an estimated 95% of ATM transactions in the US — hundreds of billions of lines running in production across finance, airlines, and government. Nobody is migrating those to a new codebase because a startup published a blog post, even if that startup is Anthropic.

Organisational inertia. The Satrini scenario assumes companies cut headcount "rationally and rapidly." Companies are not rational actors in practice. Headcount decisions are filtered through HR policies, employment law, union agreements, severance obligations, institutional knowledge preservation, management politics, and the simple fact that most executives have never managed an AI transition. The gap between "Claude can technically do parts of this job" and "we've reorganised our workflows, retrained our staff, built QA processes for AI output, and confidently reduced headcount" is enormous. I've seen multiple cases where large-company pilot programmes were abandoned because the AI capability they were piloting moved past them before they finished procurement.

Cultural inertia. Most people still don't use AI in their daily work. When Tobi Lütke — one of the most technically fluent CEOs on the planet, running a company whose entire business is technology — had to issue a company-wide mandate in April 2025 saying "reflexive AI usage is now the baseline at Shopify" and built it into performance reviews, that tells you something important about how slowly even high-performing organisations change their cultural behaviours.

Trust inertia. Enterprises do not and should not trust AI output by default. The cost of building formal verification systems is substantial. Moving your workforce from "I do this work" to "I verify AI does this work" is a competency shift that most organisations lack the capital, the stomach, and the institutional patience to execute.

Two Curves, One Chart, and the Opportunity Nobody Sees

Picture two curves on the same chart.

The first is AI capability — model intelligence, reasoning depth, agentic endurance. This curve goes up fast. Gemini doubled its reasoning performance in three months. Pick any benchmark; they all accelerate.

The second curve is societal dissipation — the rate at which those capabilities actually permeate the economy and change how work gets done, how money flows, how institutions operate. This curve is far, far flatter. It compounds over time, but it starts from a low base and it moves slowly, governed by the four inertia forces above.
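As a toy sketch, with both growth rates as illustrative assumptions rather than estimates, the divergence looks like this:

```python
# Toy model of the capability-dissipation gap: capability compounds fast;
# dissipation (the economy's absorption of that capability) compounds
# slowly from a low base. Both growth rates are assumptions.

def capability(month: int) -> float:
    return 2.0 ** (month / 3)  # assumed: doubles every three months

def dissipation(month: int) -> float:
    return 1.02 ** month       # assumed: ~2% compounding monthly absorption

gaps = {m: capability(m) / dissipation(m) for m in (0, 12, 24, 36)}
for month, ratio in gaps.items():
    print(f"month {month:2d}: capability outruns absorption by roughly {ratio:,.0f}x")
```

The exact numbers are arbitrary; the shape is the argument. An exponential divided by a near-flat curve widens, and keeps widening, until the slow curve's compounding finally bites.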

The gap between these two curves is where we all live today. And it explains almost everything that seems confusing about the current moment:

  • Why AI capabilities are stunning but economic disruption remains modest

  • Why the stock market cannot make up its mind — simultaneously pricing incredible ROI and incredible disaster

  • Why both the doom and boom narratives sound compelling

  • Why a blog post can crash a stock while the underlying business is unchanged

But here's the bit that matters most: the gap is the opportunity.

If AI capabilities were irrelevant, there'd be no advantage to adoption. If adoption were instant, there'd be no competitive moat. It's precisely because the tools are powerful and unevenly distributed — understood by very few, integrated by even fewer — that asymmetric economic returns exist for those operating at the frontier.

And because social inertia is so strong, this advantage doesn't erode quickly. It persists. It compounds. Every model release makes the existing foundation of practical understanding more valuable, not less, because each new capability lands on a base of real-world integration experience that takes genuine time to develop.

The Shopify Playbook (And Why Almost Nobody Can Copy It)

Shopify's approach under Lütke is worth studying precisely because it illustrates both the opportunity and the difficulty.

Lütke's mandate isn't "use AI when it's convenient." It's "demonstrate why AI can't do this before you're allowed to ask a human to do it." When he makes a junior employee test their project against an AI tool, he's not expecting the AI to succeed. He's building organisational muscle memory. He's ensuring that when the next model release drops, Shopify has a pre-built evaluation framework that immediately reveals what's newly possible.

Lütke described his personal approach on the Acquired podcast: he maintains a folder of prompts that he runs against every new model release, systematically probing capabilities like a QA engineer running unit tests. He calls them "Tobi Evals." He's not just using AI — he's building an institutional competency for evaluating AI, which is a fundamentally different and more valuable skill.

But here's the catch: Lütke is a one-percenter. He's a technically fluent founder-CEO of a technology company with complete organisational authority. The question isn't whether his approach works. It's whether a mid-market manufacturing firm in the Midlands can replicate it. Whether a 200-person insurance brokerage in Manchester will develop their own eval framework. Whether a regional government department will build the cultural infrastructure required to move from "AI exists" to "AI is integrated into our workflows."

The answer is overwhelmingly no. Not because they lack intelligence or motivation, but because cultural and organisational inertia are real forces that operate on timescales the tech industry consistently underestimates.

Multiply that mid-market firm by a million. That's the dissipation curve.

Size Determines Strategy, Not Outcome

The capability-dissipation gap plays out differently depending on where you sit.

Large firms are positioned to win on every dimension except one — and it may be the one that matters most. They have a capital advantage (they can spend £20,000 a month on an AI agent without blinking). They have a data advantage (decades of proprietary information). They have distribution (existing customer relationships that create deployment surface area). And they can build verification and compliance infrastructure.

But they carry the full weight of organisational inertia. Every new AI workflow survives procurement, legal review, security audit, pilot programme, and executive committee approval. It can take 18 months from "this tool will save us £10 million a year" to actually saving the money. The only exception is a deeply involved founder — the wild card that can make a large company move like a small one.

Small firms and individuals have the opposite profile. They lack capital, data, and distribution. But they have the one thing large companies don't: speed. The capability-dissipation gap creates an asymmetric advantage for anyone who can collapse the integration timeline.

A solo consultant who integrates AI into their workflow today operates at the capability frontier while their competitors are still running quarterly planning meetings. The practical heuristic that separates AI-native operators from everyone else is temporal: they think in hours, not weeks. "Get it done by end of day" versus "let's revisit next quarter."

This is not motivational advice. It's structural economics. The gap between capability and integration in your specific domain is a measurable, exploitable inefficiency. Every month it stays wide is a month you're leaving returns on the table.

Lütke made an observation on the Acquired podcast that encapsulates this perfectly: the best chess game every year for the past 20 years has been played by machine versus machine. Nobody watches those games. But everyone in chess knows who Magnus Carlsen is. We don't care about the chess. We care about the humans playing it. The tools are instruments to be played — not replacements for the player. The craft still matters. The judgement still matters. What changes is the ceiling of what a skilled player can achieve.

This is the fundamental error in the doom narrative. It models AI as a replacement for human economic activity rather than an amplifier of it. The Satrini memo assumes a world where AI does work instead of humans. The reality that's emerging is a world where AI does work through humans — where the value of a skilled operator with AI tools dramatically exceeds the value of either alone.

Twenty-six years in ecommerce taught me something about technological disruption: the people who survive it are never the ones who predicted the exact trajectory of the technology. They're the ones who stayed close enough to the frontier to adapt as it moved. The mobile revolution didn't play out the way anyone predicted in 2007. Neither did cloud computing, or social commerce, or marketplace dynamics. What mattered was proximity to the capability frontier and speed of adaptation.

The same principle applies now, but with higher stakes and a wider gap.

What This Actually Means For Your Career and Business

Stop consuming AI discourse as entertainment. The doom-and-boom cycle is a spectator sport that generates engagement, not insight. Here's what to do instead:

Re-contextualise the stock market activity. The AI scare trade is creating mispriced assets and organisational chaos. Some of the companies getting hammered are going to face the trends the doomers describe — but the market isn't asking the right questions. What do companies do with the savings from a 40% reduction in software costs? What happens to the £35 billion that gets redirected from financial intermediary commissions to consumers? Nobody's modelling that. The doom narrative has no place for it.

Calibrate the doom narrative correctly. It's useful as a policy warning — we should absolutely be thinking about supporting job transitions and broadening capital ownership. But it is not useful as a career planning framework or an investment thesis. It's a meme that's 10 to 50 times more viral than the counter-evidence. Calibrate accordingly.

Map the capability-dissipation gap as it applies to you. This is the highest-value activity available to you right now. Are you operating at the capability frontier? Testing new models regularly? Integrating AI into your actual workflows? Building evaluation frameworks for your domain? Or are you operating at the dissipation rate — aware that AI exists, using it occasionally, but fundamentally working the same way you did two years ago?

The gap between those two positions is where economic value concentrates over the next two to three years. And because social inertia is so strong, the gap is closing more slowly than almost anyone expects.

The person who spent the last year building genuine AI fluency in their domain hasn't just learned a tool. They've built an asset that compounds. Each model improvement makes that asset more valuable, not less, because each new capability lands on a foundation of practical understanding that takes real time to develop.

The career move right now is to become the person who can walk into a room of panicking executives — and there are many panicking executives right now — and say with genuine authority: "I've tested this. Here's what AI can actually do in our workflow. Here's what it cannot do. Here's the implementation plan. Here's the budget. Here's the timeline."

That person does not exist in most organisations. The technical people understand the models. The business people understand the workflows. The consultants understand the frameworks. Almost nobody bridges all three.

If you can, the gap is yours.
