AI Strategy · March 31, 2026 · 4 min read

Cory Doctorow Predicts the AI Collapse. He's Half Right.

Angelo Pallanca
Digital Transformation & AI Governance

"AI can't do your job. But an AI salesman can convince your boss to fire you, replace you with AI that can't do your job, and when the bubble bursts and the foundation models go offline, you'll be long gone — retrained, retired, or 'discouraged' out of the labor market."

Cory Doctorow wrote that in March 2025 on his blog Pluralistic. He's been repeating it, with variations, for over a year. And he's right. Mostly.

The part he gets right is the mechanism: the AI hype cycle doesn't need to deliver actual performance to cause real damage. It just needs to convince enough decision-makers that performance is coming. That's enough to trigger layoffs, hiring freezes, and strategic pivots built on speculation.

The part he gets wrong, or at least leaves underspecified, is the timeline. He frames the collapse as a future event. The enterprise data from the first quarter of 2026 makes a different case.


The Gap Nobody Is Advertising

A survey of 650 enterprise technology leaders, published this month, found that 78% of large organizations now have at least one AI agent pilot running. That number gets cited a lot. The one that doesn't get cited nearly as much: only 14% have successfully scaled an agent to organization-wide use.

Sit with that gap for a second: sixty-four percentage points between "we have something in a sandbox" and "it actually functions in production." Gartner projects that 40% of agentic AI projects will be scrapped before 2027 — not because the models failed technically, but because the organizations couldn't operationalize them.

This is Doctorow's mechanism running in reverse order. The collapse isn't waiting for the foundation models to go dark. It's happening inside enterprise architectures, every time an IT team discovers that a pilot working in controlled conditions falls apart on contact with legacy systems, real data volumes, and the specific chaos of a Monday morning.

The bubble isn't inflating toward a single pop. It's deflating, enterprise by enterprise, in the gap between demo and deployment.


Fired for a Promise

There's a finding from Harvard Business Review, January 2026, that deserved more attention than it got. A survey of over 1,000 executives found that companies are reducing headcount in anticipation of AI's future performance, not because of what it's delivering today.

CEOs at Ford, Amazon, Salesforce, and JP Morgan have publicly stated that AI will eliminate significant white-collar roles. The layoffs and hiring freezes that followed weren't triggered by AI demonstrably doing those jobs. They were triggered by the expectation that it would. Soon. Definitely. The salesman said so.

The result is predictable in retrospect: 55% of employers who made AI-driven workforce reductions now say they regret them. The AI didn't perform. The workers aren't coming back.

The AI bubble doesn't need to burst to do damage. It just needs to be believed long enough for the decisions to become irreversible.

Doctorow's scenario assumes the disaster arrives when the foundation models go offline. But the evidence from early 2026 suggests the sequence is already underway — just distributed across thousands of individual management decisions, each one plausible in isolation, collectively adding up to something nobody planned.


The Question He Didn't Ask

Doctorow ends his collapse thesis on a constructive note: what can be salvaged from the wreckage? Cheaper compute, unemployed engineers, a buyer's market for applied statisticians. He gave a lecture at Cornell suggesting universities should prepare to absorb the productive residue after the bubble bursts.

That's useful thinking for the long term. But there's a shorter-term question he didn't ask: what happens when the collapse is diffuse enough that nobody calls it a collapse?

The dot-com bust had a date. A chart. A moment you could point to. The Nasdaq halved and everyone knew what had happened.

The enterprise AI failure is spreading through Q4 earnings calls, abandoned vendor contracts, and quietly shelved transformation roadmaps. It doesn't arrive with a crash. It arrives as a line item. A "strategic reprioritization." A "return to core competencies."

Doctorow's prediction assumes someone will notice when the bubble bursts. The data from March 2026 suggests the more likely outcome: the bubble deflates, slowly, and the quarterly reports describe it as pivoting toward sustainable AI implementation.

The workers fired for AI's potential won't be mentioned in the press release.
