There's a number that keeps showing up in my research, and almost nobody in the AI conversation is engaging with it seriously.
0.66%.
That's Daron Acemoglu's estimate for how much AI will increase total factor productivity over the next ten years. Not per year. Total. Over a decade.
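To make the "not per year, total" point concrete, here is a back-of-the-envelope conversion of the ten-year figure into an annual rate. This is my own arithmetic, not a calculation from the paper, and it is a sketch, not a model:

```python
# Acemoglu's estimate: total factor productivity up 0.66% over ten years.
total_gain = 0.0066
years = 10

# Compound annualization: the constant yearly rate that produces
# the same total gain after ten years.
annualized = (1 + total_gain) ** (1 / years) - 1

# Simple division, for comparison.
simple = total_gain / years

print(f"compounded: {annualized:.4%} per year")  # about 0.066% per year
print(f"simple:     {simple:.4%} per year")
```

Compounded or simply divided, the result is the same order of magnitude: well under a tenth of a percent per year.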
Acemoglu is not a doom-and-gloom technophobe. He's an MIT economist, Nobel laureate, and one of the most rigorous thinkers on the relationship between technology and economic growth. His NBER working paper "The Simple Macroeconomics of AI" is not a polemic. It's a careful model-based estimate. The number is small because the mechanisms are specific: AI currently automates a subset of tasks, and most of those are the easy-to-verify ones. The hard-to-learn tasks, where context matters and outcomes are ambiguous, remain stubbornly human.
Three Signals, One Direction
Here's what makes this number interesting: it doesn't arrive alone.
Signal one. OpenAI expects to lose $14 billion in 2026, on top of $5 billion in 2025. Ed Zitron mapped the rest of the AI value chain in painful detail: NVIDIA makes money, data centers run on debt, model makers bleed cash, and startups mostly die. "Generative AI is not a functional industry," he wrote, "and once the money works that out, everything burns." These are not anti-tech talking points. They're the financial statements.
Signal two. MIT's Project NANDA found that 95% of organizations deploying generative AI report zero measurable return. Not "below expectations." Zero. The RAND Corporation put the broader AI project failure rate at 80.3% in 2025, with companies losing an average of $7.2 million per failed initiative. The average ROI timeline is 4.2 years, against the 1.8-year projections executives pitched to boards.
Signal three. Acemoglu's 0.66%.
Three sources. Three different methodologies. Three different entry points into the same problem. They all point in the same direction: the aggregate economic impact of AI, at least over the next decade, will be much smaller than the market cap of the companies selling it would suggest.
What This Doesn't Mean
This is the part where the argument usually gets hijacked.
"Pan says AI won't change anything." No. That's not the claim.
The claim is about distribution. AI will have real, measurable impact, but concentrated in specific places, specific industries, specific use cases. NVIDIA already made $60 billion in revenue in 2024. The AI bubble, if it is one, doesn't have to burst to leave 95% of enterprise deployments with nothing to show. Both things can be true simultaneously.
The printing press changed everything. But the monks who copied manuscripts by hand mostly just stopped being monks. The gains went to printers, to publishers, to a new class of readers who could afford books. Not to the institution that had built its entire identity around the old model.
The question has never been whether AI will have impact. The question is who captures it.
Why Nobody Talks About This
Because 0.66% is inconvenient for almost everyone.
It's inconvenient for the investors who put $300 billion into AI infrastructure in 2024. It's inconvenient for the vendors selling transformation. It's inconvenient for the consultants charging day rates to build pilots that, statistically, 80% of the time will fail or be abandoned.
It's not that people don't know about the Acemoglu paper. It's been cited widely enough. The problem is that once you take it seriously, the economics of almost everything in the current AI wave need to be renegotiated.
I'm not predicting collapse. I'm observing convergence. When three independent lines of evidence (an economic model, a financial audit of the industry, and an enterprise deployment survey) all point at the same gap between narrative and reality, that gap is worth naming.
What AI promises in aggregate is transformation. What the best model-based estimate says it delivers in aggregate is 0.66%. I'd suggest writing that number somewhere visible before the next vendor tells you this technology is "transformative."