Saturday, 14 February 2026

When does the AI bubble burst - and what then?

That AI is currently inflating a mighty economic bubble is a commonplace observation.  A vast proportion of US economic growth (as conventionally measured) and stock market value is related to AI.  Every damned company on the planet is trying to portray itself as somehow AI-contributing or AI-powered - and they are frequently getting a gratifying share-price kicker from it[1].  Presumably, the pension industry is pretty addicted by now.

In the real world, the vast and (we are told) exponentially-growing demand for electricity - 100% secure baseload electricity to boot - made by data centres is seriously distorting electric power-system planning for many a nation. Coming atop the average grid's already torturing itself - and its hapless bill-payers - to accommodate the diametrically opposed demands of a decentralised renewables-based future, this is a serious spanner in the works. And of course governments everywhere are anxious to play host to new data centres, Starmer's as much as any; and are making this a policy priority. But the (would-be) data centre builders have twigged that these contortions on the part of conventional grid operators may very well not prove successful - and thus are talking about joint ventures with nuclear generators and ... sponsoring nuclear fusion development! [2] Yup, it's a completely irrational bubble.

But what does that tell us about AI itself?   Not much, beyond the obvious fact that it has gripped the universal imagination to the extent that dollar signs rotate in every businessman's eyeballs.  

Well.  Railways had a mania, and many a railway company went bust.  The technology and much of the infrastructure they built is still evolving and very much in use.  We had a dotcom bubble.  It burst, alright, but it didn't mean the internet was a phantasm.  Enron was a bubble: but it didn't mean the Enron vision for how energy markets should be configured was wrong.  (Even China is trying to figure out how to bring that revolution into its own constipated energy sector.)  Etc etc - these techie-based phenomena are not like Dutch tulip mania: nobody needs tulips, but everyone needs railways / electricity / the www / .... and, probably, we'll continue to want and need the advances that are made under the banner of 'AI'.

So: if history is any guide, it'll be a painful financial collapse; a reshuffle of the runners and riders in the "AI industry" (for those who don't break a leg & are not shot by the vet); and the underlying new tech rumbles on to find a more stable way to become a permanent fixture in all our economies and our ways of life. 

Views?  Predictions?  Timing ..?

ND

UPDATE: a bucket of cold water over certain claims for AI:

________________

[1]  Even Drax, FFS - a floundering, downsizing biomass-burning power company.  

[2]  On the eve of the 2007-09 financial crisis, a high-end energy-specialist VC firm of my acquaintance was looking for opportunities to invest in unsubsidised nuclear powerplant development.  When money is aimlessly sloshing about on that scale, that's when you really know a financial crash is coming soon.

22 comments:

Anonymous said...

It's killing computer-building businesses. The same RAM I paid £52 for two years ago is now £300. A contact tells me he spends most of his time trying to source it at bearable prices. Same with SSD disks. OTOH some guy in California is stripping old servers and selling the RAM to China where it goes in "new" computers.

Anonymous said...

Crucial Memory, one of the big suppliers, is closing its retail business as they can sell all they make to AI firms.

Sobers said...

"but everyone needs railways / electricity / the www / .... and, probably, we'll continue to want and need the advances that are made under the banner of 'AI'."

The first of those is now a dead technology that only survives on public subsidy. The second is a utility, making respectable but not ludicrous profits. The third is very profitable, but even those hefty profits cannot pay for the Fully Automated Luxury Communism we are promised AI will usher in.

The question is not whether AI (as it currently exists in LLM form) can 'work'; it's whether a society that uses AI to replace human labour can actually afford to have tens of millions of people consuming but not producing anything. Unless AI can immediately produce enough excess wealth that it can be taxed and used to keep the unemployed masses in food, light, heat and shelter, it will fail hard. Indeed, if AI fails to do that, societies that implement it will collapse entirely into violent revolution in short order.

Matt said...

From a telecoms perspective, MS Copilot got a lot of internal use until everyone got bored of the same stylistic annotation of meetings. The AI agents that will answer calls are useful, but sacking receptionists is hardly going to revolutionise business. It will find niches, but widespread adoption that disrupts all sectors is highly unlikely. As for the crash: in the next few months, I hope, so that capital can move to something else (hopefully with more productive outcomes than enriching billionaires).

jim said...

AI drinks electricity and water. To exchange all the AI's heat to air is difficult and expensive, much easier to pull in mains water, warm it up and dump it. Lots of fancy words obfuscate what is going on but it is cheaper that way so that is how it goes. Lots of electricity to lots of warm water and it's legal.

If Mr Musk puts a data centre in orbit he will find dumping megawatts of heat a non-trivial problem. My data centre absorbs 30 watts, so the future might lie with, say, a thousand Musk Brains in a warm saline tank. He's working on the production side.
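(For a sense of the scale of the orbital heat problem, a minimal back-of-envelope sketch in Python using the Stefan-Boltzmann law for a purely radiative panel; the emissivity, panel temperature and one-sided geometry are illustrative assumptions, and absorbed sunlight is ignored.)

# Radiator area needed to reject P watts of heat in vacuum: A = P / (eps * sigma * T^4)
sigma = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
eps = 0.9          # assumed radiator emissivity
T = 300.0          # assumed radiator temperature in kelvin (about 27 C)
P = 1e6            # one megawatt of waste heat
area = P / (eps * sigma * T ** 4)
print(round(area))   # roughly 2,400 m^2 of radiator per megawatt, before any margin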

Superficially AI looks useful. Just a bit more and it becomes self-educating - we don't know how, and we don't know at what scale or if ever - but that is the story. How many b's in raspberry? AI answer: 'there are 2 b's in the word raspberry'. Google AI knows full well this is a trap, it is a classic trap for AI, and Google AI delivers a slightly snarky response. I will check the door is locked tonight.

What if AI rides a logarithmic curve and flattens out? Never being quite good enough to create General Relativity Mk2 except by looking it up. Suppose it lacks any 'genius' or 'creative' function. But who needs genius - the internet made lots of money displaying 'making the beast' until the killjoys got going. No one went broke underestimating human intellect. I would put the AI crash about 2 years out.

Then what do you do with all those unemployed accountants and lawyers?

Matt said...

Sabine summarises why LLMs aren't going to radically change the future - https://youtu.be/984qBh164fo?si=fCKU8LiSGTYdeVXj

jim said...

So who will AI put out of work?

People who erect and repair infrastructure? Robot roadmenders? Motorways maybe, local streets probably not.

Teachers? Already you could self-educate to PhD level on the web, but the unis don't like it. Teaching little kids to read and write and talk could be done by robot, but would likely introduce more societal problems.

Lawyers and politicians are in the deception game, trying to deceive or avoid deception. There will always be a market for deceptive arguments. AI can hallucinate but not lie at KC or Minister level.

Accountants don't add up figures much, largely working tax fiddles and giving the impression of legality.

How to stay in work. Take a leaf out of the religion playbook. Get into something like SEND or TRANS or diversity. Operate at that nexus where politics and psychological need and exploitation meet.

Already the engineers and mathematicians are feeling the AI pinch. So don't be a mug, honesty and usefulness are no longer needed, get into deception.

Anonymous said...

AI's biggest issue is cost: it is subsidised to the tune of tens of billions and, despite best efforts, that seems to be a hard limit. If it was easy, maybe they should have asked AI...

At some point a decision will have to be made to either use cheaper (and slower and less accurate) models, or pay full cost. Replacing people when the equivalent token costs are £1,200 a year makes financial sense, but when it's £120,000 a year?
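(To put some arithmetic behind that comparison, a minimal sketch in Python; the per-token price and usage figures are purely illustrative assumptions, not any provider's actual rates.)

# Annual cost of a heavily-used assistant at a subsidised rate vs a "full cost" rate
tokens_per_month = 10_000_000              # assumed heavy individual usage
subsidised_rate = 0.01                     # assumed price, £ per 1,000 tokens
full_cost_rate = subsidised_rate * 100     # if users had to bear the real cost

def annual_cost(rate_per_1k_tokens):
    return rate_per_1k_tokens * tokens_per_month / 1_000 * 12

print(annual_cost(subsidised_rate))   # £1,200 a year: trivially worth it against a salary
print(annual_cost(full_cost_rate))    # £120,000 a year: roughly a developer's salary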

In my particular area, software development, the rate-determining step is rarely the developer. It's stakeholders, business, and management and their phobia of decisions. IT has a ~75% project failure rate; any other industry with those kinds of metrics would have questions asked about it, especially one that has become so foundational to other industries, and hence the economy.

There is a path where AI defines requirements properly, and then generates the code, but the existing quality of code it is trained on tends to be poor, so there will still need to be someone to architect the solution and apply the guardrails.

Realistically, that won't happen. What will happen is poor requirements, multiple iterations of said poor requirements, and an output that is a mass of poor code full of security flaws. See OpenClawd.

That there has been a correlation between MS using AI to generate more code and the increase in out-of-band patches does little to dissuade me from that view.

I expect the bubble to pop in 2027, and we'll still use AI, but it'll be the cheap models on cheap hardware, almost always with out-of-date training, and it'll be just another block on top of the Jenga tower of enshittification we keep building.

CH

dearieme said...

Eat 'em.

Clive said...

The comment is a thoughtful, experienced take from a software developer skeptical of AI hype. It highlights real frictions in software projects and valid risks around rushed AI-generated code.

However, it contains several factual inaccuracies, overstatements, logical gaps, and overly pessimistic assumptions that don't hold up against current data and trends as of early 2026. Here's a breakdown of the main issues, grouped by theme:

1. Overstated or mischaracterized "subsidies" and "hard limit" on costs

"Subsidised to the tune of tens of billions" — This is loose. Direct U.S. federal non-defense AI R&D is around $3.3 billion annually (FY25 figures). The CHIPS Act provides $52 billion total over years for semiconductors (which indirectly benefits AI), plus scattered state-level tax breaks and energy discounts for data centers (e.g., Amazon got $1 billion in one Oregon deal). The dominant funding is private: hyperscalers (Microsoft, Meta, Google, etc.) are spending hundreds of billions in capex, much of it debt-financed, while investors/VCs cover losses at labs like OpenAI. Calling this "subsidised" broadly is fair in the sense of investor capital propping up unprofitable frontier models, but it isn't primarily government handouts on the scale claimed, and it's not a "hard limit" — it's market-driven investment chasing returns.

Costs as a permanent hard limit — This is the biggest weakness. LLM inference costs for equivalent performance have fallen dramatically: roughly 10x per year on average, with some benchmarks showing 40–900x annual drops. GPT-4-level capability that cost ~$20 per million tokens in late 2022 is now under $0.40 in many cases, with cheaper models and open-source options even lower. Open-source models run locally on consumer/cheap hardware. The trend is continuing (driven by algorithms, quantization, better hardware, competition from DeepSeek, etc.). The comment treats current high costs for frontier models as fixed, but the data shows rapid deflation — not a ceiling.
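(As a sanity check on the figures in that paragraph, the implied annual deflation can be worked out directly; treating "late 2022" to early 2026 as roughly three years is an assumption.)

# Implied annual cost deflation from $20 to $0.40 per million tokens over about 3 years
start_price, end_price, years = 20.0, 0.40, 3
annual_factor = (start_price / end_price) ** (1 / years)
print(round(annual_factor, 1))     # ~3.7x cheaper per year for this particular example
print(start_price / 10 ** years)   # $0.02: what a flat 10x-per-year trend would have implied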

The £1,200 vs £120,000 token-cost comparison for "replacing people" is vague and likely outdated even now. Tools like GitHub Copilot cost ~£10–20/month per user. Heavy API use for coding assistance is still far below a developer's salary (even in the UK). Full autonomous replacement isn't here yet, but hybrid use is already cheap and getting cheaper fast. The binary "cheap/slow/less accurate vs pay full cost" overlooks the middle ground of improving models + techniques like RAG, distillation, and agents.

Clive said...

2. Software development bottlenecks and AI's role

Stakeholders, requirements, and ~75% IT project "failure" rate — This is broadly correct and a strong point. Standish Group CHAOS reports consistently show low full-success rates (e.g., ~31% successful, 50% challenged, 19% failed in recent data; worse for large projects). Requirements and decision-making are classic rate-limiters. AI won't magically fix bad stakeholder input — "garbage in, garbage out" applies.

"Existing quality of code it is trained on tends to be poor" + need for architects/guardrails — Fair concern. Models reflect their training data, and much public code has issues. However, this underestimates progress: synthetic data, RLHF, better curation, and reasoning models (e.g., o1-style) allow outputs that often exceed average training quality. Human oversight remains essential, but the comment dismisses the "AI defines requirements → generates code" path too absolutely. Agentic systems and iterative prompting are already helping bridge this.

"Poor requirements → mass of poor code full of security flaws. See OpenClawd." — "OpenClawd" (likely OpenClaw, formerly ClawdBot/Moltbot) is a real example that backfires in the comment's favor. This viral open-source AI agent has been called a "security nightmare" by researchers: exposed instances leaking API keys/credentials, remote code execution vulnerabilities, prompt injection risks, malicious skills in its marketplace, and thousands of misconfigured deployments. It illustrates the exact risk — autonomous AI tools built quickly can produce or enable flawed, insecure systems. (The comment may be using it as a shorthand for AI-agent outputs gone wrong.)

Microsoft AI code correlation with more out-of-band patches — There's observable timing. Nadella stated in 2025 that 20–30% of Microsoft's code is now AI-generated; the company has seen a noticeable rise in emergency/out-of-band Windows patches in 2025–2026, alongside quality issues (e.g., broken shutdowns, boot problems) and even a new "Engineering Quality Head" role. Articles note the coincidence with AI contributions and layoffs. Causation isn't proven (Windows is complex; other factors exist), but it's a reasonable concern and not "just correlation" to dismiss entirely.

Clive said...

3. The 2027 bubble prediction and long-term outlook

Bubble pops in 2027 → cheap/outdated models on cheap hardware as "another block on the Jenga tower of enshittification" — This is plausible speculation shared by some (e.g., analysts projecting OpenAI losses, funding crunches around mid-2027, high capex). However:

AI spending is still projected to grow massively (trillions cumulatively).

"Out of date training" is already being addressed via RAG, retrieval, and ongoing fine-tuning — not a permanent state.

Cheap/open-source models are already here and advancing fast (Llama, DeepSeek, etc.).

"Enshittification" is a subjective, loaded term (Cory Doctorow's platforms critique). AI could degrade some experiences if poorly implemented, but evidence of productivity gains in coding (20–55% faster in studies) and enterprise adoption suggests integration, not inevitable collapse to junk.

Overall assessment

The comment is strongest on the human/organizational side of software projects and the real risks of un-reviewed AI code (security, quality). Those are evergreen issues that hype often ignores. Its weaknesses are in treating current costs and limitations as permanent ("hard limit," "that won't happen," "out of date training") when evidence shows rapid improvement, deflation, and capability growth.

It also somewhat conflates investor-funded losses with government subsidies. Many predicted an AI "bubble" correction (or at least a hype cooldown) around 2026–2027 due to capex and losses, but the technology itself is likely to keep advancing and embedding into workflows regardless — probably as a productivity layer rather than full replacement in complex domains like software architecture. The comment is a useful counter to uncritical optimism, but it underweights the speed of technical progress.

AndrewZ said...

Here's a good example of AI hype:
https://www.tomshardware.com/tech-industry/artificial-intelligence/microsofts-ai-boss-says-ai-can-replace-every-white-collar-job-in-18-months-were-going-to-have-a-human-level-performance-on-most-if-not-all-professional-tasks

From Microsoft AI chief executive Mustafa Suleyman - "So, white-collar work, where you’re sitting down at a computer — either being, you know, a lawyer, or an accountant, or a project manager, or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months."

Anomalous Cowshed said...

The MktCap/GDP ratios are batshit, and that's purely the visible listed equity component, let alone any private financing or public lending. They're so out of whack that even with large corrections, there'd notionally be a multi-year bear market in order to approach parity.

For NVDA, their strength would most likely be the s/w stack and fast interconnects, plus the installed base. The first two could erode quickly, not the third, as existing customers would not want to switch given the costs, and NVDA probably has a significant amount of momentum of the Nobody-Ever-Got-Fired-For-Buying-IBM type. And HPC has been going for a while now (>10 years?), so there's a fair amount of expertise knocking about in the labour market.

Second (third?) mover advantage belongs to AMD and others, assuming they can improve the software and interconnects such that it's attractive to new entrants to the tech, or to those not yet fully committed to NVDA's kit.

So, maybe a time horizon of 9 months if Intel/AMD et al have already begun to get their shit together, or longer into 2027/8.

The other punt is on interest rates, I reckon. Essentially that the Administration can bully the Fed (coming up on the inside, Warsh) into cutting rates real quick, and holding them low enough for long enough for revenues to build as adoption increases, before investors take a proper hard look at the valuations.

That might be on a slightly shorter horizon, but would otherwise stretch out into 2027, the final year before the next election. So those two horizons match up.

In terms of adoption and productivity - it looks (as someone mentioned above) like "vibe coding" produces software that's shifting to permanent beta or even alpha, or that the time saved in writing the stuff is swallowed by the additional time debugging and testing. Productivity (labour hours only) might be a wash, but TFP might not shift that much when accounting for increased capital and operating costs at the data centre end. A 10% increase in labour productivity is on the order of 4 hours per week, possibly do-able, but not evenly distributed, so some job functions might be half-way to a four-day week by the middle of this year. Unless there's a shift in demand, that points to a slow-down in hiring.

Chip development by current large cloud providers (Microsoft, Google & Amazon) - they've got their own architectures (Trainium, Maia, Ironwood, AI5, MTIA) which they would prefer to flog to customers rather than pay cash away to NVDA. There's lots of models for lots of purposes beyond text generation/reasoning, and there are good reasons to run those on hardware that is optimised for the model architecture. The cunning little yellow devils have their own little game, with Enflame, Moore Threads, MetaX and Biren Technology.

Prior tech models ended up with a single dominant player, MSFT in corporate computing, Google in search & advertising, Meta/Twitter in social and so on. But given the proliferation in model types, architectures and use cases, all hitting a myriad of price points, I very much suspect that similar strong network effects won't exist here.

That leaves firms playing a consolidation game (via the inter-company loans being a ploy to execute debt/equity swaps?) with specialist models optimised for task/platform, being another form of the walled garden.

Timing - could be anything from tomorrow, end of next week to 2027.

Nick Drew said...

Gratifying level of expert knowledge + opinion in this thread

- thanks, everyone

ND

Anomalous Cowshed said...

Actually, it gets worse. Identifying the likely source of a blow up, except in the most generic terms, is an absolute pig, 'cos it always is. Timing's a bastard. Usual quote applies.

So, someone, somewhere is paying for internal devs, for internal corporate systems, possibly just welding stuff onto existing on-/off-prem stuff, cloud or SaaS APIs.

Call that wage 30 to 50 large, depending. About 240 working days pa, 8 hours per, gives about 16 to 26 quid per hour, call it three grand a month. An 8% productivity improvement shaves a month off the annual wage; Anthropic's (interesting choice of name) Claude Code Max 20 comes in at 200 quid a month, at the moment.
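(Running those numbers in Python, a rough sketch rather than a costing; the mid-point of the wage range and the working hours above are the assumptions.)

# Does an 8% productivity gain pay for a £200/month coding assistant?
salary = 40_000           # £ per year, mid-point of the 30-50k range above
days, hours = 240, 8      # working days per year, hours per day
hourly = salary / (days * hours)
monthly_wage = salary / 12
gain = 0.08 * salary      # value of an 8% productivity improvement, £ per year
tool_cost = 200 * 12      # Claude Code Max-style subscription, £ per year
print(round(hourly, 2))     # ~£20.83/hr, within the 16-26 quid range quoted
print(round(monthly_wage))  # ~£3,333: "call it three grand a month"
print(round(gain))          # £3,200 pa: roughly one month's wage saved
print(tool_cost)            # £2,400 pa: so the sums just about work, on paper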

Interestingly, Google's Gemini tells me that the user owns the code, but also that Anthropic indemnify the user against copyright issues.
US (and I reckon nearly everywhere else) IP law does not allow a work to gain that protection unless it's significantly new, and a significant chunk of the work was created by a human, which means an identifiable one.

Lawsuits are on-going.

The recent sell-off in tech services and so on was global: Sage, Xero, Tata Consultancy, Infosys, Accenture, Capgemini, etc etc. But they've largely been trending down for a while. As has Bitcoin, possibly not related. Prior to that, Silicon Valley Bank.

Users do stupid things.

Anonymous said...

Just to take this to the micro level for a second. On Google. On the A.I. Overview for a company I work for, the information given on the availability of products and services and locations and requirements is incorrect for 9/10 questions asked of it.

The links from the A.I. overview show the correct information.
But the overview, the top line on Google and associated search engines, is just wrong, if using that A.I. Overview.

Not misleading. Not open to interpretation. It’s just plain wrong.

Anonymous said...

"So, white-collar work, where you’re sitting down at a computer — either being, you know, a lawyer, or an accountant, or a project manager, or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months."

a) an IT project manager perhaps* but not a building project manager
b) but where it goes wrong who will pay the bill? Thinking say NHS strategies going awry and people dying cos computer said test for X rather than Y. OTOH they seem to have got away with the covid jabs.

* old style IT project manager was ex-grunt coder, then team leader, worked their way up. New style is pretty girl with a criminology degree, you give her three options and her management role is to choose one.

Anonymous said...

What are the inflated valuations for the AI companies and how do you measure them? I remember thinking Freeserve was a bust around the early 2000s when I realised it was valued at around £300 per user. I was an early adopter, but in those days I was hardly spending £300 online in a year, let alone generating that much profit.

Matt said...

Freeserve might have made some money for a few but ultimately was bought by someone with deep pockets (Orange) and put out to pasture.

Anonymous said...

The head of the Taiwanese corporation Phison, which produces controllers and chips for RAM and SSD modules, has voiced a gloomy forecast for the near future:

— Large companies didn't pay much attention to memory, but now everyone needs it, so tech giants are signing contracts with factories for the production of RAM and SSDs up to 3–4 years in advance;

— Because of this, small companies may go bankrupt by the end of 2026: they have almost no chance of withstanding such competition;

— As a result, almost all budget products may disappear from the market. This could affect not only computers and smartphones, but also consoles and televisions;

— If too many manufacturers leave the market, there will be a serious imbalance between supply and demand, which will make current prices seem "fantastic", and the crisis could last until 2036.

Anonymous said...

Apparently motherboard sales are down 50%. I would have thought the huge demand for RAM and SSDs would have brought new production into being, but if there are only a few producers dominating the market, and entry barriers are high... I've become used to paying £200 max for a new 12GB smartphone and £450 max for a 16GB laptop... was I living in the good old days?