The Rational AI Bubble and Its Cockroaches

If you hang around markets long enough, you start hearing a certain shrug in the voices of the people who frequent them. Of course it ends in tears, they say, about whatever’s currently up and to the right. The only questions are where the tears will land and how far the splash will travel.

Mohamed El-Erian has taken that shrug and pointed it at AI. In his talk at Yahoo Finance’s Invest conference, and as Nick Lichtenberg notes in Fortune, he describes the current AI frenzy as a rational bubble. There is real, substantial value on the table, enough that, in venture‑portfolio terms, it makes sense to overallocate to the space as a whole. You wager on many experiments, because a few of them, if they pay off, will more than cover the rest. And then comes the line that matters: there will be tears.

That line sticks with me: the idea that some investors have wandered past what they're comfortable with and beyond what they can actually verify, carried along by loose credit and a strong economy. His "cockroaches" metaphor, credit mishaps scuttling around while the system's core stays largely intact, is a useful prompt: map out how a mostly rational boom ends up generating very specific, very irrational damage at the edges.

[Illustration: business professionals being guided toward a portal labeled "AI Investment"]

Start with the core. At the center of this setup are a handful of foundation‑model companies and hyperscale infrastructure players that, by all accounts, sit on plausible cash‑flow trajectories. They’re expensive the way dominant platforms tend to be: fat margins today, optionality on new products, and quasi‑monopoly positions in key layers of the stack. If you buy the thesis that AI could be as transformative as electrification or the internet, then assigning high multiples to a few of these firms isn’t crazy. That’s the rational part of the bubble.

The trouble lies in how that rationality gets filtered through institutional constraints. Big asset managers can't simply buy a vague basket labeled "AI progress." They're governed by mandates, benchmarks, and quarterly reporting. If a few names push the index higher, you're left with a choice: follow the index or risk being fired for underperforming. That creates forced participation: you own AI because not owning it is career risk. Once that dynamic takes hold, many allocators stop asking whether the assets are fairly valued and start asking how they'll justify the holding at the next investment committee meeting.

Now layer in El-Erian’s observation that loose financial conditions and a strong economy encouraged people to stretch for return. Years of cheap money trained everyone, from pension funds to retail traders, to expect that risk would be rewarded and drawdowns would be brief. When rates finally moved up, the adjustment was uneven. Safer assets started to look attractive again, but the story premium around AI was strong enough that capital still poured into anything with "AI" in the deck. Fortune notes the familiar dot-com pattern here: companies slapping the label on their operations to attract capital, just as firms once added ".com" to their names and watched their stock tick up.

At the fragile edge, you’ll find entities whose very existence depends on that story premium. Pre-profit startups that raised valuations far beyond their current reality on thin business models. Small public companies that pivoted to AI-enabled whatever to catch the wave. Credit funds offering generous leverage on AI-adjacent bets because they believed the underlying assets were liquid and the narrative was strong. This is where the cockroaches live. They are not system-ending termites, in El-Erian's framing, but they are numerous and they travel in groups.

So, who, in concrete terms, is actually overextended?

[Illustration: glowing, interconnected server racks in a data center]

Founders who signed aggressive growth covenants and liquidation preferences during the cheap-money era are now burning cash to meet top-line targets in a market where customer budgets are tighter than the hype suggests. Late‑stage funds have stepped into quasi private‑equity roles at tech multiples, using leverage on the assumption that an IPO window would reopen in time. Corporates are quietly baking AI obligations into their balance sheets: long‑term cloud commitments, rushed M&A for small AI shops, and expensive internal builds pitched to boards as 'we can’t miss this wave.' Each move makes sense locally. Each, though, carries the risk of a credit accident once the music slows.

Then there’s the household side Fortune highlights under the banner of a K-shaped economy. The upper tail of income and wealth is doing fine, both on paper and in realized gains. The bottom tail sits near recession: maxed-out credit cards, heavy debt, and a growing fear of AI-adjacent job loss. Those lower-income households aren’t buying AI equities on margin, but they’re part of the environment that made the bubble feel safe. Robust aggregate demand makes everyone more comfortable with risk. If that demand falters because the bottom of the K snaps, you get a negative feedback loop: weaker consumption, tighter margins for companies, cuts to speculative AI projects, and layoffs that further erode household balance sheets.

All along the chain, incentives pull in the same direction: commit early, commit big, and sort winners from losers later. Executives are paid in stock and options that reward short-to-mid-term price gains. Venture capital funds are judged as much by access to hot deals as by eventual distributions. Bankers earn fees based on deal volume rather than the long-term survival of borrowers. Inside large firms, product managers are told to bring an AI strategy to the next board offsite; no one wants to be the one who says we should wait and see. Under those conditions, going beyond what you can truly vet is not an aberration. It’s simply the path of least resistance.

[Illustration: an unstable stack of "AI" blocks on a platform edge, with small insect-like figures at the bottom]

So what happens when the correction arrives? El-Erian’s “cockroaches not termites” line matters here. He isn’t calling for a 2008-style systemic crash. The picture is messier and more fragmented. A marginal AI-labeled lender blows up, exposing sloppy underwriting and pockets of fraud. A few AI infrastructure projects get canceled or scaled back dramatically, leaving suppliers with stranded capacity. Late-stage private valuations are marked down, triggering covenant breaches and emergency recapitalizations. Some public “AI pivot” companies miss guidance and suddenly find themselves cut off from both the equity and debt markets.

Each of these issues on its own is manageable. But together they create noise that makes everyone tighten standards just when the weaker players could use patience. That’s the second-order problem: it isn’t only that some actors took risky bets, but that their visible failures shift how others behave, amplifying the damage. Even if your bank avoided the frothiest AI loans, you still answer to regulators and shareholders when peers report losses. The easy story to tell is that we’re pulling back from the whole segment, even if that leaves viable projects starved of capital.

There’s also a diffusion angle that El-Erian flags. He notes that the United States still doesn’t have a clear policy for getting AI into workplaces in a coordinated, orderly way, unlike countries such as China or the UAE, which are actively steering deployment. Without a sane diffusion process, the tech’s potential won’t translate into broad productivity gains. What you end up with are a lot of speculative claims about future AI-driven growth, while actual progress shows up slowly and unevenly. That gap between promised and realized productivity is exactly where bubbles deflate.

There are scenarios where diffusion stalls and the cost narrative dominates. In El‑Erian’s terms, AI becomes a cost minimizer rather than a productivity enabler. When that happens, the first casualties in a downturn are the long‑horizon AI bets. Cost‑cutting programs get repackaged as 'AI initiatives,' headcount is trimmed, and contractors are let go. The people hit hardest tend to be lower‑income households already near the edge of recession. Their reduced spending then feeds back into weaker top‑line results for the companies that were supposed to be the big beneficiaries of AI. It all makes sense step by step, yet the cumulative effect is destructive for the network.

The idea of a rational bubble can feel oddly reassuring. It suggests you’re not crazy to believe this technology will matter or to invest in it. But what you’d be foolish to ignore is how human incentives tilt the odds around that bet. As El‑Erian points out from the dot‑com era, real long‑term value and painful short‑term setbacks can exist side by side. Amazon survived; Pets.com didn’t. Many investors who were right about the internet’s potential still ended up losing money because they were overleveraged, mistimed, or stuck in the wrong kinds of bets.

The lesson I take from his cockroaches warning isn’t to avoid the AI build-out altogether, or to wait for some mythical moment of clarity. It’s to map where I’m relying on other people’s risk controls and narratives, and to assume that, on balance, those narratives will overshoot. Somewhere on your balance sheet, personal or organizational, there’s a line where you’ve quietly crossed beyond your comfort zone and your ability to do due diligence. That’s the zone where you’ll feel the tears.

The system will probably hold, and the foundation may stay intact. But when you flip on the kitchen light after years of easy credit and broad AI optimism, you’ll see a bunch of small vulnerabilities scattering for cover. Each one reminds you that a bet that seems rational when it’s scaled up and handed off to many people can look irrational when you’re standing on the shop floor.

Sources include Fortune’s article about El‑Erian’s warning on the AI bubble and the cockroaches metaphor, a YouTube video, and AOL Finance’s coverage of the topic.
