LLMs Are the New Yahoo: Why the Agentic AI Implosion Is Coming — And Who Will Survive It

Last week, Anthropic CEO Dario Amodei said we might be “6–12 months away from models doing all of what software engineers do end-to-end.”

Let that sink in.

If that’s true — if agentic AI can really do everything a software engineer does — then replicating Anthropic itself is just a prompt away. Anyone could build Cowork in their basement. Why would you pay a $60 billion company for something you can bootleg with their own tools?

That’s the paradox that should keep every AI investor up at night.

Either agentic AI is as powerful as the pitch decks claim — in which case the companies selling it are commoditizing themselves. Or it’s not — in which case the trillion-dollar valuations are built on fantasy.

You cannot have it both ways.

The Yahoo Parallel Nobody Wants to Hear

In 1999, Yahoo was the internet. Market cap: $125 billion. Every investor, analyst, and journalist agreed: Yahoo was the future. It had the users, the brand, the traffic, and the portal. The world ran on Yahoo.

Then the infrastructure underneath it — search, email, hosting — got commoditized. Cheaper. Better. Open. Google ate search. Gmail ate Yahoo Mail. WordPress ate Yahoo GeoCities. The “platform” everyone thought was irreplaceable turned out to be a thin wrapper over generic technology.

By 2016, Verizon picked up the carcass for $4.8 billion — a 96% discount from its peak.

Now replace “Yahoo” with “OpenAI.” Replace “portal” with “agentic AI platform.” Replace “search getting commoditized” with “LLMs getting commoditized.”

The pattern is identical.

OpenAI had a massive head start. ChatGPT was the fastest-growing consumer app in history. Sam Altman was on every magazine cover. The moat looked enormous.

Then DeepSeek showed you can train a frontier model for a fraction of the cost. Llama went open-source. Claude matched GPT on most benchmarks. Gemini caught up. Mistral emerged. Dozens of open-weight models flooded the market. Every quarter, the performance gap between models shrinks while the cost per token collapses.

LLMs are converging toward a commodity faster than anyone predicted. The model layer — the very thing these companies are built on — is heading toward marginal cost, just like search did in the early 2000s.

The Commoditization Paradox of Agentic AI

Here’s the part that truly breaks the narrative.

The current scare story goes like this: Agentic AI will eat all software. Jira is dead. Salesforce is dead. Every SaaS tool will be replaced by an AI agent that just does the work.

Sounds terrifying, until you remember one inconvenient fact:

Agentic AI is software, too.

Every “agent” is fundamentally the same thing: an LLM connected to tools via APIs, wrapped in some orchestration logic, with a user interface on top. That’s it. There is no deep, proprietary magic. There is no secret sauce. The MCP (Model Context Protocol) and similar standards are making tool integration plug-and-play. The models themselves are interchangeable. The orchestration is well-documented engineering.
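To make the point concrete, here is a minimal sketch of that generic pattern: a model (stubbed out here), tools exposed as plain functions behind a registry, and a thin orchestration loop. Every name in it is hypothetical; no vendor's actual API is used.

```python
# Sketch of the generic agent pattern: model + tool registry + loop.
# All names are illustrative; a real agent would call an LLM API where
# `fake_model` sits and expose tools via a standard like MCP.

from typing import Callable

# Tool layer: ordinary functions behind a registry (the part that
# protocols like MCP are standardizing into plug-and-play).
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for '{q}'",
    "write_file": lambda text: f"wrote {len(text)} bytes",
}

def fake_model(task: str, history: list[str]) -> dict:
    """Stand-in for an LLM call: decides the next action.

    A real agent would send `task` and `history` to a model API and
    parse a tool call out of the response.
    """
    if not history:
        return {"tool": "search", "arg": task}
    # After one tool result, pretend the model is satisfied and answers.
    return {"tool": None, "answer": history[-1]}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Orchestration loop: ask the model, run the tool, feed the result back."""
    history: list[str] = []
    for _ in range(max_steps):
        action = fake_model(task, history)
        if action["tool"] is None:
            return action["answer"]
        result = TOOLS[action["tool"]](action["arg"])
        history.append(result)
    return "gave up"

print(run_agent("commoditization of LLMs"))
```

Swap `fake_model` for any hosted or open-weight model and the rest of the scaffold is unchanged, which is exactly why the wrapper layer is so hard to defend.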

If Anthropic’s Cowork can automate software development, then by definition, someone can use that exact same capability to build a Cowork competitor over a long weekend. The tools to displace the disruptor are the disruptor itself.

This is not a theoretical argument. We’ve already seen it happen. OpenClaw — a solo developer project — replicated most of what the big AI labs were pitching as their next billion-dollar product. OpenAI didn’t acquire the technology. They didn’t buy the company. They hired the guy. Because the technology was trivially replicable. The human judgment behind it was not.

The One Person Who Already Figured This Out

While Sam Altman is chasing a $500 billion IPO for a company that sells commodity software, and Dario Amodei is telling the world his agents will replace all engineers (thereby making his own product worthless — see above), one person has quietly made the move that reveals he understands everything in this article.

Elon Musk.

On February 2, 2026, SpaceX acquired xAI in a $1.25 trillion all-stock merger — the largest in history. SpaceX is valued at $1 trillion. xAI at $250 billion. On paper, this looks like another Musk ego trip. In reality, it’s the most strategically coherent move in the entire AI industry.

Here’s why: Musk is the only AI player who understood that AI alone is worth nothing.

Think about what SpaceX actually owns. Reusable rockets that no competitor has replicated at scale. Starlink — 9,000+ satellites in orbit, 9 million subscribers, and billions in government contracts with NASA and the Department of Defense. A literal company town in Texas. $15 billion in revenue and $8 billion in profit. These are physical, hard-to-replicate assets that took over two decades of engineering, explosions, and near-bankruptcies to build.

xAI’s Grok, on the other hand? A chatbot. A good one, sure — but fundamentally the same commodity as GPT, Claude, Gemini, and the rest. By itself, Grok is heading toward the same zero-margin future as every other LLM.

But Grok bolted onto SpaceX’s rocket infrastructure, Starlink’s global network, and planned orbital data centers? That’s a vertically integrated stack that no pure-play AI company can touch. OpenAI can’t launch satellites. Anthropic doesn’t have rockets. Perplexity doesn’t own a communications network.

Musk isn’t betting on AI. He’s betting on the things AI cannot replace — and then using AI as the add-on, not the foundation. That’s the exact opposite of what OpenAI and Anthropic are doing, and it’s the exact thesis of this article.

The irony is thick. The man the tech press loves to mock may be the only AI CEO who has actually internalized the logic of commoditization. Everyone else is building castles on sand — premium-priced software layers that are racing to zero. Musk is building on bedrock: rockets, satellites, physical infrastructure, and a distribution network that can’t be “prompted into existence.”

Will it work? The orbital data center idea is still science fiction in many ways — radiation, cooling, launch costs, the sheer audacity of it. But the strategic direction is unmistakably correct. Even if the space data centers never materialize, SpaceX + Starlink + defense contracts is a $1 trillion hardware business. xAI is the cherry on top, not the cake.

Burry Is Early — But He’s Not Wrong (And Not Entirely Right, Either)

Michael Burry — the “Big Short” investor who famously predicted the 2008 housing collapse — holds roughly $1.1 billion in notional put options against Nvidia and Palantir. He has also been shorting Oracle and publishing detailed analyses of how hyperscalers are inflating their earnings by stretching GPU depreciation schedules from 3 years to 6 years, potentially overstating earnings by $176 billion between 2026 and 2028.
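The mechanics behind that accusation are simple straight-line depreciation arithmetic. The toy numbers below are made up for illustration; the $176 billion figure is Burry's estimate, not something derived here.

```python
# Toy illustration of the depreciation-stretching mechanics, with
# hypothetical numbers. Straight-line depreciation spreads an asset's
# cost evenly over its assumed useful life.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation expense per year."""
    return cost / useful_life_years

gpu_fleet_cost = 60e9  # hypothetical $60B spent on GPUs

expense_3yr = annual_depreciation(gpu_fleet_cost, 3)  # $20B/year
expense_6yr = annual_depreciation(gpu_fleet_cost, 6)  # $10B/year

# Stretching the schedule halves the annual expense, so reported
# earnings rise by the difference, even though the cash already left
# the building and the chips age at the same physical rate.
earnings_boost_per_year = expense_3yr - expense_6yr
print(f"${earnings_boost_per_year / 1e9:.0f}B/year lower reported expense")
```

The cash outflow is identical under both schedules; only the timing of the reported expense changes, which is why the argument is about earnings quality rather than fraud.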

The market laughed at him initially — just like in 2007. As of February 2026, his Palantir puts are up 35%. Oracle has fallen 51% from its Q3 2025 peak. The broader S&P Software & Services Index has dropped 19% in a single month. Burry’s thesis is starting to print.

But here’s where it gets interesting. His Nvidia bet hasn’t paid off yet: the chips are still selling, demand is still real, and at ~24x forward earnings, NVDA isn’t priced like a bubble. Burry himself admitted his NVDA bet is “the most concentrated way to express a bearish view on the AI trade” — a sector bet, not a company bet.

I think Burry sees the disease correctly, but is aiming at the wrong organ.

Right: The AI investment cycle is overheated. Trillion-dollar capex commitments for data centers look eerily similar to the fiber-optic boom of 2000, when reportedly less than 5% of the fiber laid in the US was ever lit. Depreciation accounting is masking real costs. Many pure-play AI companies will implode. Palantir at 200x earnings was never going to hold. Oracle’s AI pivot was always more PowerPoint than product.

Potentially wrong: He’s shorting infrastructure (Nvidia, the picks-and-shovels provider) when history suggests the infrastructure layer is often the last to fall — and sometimes doesn’t fall at all. During the Gold Rush, Levi Strauss got rich. During the dot-com crash, Cisco got hammered but survived to become a $200+ billion company today. The server farms that powered the “useless” dot-com companies became the backbone of cloud computing.

Here’s the deeper irony: Musk just showed the market exactly where the real value is — physical infrastructure, vertical integration, things that can’t be cloned with a prompt. Burry is betting against the AI bubble, and he’s right about the bubble. But the optimal short isn’t Nvidia (which sells real hardware to real customers). The optimal short is the pure-software layer — the OpenAIs, the Anthropics, the Palantirs — whose valuations depend on maintaining pricing power in a market heading toward commodity.

Burry may be losing money on his Nvidia puts while being philosophically correct. The tragedy of being early is that it feels exactly like being wrong — until it doesn’t.

The Three Layers of AI Value — And Where It Goes to Zero

To understand who survives, think of the AI stack in three layers:

Layer 1: The Model (LLMs)

This is heading to a commodity. Full stop. GPT, Claude, Gemini, Llama, DeepSeek, Mistral — the performance gaps are narrowing every quarter. Open-weight models are closing the gap with proprietary ones. The cost per token is in free fall. Within 2–3 years, the model itself will be like electricity: essential, ubiquitous, and priced at marginal cost.

Companies at risk: OpenAI (targeting a $500B+ IPO), Anthropic ($350B valuation for… a chatbot and some agents), Cohere, AI21, and anyone whose primary value proposition is “we have a good model.” Musk understood this, which is exactly why he bolted xAI onto SpaceX instead of trying to IPO Grok as a standalone company. The model is the add-on. The rockets are the business.

Layer 2: The Agent Wrapper

This is already a commodity. Cowork, Operator, Devin, and their dozen clones — these are LLM + API + orchestration + UI. There is no defensible moat in wiring a model to a set of tools. Any competent engineering team can (and will) build equivalents. The OpenClaw story is proof: one developer matched what the big labs were pitching as their next billion-dollar product in a few weeks.

Companies at risk: Any startup whose pitch is “we built an agent that does X.” Venture capital in this space is at peak euphoria; the energy feels like late-2021 crypto.

Layer 3: Data, Distribution, and Infrastructure

This is where durable value lives. It splits into three sub-categories:

  • Irreplaceable data: Atlassian’s Teamwork Graph (100+ billion objects of institutional knowledge across 350,000 companies), Salesforce’s customer data, and Bloomberg’s financial data. The agent is replaceable; the data it operates on is not. This is the real moat.
  • Infrastructure (picks and shovels): Nvidia (GPUs), Broadcom (custom ASICs/XPUs), TSMC (fabrication), the hyperscalers (AWS, Azure, GCP) — and, yes, SpaceX with its rockets, Starlink network, and orbital ambitions. Every AI company, regardless of which one wins, needs chips, power, connectivity, and cloud. This is the Levi Strauss play. It’s also the Musk play — and it’s why SpaceX at $1 trillion makes more strategic sense than OpenAI at $500 billion, even though OpenAI gets all the headlines.
  • Distribution at enterprise scale: Companies embedded in mission-critical workflows with brutal switching costs — 80% of the Fortune 500 runs on Atlassian; virtually every enterprise runs on Microsoft. Ripping Jira out of a 10,000-seat deployment isn’t a weekend project. It’s a multi-year, multi-million-dollar nightmare.

Where Should the Smart Money Go?

If you believe — as I do — that the model and agent layers are heading toward commodity, the investment implications are clear:

Avoid companies whose entire value proposition is “we have a good model” or “we built a cool agent.” That means extreme caution on OpenAI (if it IPOs), Anthropic, and the dozens of AI agent startups currently raising at absurd valuations. These are the Yahoo and Excite of this cycle.

Be selective with infrastructure. Nvidia is still printing money, but at some point, custom silicon from Google (TPUs), Amazon (Trainium/Inferentia), and Broadcom’s XPUs will erode margins. The question is timing, not direction. Short-term bull, long-term cautious.

Favor the data and distribution moats. Companies like Atlassian — currently down 57% from its highs and trading at roughly 8x forward revenue — own something no agent can replicate: the institutional memory of hundreds of thousands of organizations. Their Teamwork Graph is not a feature. It’s a flywheel that gets more valuable as more agents connect to it (via MCP). Paradoxically, the rise of agentic AI may make Atlassian more valuable, not less, because the agents need the data layer to function.

Don’t forget physical scarcity. One of the most underappreciated implications of AI commoditization is that software value compresses while hardware and energy value do not. Defense companies, energy infrastructure, semiconductor fabrication — these cannot be “prompted into existence.” Claude is not disrupting a Rheinmetall tank or a Siemens Energy turbine.

The Endgame

Here’s what I think happens:

  1. 2026–2027: The AI hype peaks. More money pours into model companies and agent startups. Valuations get even more absurd. OpenAI targets a $500B+ IPO. Anthropic raises at $350B+. SpaceX/xAI goes public at $1.5 trillion — but unlike the others, it has $15 billion in revenue and $8 billion in profit from real hardware. Everyone believes this time is different.
  2. 2027–2028: Reality bites. Model commoditization becomes undeniable. Open-weight models match proprietary ones on virtually every benchmark. Price-per-token approaches zero. Agent wrappers proliferate — there are 500 Cowork clones. Enterprise customers realize they don’t need to pay premium prices for what is essentially a commodity utility.
  3. 2028–2029: The shakeout. Pure-play AI companies that couldn’t build real moats get acquired at massive discounts or shut down. The pattern of the dot-com bust repeats: the technology was real, the revolution was real, but 90% of the companies built on it were not.
  4. What survives: The infrastructure layer (Nvidia/Broadcom, though with compressed margins), the data moats (Atlassian, Salesforce), the hyperscalers (who will provide AI like they provide cloud today — as a utility), the vertically integrated hardware-AI plays (SpaceX/xAI, if the execution holds), and the physical-world companies that AI simply cannot commoditize.

Michael Burry is betting on the crash. I think he’s right about the what, but potentially wrong about the where. The model layer will implode. The agent layer will commoditize. But the picks-and-shovels layer and the data-moat layer will survive — and in some cases, thrive.

The winners of the AI revolution won’t be the companies building AI. They’ll be the companies that own what AI cannot replicate: data, trust, physical infrastructure, and the human judgment to use it all wisely. Musk seems to get it. Burry half-gets it. The rest of the market? Still chasing the Yahoo dream.

As I wrote in my earlier piece on the AI Abundance Trap: LLMs don’t eliminate work; they give us 10× speed to develop everything else. The competitive edge in the coming decade belongs to those who refuse to let fast AI make them dumber.

So: cultivate your critical thinking. Invest in what can’t be prompted into existence. And prepare for the implosion that even Sam Altman’s pitch deck can’t prevent.

The dot-com crash didn’t kill the internet. It killed the pretenders.

The AI implosion won’t kill artificial intelligence. It will kill the Yahoos.

And if you want to know who survives? Look for the rockets, not the chatbots.


I am a project manager (Project Management Professional, PMP), a Project Coach, a management consultant, and a book author. I have worked in the software industry since 1992 and as a management consultant since 1998. Please visit my United Mentors home page for more details. Contact me on LinkedIn for direct feedback on my articles.