The AI Abundance Trap: Trillion-Dollar Valuations, AI Job Scare—And How We Can Still Grow the Pie

Last year, music being my hobby, I spent an evening creating professional-sounding songs with Suno. They sounded great, and I felt really good about myself, until I realised that tens of thousands of people do the exact same thing every day, and their Suno creations sound just as brilliant. Suddenly, a product that used to take months, real talent, and real money is worth next to nothing.

I’ve been thinking about this a lot lately: if someone using AI can deliver in just a few days the same quality of work that used to take months, how should that work be valued? Do we still pay the old rate, or is the entire pricing model broken?
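To make the question concrete, here is a toy calculation. All figures (the hourly rate, the project value, the before/after durations) are hypothetical, chosen only to show why hour-based billing breaks down when AI compresses delivery time:

```python
# Toy comparison of time-based vs. value-based pricing when AI
# compresses delivery time. All numbers are hypothetical.

HOURLY_RATE = 100.0        # traditional billing rate in $/h
PROJECT_VALUE = 48_000.0   # what the finished work is worth to the client

def time_based_price(hours: float) -> float:
    """Bill purely for hours worked."""
    return hours * HOURLY_RATE

# Before AI: ~3 months of work (480 h). With AI: ~3 days (24 h).
before = time_based_price(480)  # $48,000 -> happens to match the value delivered
after = time_based_price(24)    # $2,400  -> a 95% revenue drop for identical output

print(f"Time-based, pre-AI:  ${before:,.0f}")
print(f"Time-based, with AI: ${after:,.0f}")
print(f"Value-based price:   ${PROJECT_VALUE:,.0f}, regardless of hours spent")
```

The point of the sketch: if the client receives identical value, billing by the hour punishes the AI-assisted worker, which is exactly why the old pricing model comes under pressure.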

That simple question exposes a quiet but glaring flaw in the entire AI narrative: what happens when intelligence itself becomes abundant and cheap?

Are LLMs Good Enough?

LLMs are continuously improving, but they remain fundamentally fast-thinking pattern matchers, much like the intuitive mode of cognition Daniel Kahneman describes in his book Thinking, Fast and Slow. In it, he distinguishes two modes of human thinking: System 1 (“fast thinking”: quick, intuitive, pattern-matching) and System 2 (“slow thinking”: deliberate, logical reasoning required for complex, high-stakes work).

Current LLMs are pure System 1 machines: they simply predict the next token based on the previous ones. That is why they still hallucinate, with error rates of roughly 10–20% reported across many real-world tasks. In that sense, they are not “intelligent” in the human meaning of the word.
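The “predict the next token” principle can be sketched in a few lines. This toy bigram model (my own illustration, not how any production LLM is implemented) always emits the most frequent continuation it has seen, with no reasoning step at all; real LLMs replace the frequency table with a neural network trained on a huge corpus, but the System 1 character is the same:

```python
# Minimal sketch of "System 1" next-token prediction: a toy bigram
# model that greedily picks the most frequent continuation it has seen.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy 'fast thinking': most common continuation, no reasoning."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat', because 'cat' followed 'the' most often
```

Note that the model is confidently wrong outside its training data, which is the toy-scale analogue of a hallucination.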

For many routine tasks that do not demand a guaranteed level of output quality, this is often sufficient. But for anything that truly matters, such as tax advice, legal contracts, or safety- and security-critical automotive development, the risk is simply too high. You can outsource the first draft to an LLM, but thorough human verification and validation (true System 2 thinking) remain indispensable.

“Free”—at a Price

In that sense, for many tasks, current LLMs are already “good enough.” The real question is: what is such cheap content actually worth?

When output becomes infinite and near-free, the old pricing model collapses. Agentic AI tools like Claude Cowork can now develop complete software for pennies. Yet here is the bizarre paradox: pure software companies like Anthropic carry valuations in the tens of billions, even though they are selling the very tools that will commoditize the software layer itself.

As a lateral example, SaaS (Software-as-a-Service) is being commoditized as we speak: the easy, promptable layers are turning into near-zero-cost commodities. If anyone can recreate something like OpenClaw in their basement, why would companies keep paying premium prices for what is quickly becoming a utility?

The trillion-dollar pitch decks assumed AI would capture huge rents from automated labour. Instead, raw intelligence itself is heading toward full commoditization.

But the problem runs deeper than just economics. Our heavy reliance on these fast-thinking systems is already creating a more subtle but serious issue: cognitive offloading. Recent studies, including a 2025 MIT Media Lab EEG experiment, show that users who lean heavily on LLMs exhibit significantly reduced brain engagement, lower critical thinking, and measurable “cognitive debt” over time.

In other words, while we happily offload more and more work to LLMs—even as they still hallucinate left and right—the users themselves are beginning to lose the ability to spot those hallucinations. That is not a good sign for the future of an LLM-driven AI industry.

Surviving the LLM Implosion

Despite all the shortcomings of current LLMs, not everything will be devoured by agentic AI. Many LLM-powered tasks are already “good enough” to be completely automated, so we must focus on what survives commoditization: proprietary data, customer relationships, distribution, personal brand, and, most importantly, irreplaceable human inspiration.

However, the ability to make educated decisions will become even more important as automation progresses rapidly. The decisive competitive factor for the next decade will be Effective Critical Systems Thinking (ECST). This slow, deliberate, System-2-level reasoning turns cheap AI from a crutch into a 10× multiplier. Companies and indie builders who deliberately cultivate ECST will pull ahead, while those who just prompt-and-pray fall behind.

In addition, some software tools are unlikely to be commoditized anytime soon; certain infrastructure layers will remain extremely valuable. For instance, Atlassian platforms such as Jira, which guarantee data persistence, compliance, auditability, and deep integration, cannot be easily replicated with a prompt. Software that protects the high-trust environment (the rule of law, honest integration, open inquiry, and long-term value creation) will remain in every company’s war chest.

Otherwise, software in general becomes a “commodity”: relatively easy to develop, maintain, and extend at low cost. Systems development, on the other hand (products that, in addition to software, require custom hardware and mechanical parts), will remain in the “scarcity” camp: not easily commoditized, expensive, and labor-intensive.

Thinking longer term, when fusion energy finally arrives (see my earlier piece on the megatrend of cheap, clean energy here), the whole game changes again: energy becomes nearly free, supercharging abundance for those who kept their thinking sharp. Once that day arrives (likely not before 2030), all bets are off anyway, because with sufficient energy, iterating on everything (including physical infrastructure) until the result is satisfactory becomes a non-issue.

Keep Calm and Carry On

Most of the hype money is still betting on raw LLM models, even as they are fast approaching their own commoditization.

AI is not approaching the mythical AGI anytime soon. Serious analysis shows the productivity miracle is smaller and slower than pitched, especially while LLMs remain unreliable System 1 fast thinkers. In other words, true AGI will not be possible as long as we rely on today’s “System 1” software.

While some fear that humans will be eliminated and that AI will do everything, this fear is understandable but misplaced. LLMs produce cheap content, not accountability. For many years to come, clients will still need a human “throat to choke” when millions are on the line. The real danger is not replacement; it is becoming so dependent on LLMs that we lose our own ability to think deeply.

Let’s Grow the Pie

The “great commoditization” of software (including LLMs) is a revolution, and, as the saying goes, revolutions often devour their children. Many currently hyped companies will disappear and be remembered only by the same people who still remember the Boo.com disaster. That said, this revolution is real, and the trillion-dollar AI fairy tale has reached a scale that is becoming “too big to fail.” The often-cited comparison with the dot-com crash should not be taken lightly: the current AI hype may indeed end in a similar crash. Once the dust settles, we will likely be surprised by what emerges from the chaos.

In the meantime, the fear that the economic pie will shrink and leave millions living on a “universal basic income” can still come true, but only if we as human beings refuse to adapt. In that case, the near future holds a tumultuous transition to the “brave new world.”

On the other hand, this transition doesn’t have to be as painful as some assume. The potential horror of “everyone gets fired by the AI” rests on a fixed-pie assumption: that work will shrink and the rest of us will have to fight over the same slice. In my view, that is a misconception. There will be many changes in the workforce, as mostly boring “box checkers” and bureaucrats may be sent packing (most of us won’t miss them anyway). Meanwhile, the remaining productive engineers and scientists will gain AI superpowers, steeply increasing economic output (a.k.a. “added value”).

In other words, instead of being overly anxious that jobs are supposedly being destroyed, let’s grow the pie.

LLMs don’t just eliminate work; they give us 10× speed to develop everything else—including fusion reactors, new materials, and better medicine. The real competitive edge in the coming decade will belong to those who refuse to let fast AI make them dumber. Cultivate Effective Critical Systems Thinking. Protect open inquiry. Build on solid ground.

For indie builders, consultants, and companies worldwide, this is liberating: we never needed to rent our future from Big Tech anyway. The real game is building sovereign, honest, long-term things while the technology gets cheaper every month.

That’s what technology has always been about—and it’s why I’m genuinely optimistic about the decade ahead.



I am a project manager (Project Management Professional, PMP), a project coach, a management consultant, and a book author. I have worked in the software industry since 1992 and as a management consultant since 1998. Please visit my United Mentors home page for more details. Contact me on LinkedIn for direct feedback on my articles.