AI Is Devouring the World? Not So Fast.

Remember Marc Andreessen’s line about software eating the world? Today, the “software” hype has given way to “AI” hype.

The amount of money being spent on AI solutions is genuinely staggering. Worldwide corporate investment in AI is estimated to have reached at least 150 billion US dollars annually, and cumulative global investment has probably crossed the 1 trillion US dollar mark. And yet, it appears that all we get is fluffy cat videos and creative images. Popular generative AI tools like ChatGPT and co. can generate images, music, videos, charts, and more.

However, the results, while entertaining, are still limited to non-deterministic content: the outcomes are often surprising and creative, but they don’t have to be “correct.” Yet correctness is exactly what is desperately needed: only correct results are accepted in real-life corporate and legal settings. Sometimes, a single wrong figure in a presentation can ruin your day, or even your entire professional career. Especially in large, bureaucratic organizations, correctness is the be-all and end-all of daily business.
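To see why, consider how these tools produce text: at each step, the next token is sampled from a probability distribution rather than computed as a single guaranteed answer. Here is a minimal, self-contained sketch with a toy distribution (not a real model) showing how the very same input can yield different outputs on every run:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample one token index from raw scores, softened by temperature."""
    scaled = [score / temperature for score in logits]
    max_score = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(score - max_score) for score in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy next-token scores for one and the same prompt, sampled twice:
tokens = ["correct", "creative", "funny", "wrong"]
logits = [2.0, 1.8, 1.5, 0.5]
for run in range(2):
    idx = sample_with_temperature(logits, temperature=1.0)
    print(f"run {run}: {tokens[idx]}")  # the two runs frequently disagree
```

Nothing in that sampling loop knows or cares which token is the correct one; it only knows which tokens are likely.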

Before the generative AI boom, I assumed that genuine creativity, or at least what most of us accept as such, would be the very last victim of AI devouring the world. I was wrong. Quirky presentations, a catchy tune, or a viral meme? AI delivers, often better than a room full of brainstormers. Content generated by LLMs is fun, flashy, and arguably cost-effective (which is a euphemism for “cheap”). Artists, writers, marketers, and the like appear to be becoming disposable; those roles could eventually disappear as generative AI pumps out endless variations on demand. It may be “cheap,” but it passes as “good enough” for a vast number of creative tasks.

Here is the thing: AI excels at creativity precisely because creativity usually doesn’t demand correctness (a.k.a. “accuracy”). Funny cat videos are always, well, funny and entertaining, but let’s take a closer look at work on the opposite end of the creativity spectrum: tax accounting. Everyone (well, most people who create any commercial value, at least) must pay taxes. In most developed countries, the tax code is overly complicated. Even a simple limited company (e.g., a German GmbH) must obey dozens of often confusing rules and regulations, and a small business can’t run without an exceedingly expensive tax advisor. There are no shortcuts; any deviation from the strict tax code can cause trouble or even land one in jail.
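To make the contrast concrete, here is a deliberately toy sketch of the corporate-level tax math for a GmbH. It uses only the commonly cited headline rates (15% corporate income tax, a 5.5% solidarity surcharge on that tax, and trade tax at a 3.5% base rate times an assumed municipal multiplier); the real rules involve many more adjustments, and none of this is tax advice. The point is that there is exactly one correct answer, with no room for creative output:

```python
def gmbh_tax_burden(profit_eur: float, hebesatz: float = 4.0) -> float:
    """Grossly simplified corporate-level tax for a German GmbH.

    hebesatz is the municipal trade-tax multiplier (4.0 = 400%);
    it varies by municipality, so this default is an assumption.
    """
    koerperschaftsteuer = profit_eur * 0.15        # corporate income tax, 15%
    soli = koerperschaftsteuer * 0.055             # solidarity surcharge, 5.5% of the above
    gewerbesteuer = profit_eur * 0.035 * hebesatz  # trade tax: 3.5% base rate x multiplier
    return koerperschaftsteuer + soli + gewerbesteuer

print(f"{gmbh_tax_burden(100_000):,.2f} EUR")  # 29,825.00 EUR: one exact answer, every time
```

A sampled, probabilistic answer that is right “most of the time” is worthless here; the tax office does not grade on creativity.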

The AI industry is admittedly struggling with “AI reasoning.” The reason is that generative AI is not intelligence; it is a brute-force extrapolation that can only approximate the data the LLMs were trained on. LLMs arguably offer nothing in terms of reasoning. They can smell like it, sound like it, walk like it, look like it, but they cannot actually reason, no matter how much data is thrown at a multi-layer neural network.
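A toy bigram model makes the point in a dozen lines: it produces fluent-sounding continuations purely by replaying patterns from its training text, with no concept of whether the result is true. Real LLMs are incomparably more sophisticated, but the underlying principle of predicting likely continuations is, in my view, analogous:

```python
import random
from collections import defaultdict

# A tiny bigram "language model": it can only extrapolate from its
# training text and has no notion of truth or correctness.
training_text = (
    "the tax code is complicated and the tax advisor is expensive "
    "and the tax code is strict"
)

model = defaultdict(list)
words = training_text.split()
for current, successor in zip(words, words[1:]):
    model[current].append(successor)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(model.get(word, words))  # pick any observed successor
    output.append(word)

print(" ".join(output))  # fluent-sounding, e.g. "the tax advisor is strict and the tax code"
```

The output walks like language and sounds like language, yet no reasoning happened anywhere.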

Tasks requiring “correctness” can be nearly impossible to solve using brute-force heuristics like LLMs. A disruptive innovation in LLM use could involve combining LLMs with reasoning models. I am convinced that many companies and computer scientists are working on such hybrid “reasoning AI models,” but, at least as of this writing, there is no convincing sign of them.
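If I had to guess what such a hybrid might look like, it would follow a generate-then-verify pattern: the generative model proposes an answer, and a deterministic checker either confirms or rejects it. The sketch below is pure speculation on that pattern; llm_propose is a hypothetical stub, not a real API, and the checker reuses the simplified tax rates from the GmbH example above:

```python
from fractions import Fraction

def llm_propose(question: str) -> str:
    """Hypothetical stand-in for an LLM call; in reality this would
    return a (possibly wrong) answer sampled from a generative model."""
    return "29825"  # the model's guess for the question below

def verify_tax_total(profit: str, answer: str) -> bool:
    """Deterministic checker: recompute the figure with exact arithmetic
    (same simplified rates as above, Hebesatz assumed to be 400%)."""
    p = Fraction(profit)
    expected = (p * Fraction(15, 100)                         # corporate income tax
                + p * Fraction(15, 100) * Fraction(55, 1000)  # solidarity surcharge
                + p * Fraction(35, 1000) * 4)                 # trade tax at 400%
    return Fraction(answer) == expected

question = "Total simplified corporate-level tax on 100000 EUR GmbH profit?"
proposal = llm_propose(question)
if verify_tax_total("100000", proposal):
    print(f"verified: {proposal} EUR")    # only checked answers leave the system
else:
    print("rejected: checker disagrees")  # never ship an unverified guess
```

Whether anything like this can scale to the full, messy tax code is exactly the open question.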

That’s a frustrating state of AI affairs: an AI tax advisor correct enough to replace the human consultant behind even a small German GmbH is nowhere to be seen. Every tax advisor I have asked whether AI might replace them soon just laughs at me: “No way, not anytime soon” is the usual response.

We have to accept that generative AI is not intelligent at all. It is funny and helpful, but not smart in a human sense. The entire discussion about “AGI” (artificial general intelligence) seems to be a smoke screen for NVIDIA investors.

As long as AI is not correct, it cannot be called “intelligent.”

Alan Turing proposed that a machine could be considered intelligent if it could not be distinguished from a human being in conversation. My impression is that a generative AI agent can already fool a human into thinking that an AI entity (e.g., an AI phone agent) is a person; to me, that does not prove much. The real “Post-Turing Test” should demand correct reasoning, free of the usual AI “hallucinations” from which all generative AI tools suffer.

A “correct” AI entity that can pass this “Post-Turing Test” and act indistinguishably from a trained and experienced tax advisor is currently not in sight. It may happen sometime around 2030. Until then, we will continue enjoying AI-powered cat videos and watch how long big tech can sustain throwing billions, and eventually trillions, of dollars at ever more complex LLMs.


Let’s start a conversation on LinkedIn or X.com (formerly Twitter).


I am a project manager (Project Management Professional, PMP), a Project Coach, a management consultant, and a book author. I have worked in the software industry since 1992 and as a management consultant since 1998. Please visit my United Mentors home page for more details. Contact me on LinkedIn for direct feedback on my articles.