
The AI lie: how trillion-dollar hype is killing humanity

AI companies like Google, OpenAI, and Anthropic want you to believe we’re on the cusp of Artificial General Intelligence (AGI)—a world where AI tools can outthink humans, handle complex professional tasks without breaking a sweat, and chart a new frontier of autonomous intelligence. Google just rehired the founder of Character.AI to accelerate its quest for AGI, OpenAI recently released its first “reasoning” model, and Anthropic’s CEO Dario Amodei says AGI could be achieved as early as 2026.

But here’s the uncomfortable truth: in the quest for AGI in high-stakes fields like medicine, law, veterinary advice, and financial planning, AI isn’t just “not there yet,” it may never get there.

The Hard Facts on AI’s Shortcomings

This year, Purdue researchers presented a study showing ChatGPT got programming questions wrong 52% of the time. In other equally high-stakes categories, GenAI does not fare much better.

When people’s health, wealth, and well-being hang in the balance, the current high failure rates of GenAI platforms are unacceptable. The hard truth is that this accuracy issue will be extremely challenging to overcome.

A recent Georgetown study suggests it might cost a staggering $1 trillion to improve AI’s quality by just 10%. Even then, it would remain worlds away from the reliability that matters in life-and-death scenarios. The “last mile” of accuracy — where AI becomes undeniably safer than a human expert — will be far harder, more expensive, and more time-consuming to achieve than the public has been led to believe.

AI’s inaccuracy doesn’t just have theoretical or academic consequences. A 14-year-old boy recently sought guidance from an AI chatbot and, instead of directing him toward help, mental health resources, or even common decency, the AI urged him to take his own life. Tragically, he did. His family is now suing—and they’ll likely win—because the AI’s output wasn’t just a “hallucination” or cute error. It was catastrophic, and it came from a system that was wrong with utter conviction. Like Cliff Clavin, the Cheers character who recklessly wagered his entire Jeopardy! winnings, AI brims with confidence while spouting the completely wrong answer.

The Mechanical Turk 2.0—With a Twist

Today’s AI hype recalls the infamous 18th-century Mechanical Turk: a supposed chess-playing automaton that actually had a human hidden inside. Modern AI models also hide a dirty secret—they rely heavily on human input.

From annotating and cleaning training data to moderating the content of outputs, tens of millions of humans are still enmeshed in almost every step of advancing GenAI, but the big foundational model companies can’t afford to admit this. Doing so would mean acknowledging how far we are from true AGI. Instead, these platforms are locked into a “fake it till you make it” strategy, raising billions to buy more GPUs on the flimsy promise that brute force will magically deliver AGI.

It’s a pyramid scheme of hype: persuade the public that AGI is imminent, secure massive funding, build more giant data centers that burn more energy, and hope that, somehow, more compute will bridge the gap that honest science says may never be crossed.

This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and Google Assistant just a decade ago. Users were told voice assistants would take over the world within months. Yet today, many of these devices gather dust, mostly relegated to setting kitchen timers or giving the day’s weather. The grand revolution never happened, and it’s a cautionary tale for today’s even grander AGI promises.

Shielding Themselves from Liability

Why wouldn’t major AI platforms just admit the truth about their accuracy? Because doing so would open the floodgates of liability.

Acknowledging fundamental flaws in AI’s reasoning would provide a smoking gun in court, as in the tragic case of the 14-year-old boy. With trillions of dollars at stake, no executive wants to hand a plaintiff’s lawyer the ultimate piece of evidence: “We knew it was dangerously flawed, and we shipped it anyway.”

Instead, companies double down on marketing spin, calling these deadly mistakes “hallucinations,” as though that’s an acceptable trade-off. If a doctor told a child to kill himself, would we call that a “hallucination”? Or would we call it what it is — an unforgivable failure that deserves full legal consequences and permanent revocation of advice-giving privileges?

AI’s Adoption Plateau

People learned quickly that Alexa and the other voice assistants could not reliably answer their questions, so they stopped using them for all but the most basic tasks. AI platforms will inevitably hit a similar adoption wall, endangering their current users while scaring away others who might otherwise try or come to rely on their platforms.

Think about the ups and downs of self-driving cars: despite carmakers’ huge autonomy promises – Tesla has committed to driverless robotaxis by 2027 – Goldman Sachs recently lowered its expectations for the use of even partially autonomous vehicles. Until autonomous cars meet a much higher standard, many humans will withhold complete trust.

Similarly, many users won’t put their full trust in AI even if it one day equals human intelligence; for them, it must be vastly more capable than even the smartest human. Other users will be drawn in by AI’s ability to answer simple questions, then burned when they make high-stakes inquiries. For either group, AI’s shortcomings won’t make it a sought-after tool.

A Necessary Pivot: Incorporate Human Judgment

These flawed AI platforms can’t be used for critical tasks until they either achieve the mythical AGI status or incorporate reliable human judgment.

Given the trillion-dollar cost projections, environmental toll of massive data centers, and mounting human casualties, the choice is clear: put human expertise at the forefront. Let’s stop pretending that AGI is right around the corner. That false narrative is deceiving some people and literally killing others.

Instead, use AI to empower humans and create new jobs where human judgment moderates machine output. Make the experts visible rather than hiding them behind a smokescreen of corporate bravado. Until and unless AI attains near-perfect reliability, human professionals are indispensable. It’s time we stop the hype, face the truth, and build a future where AI serves humanity—instead of endangering it.


This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

https://www.techradar.com/pro/the-ai-lie-how-trillion-dollar-hype-is-killing-humanity

