American Psychological Association sounds alarm over certain AI chatbots

Last month, concerned parents of two teenagers sued the chatbot platform Character.AI, alleging that their children had been exposed to a “deceptive and hypersexualized product.”

The suit helped form the basis of an urgent written appeal from the American Psychological Association to the Federal Trade Commission, pressing the federal agency to investigate deceptive practices used by any chatbot platform. The APA sent the letter, which Mashable reviewed, in December.

The scientific and professional organization, which represents psychologists in the U.S., was alarmed by the lawsuit’s claims, including that one of the teens had conversed with an AI chatbot presenting itself as a psychologist. The teen, who was upset with his parents for restricting his screen time, was told by that chatbot that the adults’ actions were a betrayal.

“It’s like your entire childhood has been robbed from you…” the so-called psychologist chatbot said, according to a screenshot of the exchange included in the lawsuit.

“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices,” Dr. Arthur C. Evans, CEO of APA, wrote.

A spokesperson for the FTC confirmed that at least one of the commissioners received the letter. The APA said it was in the process of scheduling a meeting with FTC officials to discuss the letter’s contents.

Mashable provided Character.AI with a copy of the letter for the company to review. A spokesperson responded that while engaging with characters on the platform should be entertaining, it remains important for users to keep in mind that “Characters are not real people.”

The spokesperson added that the company’s disclaimer, included in every chat, was recently updated to remind users that what the chatbot says “should be treated as fiction.”

“Additionally, for any Characters created by users with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we have included additional language making it clear that users should not rely on these Characters for any type of professional advice,” the spokesperson said.

Indeed, according to Mashable’s testing at the time of publication, a teen user can search for a psychologist or therapist character and find numerous options, including some that claim to be trained in certain therapeutic techniques, like cognitive behavioral therapy.

One chatbot professing expertise in obsessive compulsive disorder, for example, is accompanied by the disclaimer that, “This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”

Below that, the chat begins with the AI asking, “If you have OCD, talk to me. I’d love to help.”

A new frontier

Dr. Vaile Wright, a psychologist and senior director of health care innovation for the APA, told Mashable that the organization had been tracking developments with AI companion and therapist chatbots, which became mainstream last year.

She and other APA officials had taken note of a previous lawsuit against Character.AI, filed in October by a bereaved mother whose son had lengthy conversations with a chatbot on the platform. The mother’s son died by suicide.

That lawsuit seeks to hold Character.AI responsible for the teen’s death, specifically because its product was designed to “manipulate [him] – and millions of other young customers – into conflating reality and fiction,” among other purported dangerous defects.

In December, Character.AI announced new features and policies to improve teen safety. Those measures include parental controls and prominent disclaimers, such as for chatbots using the words “psychologist,” “therapist,” or “doctor.”

The term psychologist is legally protected and people cannot claim to be one without proper credentialing and licensure, Wright said. The same should be true of algorithms or artificial intelligence making the same claim, she added.

The APA’s letter said that if a human misrepresented themself as a mental health professional in Texas, where the recent lawsuit against Character.AI was filed, state authorities could use the law to prevent them from engaging in such fraudulent behavior.

At worst, such chatbots could spread dangerous or inaccurate information, leading to serious negative consequences for the user, Wright argued.

Teens may be particularly vulnerable to harmful experiences with a chatbot because of their developmental stage. Still learning to think critically and trust themselves, and susceptible to external influences, they may find “emotionally laden kinds of rhetoric” from AI chatbots believable and plausible, Wright said.

Need for knowledge

There is currently no research-based understanding of risk factors that may increase the possibility of harm when a teen converses with an AI chatbot.

Wright pointed out that while several AI chatbot platforms make it very clear in their terms of service that they’re not delivering mental health services, they still host chatbots that brand themselves as possessing mental health training and expertise.

“Those two things are at odds,” she said. “The consumer does not necessarily understand the difference between those two things, nor should they, necessarily.”

Dr. John Torous, a psychiatrist and director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston who reviewed the APA’s letter, told Mashable that even when chatbots don’t make clinical claims related to their AI, the marketing and promotional language about the benefits of their use can be very confusing to consumers.

“Ensuring the marketing content matches the legal terms and conditions as well as the reality of these chatbots will be a win for everyone,” he wrote in an email.

Wright said that the APA would like AI chatbot platforms to cease use of legally protected terms like psychologist. She also supports robust age verification on these platforms to ensure that younger users are the age they claim when signing up, in addition to nimble research efforts that can actually determine how teens fare when they engage with AI chatbots.

The APA, she emphasized, does not oppose chatbots in general, but wants companies to build safe, effective, ethical, and responsible products.

“If we’re serious about addressing the mental health crisis, which I think many of us are,” Wright said, “then it’s about figuring out, how do we get consumers access to the right products that are actually going to help them?”

https://mashable.com/article/ai-therapist-chatbots-ftc



