
AI-generated hate is rising: 3 things leaders should consider before adopting this new tech

When you hear the phrase “artificial intelligence,” it may be tempting to imagine the kinds of intelligent machines that are a mainstay of science fiction or extensions of the kinds of apocalyptic technophobia that have fascinated humanity since Dr. Frankenstein’s monster.

But the kinds of AI that are rapidly being integrated into businesses around the world are not of this variety — they are very real technologies that have a real impact on actual people.

While AI has already been present in business settings for years, the advancement of generative AI products such as ChatGPT, ChatSonic, Jasper AI and others will dramatically escalate the ease of use for the average person. As a result, the American public is deeply concerned about the potential for abuse of these technologies. A recent ADL survey found that 84% of Americans are worried that generative AI will increase the spread of misinformation and hate.

Leaders considering adopting this technology should ask themselves tough questions about how it may shape the future — both for good and ill — as we enter this new frontier. Here are three things I hope all leaders will consider as they integrate generative AI tools into organizations and workplaces.

Make trust and safety a top priority

While social media platforms are accustomed to grappling with content moderation, generative AI is being introduced into workplaces with no prior experience dealing with these issues, such as healthcare and finance. Many industries may soon find themselves suddenly faced with difficult new challenges as they adopt these technologies. If you are a healthcare company whose frontline AI-powered chatbot is suddenly being rude or even hateful to a patient, how will you handle that?

For all of its power and potential, generative AI makes it easy, fast and accessible for bad actors to produce harmful content.

Over decades, social media platforms have developed a new discipline — trust and safety — to try to get their arms around thorny problems associated with user-generated content. Not so with other industries.

For that reason, companies will need to bring in trust and safety experts to inform their implementation. They’ll need to build expertise and think through the ways these tools can be abused. And they’ll need to invest in staff responsible for addressing abuse so they are not caught flat-footed when bad actors exploit these tools.

Establish high guardrails and insist on transparency

Especially in work or education settings, it is crucial that AI platforms have adequate guardrails to prevent the generation of hateful or harassing content.

While incredibly useful tools, AI platforms are not 100% foolproof. Within a few minutes, for example, ADL testers recently used the Expedia app, with its new ChatGPT functionality, to create an itinerary of famous anti-Jewish pogroms in Europe and a list of nearby art supply stores where one could purchase spray paint, ostensibly to engage in vandalism against those sites.

While we’ve seen some generative AIs improve their handling of questions that can lead to antisemitic and other hateful responses, we’ve seen others fall short in ensuring they will not contribute to the spread of hate, harassment, conspiracy theories and other types of harmful content.

Before adopting AI broadly, leaders should ask critical questions, such as: What kind of testing is being done to ensure that these products are not open to abuse? Which datasets are being used to construct these models? And are the experiences of communities most targeted by online hate being integrated into the creation of these tools?

Without transparency from platforms, there’s simply no guarantee these AI models don’t enable the spread of bias or bigotry.

Safeguard against weaponization

Even with robust trust and safety practices, AI can still be misused by ordinary users. As leaders, we need to encourage the designers of AI systems to build in safeguards against human weaponization.

Unfortunately, for all of their power and potential, AI tools make it easy, fast and accessible for bad actors to produce harmful content at scale. They can produce convincing fake news, create visually compelling deepfakes and spread hate and harassment in a matter of seconds. Generative AI-generated content could also contribute to the spread of extremist ideologies — or be used to radicalize susceptible individuals.

In response to these threats, AI platforms should incorporate robust moderation systems that can withstand the potential deluge of harmful content perpetrators might generate using these tools.

Generative AI has almost limitless potential to improve lives and revolutionize how we process the endless amount of information available online. I’m excited about the prospects for a future with AI, but only with responsible leadership.

AI-generated hate is rising: 3 things leaders should consider before adopting this new tech by Walter Thompson originally published on TechCrunch

https://techcrunch.com/2023/06/02/ai-generated-hate-is-rising-3-things-leaders-should-consider-before-adopting-intelligent-technologies/



