
Two Supreme Court cases could upend the rules of the internet

The Supreme Court could soon redefine the rules of the internet as we know it. This week, the court will hear two cases, Gonzalez v. Google and Twitter v. Taamneh, that give it an opportunity to drastically change the rules of speech online.

Both cases deal with how online platforms have handled terrorist content. And both have sparked deep concerns about the future of content moderation, algorithms and censorship.

Section 230 and Gonzalez v. Google

If you’ve spent any time following the various culture wars associated with free speech online over the last several years, you’ve probably heard of Section 230. Sometimes referred to as “the twenty-six words that created the internet,” Section 230 is a provision of the Communications Decency Act that shields online platforms from liability for their users’ actions. It also protects companies’ ability to moderate what appears on their platforms.

Without these protections, Section 230 defenders argue, the internet as we know it couldn’t exist. But the law has also come under scrutiny over the last several years amid a larger reckoning with Big Tech’s impact on society. Broadly, those on the right favor repealing Section 230 because they claim it enables censorship, while some on the left have said it allows tech giants to avoid responsibility for the societal harms caused by their platforms. But even among those seeking to amend or dismantle Section 230, there’s been little agreement about specific reforms.

Section 230 also lies at the heart of Gonzalez v. Google, which the Supreme Court will hear on February 21st. The case, brought by family members of a victim of the 2015 Paris terrorist attack, argues that Google violated US anti-terrorism laws when ISIS videos appeared in YouTube’s recommendations. Section 230 protections, according to the suit, should not apply because YouTube’s algorithms suggested the videos.

“It basically boils down to saying platforms are not liable for content posted by ISIS, but they are liable for recommendation algorithms that promoted that content,” said Daphne Keller, who directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, during a recent panel discussing the case.

That may seem like a relatively narrow distinction, but algorithms underpin almost every aspect of the modern internet. So the Supreme Court’s ruling could have an enormous impact not just on Google, but on nearly every company operating online. If the court sides against Google, then “it could mean that online platforms would have to change the way they operate to avoid being held liable for the content that is promoted on their sites,” the Bipartisan Policy Center, a Washington-based think tank, explains. Some have speculated that platforms could be forced to do away with any kind of ranking at all, or would have to engage in content moderation so aggressive it would eliminate all but the most banal, least controversial content.

“I think it is correct that this opinion will be the most important Supreme Court opinion about the internet, possibly ever,” University of Minnesota law professor Alan Rozenshtein said during the same panel, hosted by the Brookings Institution.

That’s why dozens of other platforms, civil society groups and even the original authors of Section 230 have weighed in, via “friend of the court” briefs, in support of Google. In its brief, Reddit argued that eroding 230 protections for recommendation algorithms could threaten the existence of any platform that, like Reddit, relies on user-generated content.

“Section 230 protects Reddit, as well as Reddit’s volunteer moderators and users, when they promote and recommend, or remove, digital content created by others,” Reddit states in its filing. “Without robust Section 230 protection, Internet users — not just companies — would face many more lawsuits from plaintiffs claiming to be aggrieved by everyday content moderation decisions.”

Yelp, which has spent much of the last several years advocating for antitrust action against Google, shared similar concerns. “If Yelp could not analyze and recommend reviews without facing liability, those costs of submitting fraudulent reviews would disappear,” the company argues. “If Yelp had to display every submitted review, without the editorial freedom Section 230 provides to algorithmically recommend some over others for consumers, business owners could submit hundreds of positive reviews for their own business with little effort or risk of a penalty.”

Meta, on the other hand, argues that a ruling finding 230 doesn’t apply to recommendation algorithms would lead to platforms suppressing more “unpopular” speech. Interestingly, this argument would seem to play into the right’s anxieties about censorship. “If online services risk substantial liability for disseminating third-party content … but not for removing third-party content, they will inevitably err on the side of removing content that comes anywhere close to the potential liability line,” the company writes. “Those incentives will take a particularly heavy toll on content that challenges the consensus or expresses an unpopular viewpoint.”

Twitter v. Taamneh

The day after the Supreme Court hears arguments in Gonzalez v. Google, it will hear yet another case with potentially huge consequences for the way online speech is moderated: Twitter v. Taamneh. And while that case doesn’t directly deal with Section 230, it is similar to Gonzalez v. Google in a few important ways.

Like Gonzalez, the case was brought by the family of a victim of a terrorist attack. And, like Gonzalez, family members of the victim are using US anti-terrorism laws to hold Twitter, Google and Facebook accountable, arguing that the platforms aided terrorist organizations by failing to remove ISIS content from their services. As with the earlier case, the worry from tech platforms and advocacy groups is that a ruling against Twitter would have profound consequences for social media platforms and publishers.

“There are implications on content moderation and whether companies could be liable for violence, criminal, or defamatory activity promoted on their websites,” the Bipartisan Policy Center says of the case. If the Supreme Court were to agree that the platforms were liable, then “greater content moderation policies and restrictions on content publishing would need to be implemented, or this will incentivize platforms to apply no content moderation to avoid awareness.”

And, as the Electronic Frontier Foundation noted in its filing in support of Twitter, platforms “will be compelled to take extreme and speech-chilling steps to insulate themselves from potential liability.”

There could even be potential ramifications for companies whose services are primarily operated offline. “If a company can be held liable for a terrorist organization’s actions simply because it allowed that organization’s members to use its products on the same terms as any other consumer, then the implications could be astonishing,” Vox writes.

What’s next

It’s going to be several more months before we know the outcome of either of these cases, though analysts will be closely watching the proceedings to get a hint of where the justices may be leaning. It’s also worth noting that these aren’t the only pivotal cases concerning social media and online speech.

There are two other cases, related to restrictive social media laws out of Florida and Texas, that might end up at the Supreme Court as well. Both of those could also have significant consequences for online content moderation.

In the meantime, many advocates argue that Section 230 reform is best left to Congress, not the courts. As Jeff Kosseff, a law professor at the US Naval Academy who literally wrote the book about Section 230, recently wrote, cases like Gonzalez “challenge us to have a national conversation about tough questions involving free speech, content moderation, and online harms.” But, he argues, the decision should be up to the branch of government where the law originated.

“Perhaps Congress will determine that too many harms have proliferated under Section 230, and amend the statute to increase liability for algorithmically promoted content. Such a proposal would face its own set of costs and benefits, but it is a decision for Congress, not the courts.”

https://www.engadget.com/two-supreme-court-cases-could-upend-the-rules-of-the-internet-150018225.html?src=rss



