Google thinks a US Supreme Court case could radically change the internet
Date:
Fri, 13 Jan 2023 21:04:49 +0000
Description:
The current interpretation of the law protects tech companies from being liable for content moderation decisions made by artificial intelligence, but that could be about to change.
FULL STORY ======================================================================
Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could put the entire internet at risk by removing a key protection against lawsuits over content moderation decisions that involve artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently offers a blanket liability shield in regards to how companies moderate content on
their platforms.
However, as reported by CNN, Google wrote in a legal filing that, should the SC rule in favour of the plaintiff in the case of Gonzalez v. Google, which revolves around YouTube's algorithms recommending pro-ISIS content to users, the internet could become overrun with dangerous, offensive, and extremist content.

Automation in moderation
Being part of an almost 27-year-old law, already targeted for reform by US President Joe Biden, Section 230 isn't equipped to address modern developments such as artificially intelligent algorithms, and that's where the problems start.
The crux of Google's argument is that the internet has grown so much since 1996 that incorporating artificial intelligence into content moderation solutions has become a necessity. "Virtually no modern website would function if users had to sort through content themselves," it said in the filing.
An abundance of content means that tech companies have to use algorithms in order to present it to users in a manageable way, from search engine results, to flight deals, to job recommendations on employment websites.
Google also noted that, under existing law, tech companies could legally avoid liability by simply refusing to moderate their platforms at all,
but that this would put the internet at risk of becoming a virtual cesspool.
The tech giant also pointed out that YouTube's community guidelines expressly disavow terrorism, adult content, violence and other dangerous or offensive content, and that it is continually tweaking its algorithms to pre-emptively block prohibited content.
It also claimed that approximately 95% of videos violating YouTube's Violent Extremism policy were automatically detected in Q2 2022.
Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content and, in doing so, has assisted ISIS's rise to prominence.
In an attempt to further distance itself from liability on this point, Google responded that YouTube's algorithms recommend content to users based on similarities between a piece of content and content the user has already shown interest in.
This is a complicated case and, although it's easy to subscribe to the idea that the internet has gotten too big for manual moderation, it's just as convincing to suggest that companies should be held accountable when their automated solutions fall short.
After all, if even tech giants can't guarantee what's on their websites, users
of filters and parental controls can't be sure that they're taking effective action to block offensive content.
======================================================================
Link to news story:
https://www.techradar.com/news/google-thinks-a-us-supreme-court-case-could-radically-change-the-internet
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)