WEF: Facebook, Twitter must police extremist content to avoid anti-free speech regulations

A new report from the World Economic Forum says speech-limiting government regulation may result if top tech firms remain too lax on violent and extremist content.

Top social media firms need to crack down on violent, extremist, and fake content on their sites, or they could face speech-limiting government regulation, according to a World Economic Forum (WEF) report released on Monday.

The findings from the Swiss nonprofit come days before general counsel from Twitter, Facebook, and Google are slated to testify before US congressional committees regarding their platforms' involvement in alleged Russian interference during the 2016 presidential election.

The forum's study recommends greater self-governance, with more rigorous human oversight of platform content to prevent the creation and spread of political misinformation and terrorist content. Failure to do so may lead to fines or speech-limiting government legislation, the report said.

SEE: Social media policy (Tech Pro Research)

Some governments have already acted along the lines of the study's thesis. In June, Germany passed a law allowing regulators to fine social media networks up to 50 million euros for failing to remove violent posts quickly enough. In Silicon Valley, tech leaders are taking steps to keep terrorist messages and fake news from spreading on their platforms.

On Oct. 24, Twitter announced new labels for political ads and increased transparency after Russian actors allegedly used the site to influence the 2016 presidential election. The week before, the company announced a crackdown on racist messages, hate symbols, and unwanted sexual advances, among other things.

The findings raise the question of what speech is protected and what isn't, a question leaders of social media platforms, or of any site with a comments section, may need to seriously consider. The study's impact may also extend beyond the social media companies themselves, pushing individual businesses and account holders to self-govern and crack down on comments and other posts.

While the study calls for human oversight, companies may continue to ramp up AI efforts to cover the vast volume of messages and posts on their sites. In June, Facebook began using AI to identify and remove posts from terrorists instead of waiting for a human to flag a post for removal. The same month, Google said it would dedicate more resources to AI software that identifies extremist content, among other steps to fight the spread of violent material.
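
That combination of automated detection and human review can be pictured as a simple routing pipeline. The sketch below is a hypothetical illustration in Python, not Facebook's or Google's actual systems: the extremism_score function stands in for a trained classifier, and the two thresholds are assumed values. High-scoring posts are removed automatically, while borderline ones are queued for human reviewers.

    # Hypothetical sketch of a hybrid AI-plus-human moderation pipeline.
    # The scoring function is a toy stand-in; real systems use trained classifiers.

    from dataclasses import dataclass

    REMOVE_THRESHOLD = 0.9   # assumed cutoff: auto-remove at or above this score
    REVIEW_THRESHOLD = 0.5   # assumed cutoff: send to human reviewers above this

    @dataclass
    class Post:
        post_id: str
        text: str

    def extremism_score(post: Post) -> float:
        """Placeholder for a trained classifier returning a 0-1 risk score."""
        flagged_terms = {"attack", "bomb"}  # toy illustration only
        hits = sum(1 for word in post.text.lower().split() if word in flagged_terms)
        return min(1.0, hits / 2)

    def route(post: Post) -> str:
        """Route a post by model confidence, mirroring the auto-remove
        vs. human-review split described above."""
        score = extremism_score(post)
        if score >= REMOVE_THRESHOLD:
            return "removed"       # high confidence: act without waiting for a flag
        if score >= REVIEW_THRESHOLD:
            return "human_review"  # uncertain: keep a person in the loop
        return "published"

    if __name__ == "__main__":
        for p in [Post("1", "Lovely weather today"),
                  Post("2", "planning an attack with a bomb")]:
            print(p.post_id, route(p))

The two thresholds capture the trade-off the report points at: raising REVIEW_THRESHOLD cuts reviewer workload but lets more questionable content stay up, while lowering REMOVE_THRESHOLD risks automated over-removal of protected speech.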

The 3 big takeaways for TechRepublic readers

  1. A World Economic Forum report says social media networks need to increase self-governance and combat terrorist and extremist content before government regulation happens.
  2. Businesses and tech companies may need to watch their own content, whether that's on social media or in a comment section, to avoid speech-reducing laws or fines.
  3. The report calls for human oversight, while Facebook and Google have begun developing and using AI to identify and remove violent posts. AI usage may increase in light of the report's findings.

Image: iStockphoto/diego_cervo

About Olivia Krauth

Olivia Krauth is a Multiplatform Reporter at TechRepublic.
