The European Commission also suggests humans oversee any automated content removal efforts.
Building a slide deck, pitch, or presentation? Here are the big takeaways:
- The EU announced that websites, including giants like Google and Facebook, will have one hour to remove terrorist content from their sites.
- Social media sites and other platforms that host user-generated content should review their flagging and removal processes to ensure they can meet the guidelines.
Websites operating in the EU will have one hour to remove content from terrorist organizations under new EU regulations announced Thursday.
If law enforcement or Europol identifies content as terrorist material or otherwise illegal, the site will be notified and will then have one hour to remove it, the European Commission said. Sites operating in the EU should review their removal processes to ensure they can meet the deadline if notified.
The rules stem from pressure by several national governments to hold tech giants like Twitter, Facebook, and Google liable for the mostly user-generated content that appears on their sites, the Wall Street Journal reported.
SEE: IT leader's guide to deep learning (Tech Pro Research)
Some companies have been increasing self-regulation of their content: YouTube uses machine learning to catch extremist videos, and Facebook uses artificial intelligence to match posts against known terrorist content.
While automation can expedite identification and removal, humans should oversee the process to make sure takedowns aren't excessive, the WSJ said. One commission official said that while these efforts have been somewhat effective, they need to be faster.
The regulation is soft law and participation from sites is voluntary, so there are currently no penalties for sites that miss the deadline or ignore removal requests entirely. If sites fail to follow the rule, the EU will work toward a formal regulation, the WSJ noted.
While the EU rule doesn't currently carry fines, national laws could impose stricter punishments. Germany's law, passed in June and enacted earlier this year, fines companies up to 50 million euros if they don't remove illegal content from their sites. France and the UK may pass similar legislation in the future.
The spread of hate speech and terrorist content, along with misinformation from Russian operatives, has sparked debate over the role social media companies should play in policing what people upload to their sites. Critics have said the rules could restrict free speech on the platforms.
The rule comes just months before the General Data Protection Regulation (GDPR), which unifies data privacy laws across the EU, takes effect in May.
- EU General Data Protection Regulation (GDPR) policy (Tech Pro Research)
- GDPR: A cheat sheet (TechRepublic)
- WEF: Facebook, Twitter must police extremist content to avoid anti-free speech regulations (TechRepublic)
- British PM on terror threat: Let's blame Facebook, not our police cuts (ZDNet)
- Google's war on terror: 4 ways the search giant is fighting extremism online (TechRepublic)