
Google's war on terror: 4 ways the search giant is fighting extremism online

In a blog post, Google outlined four steps that it will take to combat terrorist content and redirect users to positive information.

Google is ramping up its fight against extremist content online, detailing in a Sunday blog post four new steps the company will take to combat online terrorism.

Online platforms like Facebook, Twitter, and Google have been used by extremists to share videos of executions, promote their ideology, and recruit new members. In the post, Google general counsel Kent Walker said that efforts have been made to curb this activity, but that more remains to be done.

Google already works with government and law enforcement agencies to remove this type of content, and it has built tools to help identify and take down terror-related photos and videos, the post said. Now the company has committed to four further steps: improving its technology's ability to detect such content, relying on more human experts to categorize it, taking a tougher stance on content that violates policy, and redirecting potential Islamic State recruits to anti-terrorist videos.

SEE: Facebook's secret weapon for fighting terrorists: Human experts and AI working together

1. Identification

The first step in Google's plan is to boost its use of technology to identify content that depicts terrorist acts or promotes terrorist ideals. The challenge, the post said, is that some content mentioning extremism may simply be a newscast informing viewers of a threat. Google is now using machine learning to analyze content, and 50% of the offending material it removed over the past six months was identified this way. Moving forward, the company will continue to train new models to distinguish what is, and what isn't, extremist content.
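The post doesn't describe how Google's models actually work, but a minimal sketch can illustrate the general technique of training a classifier to separate content that promotes extremism from content that merely reports on it. Everything in this Python example, from the toy transcripts to the labels and features, is invented for illustration and is not Google's system.

```python
# Minimal supervised text classifier: a sketch of the general technique,
# NOT Google's actual models. All training data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical transcripts labeled 1 (promotes extremism) or
# 0 (legitimate coverage, e.g., a newscast reporting on a threat).
texts = [
    "join our cause and take up arms against the unbelievers",
    "tonight's broadcast covers the ongoing threat posed by the group",
    "a recruitment video urging viewers to travel and fight abroad",
    "analysts discuss how the organization spreads its propaganda",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; borderline scores could be routed to human review.
score = model.predict_proba(
    ["a documentary examining the group's recruitment tactics"]
)[0][1]
print(f"probability the content violates policy: {score:.2f}")
```

The hard part is exactly the distinction the post calls out: a newscast and a propaganda clip can share much of their vocabulary, which is why ambiguous scores are worth handing to people.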

2. Human experts

Walker noted in the post that technology alone isn't a "silver bullet" in this fight. So, Google will rely more heavily on the independent human experts in its YouTube Trusted Flagger program to flag questionable content for removal. These experts identify violating content accurately 90% of the time, and Google plans to add 50 more professionals to the program to tackle "hate speech, self-harm, and terrorism," the post said.

This is similar to the approach Facebook has taken to counter extremism, which also pairs AI with human experts. For business leaders, it highlights the need to hire cybersecurity experts you trust to help implement the technology securing your organization. As it stands, there is still nuance in security that needs a human touch, as the sketch below illustrates.
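Neither Google nor Facebook has published its review workflow, but the AI-plus-human pattern is commonly implemented as confidence-based triage: the model acts alone only when it is very sure, and hands ambiguous cases to people. The thresholds and queue below are assumptions for this sketch, not either company's real pipeline.

```python
# Illustrative human-in-the-loop triage; thresholds and routing are
# assumptions for this sketch, not YouTube's actual workflow.
from dataclasses import dataclass, field

AUTO_REMOVE = 0.95   # high confidence: act automatically
HUMAN_REVIEW = 0.50  # ambiguous: escalate to a trusted flagger

@dataclass
class TriageQueue:
    removed: list = field(default_factory=list)
    escalated: list = field(default_factory=list)

    def triage(self, video_id: str, score: float) -> str:
        """Route a video based on the classifier's confidence score."""
        if score >= AUTO_REMOVE:
            self.removed.append(video_id)    # machine decides
            return "removed"
        if score >= HUMAN_REVIEW:
            self.escalated.append(video_id)  # human expert decides
            return "escalated"
        return "kept"

queue = TriageQueue()
for vid, score in [("a1", 0.97), ("b2", 0.62), ("c3", 0.10)]:
    print(vid, queue.triage(vid, score))
```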

Image: iStockphoto/Anatoliy Babiy

3. Demote videos

Google will also crack down on videos that violate its policies, such as "videos that contain inflammatory religious or supremacist content," the post said. Such videos will be preceded by a warning, and Google won't let their creators make money off of them. They will also be harder to come across and will see lower engagement, Walker said in the post.
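The post describes the outcome rather than the mechanism, but the restrictions it lists can be pictured as a "limited state" applied to a video record. The field names and behavior below are hypothetical, inferred from the post's description, not YouTube's real data model or API.

```python
# Hypothetical "limited state" for a borderline video, inferred from the
# restrictions the post describes; not YouTube's real data model or API.
from dataclasses import dataclass

@dataclass
class VideoState:
    video_id: str
    interstitial_warning: bool = False  # warning shown before playback
    monetization_enabled: bool = True   # creator can earn ad revenue
    recommendable: bool = True          # surfaced in search/suggestions
    engagement_enabled: bool = True     # comments, likes, endorsements

def apply_limited_state(video: VideoState) -> VideoState:
    """Demote a policy-violating video without deleting it outright."""
    video.interstitial_warning = True   # precede the video with a warning
    video.monetization_enabled = False  # creator makes no money off it
    video.recommendable = False         # more difficult to come across
    video.engagement_enabled = False    # lower engagement
    return video

print(apply_limited_state(VideoState("xyz123")))
```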

4. Targeted advertising

In an effort to push back against online recruiting, YouTube will employ what is known as the "Redirect Method" more broadly. With this approach, YouTube's targeted advertising identifies users who could be potential Islamic State recruits and sends them to anti-terrorist videos, in an effort to change their minds about joining. In the past, this process has had a high rate of success, the post said.
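The post doesn't explain how the targeting works, but the Redirect Method's core idea can be sketched as matching risk signals, here simplified to search keywords, and serving counter-narrative videos in place of ordinary ads. The keyword list and playlist URLs below are invented placeholders.

```python
# Toy sketch of the Redirect Method's core idea: match at-risk queries
# against a curated keyword list and serve counter-narrative videos.
# The keywords and playlist URLs below are invented placeholders.
REDIRECT_KEYWORDS = {"join the caliphate", "recruitment video"}
COUNTER_NARRATIVE_PLAYLIST = [
    "https://example.com/defector-testimony",
    "https://example.com/cleric-debunks-claims",
]

def ads_for_query(query: str) -> list[str]:
    """Return counter-narrative videos when a query signals recruitment risk."""
    normalized = query.lower().strip()
    if any(keyword in normalized for keyword in REDIRECT_KEYWORDS):
        return COUNTER_NARRATIVE_PLAYLIST
    return []  # otherwise serve nothing special

print(ads_for_query("where to find a recruitment video"))
```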


About Conner Forrest

Conner Forrest is a Senior Editor for TechRepublic. He covers enterprise technology and is interested in the convergence of tech and culture.

