On Tuesday, October 31, representatives from Facebook, Twitter, and Google testified before a US Senate subcommittee about how their firms tackle extremist content online and the roles their platforms played in alleged Russian meddling in the 2016 US presidential election.

While each representative shared their company's plan for tackling this content, all three companies are relying on one secret weapon to help them accomplish their goals: artificial intelligence (AI).

While the potential for consumer applications of AI and machine learning is huge, it is still far from being fully realized. In the enterprise, however, these tools serve a much more practical purpose: they provide the scale needed to sift through and sort massive amounts of data, exactly the kind of data these companies must comb through in order to wipe out the content in question.

SEE: Social media policy template (Tech Pro Research)

When questioned about his firm's manual efforts to combat this content, Facebook general counsel Colin Stretch explained that Facebook has more than five million advertisers. And while the company has thousands of employees working on the issue, that simply isn't enough manpower to tackle the problem.

Instead, Stretch said in his written testimony that Facebook is using a combination of human experts and AI tools to combat terrorist content on its platform, an initiative it first announced in June. In his testimony, he also noted that the company is using machine learning to identify ads more readily and to find the advertisers behind those ads and “require them to verify their identity,” if they haven’t already.

Twitter, similarly, is dealing with massive issues of scale in working on these problems. Sean Edgett, acting general counsel for Twitter, said in the subcommittee hearing that Twitter had been pulling employees from other departments to work on these projects as well, but that approach can only go so far.

In his written testimony, Edgett said that Twitter is using machine learning technologies for three key purposes: to spot spam, to figure out how fake accounts affect users, and to more accurately understand the impact of election-related advertisements.
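None of the companies disclosed details of their models, but the spam-detection use case Edgett describes is, at its core, a text-classification problem. As a purely illustrative sketch (the training examples, labels, and naive Bayes approach below are our own, not anything Twitter has confirmed using), here is a minimal spam classifier in Python:

```python
from collections import Counter
import math

# Toy training data: (text, label) pairs. These examples are invented for
# illustration; a production system would train on millions of labeled posts.
TRAIN = [
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("limited offer win cash", "spam"),
    ("meeting moved to noon", "ham"),
    ("see you at the game tonight", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

def train(examples):
    """Count word frequencies per class for a naive Bayes model."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the most likely label, using log probabilities
    with add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, class_counts = train(TRAIN)
print(classify("win free cash now", word_counts, class_counts))    # spam
print(classify("team meeting tomorrow", word_counts, class_counts))  # ham
```

Real systems operate on billions of messages and draw on far richer signals (account metadata, posting patterns, network behavior), but the scaling argument is the same: once a model is trained, classifying each new message is cheap enough to run on everything, which is exactly what human reviewers alone cannot do.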

Google has been investing in machine learning and AI for years. Much like Facebook, the search giant has been using AI alongside human employees to fight terrorist content online. Richard Salgado, director of law enforcement and information security for Google, wrote in his written testimony that Google had been adding new classifiers to improve its detection efforts.

While the results of the Senate Committee on the Judiciary’s Subcommittee on Crime and Terrorism will likely have a major impact on advertising, marketing, and communications, the tools that these firms are using will set the stage for what AI can help businesses accomplish in the future. As massive enterprises seek to scale their business processes to new heights, it will be AI and machine learning that help them get there.

The 3 big takeaways for TechRepublic readers

  1. Representatives from Google, Facebook, and Twitter recently testified before US Senators about how their firms are combating Russian misinformation and extremist content on their platforms.
  2. AI and machine learning are playing a critical role in all three companies, as they use the technologies to scale their efforts in dealing with problematic content.
  3. These technologies will continue to impact the business world, opening up new levels of productivity and efficiency.