Google released a new tool on Thursday that could help businesses better identify and manage abusive comments and online harassment on their websites. The Google Perspective API uses machine learning to rate how “toxic” a given comment could be to a discussion.

Perspective, announced in a Google blog post, was born out of Jigsaw, a Google division that ramped up its troll-fighting efforts in September 2016. Perspective is part of a larger Jigsaw effort that, among other things, is studying "how computers can learn to understand the nuances and context of abusive language at scale," according to its website.

SEE: Electronic communication policy template (Tech Pro Research)

In the blog post, Jigsaw president Jared Cohen explained that Perspective works by reviewing comments and rating them against comments that human reviewers have labeled as "toxic." To train the model, the API studied hundreds of thousands of these human-reviewed comments, and it will continue to improve as it reviews more, Cohen wrote.
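The idea of scoring new comments against human-labeled examples can be illustrated with a minimal sketch. This is not Google's actual model (Perspective uses far more sophisticated machine learning); it is a toy word-frequency classifier built on a hypothetical labeled dataset, purely to show the training-then-scoring flow Cohen describes:

```python
# Toy illustration of training on human-labeled comments, then scoring
# new ones. Not Google's actual model; the labeled data is made up.
from collections import Counter

# Hypothetical human-reviewed training data (label 1 = toxic).
labeled_comments = [
    ("you are an idiot and nobody wants you here", 1),
    ("get lost you pathetic idiot", 1),
    ("thanks for sharing, great article", 0),
    ("interesting point, I had not considered that", 0),
]

def train(comments):
    """Count how often each word appears in toxic vs. clean comments."""
    toxic, clean = Counter(), Counter()
    for text, label in comments:
        (toxic if label else clean).update(text.lower().split())
    return toxic, clean

def toxicity_score(text, toxic, clean):
    """Fraction of words seen more often in toxic than in clean comments."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if toxic[w] > clean[w])
    return hits / len(words)

toxic, clean = train(labeled_comments)
print(toxicity_score("you pathetic idiot", toxic, clean))   # high score
print(toxicity_score("great article", toxic, clean))        # low score
```

As with Perspective, adding more labeled comments to the training set would sharpen the scores over time.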

Once Perspective scores a comment, that information is passed along to the publisher to use as they wish. According to Cohen's post, publishers can use the score to flag or remove comments, warn commenters about a comment's toxicity as it is being composed, or let readers sort comments by how toxic they are.

“Developers and publishers can use this score to give realtime feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information,” the Perspective website said.
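Those publisher-side uses are straightforward once a score is attached to each comment. The sketch below shows two of them, flagging for moderation and reader-side sorting; the field names and the 0.8 threshold are illustrative assumptions, not part of the API:

```python
# Sketch of acting on Perspective-style toxicity scores.
# Field names and the threshold are illustrative, not from the API.
scored_comments = [
    {"text": "Great write-up, thanks!", "toxicity": 0.03},
    {"text": "You people are all morons.", "toxicity": 0.92},
    {"text": "I disagree, and here is why...", "toxicity": 0.11},
]

FLAG_THRESHOLD = 0.8  # assumed cutoff for sending to a human moderator

# Flag high-scoring comments for human review...
flagged = [c for c in scored_comments if c["toxicity"] >= FLAG_THRESHOLD]

# ...or let readers sort the thread from least to most toxic.
sorted_thread = sorted(scored_comments, key=lambda c: c["toxicity"])

print(len(flagged))              # → 1
print(sorted_thread[0]["text"])  # → Great write-up, thanks!
```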

So far, The New York Times, The Guardian, The Economist, and Wikipedia have all used Perspective to moderate comments. The New York Times, for example, currently has a team dedicated to moderating some 11,000 comments per day, and Cohen’s post said that the newspaper is hoping to use Perspective to speed up the process and allow more comments on its site.

Perspective is the latest in a host of machine learning technologies that Google has made available to developers, alongside tools such as the TensorFlow library and the Cloud Machine Learning Platform. It could make it easier for small companies, startups, and media organizations to fight comment trolls without dedicating significant extra resources. However, there are concerns that it could be seen as a form of censorship.

The Perspective website noted that the team will release more machine learning models later in 2017. It is exploring models for languages other than English, as well as tools that may determine whether a comment is off-topic.

Developers can request API access here.
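For developers who do get access, a request to the API looks roughly like the sketch below. The endpoint path, request body, and response shape reflect the v1alpha1 API as publicly documented at launch and may change; the key is a placeholder, the sample response value is made up, and no real network call is made here:

```python
# Hedged sketch of a Perspective API exchange (v1alpha1 shapes as
# documented at launch; subject to change). No request is actually sent.
import json

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")  # placeholder key

def build_request(text):
    """JSON body asking Perspective to score a comment for toxicity."""
    return json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    })

def extract_score(response_body):
    """Pull the 0-1 toxicity summary score out of a response body."""
    data = json.loads(response_body)
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example response shaped like the documented output (value invented).
sample = '{"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.87}}}}'
print(extract_score(sample))  # → 0.87
```

In practice the body from `build_request` would be POSTed to `API_URL` with an HTTP client, and `extract_score` applied to the response.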

The 3 big takeaways for TechRepublic readers

  1. Google’s new Perspective API uses artificial intelligence technology to rate how “toxic” a comment could be to a particular discussion.
  2. The new API could be used to help smaller companies combat online harassment, but some could see it as a form of censorship.
  3. The team behind Perspective is working on new tools to examine off-topic comments or toxic comments in different languages.