Wednesday, Apr 19 2017

How Google’s Anti-Harassment Tool is Shutting Down Trolls


Google has taken a strong stance against online harassment, leading to the development and release of its latest tool: Perspective. But how does this new tool help shut down trolls before they can do real harm? By bringing machine learning into the battle.


Perspective is designed to review comments left by users on sites that adopt the technology. Each comment is scored on how toxic it is likely to be to the larger discussion. The underlying model was trained on hundreds of thousands of comments that human reviewers had judged toxic, giving the system a framework for evaluating new ones.
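As a concrete illustration, a request to Perspective's `comments:analyze` endpoint and the extraction of its toxicity score might look like the sketch below. The endpoint and field names follow Google's published API shape at the time of writing; treat them as assumptions if the API has since changed, and note that an actual call requires an API key.

```python
import json

# Public Perspective API endpoint (v1alpha1 at the time of writing).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(comment_text):
    """Build the JSON body for a Perspective 'comments:analyze' call."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},  # request a toxicity score
    }

def toxicity_score(response):
    """Pull the 0-to-1 summary toxicity score out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Actually sending the request (API key required) would look roughly like:
#   requests.post(ANALYZE_URL, params={"key": API_KEY},
#                 data=json.dumps(build_analyze_request("some comment")))
```

The score comes back as a fraction between 0 and 1, which is what makes the flagging and sorting workflows described below possible.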

Language and Context

The Perspective API is an offering from Google’s Jigsaw division, a group that increased its efforts in the troll-fighting arena in late 2016. The hope is to create solutions that understand the nuances of human language to identify abusive comments.


In its current iteration, Perspective can detect insults and potentially harassing comments based on word choice and overall sentence construction. Once detected, the comment is scored according to its perceived threat level, often yielding higher accuracy than traditional keyword blacklists, at speeds no human moderator can match.

Goals of Perspective

The goal behind Google’s Perspective is to prevent the posting of potentially harmful comments, at least until they can be reviewed by a site admin or other moderating authority. Comments containing potentially inappropriate content can be flagged for review or withheld until they have been specifically approved for display. This gives sites greater control over the conversation when a message appears to be harmful to their readership.


The Perspective API also provides value on the user end. In some cases, a comment can be reviewed before posting, alerting the commenter to the potentially toxic language before they even submit it to the conversation. Another application involves allowing readers to sort comments based on their potential level of toxicity, helping viewers to avoid potentially harassing comments in cases where they aren’t automatically removed.
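The two workflows above, holding high-scoring comments for moderator review and letting readers sort by toxicity, can be sketched in a few lines. This is a minimal illustration assuming each comment already carries a toxicity score between 0 and 1 (e.g. from the API); the threshold and field names are hypothetical, not part of Perspective itself.

```python
# Hypothetical cutoff above which a comment is held for moderator review.
FLAG_THRESHOLD = 0.8

def split_for_review(comments, threshold=FLAG_THRESHOLD):
    """Return (visible, held): held comments await moderator approval."""
    visible = [c for c in comments if c["toxicity"] < threshold]
    held = [c for c in comments if c["toxicity"] >= threshold]
    return visible, held

def sort_least_toxic_first(comments):
    """Reader-side view: surface the least toxic comments first."""
    return sorted(comments, key=lambda c: c["toxicity"])
```

In practice a site would tune the threshold to its own audience, which is exactly the kind of editorial judgment the article notes Perspective is meant to support rather than replace.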

Potential Risks

There is some concern that solutions like Perspective could be used to censor comments rather than simply create a safe space for participants. For example, quotes from people involved in current news stories could be automatically removed if the words are deemed potentially abusive, even when the discussion is about that very language. Additionally, it could allow a site to remove comments that don’t match its point of view, regardless of whether the language or tone is appropriate, limiting free discussion of controversial topics.


However, some point out that the use of blacklists can also yield similar results, especially since they are incapable of adjusting for context. Additionally, Perspective is intended to be a step in the comment moderation process and not the sole determining factor in what should be removed and what should stay.


Ultimately, Perspective is a tool designed to help moderators maintain control over potential harassment more effectively than currently available solutions. Only time will tell if the efforts actually yield positive results.


If you are interested in learning more about today’s technology developments, are looking for a new job in an IT field, or need to find top tech candidates for an open position, The Armada Group can help you on your way. Contact us to speak with a recruitment professional about your needs today.