How Robots Might Moderate Our Comments
You may have noticed: some people on social media and in website comment sections simply don’t hold back. They leave comments that are abusive, insulting, condescending… if you’re a community manager trying to keep things ‘fair and balanced’, you might feel a little overwhelmed.
Jigsaw, part of Alphabet (basically, Google), along with Google’s Counter Abuse Technology Team (yes, it exists), released a new tool last week called Perspective, which is being trialled by a host of newspapers and websites.
The tool rates comments on a scale of toxicity, giving these sites the ability to highlight (and potentially ban) comments that are abusive or insulting – which, if you think about it, could be a huge time-saver, given the toxic nature of some newspaper comment sections these days.
Perspective is interesting from a number of points of view – but let’s first take it for what it is: a machine-learning model, built initially to help us ‘host better conversations’, that scores the perceived impact a comment might have on a conversation.
The first model identifies whether a comment could be perceived as ‘toxic’ to a conversation, but the developers are looking at further models using the same API.
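To make that concrete, here’s a minimal sketch of what a call to the Perspective API looked like at launch, in Python. The endpoint and field names are taken from the public documentation of the time, but treat them as assumptions to check – and you’d need to request your own API key from Google.

```python
import requests

# Hypothetical placeholder - you'd request your own key from Google.
API_KEY = "YOUR_API_KEY"

# The comments:analyze endpoint, as documented at launch (v1alpha1).
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text):
    """Ask Perspective to score a comment for TOXICITY (0.0 to 1.0)."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    data = response.json()
    # The summary score is Perspective's overall toxicity estimate.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You make a fair point, thanks."))  # low, e.g. ~0.03
print(toxicity_score("You are an idiot."))               # high, e.g. ~0.9
```

The score comes back between 0 and 1 – the higher it is, the more likely readers are to perceive the comment as toxic.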
Let’s take some examples. Obviously, there’s the US election – and the following comments rated very high on the Toxic scale.
There weren’t many, but there were some that rated very low on the Toxic scale:
You can even have a go yourself at perspectiveapi.com. Here are a couple I tried – the results were quite good.
Comments Are Content
Comments have been around since the early days of the Internet – they’re Web 1.0. They pre-date Friends Reunited. Comment sections were one of the first targets of link-spam, and there are still “SEO companies” who think they can spam them to ‘boost their rankings’.
While there are plugins and strategies to fight this, there hasn’t yet been anything to actively moderate comment sections for you.
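This is where a tool like Perspective could slot in. Here’s a sketch of what an automated first-pass moderation hook might look like – it assumes the hypothetical toxicity_score() helper from the earlier example is in scope, and the 0.8 threshold is an arbitrary choice you’d tune per site:

```python
# A sketch of an automated pre-moderation hook. Assumes the
# hypothetical toxicity_score() helper from the earlier sketch;
# the threshold is arbitrary and would need tuning per site.

TOXICITY_THRESHOLD = 0.8

def moderate_comment(comment_text):
    """Decide what to do with an incoming comment before publishing."""
    score = toxicity_score(comment_text)
    if score >= TOXICITY_THRESHOLD:
        # Don't publish automatically - queue for a human moderator.
        return ("held_for_review", score)
    return ("published", score)

status, score = moderate_comment("This article is rubbish and so are you.")
print(status, round(score, 2))  # likely 'held_for_review'
```

The point isn’t the threshold itself – it’s that the machine does the first pass, leaving human moderators to judge only the borderline cases.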
What we need to bear in mind is that comments are your content. By opening comments up to readers, you allow them to add to your work. You could go from a 1,000-word article to a 2,000-word page thanks to the contributions of your readers.
If the conversation is diluted by a swear-fest, then half of your content becomes either badly written or ‘toxic’.
It therefore wouldn’t be a surprise to see your rankings for that specific page gradually drop as the comments pile up and start to dilute your own content.
Comments are content, and so over time, larger websites have developed processes for moderating comments. Recognising the value of allowing readers to join in the conversation, newspapers have effectively created communities, and that requires community management. One of the first steps newspapers took was to stop taking comments on sensitive issues.
Even the Daily Mail sometimes takes the sensitive step of not allowing comments.
Why not try the New York Times’ comment moderation test to see how you would fare?
https://www.nytimes.com/intera...
The Robots Learn From Us
And here’s where we are now, in 2017, with the robots starting to take over. But in Perspective’s case, the robots are taking human data and learning from it. Much of the data fed into Perspective comes from surveys – humans rating real comments – and machine learning takes that data and runs with it, learning the structures and arguments we put in, and the reactions to our content.
So, as Perspective takes up the toxicity challenge, this is just the first step. The API is simply a method of interpreting language used online.
Imagine, on Twitter, being told that you cannot publish your tweet as 99% of people would rate it as offensive. Imagine, on Facebook, being told that you cannot publish your post, as 99% of the facts contained within it are false.
Toxicity is a filter – but why not veracity? Indeed, why not insight and quality? If Twitter, or any other social network or search engine, could determine which content is least offensive, most factually correct, and most insightful – well, wouldn’t it make sense for that content to be pushed to the top?
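To make the idea concrete, here’s a purely speculative sketch. Perspective only offers a toxicity model today, so the veracity and insight scores below are invented attributes, and the weights are arbitrary – but it shows how multiple signals could drive a ranking:

```python
# Purely speculative: 'veracity' and 'insight' models don't exist yet.
# This sketch just shows how multiple scores could drive a ranking.

comments = [
    {"text": "Nonsense, fake news!",        "toxicity": 0.75, "veracity": 0.10, "insight": 0.05},
    {"text": "The report actually says...", "toxicity": 0.05, "veracity": 0.90, "insight": 0.80},
    {"text": "Interesting point about X.",  "toxicity": 0.02, "veracity": 0.60, "insight": 0.55},
]

def quality(c):
    # Reward factual, insightful comments; penalise toxic ones.
    # These weights are illustrative, not a real formula.
    return 0.5 * c["veracity"] + 0.4 * c["insight"] - 0.6 * c["toxicity"]

# Best comments float to the top, toxic ones sink.
for c in sorted(comments, key=quality, reverse=True):
    print(round(quality(c), 2), c["text"])
```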
From Popularity to Robot-Friendly
And maybe this might shift the attention a little. Instead of targeting popularity and clicks, bloggers and journalists might have to start targeting quality and insight. OK, perhaps that’s utopian, and it ignores basic human behaviour (after all, why do we still have Outbrain ads at the bottom of every newspaper article asking us to find out about the daughter Donald Trump doesn’t talk about?).
But if the robots are forcing us to tone down the rhetoric, and potentially to up the fact-checking and insight – surely that’s a good thing?
Therefore, over the next few years, expect to develop a deeper understanding of machine learning, and of the intricacies of getting your content pushed higher – and not banned – by the robots lurking in the background.