Twitter has put its trolls on notice.
On Tuesday, Twitter released promising preliminary results from a test of its new proactive troll-filtering tactic. It wanted to see if filtering (but not deleting) content from accounts that exhibited “trolling behavior” could make Twitter more of a platform for conversation and sharing, and less of an adversarial cesspool.
Today we are introducing new behavior-based signals into how Tweets are organized and presented in areas like conversations and search.
This is to improve the health of the conversation and improve everyone’s Twitter experience.
— Twitter Safety (@TwitterSafety) May 15, 2018
In March, CEO Jack Dorsey announced that Twitter would work to measure and then improve the “conversational health” of the platform. The initiative came in response to revelations of how Russian troll farms had used the platform to inflame the American public. Dorsey’s tweet storm announcing the initiative also seemed to say that Dorsey was taking an earnest look in the mirror at what the platform he created had become, and what it had done to the world. That was a welcome change of tune for the same company that had, just a month prior, continued to obfuscate Russian trolls’ use of Twitter.
Dorsey said that he wanted Twitter to undergo something of a reckoning, in which it had to actually define what it wanted “healthy conversation” to be. At least publicly, the definition of “conversational health” is still something Twitter is working out; in April, David Gasca, Twitter’s product manager for health, said that Twitter had received 230+ responses to its March Request for Proposals on how to best define, measure, and then improve conversational health on Twitter.
Thank you everyone who submitted @Twitter Health proposals! 230+ proposals from institutions globally across topics: echo chambers, bots/misinformation, healthy discourse, info analyses & more. We’ll be reviewing over the coming weeks & will follow up with semi-finalists then! https://t.co/IVffcigEcF
— Gasca 🔥🦉 (@gasca) April 16, 2018
But it appears that the experiment is already underway. For its first improvements to “conversational health,” Twitter decided to see if it could reduce the amount of “disruptive behavior” by trolls.
“Some troll-like behavior is fun, good and humorous,” Gasca and Del Harvey, Twitter’s VP of trust and safety, wrote in a post announcing the test. “What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search.”
To do so, Twitter says it found a way to identify accounts that exhibit behavioral markers of trolling. These markers include not having a verified email address, a high volume of tweets directed at accounts the user doesn’t follow, and more.
It then delisted the content posted by these accounts from search results. And since so much trolling takes place in the responses to tweets, responses from these accounts would only be visible after clicking the “see more responses” option.
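In spirit, that kind of behavior-based downranking could look something like the sketch below. This is purely illustrative Python: the class names, weights, and thresholds are hypothetical, since Twitter hasn’t published its actual signals or scoring.

```python
# Hypothetical sketch of behavior-based downranking, loosely modeled on the
# signals Twitter describes (unverified email, high volume of tweets aimed at
# accounts the user doesn't follow). Weights and thresholds are invented.

from dataclasses import dataclass


@dataclass
class Account:
    email_verified: bool
    tweets_at_non_followers: int  # recent tweets aimed at non-followed accounts
    total_recent_tweets: int


def troll_score(acct: Account) -> float:
    """Combine behavioral markers into a rough 0-1 score."""
    score = 0.0
    if not acct.email_verified:
        score += 0.5
    if acct.total_recent_tweets:
        # Share of recent tweets directed at people the account doesn't follow.
        ratio = acct.tweets_at_non_followers / acct.total_recent_tweets
        score += 0.5 * ratio
    return score


def rank_replies(replies: list[tuple[Account, str]], threshold: float = 0.7):
    """Split replies into those shown by default and those tucked behind a
    'see more responses' link. Nothing is deleted -- only de-emphasized."""
    visible = [text for acct, text in replies if troll_score(acct) < threshold]
    hidden = [text for acct, text in replies if troll_score(acct) >= threshold]
    return visible, hidden
```

The key point the sketch captures is that no content is removed; high-scoring accounts simply get less default visibility.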
Apparently, in its test markets, Twitter saw a 4% drop in abuse reports from search, and an 8% drop from comments. That’s nothing to sneeze at!
The post announcing the test explained the key challenge: how to minimize the voices whose only aim was to inflame or bully, but who weren’t actually posting content or behaving in ways that violated Twitter’s terms of service. Or, as Gasca and Harvey wrote, “how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?”
Filtering rather than deleting content or suspending accounts essentially shrinks the microphone of trolls looking to stir up trouble. They can still post to their heart’s content — so there’s no “censorship” here — but the likelihood that people will see (and engage with) their content is just a bit lower.
Then again, Twitter’s trolls will surely insist that it’s censorship anyway. Ok.
Filtering based on behavioral markers is also a proactive tactic. That addresses a frequent criticism of social and digital media companies: that they react to violating or inappropriate content instead of preventing it in the first place. Some asked, for example, why so many Facebook groups’ ties to Russia weren’t noticed earlier, since they carried obvious markers, such as paying in roubles for ads about American protests. And on YouTube, horrifying videos have made it onto the platform’s kids channels, racking up thousands of views from children before parents noticed and reported the content.
Of course, proactively preventing abuse without chilling amounts of profiling or cries of censorship is a difficult challenge. Even in this new experiment, trolls could get wise to Twitter’s behavioral flagging and adjust their behavior to appear more organic. Mashable has asked Twitter whether there are additional indicators not mentioned in the post, and whether Twitter will intentionally keep some of its markers private to avoid manipulation by sneaky, determined trolls. We’ll update this post if and when we hear back.
Additionally, abuse reports can certainly reflect whether users are having a bad time on Twitter — but it takes a big, trolling push to get users to actually report an account instead of just ignoring it. It’s not yet clear what other markers Twitter might use to measure conversational health.