A mute filter on Twitter is not a solution to rampant abuse
Imagine sitting down and making a list of all of the discriminatory slurs and insults trolls throw at you on Twitter. You try to think like your abusers, anticipating the language they'll use to harass you, adding to the ever-growing Rolodex of hate.
This could be the next tool for Twitter users to add to their lean anti-troll toolbox. The feature, called "muted words," would let users choose which keywords and phrases they want to magically disappear from their feeds. The Next Web pointed it out on Monday after some users spotted the feature on Sunday; it has since been disabled.
Twitter hasn't said whether this feature will be rolled out to users on the Twitter app or website, but people are already calling it the next big solution to online abuse.
This is where I disagree. I do believe that features intended to make Twitter a more hospitable platform are valuable, but viewing this "muted words" feature as a solution to unwanted and targeted communication is incorrect for a host of reasons.
First, TweetDeck already offers this kind of keyword muting, and it hasn't exactly been an abuse-killer. Users aren't flocking to TweetDeck to escape their abusers, because simply muting some keywords and phrases won't ensure a harassment-free experience.
As game developer Brianna Wu pointed out to me on Twitter, you also have to include misspelled insults. Illiterately spewed hate will find its way through the cracks, and so will discriminatory memes (you can't mute Pepe!) and edited images (like a gold star on your chest and a bullet hole through your head).
Calling this a solution to abusive behavior on the platform also sets a dangerous precedent, one that puts the onus on the user to create their own harassment-free experience. Muting words doesn't get rid of threats, and it doesn't hold abusers accountable: if you can't see the abuse, you can't report it, and online harassment should carry more severe consequences than a filtered timeline. This isn't a solution; it's a set of earplugs.
Using a mute feature to drown out abuse will also encourage trolls to get more creative with their language. We saw this with Operation Google, in which 4chan users invented a new racist code to avoid being flagged by Google's AI tool or Twitter's abuse team, and with (((echoes))), in which three sets of parentheses are placed around a Jewish surname to identify and target its owner. The same strategy played out with the recently trending #HillaryForPrision, a hashtag intentionally misspelled so the pro-Donald Trump slogan could spread while skirting Twitter's censors.
If it becomes common practice to ignore abuse rather than confront it, trolls might simply turn to new codes, more image-based abuse and deliberate typos to circumvent the feature.
But hey, it's an awesome feature for avoiding spoilers.