Twitter was flagging tweets containing the word "queer" as potentially "offensive content"


On Wednesday, Jessica noticed something strange on Twitter. Another user, a frequent conversation partner, had sent her a tweet, but she hadn't received an alert.

"It didn't show up in my notifications, the only reason I saw it was because I opened up that tweet again and saw the hidden replies," Jessica, who asked that Mic not use her last name, said. Twitter had marked the tweet — which contained the word "queer" — as containing potentially sensitive content. Users must adjust their settings in order to see content like violence, nudity or hate speech.

But the tweets Jessica saw being marked as sensitive didn't contain sexual or violent imagery, just the word "queer." She did a quick search and discovered that other users were experiencing the same issue. Twitter was putting the word "queer" in the same category as racial slurs. Jessica's discovery spread quickly across the microblogging service.

Understandably, some Twitter users were upset; some saw a deliberate attempt by Twitter to silence LGBTQ voices.

A Twitter spokesperson declined to comment on the record, but the real culprit appears to have been Twitter's algorithm for marking sensitive content. At some point on or before June 21, the algorithm began to treat "queer" as sensitive. Though it's been reclaimed by the LGBTQ community in recent years, "queer" has historically been used as a slur, and the algorithm marked it as such.
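Twitter has not explained how its classifier works, but the failure mode is consistent with a simple blocklist approach: a filter that matches words against a static list of slurs has no way to tell a reclaimed term from a hateful one. The sketch below is purely illustrative — the term list and function name are invented, not Twitter's actual code.

```python
# Hypothetical illustration only -- not Twitter's actual system.
# A naive blocklist filter flags any tweet containing a listed term,
# regardless of the intent or context in which the word is used.

SENSITIVE_TERMS = {"queer"}  # imagine a slur list that mistakenly includes a reclaimed word

def is_potentially_sensitive(tweet_text: str) -> bool:
    """Flag a tweet if any of its words appears on the blocklist."""
    words = (word.strip(".,!?").lower() for word in tweet_text.split())
    return any(word in SENSITIVE_TERMS for word in words)

print(is_potentially_sensitive("Proud to be queer!"))  # True -- flagged despite benign intent
```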

When notified about the issue, Twitter said it was working on a fix. The problem appeared to be resolved by Thursday afternoon, though a Twitter spokesperson would not say how long the fix had taken.

It was a brief but embarrassing flub for Twitter, which has struggled to contain harassment, hate speech and threats of violence on a platform it says reaches 313 million monthly active users.

In November of last year, Twitter rolled out a feature allowing users to "mute" certain words or phrases they found objectionable, along with a streamlined way to report harassment, moves it hoped would make the service more user-friendly. But users discovered that Twitter often didn't take action after they reported abusive tweets, and trolls were able to circumvent the mute feature by deliberately misspelling words.
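Twitter hasn't published how muting works under the hood, but the workaround users reported is what you would expect from literal string matching: a deliberate misspelling no longer matches the muted term, so the tweet slips through. The following is a hypothetical sketch — the function and word list are invented for illustration.

```python
# Hypothetical illustration only -- not Twitter's actual mute implementation.
# Exact word matching is easy to defeat with a small spelling change.

MUTED_WORDS = {"troll"}  # words a user has asked to mute (example value)

def is_muted(tweet_text: str, muted_words) -> bool:
    """Hide a tweet only if it contains a muted word verbatim."""
    words = {w.strip(".,!?").lower() for w in tweet_text.split()}
    return bool(words & muted_words)

print(is_muted("You are a troll", MUTED_WORDS))  # True  -- tweet is hidden
print(is_muted("You are a tr0ll", MUTED_WORDS))  # False -- misspelling slips through
```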

And in February of this year, Twitter introduced "safe search," which excludes potentially abusive content from search results, as well as a feature that keeps the replies users want to see prominent while hiding those they don't.