Del Harvey sees too many of the same kind of tweet. It contains a screenshot of a message from Twitter's user support, saying that a post someone reported as abusive on the site doesn't violate Twitter rules. It's often accompanied by commentary: "Seriously?"
For Ms. Harvey, it's a personal criticism. She's the head of trust and safety at Twitter, and it's her job to make sure that people have tools to deal with harassment and abuse on the social-media site. But over her eight years at the company, she said, "It's very easy to find instances where we got to the wrong answer."
That's an understatement. Twitter's failure to curb harassment has been a main complaint of users, and was one reason the company failed to get a bid from potential acquirers when it was exploring a sale. But Ms. Harvey's team, after months of study and consultation with outside advisers, has an answer to the critics, in the form of changes to its customer-support system and new tools for people to block what they don't want to see.
Now, users will be able to mute certain words in their notifications, such as racial slurs and curse words -- an update that was in the works for many months. People can also block whole conversations. The changes may make it harder for abusers to reach their victims, decreasing their motivation to continue.
For anyone who spots language that violates the company's policy, there's a new option to report "hateful conduct" in addition to abuse and harassment. Meanwhile, Twitter is retraining everyone who has the potential to review user complaints to increase understanding of cultural issues.

In the instances where Twitter failed to resolve a report when it should have, "We really tried to look at why -- why did we not catch this?" Ms. Harvey said. "And maybe the person who did that front-line review didn't have the cultural or historical context for why this was a threat or why this was abuse."
Anti-Semitic threats of the sort that became more frequent during U.S. President-elect Donald Trump's campaign may not be obvious to reviewers who aren't educated in Holocaust history, for example. Different countries have different threatening words and phrases, based on their histories.

If new methods of abuse start to emerge, the team now has a way to send a quick update to the training -- though it won't be foolproof.
"I wish that I could say that after this launch we would never ever miss something again, but that is not realistic,'' Ms. Harvey said. "But hopefully we will be able to get it wrong less often.''
The updates are the company's most significant effort yet to mitigate its abuse problem. When co-founder Jack Dorsey became chief executive officer of Twitter last year, he named user safety one of his top priorities. The company made small updates -- like letting users report multiple harassing tweets at once.
But during his tenure, coinciding with the presidential campaign, abuse on Twitter has intensified and become more public. Twitter permanently banned Milo Yiannopoulos, known as @nero on the site, for leading harassment of "Ghostbusters" actress Leslie Jones. Several high-profile users, including a New York Times reporter, abandoned the platform after being targeted.
Ms. Harvey said Twitter's action on the issue has been slow in part because the company's engineers in the past saw other challenges as higher priorities. In her early days at the company, for example, technicians were often working to keep the site up and running, she said.
`"We certainly have not always moved as quickly as we would have liked to address this,'' Ms. Harvey said. "This launch is not going to mean that all of these issues are fixed. It's not going to mean that we get everything right, it's not going to mean that there's no longer abuse on Twitter, or that we're done. But it's a significant shift.''
-- Bloomberg News