Starting Tuesday, the newspaper will open the comments section on about one-fourth of its stories, up from 10%, with a goal of reaching 80% by year-end, according to Bassey Etim, the Times' community editor. The New York Times is using software from Jigsaw, a technology incubator within Google owner Alphabet, to screen what readers write, helping human reviewers spend their time on cases that require judgment calls. All top stories on the homepage during business hours will accept comments.
The media industry has struggled with how to take advantage of freewheeling online conversation without giving voice to hate speech or violent threats -- problems that have also dogged Google's YouTube and other tech companies including Facebook Inc. and Twitter Inc. Jigsaw has worked with The Guardian newspaper, The Economist and Wikipedia to help develop its software. Called Perspective, it's free for anyone to use and integrate into websites.
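Perspective is exposed as a public web API. As a rough sketch of how a site might integrate it, the snippet below builds the JSON request body and parses a score from the response; the `comments:analyze` endpoint and the `TOXICITY` attribute are from Jigsaw's public documentation, while the helper names and the sample usage are illustrative assumptions:

```python
import json

# Illustrative sketch: a site would POST this body, with its own API key, to
# https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze
def build_analyze_request(text, attributes=("TOXICITY",)):
    """Build the JSON body Perspective expects for one comment."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    """Extract the 0-1 summary score for one attribute from a response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

body = build_analyze_request("Nice little house you've got there.")
print(json.dumps(body))
```

The response maps each requested attribute to a probability-like score between 0 and 1, which the site can compare against its own moderation thresholds.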
"It has become too easy for trolls to dominate conversations online," Jigsaw Chief Executive Officer Jared Cohen said in a statement. He said this had led to comments sections being shut down entirely, which harmed free expression and constructive discourse. "The power of machine learning offers us an opportunity to tip the scales and reverse this trend."
Jigsaw wants to use technology -- especially machine learning -- to solve problems ranging from online harassment and bullying to the radicalization of terrorist recruits online. The division has about 60 employees, mostly based in New York.
The software isn't perfect. In a recent test Bloomberg News conducted on the project's website, Perspective flagged comments containing obscenities. But it failed to identify threatening comments such as "Nice little house you've got there. Pity if anything should happen to it," or "Remember, I know where you live." Jigsaw has said that when the software was deployed on a furniture retailer's website, it struggled with the multiple meanings of the word "screw." But the system is designed to learn and improve its accuracy over time.
The Times will use software developed by Jigsaw that groups similar comments so moderators can approve more of them at once. It scores each comment on three dimensions: its potential obscenity, its toxicity, and the likelihood that it would be rejected. Readers will be asked to modify comments deemed offensive. If they post a comment anyway, the software flags it to human moderators, who decide whether to approve it.
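The routing described above can be sketched as a simple triage rule. The three score names mirror the tags mentioned in the article, but the threshold value and the auto-approve/flag split are illustrative assumptions; the Times' actual moderation pipeline is not public:

```python
from dataclasses import dataclass

@dataclass
class Scores:
    obscenity: float   # 0-1: chance the comment is obscene
    toxicity: float    # 0-1: chance the comment is toxic
    rejection: float   # 0-1: chance a moderator would reject it

def triage(scores, threshold=0.8):
    """Auto-approve clearly clean comments; send the rest to humans.

    The 0.8 cutoff is a made-up example value, not the Times' setting.
    """
    if max(scores.obscenity, scores.toxicity, scores.rejection) < threshold:
        return "auto-approve"
    return "flag-for-human-review"
```

A rule like this is what lets a small moderation team handle far more volume: most comments clear the threshold automatically, and human judgment is reserved for the borderline cases.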
The Times has 14 people moderating comments. Automating the approval process isn't expected to lead to job cuts in that department, but it will make the department more efficient, Mr. Etim said.
Now, instead of posting 200 comments an hour, Times moderators will be able to post more than 1,000, Mr. Etim added. For politics stories, the technology will be able to flag off-topic comments and the Times will also compile lists of emerging topics to filter out, such as "late-breaking Trump insults." The Times won't allow comments for crime stories or certain obituaries, Mr. Etim said.
"There will be a little bit of yelling, but it will be based on the issue at hand rather than 'Your opinion is stupid,'" he said. "Now, you're going to have to explain why their opinion is stupid."
Last month, the Times announced it was eliminating the role of public editor, saying the position had become outdated because social media users already hold its reporters and editors accountable. The publisher also announced the formation of a Reader Center to make its coverage decisions more transparent and to better listen to its audience.
The Times is seeking to boost its online subscriptions so it's less reliant on declining print advertising, and it sees giving readers more chances to comment as part of that strategy.
"The extent to which we make a community that feels like a microcosm of the Times and a place to discuss the news in an urbane way, the better it is for our business," Mr. Etim said.
-- Gerry Smith and Jeremy Kahn, Bloomberg News