Twitter Inc.’s highest-profile users—those with lots of followers or particular prominence—often receive a heightened level of protection from the social network’s content moderators under a secretive program that seeks to limit their exposure to trolls and bullies.
Code-named Project Guardian, the internal program includes a list of thousands of accounts most likely to be attacked or harassed on the platform, including politicians, journalists, musicians and professional athletes. When someone flags abusive posts or messages related to those users, the reports are prioritized by Twitter’s content moderation systems, meaning the company reviews them faster than other reports in the queue.
Twitter says its rules are the same for all users, but Project Guardian ensures that potential issues related to prominent accounts—those that could erupt into viral nightmares for the users and for the company—are dealt with ahead of complaints from people who aren’t part of the program. This VIP group, which most members don’t even know they’re a part of, is intended to remove abusive content that could have the most reach and is most liable to spread on the social-media site. It also helps protect the Twitter experience of those prominent users, making them more likely to keep tweeting—and perhaps less apt to complain about abuse or harassment issues publicly.
“Project Guardian is just the internal name for one of many automated tools we deploy to identify potentially abusive content,” Katrina Lane, vice president for Twitter’s service organization, which runs the program, said in a statement. “The techniques it uses are the same ones that protect all people on the service.”
The list of users protected by Project Guardian changes regularly, according to Yoel Roth, Twitter’s head of site integrity, and doesn’t only include famous users. The program is also used to increase protection for people who unintentionally find the limelight because of a controversial tweet, or because they’ve suddenly been targeted by a Twitter mob.
That means some Twitter users are added to the list temporarily while they have the world’s attention; others are on the list at almost all times. “The reason this concept existed is because of the ‘person of the day’ phenomenon,” Roth says. “And on that basis, there are some people who are the ‘person of the day’ most days, and so Project Guardian would be one way to protect them.”
The program’s existence raises an obvious question: If Twitter can more quickly and efficiently protect some of its most visible users—or those who have suddenly become famous—why can’t it do the same for all accounts that find themselves on the receiving end of bullying or abuse?
The short answer is scale. With more than 200 million daily users, Twitter receives too many abuse reports to handle them all at once. Reports are therefore prioritized using several data points, including how many followers a user has, how many impressions a tweet is getting, and how likely the tweet in question is to be abusive. An account’s inclusion in Project Guardian is just one of those signals, though people familiar with the program believe it’s a powerful one.
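As a rough illustration of how that kind of signal-weighted triage could work in principle, the sketch below scores hypothetical reports and reviews the highest-scoring ones first with a priority queue. The weights, the scoring formula and names such as `guardian_listed` are assumptions made for illustration only; nothing here reflects Twitter’s actual code or thresholds.

```python
# Hypothetical sketch of signal-based report triage, NOT Twitter's actual system.
# The signal names, weights, and score formula below are illustrative assumptions.
import heapq
import math
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedReport:
    # heapq is a min-heap, so the negated score is stored to pop the
    # highest-priority report first.
    sort_key: float
    report_id: str = field(compare=False)


def priority_score(follower_count: int,
                   tweet_impressions: int,
                   abuse_likelihood: float,
                   guardian_listed: bool) -> float:
    """Combine the signals the article describes into one score (illustrative only)."""
    score = (
        0.3 * math.log1p(follower_count)      # dampen raw follower counts
        + 0.3 * math.log1p(tweet_impressions)  # dampen raw impression counts
        + 0.4 * abuse_likelihood * 10          # model-estimated abuse likelihood, 0..1
    )
    if guardian_listed:
        # Inclusion in the protected list is treated as one more signal, not a guarantee.
        score += 5.0
    return score


# Build a queue of (hypothetical) flagged reports and review the top-scoring ones first.
queue: list[QueuedReport] = []
sample_reports = [
    # (report_id, followers, impressions, abuse_likelihood, guardian_listed)
    ("r1", 120, 300, 0.40, False),
    ("r2", 2_500_000, 80_000, 0.35, True),
    ("r3", 900, 1_200, 0.90, False),
]
for rid, followers, impressions, likelihood, listed in sample_reports:
    heapq.heappush(queue, QueuedReport(-priority_score(followers, impressions,
                                                       likelihood, listed), rid))

while queue:
    item = heapq.heappop(queue)
    print(f"review {item.report_id} (score={-item.sort_key:.2f})")
```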