NEW YORK (AdAge.com) -- In the past year, Twitter grew up, became mainstream and finally admitted it had a problem: a spam problem.
Some of the same characteristics that helped Twitter grow like a weed have also made it vulnerable, not just to spammers but also to unscrupulous or merely misguided marketers trying to use the service. And the faster Twitter grows -- to as many as 50 million tweets a day (about 600 a second), up from 2.5 million a day a year ago -- the more appealing it becomes to both.
During the past year, Twitter's "trust and safety" unit, the division charged with policing the network for fraud and abuse, has grown to 22 people, the largest group at the 140-employee company.
Twitter's challenge is to filter out the bad actors at the same time it rolls out an advertising platform that generates revenue from marketer interest in real-time search, while making sure the marketing itself doesn't destroy the medium it's trying to leverage. The dirty secret of Twitter's war on spam? A significant amount of it emanates from clumsy marketers who just don't know any better.
"Even sophisticated marketers will fall prey to these things," said Del Harvey, the unit's director. "They're in a rush. They just launched a Twitter account and have a big announcement to make and [need to] add tons of followers. It's not unheard of for legit marketers to do things that aren't OK."
The most-blamed culprit? "The intern always gets blamed," she said.
Ms. Harvey took the helm of Twitter's anti-spam effort in late 2008, but she was alone on the case until the following May, when the department started staffing up. By Twitter's account, volume peaked in August 2009, when spam routinely spiked to more than 10% of all messages on the service. And while it may seem like spam and incessant "LOL is this you?" phishing attacks are on the rise, Twitter execs say they've greatly reduced their prevalence, down to 1% of total messages as of February and declining.
In fact, Ms. Harvey said they are getting fewer reports of spam from users overall than a year ago, a function of better technology and systems that shut down suspicious accounts faster.
"They're doing a great job, considering they're young," said Beth Jones, senior threat researcher at online security firm Sophos. "Nobody anticipated their explosive growth."
Ms. Harvey, 28, joined Twitter in late 2008 after five years working for Perverted Justice, the nonprofit best known for orchestrating stings for "Dateline NBC" to lure child predators in front of TV cameras. By her early 20s, Ms. Harvey became a spokesperson for the organization, working with NBC on several seasons of "To Catch a Predator."
In addition to spam and phishing, Ms. Harvey's team works to counter hacking attacks and adjudicates copyright and brand claims, including, say, whether Taiwanese programmer Dennys Hseih (175 followers) has the right to use @dennys over the American restaurant chain. (Answer: yes, which is why Denny's tweets under @dennysgrandslam.)
Much of Twitter's enforcement effort is automated, but every complaint about the wrongful termination of an account is reviewed by a human. Ms. Harvey said her team doesn't take the shutdown of an account lightly, especially if the account has a large number of what appear to be legitimate followers attached to it.
Technology can, say, flag accounts that quickly follow thousands of users, unfollow, and then follow more. Those, more than likely, are the phantom followers offering "new pics" they've supposedly just posted, trying to get you to click through. This "follower spam" can be the work of third-party "get more Twitter followers" services that start following and un-following thousands of accounts in hopes of getting some follow-backs.
It also might be an unwitting marketer that set up an account for some purpose and started following people to get it rolling -- perfectly legal in Twitterdom, except where it starts to get spammy. Replying to individual users is fine. Setting up an auto-reply triggered by keywords? Not fine.
"We aren't trying to do away with affiliate marketing on Twitter," she said. "One successful rule of thumb is to engage the people you are trying to sell stuff to. If you are creating a dialogue with people and not just touting things because you want to make a buck, you are going to have a network of people that value your input."
Phishing scams on Twitter are another challenge, and there's a lot less Twitter can do about those beyond trying to educate users that direct messages like "just read this blog about you" and "ha ha is this you?" are simply scams to get people to follow a link and enter their username and password at a fake Twitter login page. That account can then be hijacked to send out more direct messages.
Last week the service started running all links through another layer of technology that looks for scams, and all shortened links in direct messages will be replaced with Twitter's own shortener twt.tl to protect users that might unwittingly click on a phishing scam that slips through.
New tools mean new avenues for spammers to infiltrate. Ms. Harvey said security personnel at Twitter, Facebook, LinkedIn, MySpace and other social networks routinely meet and trade notes. Keeping the social web safe from spam is an increasingly complex job. "We're constantly researching the new groups coming in; the new types of malware," she said. "[We're] trying to figure [out] the correct algorithmic balance to eliminate as many false positives as possible."