
Facebook Trains AI to Identify False News Spam Sites


Facebook will lower the ranking of pages with spammy ads. Credit: Facebook

Facebook is taking another step to penalize the lowest quality websites by lowering their visibility on the social network.

Facebook is using artificial intelligence to identify when a link leads to a website with the worst ad experiences, such as sponsored-content spam and other negative signals, and will then restrict the posts and ads leading to those pages.

The change to Facebook's rankings will target the worst publishers, which often peddle false news and promote shocking and sexualized content.

"Publishers that don't have this kind of low-quality landing page experience are going to see no changes or maybe even a small increase in traffic," said Andrew Bosworth, Facebook's VP of ads and business platform.

This is yet another move from Facebook to respond to advertisers and users concerned by content on the platform. Brands, in general, are starting to demand some accountability from platforms like Facebook and YouTube for the content they support.

The issue became a top priority following an election influenced by a torrent of false news. Facebook recently issued a report on its role in the 2016 presidential election and acknowledged being used as a platform for misleading political propaganda.

Following the election, it promised to try to cut off support for the sites that generated false news and launched a review process to identify phony stories.

"We are disrupting the economics of spam and the operating of ad farms, which are the ones creating misleading, disliked content," Bosworth said.

The latest changes to what posts get more visibility will be based on a new scoring mechanism that ranks the quality of pages. Facebook trained an artificial intelligence program by showing it hundreds of thousands of examples of bad pages, taking into account pop-up ads, sponsored content, the number of ads and other characteristics.
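The approach described above, training a classifier on labeled examples of bad pages using features such as pop-up counts and ad density, can be illustrated with a deliberately simplified sketch. This is not Facebook's actual system; the feature names, toy data and perceptron model are all illustrative assumptions.

```python
# Illustrative sketch only -- not Facebook's model. A perceptron trained on
# hand-labeled feature vectors for web pages. Each vector holds hypothetical
# counts: (pop-up ads, sponsored-content links, total ad slots).

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights separating low-quality pages (label 1) from others (0)."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(wi * xi for wi, xi in zip(w, features)) + b
            pred = 1 if score > 0 else 0
            err = label - pred  # perceptron update: nudge weights toward label
            w = [wi + lr * err * xi for wi, xi in zip(w, features)]
            b += lr * err
    return w, b

def is_low_quality(features, w, b):
    """Apply the learned linear score to a new page's feature vector."""
    return sum(wi * xi for wi, xi in zip(w, features)) + b > 0

# Toy labeled set: heavy ad load -> low quality (1), light load -> fine (0).
examples = [
    ([5, 8, 12], 1), ([6, 9, 15], 1), ([4, 7, 10], 1),
    ([0, 1, 2], 0), ([1, 0, 3], 0), ([0, 2, 1], 0),
]
w, b = train_perceptron(examples)
```

In a real system the features would come from automated page analysis at far greater scale, and the model would be considerably richer than a linear classifier; the sketch only shows the train-on-labeled-examples pattern the article describes.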

The change could prioritize more premium publishers, and give Facebook a cleaner platform to present to brands.

Top publishers would welcome more visibility on Facebook, but they were also hoping its content problems would bring advertisers to their websites, according to one publishing ad executive speaking on condition of anonymity.

"Facebook and Google were both caught with inappropriate content, whether that's fake news, violent videos or hate speech," the publishing exec said. "When you go on social platforms, there will always be problematic content and there's nothing they can do about it."

There have been a number of incidents of users posting heinous videos of killings and extremist content. Earlier this month, Facebook announced it would hire 3,000 human reviewers to police videos on the platform.

That followed YouTube promising similar measures after brands threatened to cut off the platform because it couldn't promise ads wouldn't appear alongside hate speech and violence.

During NewFronts and upfronts season, as marketers plan where to spend their money, everyone is focusing on content quality.

"Brands are telling them, 'if you're accepting my money, I'm a wholesome brand, fix it or suffer,'" said the publishing executive, who works closely with Facebook media partners.