
Cloak and Spammer: Facebook Beefs Up AI to Stop Black Hat Pages From Covering Their Tracks


Scam artists and others use a method called cloaking to make their links on Facebook (left) look more innocuous than the content they serve (center). They even show Facebook reviewers fake versions of their sites (right). Credit: Facebook

Facebook says it is intensifying its efforts to control scams and fake news by taking a harder line on "cloaking," a tactic that bad actors use across the web to avoid detection.

"We've recently been ramping up our enforcement," says Rob Leathern, Facebook product management director. "We are making it clear: We don't tolerate cloaking."

Cloaking is a longtime and straightforward practice of so-called black hats online. Fraudulent marketers, pornographers and even racists have used it to disguise their true nature in search results and in social feeds.

Facebook was already seeking out links on its platform to landing pages that don't deliver what was promised, serve deceptive ads or have too many ads. Possible penalties included warnings, lower visibility for links and outright bans from the platform.

Some of the more elaborate cloaking efforts, however, are tough to recognize. One method is to show Facebook one version of a site to gain approval, then serve something different when Facebook users arrive.

"For example, they will set up web pages so that when a Facebook reviewer clicks a link to check whether it's consistent with our policies, they are taken to a different web page than when someone using the Facebook app clicks that same link," Leathern wrote in a blog post Wednesday. "Cloaked destination pages, which frequently include diet pills, pornography and muscle building scams, create negative and disruptive experiences for people."
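The mechanism Leathern describes can be sketched in a few lines. This is an illustrative toy only, not Facebook's or any real cloaker's code: the page contents are invented, and the check against "facebookexternalhit" (the user-agent string Facebook's link crawler identifies itself with) stands in for the many signals real cloakers use, such as IP ranges, referrers and cookies.

```python
# Toy illustration of cloaking: the server inspects who is asking
# and serves a different page to a suspected reviewer/crawler than
# to an ordinary user. Page contents here are invented examples.

BENIGN_PAGE = "<html>Harmless recipe blog</html>"   # shown to reviewers
CLOAKED_PAGE = "<html>Diet-pill scam page</html>"   # shown to real users

def choose_page(user_agent: str) -> str:
    """Return the benign decoy page if the request looks like
    Facebook's crawler, otherwise the real (cloaked) destination."""
    if "facebookexternalhit" in user_agent.lower():
        return BENIGN_PAGE
    return CLOAKED_PAGE
```

Seen this way, the detection problem is clear: a reviewer who only ever fetches the link through official channels sees the decoy every time, which is why Facebook is pairing human review with automated systems that compare what different kinds of visitors actually receive.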

The company says it has beefed up both human reviews and its artificial intelligence algorithms to spot cloaking. "In the past few months these new steps have resulted in us taking down thousands of these offenders and disrupting their economic incentives for misleading people," Leathern wrote.

Widespread phenomenon
Jessie Daniels, a professor of sociology at Hunter College-City University of New York, has studied cloaking since the 1990s, and says it's very much mixed in with the problems of false news and racist propaganda propagating on the web.

Cloaking is often done by "someone who is concealing authorship in order to disguise a political agenda," Daniels says.

She points to search results from a query as simple as "Martin Luther King." The first page of Google results includes the site martinlutherking.org, which promises "historical trivia, articles and pictures" in "a valuable resource for teachers and students alike." The site is actually run by the racist group Stormfront.

"The thing I look at are people who are politically motivated and people close behind them who are profit-motivated," Daniels says. "It's always the pornographers and white supremacists."

Stopping cloaking will be difficult for any company, even Facebook, Daniels adds. In the end, it takes a lot of human reviews. Facebook has recently hired 3,000 people to comb the site for objectionable videos, but it wouldn't say how many people are dedicated to cloaking patrol.
