Facebook's Menlo Park, California, headquarters feels like a theme park, the House That Zuck Built.
From the outside, it could be any big-box mall, announced by warehouse-like structures. But gradually, the offices give way to a planned community. There's a Main Street lined with picture-perfect storefronts, an ice cream parlor, cafes, a barbecue shack and even a woodshop. In the center of town, a jumbotron flashes the faces of employees who are celebrating anniversaries with the company.
It's always sunny and warm at Facebook headquarters. There's a never-ending buffet of food for the workers; even the grits are perfect, prepared by a Southern chef.
Working at Facebook is like living in "The Truman Show," a curated community free of bullies and trolls—on the surface at least.
If only the Facebook that 2 billion people log on to every month felt this safe. Instead of politeness, the online public space has been afflicted by dishonest discourse and factionalism.
Over the past three years, Facebook has become something of a bogeyman. In 2016, it was overrun by bad actors and foreign agents who tried to expose fissures in the U.S. and other Western democracies. False news proliferated, explicitly designed to set people against each other. At the same time, Facebook has found that even inoffensive content, in the form of viral videos and clickbait headlines, does not contribute to well-being.
It's perhaps fitting that Facebook's campus feels intelligently artificial, because the social network increasingly relies on one technology more than any other to restore civility and fix its other problems: artificial intelligence. It's the only approach that can potentially grapple with more than 2 billion users, 7 million advertisers and trillions of decisions a day.
"Our North Star right now as a company is the pivot to deeper meaning," says Mark Rabkin, Facebook's VP of ads and business platforms. "We want everything on the platform to be more meaningful."
Rabkin is seated with one leg tucked beneath him, slightly reclining on a couch in a conference room with a three-foot-tall Darth Vader looking down from the top shelf at the center of the wall. Rabkin is one of the AI enthusiasts who believes the machines are the key to scrubbing hate speech and criminal elements while elevating the parts of the community that will enrich 21st-century life.
"I want to have a really deep connection to communities around me," Rabkin says. "I want that connection but I don't want strangers polluting my stuff."
More enriching lives should also mean more riches for Facebook, which along with Google constitutes the so-called duopoly dominating online ad revenue.
Facebook, in particular, needs AI to help unlock more value from a platform that is reaching saturation in mature markets like North America. The company says its daily use by 185 million people in the U.S. and Canada is near the limit. And its share of digital ad spending in the U.S. is expected to slip from 19.9 percent last year to 19.6 percent this year and 19.2 percent in 2019, according to eMarketer.
It's a plateau that only technology can help the company power past, according to advertising experts.
"AI is too important to not be the core feature of every ad tech product," says Karim Sanjabi, CEO of Robot Stampede, a San Francisco-based advertising agency focusing on AI and innovation. "Facebook and Google are in a race to see which algorithms are the best."
Jobs to be done
Facebook is pouring massive amounts of money and computing power into AI. Its overall R&D budget was $1.3 billion, though it does not disclose how much of that goes to artificial intelligence.
CEO Mark Zuckerberg has proselytized the benefits of AI for years, and holds it up as one of the most important areas of focus for the social network to solve its greatest challenges.
AI will police the network for fake news and fraudsters, in Zuckerberg's telling, stamping out hate speech and content from terrorists or their sympathizers. "Over the long term," he told legislators during his April visit to Congress, "building AI tools is going to be the scalable way to identify and root out most of this harmful content."
Rabkin shares that vision. "AI is the key to all those new jobs to be done," Rabkin says. "You just can't do them without AI."
AI is actually already coursing through Facebook. "I can't think of any Facebook product that doesn't have AI in it," Rabkin says. "A big percentage of every data center we run is now dedicated to AI, all over the world, hundreds of thousands, millions of machines."
Facebook is directing a lot of that machine power toward rooting out toxic dialogue like hate speech and sniffing out fake accounts designed to undermine democracy. Facebook has been trying to rid the platform of the bad elements that infected the 2016 U.S. elections, and it's racing against the midterm election timeline.
In May, Facebook started rolling out its new election ad rules that force political and advocacy advertisers to register with their real identities, and AI helps monitor that. Also in May, Facebook issued its first community standards enforcement report, which showed how effective AI is at finding hate speech and other offensive content like nudity and terrorism.
Rabkin says that AI can identify terrorism-linked content with 99 percent accuracy, but hate speech is different. Facebook's AI flagged hate speech only 38 percent of the time. That's because the machines have difficulty distinguishing between, say, someone describing a hate incident and someone preaching hate.
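The difficulty is easy to demonstrate with a toy example. The sketch below is a hypothetical illustration, not Facebook's system: a naive keyword filter (with a stand-in keyword list) flags a post that merely quotes an offensive phrase just as readily as a post that preaches hate, because the trigger words are identical and only the surrounding context differs.

```python
# Hypothetical illustration of why keyword matching fails at hate speech:
# a post reporting or quoting hateful language and a post preaching hate
# can contain the exact same words, so context, not keywords, must decide.

SLUR_KEYWORDS = {"savages"}  # stand-in term for the example

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed keyword, ignoring context."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & SLUR_KEYWORDS)

report = "A historian quotes the phrase 'Indian savages' from the Declaration."
attack = "Those people are savages and should be driven out."

# Both posts trip the same filter, though only one of them preaches hate.
print(naive_flag(report), naive_flag(attack))  # True True
```

Distinguishing the two cases requires a model that reads the sentence around the keyword, which is exactly the linguistic nuance Facebook says its systems still struggle with.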
"AI is getting smarter. It is learning," Rabkin says, adding that a year ago AI could identify only about 10 percent of hate speech.
But it still has a ways to go, even according to Zuckerberg. "Hate speech—I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate, to be flagging things to our systems, but today we're just not there on that," Zuckerberg told Congress. "Until we get it automated, there's a higher error rate than I'm happy with."
Just last week, Facebook's AI flagged the Declaration of Independence as "hate speech" when a Texas newspaper posted it to the social network. The document contains a phrase about "Indian savages" that may have set it off.
Scary smart
One in every four engineers is plugged into Facebook's AI nerve center, which is known as FBLearner Flow. "It's the overall framework that ties different pieces of AI together," Rabkin says.
Engineers working on News Feed ranking or ad ranking—the algorithms that make content decisions on the social network—use FBLearner to refine their models and implement new techniques.
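FBLearner itself is proprietary, but the refine-and-compare loop the article describes is a standard one. As a rough, generic sketch (all models, features and data below are invented stand-ins): train or define a candidate ranking model, score it against the current baseline on held-out examples, and promote whichever performs better.

```python
# Generic model-refinement loop, an illustrative stand-in for the kind of
# workflow FBLearner supports: evaluate a candidate ranking model against
# the baseline on held-out data and keep the better one.

def evaluate(model, held_out):
    """Fraction of (features, clicked) examples the model scores correctly."""
    correct = sum(1 for features, clicked in held_out
                  if (model(features) >= 0.5) == clicked)
    return correct / len(held_out)

# Toy "models": score a post from simple engagement features.
baseline  = lambda f: f["likes"] / 100
candidate = lambda f: 0.5 * f["likes"] / 100 + 0.5 * f["comments"] / 50

held_out = [
    ({"likes": 90,  "comments": 40}, True),
    ({"likes": 10,  "comments": 45}, True),
    ({"likes": 20,  "comments": 2},  False),
    ({"likes": 100, "comments": 1},  True),
]

# The candidate uses an extra signal (comments) and wins on this data.
best = max([baseline, candidate], key=lambda m: evaluate(m, held_out))
```

In a real pipeline the "models" would be learned from billions of examples and the comparison run as a controlled experiment, but the shape of the loop is the same.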
FBLearner made headlines earlier this year when The Intercept's Sam Biddle claimed it enabled Facebook to "predict future behavior." Facebook could predict brand loyalty, he reported, finding users who were open to switching from Coke to Pepsi, for example. And Facebook AI could prod a consumer to action, or so the report claimed.
If that sounds like the dream of most marketers and the promise of most ad sellers, it was also an ominous portrayal of the company's influence given its part in the Cambridge Analytica affair. The firm allegedly misused data on 87 million Facebook users to target them with political messaging during the 2016 presidential election, prompting Zuckerberg to finally accept calls to testify on Capitol Hill.
(The Cambridge Analytica firestorm also reportedly disrupted Facebook's commercial ambitions in AI, complicating its plans to release a voice-assistant speaker to compete with Amazon Echo, Google Home and Apple HomePod; an always-on listening device may have seemed a little intrusive in that climate.)
For all the uproar about the potential abuse of data and concerns that the platform has become spookily powerful, however, Facebook's actual capabilities appear rather limited. It's unclear that the machines are really that effective at targeting ads and manipulating consumer behavior. Of course, Facebook stands by the effectiveness of its $40 billion-a-year ad business.
Still, Facebook's Matt Steiner, engineering director, describes the current Facebook ambitions as something far less than total information awareness and perfect predictive power. "We're hitting the limits of power and scale," Steiner says. "We're still really far from where we want to be."