Senators don't believe the internet giants Facebook, Twitter and Google have fully accounted for the nefarious activity detected on their platforms.
On Wednesday, representatives of Facebook, Twitter and Google faced their second day of Senate hearings, fielding questions about the extent of Russian election meddling on their platforms and how many people were subjected to disinformation or incitements meant to disrupt American politics.
"Russian operatives attempted to infiltrate and manipulate American social media to hijack the national conversation and make Americans angry," Senator Mark Warner, Democrat of Virginia, said at the hearing. "To set ourselves against ourselves."
Warner sharply accused Twitter of miscalculating the number of bots active on its service—automatons that look like human Twitter users but are controlled en masse by others. He also called out Facebook, which he said "has more work to do to see how deep this goes."
The hearings that started Tuesday have given the public its most detailed look yet at how Russia-backed groups created fake personas and organizations on social media. The groups posted unpaid messages on Facebook, Instagram, Twitter and elsewhere, and paid for misleading ads, reaching tens of millions of Americans.
Here are some of the most important findings, moments and questions so far from the hearings in both the House and the Senate:
Ads are revealed
The public hearings gave senators the chance to display some of the ads that until now had mostly been kept secret. The ads were tied to a shady troll farm known as the Internet Research Agency out of St. Petersburg, Russia. Facebook has said 470 fake groups with Russian ties bought 3,000 ads over the past two years at a cost of at least $100,000, but it wouldn't release the ads publicly.
A sampling of the paid messages on Facebook and Twitter was shown during the hearings and released to the public by the House Intelligence Committee. Particularly troubling were ones designed to deter voting on Election Day. One was a photoshopped image of comedian Aziz Ansari holding a poster with the wrong voting information on it.
"What is Twitter doing to proactively identify illegal voter suppression tweets?" asked Senator Dianne Feinstein, Democrat of California.
Twitter's representative said the company was trying to identify when accounts were prone to posting those types of tweets, all the better to intercept them as quickly as possible. That answer did not satisfy Feinstein, because it didn't seem to offer a way to prevent that type of message from appearing altogether.
In another instance, Russians started Facebook accounts under the names The Heart of Texas and United Muslims for America. With barely any paid promotion, the two fake accounts were used to generate interest in a mosque protest in Houston, encouraging opponents and defenders alike to turn out. Hundreds of people showed up. That was an example of the type of divisive messaging that typified much of the Russian interference.
Another ad promoted a fake group called Army of Jesus that seemed to offer a nice message. When people liked its Facebook page, however, they were exposed to posts like one that pitted a Satanic-looking Hillary Clinton against Jesus in a fist fight.
Just how far did it go?
Facebook has had to revise its estimate of how many people were exposed to these messages multiple times. Its latest figure is that 146 million Americans saw paid and unpaid posts from foreign actors across Facebook and Instagram.
Facebook only revealed in the past 48 hours that 120,000 pieces of content linked to the illicit accounts had been found on Instagram, reaching up to 20 million people.
Google was questioned over its permissiveness regarding RT, formerly Russia Today, which U.S. intelligence agencies describe as a propaganda arm of the Kremlin. RT had been a preferred partner on YouTube, meaning it generated enough views to be considered a high-quality publisher for brands' ads. YouTube also told senators that RT was dropped from its premium ad service only because its traffic declined, not because of problems with its content.
YouTube revealed that it found 1,100 videos that were posted during the election from sources suspected of being tied to Russia.
Twitter bot questions
"Twitter seems to be vastly underestimating the number of fake accounts and bots pushing disinformation," Warner said.
The messaging service has said that it has typically found only 5 percent of its accounts are automated bots. But senators weren't buying it. Previous testimony by other experts showed up to 15 percent of Twitter accounts are bots.
"Bots generated one out of every five political messages posted on Twitter over the entire presidential campaign," Warner said.
What to do
On Wednesday, Google followed Facebook and Twitter's lead, promising more scrutiny and transparency around election ads. The internet companies fear new regulations could infringe on their business and are trying to take steps themselves to correct the problem.
Kent Walker, Google's general counsel, told senators the company was developing a database that could be searched to reveal who is buying all political ads on its platform.
"Users will be able to easily find the name of any advertiser running an election ad on search, YouTube or Google Display Network," Walker said.
The system appears similar to ones that Facebook and Twitter both recently announced to show what political ads are running and who bought them.
What are you, exactly?
A key question over the two days of testimony for all the companies was whether they consider themselves content companies or just tech companies. The distinction is important because if they are content companies, that puts them in the category of the media and arguably more responsible for the information they help disseminate.
At the hearings, they all downplayed the amount of content they actually produce in-house, pointing out that the vast majority of material traveling over their servers comes from users and outside publishers.
Warner also suggested that the senators were looking into whether Reddit and LinkedIn had been used by Russian operatives looking to influence U.S. political discourse. Reddit has rarely been brought up in the same conversation as Facebook and Twitter, but it has one of the most active groups devoted to President Donald Trump as well as groups dedicated to other politicians. During the election, there were fears that bots and troll farms could manipulate the conversation there by elevating conspiracy stories and burying facts.
On Wednesday, Reddit CEO Steve Huffman conducted a question-and-answer session on the site, where he was asked whether Reddit was looking into Russian interference. He suggested that the company was involved in the investigation, too.
"I would love to be completely transparent about what we're doing here, but given the sensitive nature of the situation, I have to be vague. My apologies," Huffman said. "Independent of any scrutiny, we take both the integrity of Reddit and the U.S. elections extremely seriously."