LinkedIn says it's caught in a 'cat-and-mouse game' of COVID-19 misinformation
LinkedIn on Tuesday said it’s removing COVID-19-related videos and posts for violating its policy on spreading misinformation.
The content in question, first published by Breitbart News, features a group of doctors in white lab coats claiming that hydroxychloroquine is an effective treatment for COVID-19. A video of the group staging a press conference in Washington, D.C., quickly went viral after President Donald Trump shared multiple versions of the clip with his 84 million followers on Twitter (the tweets were later deleted).
Although YouTube, Facebook and Twitter banned the video within 24 hours for spreading misinformation, it was still widely shared on LinkedIn, with many of the posts garnering hundreds of likes and comments.
Today, however, the company said it was removing all misleading content related to America’s Frontline Doctors after Ad Age flagged numerous posts shared with its 700 million-plus users. In a statement, LinkedIn said it has clear guidelines about what users can and cannot post.
“These policies are clear on what is not tolerated and has no place on LinkedIn, including misinformation,” the company told Ad Age. “Any content that violates these policies, including confirmed misinformation about COVID-19, will be removed by our team.”
Still, new videos linking directly to the America’s Frontline Doctors’ website were being posted this morning. The company could not give an exact figure for how many posts it would remove, adding that it is caught in a "cat-and-mouse game" with its users, where new misleading posts pop up shortly after others are taken down.
Arbiters of truth
The business-networking site has borrowed many features and design choices from Facebook, including its color scheme, mobile layout, thumbs-up “like” button and LinkedIn Live. Consequently, some users are treating it as just another social media platform, and the company has seen a surge in misleading posts about the pandemic, according to two people familiar with the situation.
LinkedIn told Ad Age that it has seen a 50 percent year-over-year increase in users reacting to, commenting on and sharing posts. The company suggests this growth and increased engagement may explain why it is seeing more content that could potentially be classified as misinformation.
The issue highlights how the business-focused platform is facing the same problems as other social media giants such as Facebook and Twitter. One post, for instance, links to multiple videos promoting QAnon, a far-right group widely associated with the spread of misinformation. Twitter has banned QAnon from its platform, while Facebook is planning to take similar action.
LinkedIn told Ad Age that its policy is to review the content before taking it down.
Social media platforms such as Facebook and Twitter are grappling with being the so-called arbiters of truth, with each claiming they must walk a fine line between freedom of speech and preventing the spread of misinformation. Although LinkedIn has mostly kept a low profile, its parent company, Microsoft, is likely to face increased scrutiny should it successfully acquire TikTok’s U.S. business.
LinkedIn says it’s using real people, artificial intelligence and machine learning to quickly identify and remove such posts. Last Friday, the company released a blog post emphasizing that its platform is a place where people “can express themselves professionally.”
“We are a place for them to find reliable news, data, and insights from sources they can trust,” Blake Lawit, senior VP and general counsel at LinkedIn, wrote on the company’s website last Friday. “We will continue to invest in making LinkedIn that platform, and empowering our members with the right tools to have the conversations they care about.”
Last year, LinkedIn removed nearly 59,000 posts for violating its content policy; however, the company says it does not have figures specifically for posts containing misinformation.