U.S. internet giants Facebook, Twitter and Google were rebuked by the European Union's executive arm for failing to meet their pledge to tackle online hate speech within 24 hours.
It's been more than six months since the companies agreed on a code of conduct, but only 40% of all notifications of alleged illegal online hate speech are currently being reviewed within 24 hours of being reported, the European Commission said Tuesday as it presented its first evaluation.
"The last weeks and months have shown that social media companies need to live up to their important role and take up their share of responsibility when it comes to phenomena like online radicalization, illegal hate speech or fake news," EU Justice Commissioner Vera Jourova said in a statement. "While IT companies are moving in the right direction, the first results show that the IT companies will need to do more to make it a success."
The code of conduct that the companies, including Microsoft, agreed to on May 31 came as Europe was trying to come to terms with the bloody attacks in Paris and Brussels by Islamic State, which has used the Web and social media to spread its message of hate against its enemies.
The commission said Tuesday in its report that Microsoft received no notifications of illegal hate speech on its services over the period examined.
Google and Facebook didn't immediately respond to a request for comment on the EU report. Microsoft declined to comment, as did Twitter, though Twitter said last month it has improved its "internal tools and systems in order to deal more effectively with this conduct when it's reported to us. Our goal is a faster and more transparent process."
The companies said in May that it remains a "challenge" to strike the right balance between freedom of expression and hate speech in the user-generated content on online platforms.
The results of the EU's first evaluation are based on analysis by 12 non-governmental organizations in nine European countries to see how the companies responded to notifications over a period of six weeks, the commission said.
"The findings indicate that among the 600 notifications made in total, 28% led to a removal, 40% of all responses were received within 24 hours, while another 43% arrived after 48 hours," the commission said.
Facebook, Microsoft, Twitter and YouTube separately said Monday they are creating a shared database of the most severe terrorist videos and images that they have removed from their sites. The database, which will be hosted by Facebook, will store "hashes" (a kind of unique digital fingerprint created by a cryptographic algorithm) for each piece of content.
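To illustrate the idea, the sketch below shows how such a hash database could work in principle. It is a minimal, hypothetical example using SHA-256 from Python's standard library; the companies have not disclosed which algorithm or storage design they will actually use.

```python
import hashlib

# Hypothetical in-memory stand-in for the shared hash database.
shared_db = {}

def fingerprint(content: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact content."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes, reason: str) -> str:
    """Record a removed item's hash so other platforms can match against it."""
    h = fingerprint(content)
    shared_db[h] = reason
    return h

def is_flagged(content: bytes) -> bool:
    """Check whether identical content has already been flagged elsewhere."""
    return fingerprint(content) in shared_db
```

Note that a cryptographic hash like this only matches byte-for-byte identical files; matching re-encoded or edited copies would require a perceptual-hashing approach instead.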