Facebook, Twitter, Google’s YouTube and Microsoft pledged on Tuesday to review requests for the removal of hateful content posted on their platforms within 24 hours as part of a code of conduct agreed with European Union (EU) regulators.
European Justice Commissioner Vera Jourova said tackling illegal online hate speech has taken on added urgency because of the increasing use of social media by terrorist groups to radicalise young people and spread violence and hatred.
The European Commission said the four web giants will review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content if necessary.
They will also strengthen their cooperation with civil society organisations that help flag hateful content when it goes online, and promote “counter-narratives” to hate speech.
“There’s no place for hate speech on Facebook,” said Monika Bickert, Head of Global Policy Management at Facebook.
“With a global community of 1.6 billion people, we work hard to balance giving people the power to express themselves whilst ensuring we provide a respectful environment.”
Twitter has suspended over 125,000 accounts since the middle of 2015 for threatening or promoting terror acts, primarily related to the Islamic State (ISIS).