Facebook Was Aware of Anti-Muslim Content During Assam Polls, Took Little Action
The key reason for Facebook’s inability to regulate content is the lack of training its AI has in foreign languages.
Facebook has been found to remove only a fraction of posts that violate its hate speech rules, a media report stated on Sunday, 18 October.
After reviewing internal documents, the Wall Street Journal said that one of the key reasons for Facebook’s inability to regulate content on its platform is the lack of training its artificial intelligence (AI) programs have in foreign languages.
Facebook Inc executives have long said that AI would address the company’s problems keeping hate speech, excessive violence, and underage users off its platforms.
The report said that in March this year, the company observed that hate speech was a major risk in Assam during the legislative Assembly elections, which were held from 27 March to 6 April.
Whistleblower and former Facebook employee Frances Haugen had told United States authorities recently that Facebook was aware of incendiary anti-Muslim narratives being promoted on the platform in India, because of the “lack of Hindi and Bengali classifiers”. Classifiers refer to algorithms that detect hate speech.
Haugen also said that “fear-mongering content” was promoted by “Rashtriya Swayamsevak Sangh users, groups and pages”.
According to Facebook, it added hate speech classifiers in Hindi starting in early 2020 and introduced Bengali later that year.
The Wall Street Journal report stated that Facebook’s AI cannot consistently identify “first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes”.