(Trigger warning: This article includes descriptions and mentions of violence, and links to posts containing violence.)
Over the past few days, there have been thousands of words written about the impact that content creators on social media have on our society. Let me stay on that topic but change the focus: from a cringe joke by Ranveer Allahbadia aka Beer Biceps on a comedy game show to the toxic hate speech targeting minorities that comes from India’s politicians and far-right figures, each of them content creators in their own right.
On 11 February, an official of India’s Information & Broadcasting Ministry tweeted that the episode of India’s Got Latent “with obscene and perverse comments by Ranveer Allahbadia has been blocked following Government of India orders”.
Now, compare this with the following:
Less than 24 hours earlier, the India Hate Lab project of the Center for the Study of Organized Hate (CSOH) had published a report on hate speech in India in 2024. Part of the report dealt with assessing the effectiveness of Facebook, Instagram, and YouTube’s reporting tools in enforcing their self-professed community standards on violence and incitement.
The reported content featured hate speeches delivered at in-person events in multiple regional languages, including Hindi, Marathi, Gujarati, Kannada, Odia, and Malayalam.
The platforms’ failure to act on such reported content isn’t an aberration, but the norm.
We found a similar trend in a 2024 CSOH report whose research and data analysis I worked on. For that project, on how Instagram fuels cow vigilantism in India, we reported a total of 167 Instagram posts depicting explicit violence by cow vigilantes, selecting the “showing violence, death, or severe injury” option under the “violence, hate, and exploitation” category.
Of course the Modi government treats the two situations differently—their inaction on hate speech by the ruling party’s own leaders is not surprising in the least—but what is especially important to note here is the role of Big Tech. Yes, YouTube may have taken down the episode of India’s Got Latent on the orders of the Indian government.
But when it comes to hate speech, Big Tech consistently fails to act on content that violates their own platform guidelines because doing otherwise would irk the Hindu nationalists running the Indian government. Their allowance for hateful anti-Muslim and anti-Christian Hindutva content to thrive on their platforms is part and parcel of kowtowing to the Modi administration in an attempt to serve their business interests in India.
This was perhaps best exemplified by reporting by the Wall Street Journal in 2020 that revealed how Ankhi Das, who was Facebook’s top public policy executive in India at the time, had told staff members that punishing violations by politicians from the BJP would damage the company’s business prospects in the country.
Months later, Facebook would suspend Donald Trump from its platforms “following his praise for people engaged in violence at the Capitol on January 6, 2021.” Conveniently enough, Facebook’s actions came on the heels of Trump losing the election, at a time when it seemed that his political career might well be over.
It was all too clear that Facebook applied its ‘principles’ on hate speech, and on content depicting, inciting, or celebrating violence, with extreme elasticity depending on the reigning political interests of whichever geography it was operating in.
Therefore, it came as no surprise to those of us who had observed these hypocrisies closely when Meta CEO Mark Zuckerberg began making one concession after another to Trump in an effort to gain his favour after Trump’s victory in the 2024 US election.
From doing away with the fact-checking program, to agreeing to pay roughly 25 million USD to settle a 2021 lawsuit that Trump had brought against the company for suspending his accounts, the acts of institutional subservience have been coming thick and fast.
And there is arguably no place to observe that better than in India, the largest market by number of users for most of these companies.
In several of the instances of cow vigilantism we documented, the people being targeted were Muslims, as evidenced by their responses when the vigilantes asked their names on camera in the videos they posted.
Take a look at this Instagram reel, for example, showing a man being physically restrained and assaulted by vigilantes. It has more than 2.9 million views and over 2,600 comments on Instagram. Or this one, showing a man being violently assaulted by three different men inside a car; the man being beaten up has blood on his face. Both videos remain up on Instagram.
Such videos showing and glorifying violence have no place on social media platforms like Instagram and YouTube, if one were to go by the official, public guidelines of these companies. For instance, Instagram’s community guidelines claim that the platform removes content that “contains credible threats or hateful conduct” or “targets private individuals to degrade or shame them”.
The guidelines also condemn the encouragement of violence or attacking anyone “based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities or diseases.”
However, hundreds of videos that we analyzed were in clear violation of these policies, with cow vigilantes spouting hate, calling for violence against members of marginalized groups, predominantly Muslims, and routinely filming and uploading videos of that violence itself. The Meta-owned platform further says that it “may remove videos or images of intense, graphic violence to make sure that Instagram stays appropriate for everyone”. When it comes to Hindutva vigilante violence, however, it consistently fails to do so.
A large number of the videos we analyzed were also explicitly in violation of the platform’s “illegal content” rule, which states that they do not “allow support or praise of terrorism, organized crime, or hate groups on Instagram.”
So not only is content that explicitly violates Big Tech’s community guidelines in multiple ways rampant in India; there has also seemingly been no concerted effort over the past several years to crack down on it.
To understand this better, let’s take a closer look at how hate speech actually gets amplified.
According to CSOH’s analysis of 1,165 hate speech events in India in 2024, the top four most frequent purveyors of hate speech were senior BJP politicians and elected officials.
The four of them delivered a staggering total of 247 hate speeches in a year marked by several important elections, including the Lok Sabha polls. But let’s consider how these hate speeches by BJP politicians and other far-right Hindutva leaders make their way from the pulpit to their final audiences.
Politician X makes a speech, say at a rally during an election campaign. The speech’s live audience consists of a few hundred — or if the leader is a prominent one, maybe a few thousand — attendees. Enter the star amplifiers — TV news media and social media channels.
Across TV news screens, anchors give the hate speeches legitimacy and amplification, furthering the agenda of the hate speeches themselves, whether it be on anti-Muslim politics, policies or conspiracy theories.
The CSOH hate speech report, for instance, lists various ‘jihad’-based conspiracy theories that featured in hate speeches they analyzed, including ‘love jihad’, ‘land jihad’, ‘vote jihad’, ‘population jihad’, ‘rail jihad’, ‘economic jihad’, ‘halal jihad’, ‘mazar jihad’, ‘thook jihad’, ‘UPSC jihad’, and ‘fertilizer jihad’.
The news channels play a crucial role in amplifying hate speeches on such topics and discussing them at length. The videos from their shows are then posted on social media by the channels themselves. In early February last year, I analyzed 809 YouTube videos from across eight different primetime shows on Times Now, CNN-News18, NDTV 24x7, and India Today.
That is where the army of Hindutva-affiliated social media accounts and pages comes in. They post reels and shorts of the hate speeches, often with stirring music or Hindutva pop in the background. Alongside such clips of hate speeches are videos of other kinds of hateful content, including violence inflicted on members of minority communities.
The omnipresence of these videos normalizes and mainstreams such hateful ideas, making them extremely accessible and providing the one key ingredient of the hate speech and content ecosystem: scale.
Compare the scale of impact of a hate speech being listened to just by the in-person attendees of an event or an act of vigilantism being witnessed by a few dozen bystanders or passersby, versus that hate speech, act of vigilantism or hateful content reaching millions of viewers across the country through social media.
For instance, as part of the cow vigilantism report, we analyzed 121 Instagram reels showing vigilantes engaging in physical violence against people who were transporting cattle. Together, the 121 reels had more than 8.3 million views, and nine of them (7.4%) had more than 100,000 views each.
Another instance that underscores this point is the set of videos of BJP leaders such as Ravi Negi harassing Muslim vendors and shop-owners. Though the harassment inflicted on the vendors by the likes of Negi is reprehensible in itself, its impact multiplies when videos of Negi targeting them are distributed and shared across social media.
Yet again, the scale increases, from the locality where the harassment occurs to a viewership of hundreds of thousands of people across the city, the state, and the country.
There is also a network effect at play that Big Tech enables through its recommendation algorithms and through features such as cross-posting and collaboration posts. Notice the bloodied man in this collaborative post by “@rahul_hindu_gau_sevak” and five other accounts: he is bleeding severely, having been assaulted and badly wounded by vigilantes.
Exactly the same visuals (of the same bloodied individual) were shared in more than a dozen differently edited posts and reels published by various accounts. Across 13 reels showing clips of the same man, these vigilante accounts racked up more than 127,000 views, an average of close to 10,000 views per reel.
This is despite the fact that Meta makes the use of Instagram Gifts conditional on a creator complying with its Community Guidelines and Monetization Policies, rules these accounts clearly and explicitly violate.
Monu Manesar, a notorious Hindutva vigilante who has been accused of instigating violence in Nuh in which six people died and scores were injured, and has also been a prime suspect in the murder of two Muslim men in Rajasthan, would regularly upload videos of his violent vigilantism on YouTube.
In October 2022, he received a ‘Silver Creator’ award from YouTube for reaching 100,000 subscribers; Manesar would go on to cross 200,000 subscribers on the platform. That he was allowed to post his violent vigilantism on YouTube and other social media platforms unabated for years shows the complete failure of Big Tech to counter this menace, despite having advanced content detection techniques and tools at its disposal.
The pipeline of hateful speech and content to our mobile screens and social media apps is a rather unencumbered one in India, as is the symbiotic relationship between acts of offline harm and the spread of online hate. Offline harm, such as acts of vigilantism, makes for content conducive to the spread of online hate. And the spread of online hate, in turn, normalizes and incentivizes such acts of offline harm.
Countries across the world, especially those not currently ruled by despotic regimes, would do well to advocate and bargain for more stringent measures from Big Tech companies in regulating such hateful content on their platforms. Studying the myriad ways in which social media platforms amplify anti-minority hate speech, and content depicting, inciting, and celebrating violence, in India can be instrumental in understanding the dangers this phenomenon poses globally.
This is especially crucial at a time when Meta says they are drastically reducing content restrictions on their platforms, as part of a series of moves designed to curry favour with Donald Trump.
And X under Elon Musk consistently allows hateful content on its platform, with the owner-billionaire and member of the Trump administration routinely sharing hateful posts himself, not just on matters involving the United States but elsewhere too, including countries in Europe.
Importantly, there is precedent to show that when social media giants have tried to combat the spread of a particular online phenomenon, they have achieved varying degrees of success.
For example, after facing mounting pressure to act against the rapid rise of QAnon groups in the United States on their platform, Facebook had announced in August 2020 that they had “removed over 790 groups, 100 Pages and 1,500 ads tied to QAnon from Facebook, blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions on over 1,950 Groups and 440 Pages on Facebook and over 10,000 accounts on Instagram.”
QAnon was a collection of false conspiracy theories revolving around a core falsehood that a group of Satan-worshiping elites who run a child sex ring are trying to control American politics and media. In May 2019, the FBI had identified QAnon as a potential domestic terrorism threat. Some QAnon followers had reportedly even committed acts of violence inspired by the theory, including attempted arson.
After Facebook’s actions in August 2020, however, traffic for QAnon phrases and hashtags fell drastically on the platform. It wasn’t a complete win, though: Vox reported that “around the same time, membership in groups posing as anti-child trafficking groups exploded, and in those groups, users were still largely spreading QAnon content.”
But one could argue that it made the job of spreading QAnon a few steps harder, and the pipeline to become a QAnon member less easily accessible.
The systemic rot of hateful content in India today could replicate itself elsewhere tomorrow, in countries that aren’t impacted by this phenomenon at anywhere near the scale that India is currently.
To prevent such a situation, and knowing that social media companies adopt different guises for different geographies, we must look closely at the regions and contexts where these Big Tech companies are on their worst behaviour, and learn better how to counter them.
The methods used to spread anti-Muslim hate online in India could prove instructive to neo-Nazi outfits in Europe. Upping the ante against Big Tech’s continued laxity on hateful content in India is therefore beneficial not just for Indians, but for the world.
At the end of the day, social media companies need markets beyond the shores of the United States to survive and thrive. Stakeholders around the world, from governments in Europe to advocacy and human rights groups in the United States, must hence make it tougher for Big Tech to get away with enabling and amplifying online hate at such extreme levels in the world’s most populous country.
(Meghnad Bose is an award-winning multimedia journalist based in New York. He is a former Deputy Editor of The Quint.)