
How Big Tech Is Shaping the Israel-Palestine Conflict & What It Means for Others

Platforms have responded to the Israel-Palestine conflict with a slew of measures. Are they getting it wrong?


You've probably seen the photos and videos from the latest escalation of the Israel-Palestine conflict. A barrage of rockets streaking across the night sky, airbursts of white phosphorus, buildings reduced to rubble, wounded children in hospitals. And if you've seen these disturbing visuals, you most likely saw them first on social media.

In the wake of the deadliest war in Gaza yet, platforms like Instagram, YouTube, and X (formerly Twitter) have once again come under scrutiny for their role as modern 'town squares'.

This was evident in the recent backlash against Meta-owned Instagram for allegedly shadow-banning pro-Palestinian users.

Over the past week, several users reported a sudden drop in followers and views on their Instagram 'Stories'. However, Meta attributed the low visibility to a technical glitch and denied that it had anything to do with the content of the posts.

While shadow-banning is notoriously hard to prove, the incident raised some interesting questions.

How have platforms responded to the Israel-Palestine conflict? Are they getting it wrong? What are the dilemmas platforms face amid such a geo-political conflict? Could they be used to justify stricter government regulations in India? Let's take a closer look.


In Times Like These

Since Palestinian militant group Hamas launched a 'surprise attack' against Israel on 7 October, major tech companies like Meta, Google, and X have outlined the following steps to 'ensure the safety of users':

Hamas ban: "Hamas is banned from our platforms, and we remove praise and substantive support of them when we become aware of it, while continuing to allow social and political discourse – such as news reporting, human rights related issues, or academic, neutral and condemning discussion," Meta said.

  • Meanwhile, X said that it has deactivated newly created "Hamas-affiliated" accounts.

Content takedowns: Meta said that over 7,95,000 pieces of content across its platforms were removed or marked as 'disturbing' in the days following the attack, and that it was taking down seven times as many pieces of content daily compared to the two months prior. Several Instagram hashtags were also restricted.

  • X said it took action against "tens of thousands of posts" for sharing graphic media, violent speech, and hateful conduct. It further took action against hundreds of accounts for trying to manipulate trending topics. "We’re also continuing to proactively monitor for antisemitic speech as part of all our efforts," the Elon Musk-owned platform added.

Fake account networks: "Our teams have detected and taken down a cluster of activity linked to a covert influence operation we removed and attributed to Hamas in 2021," Meta said.

  • On the other hand, Google said that its cybersecurity unit Mandiant had "observed fake accounts connected to Iran which are promoting anti-Israeli narratives across various services."

  • Mandiant is also reportedly looking into distributed denial-of-service (DDoS) attacks "by suspected pro-Hamas and pro-Russia hacktivist groups targeting Israeli government websites."

Violent content: Recognising that violent content may be taken down by mistake amid the conflict, Meta said that it has tweaked its violent content policy to ensure that accounts with multiple strikes are not disabled.

  • "We’re also taking steps to reduce the visibility of potentially offensive comments under posts on Facebook and Instagram," Meta said.

Protecting hostages: "In order to prioritise the safety of those kidnapped by Hamas, we are temporarily expanding our Violence and Incitement policy and removing content that clearly identifies hostages when we’re made aware of it, even if it’s being done to condemn or raise awareness of their situation," Meta said, adding that it will also blur the images of victims in line with standards under the Geneva Convention.

Video earnings: Refusing to classify the Israel-Palestine conflict as a 'Sensitive Event', YouTube said that "if a video provides authoritative news reporting on a violent event in a journalistic context, it may still be eligible for monetisation." The video-sharing platform also reiterated its existing Community Guidelines.

The Quint has reached out to these platforms and will update this article if we hear back.


Now vs The Russia-Ukraine War

The Russian invasion of Ukraine, which began in February 2022, was dubbed the "first social media war." So, how did platforms approach content moderation then, as opposed to the ongoing Israel-Palestine conflict?

Let's start with propaganda. Soon after the Ukraine crisis, Meta proactively blocked access to Russian State-controlled media, took down accounts belonging to State-affiliated media, prohibited ads from such handles, and issued fact-check labels.

X (then Twitter) also said that it was adding labels to tweets sharing links to Russian State-affiliated media websites. The Kremlin retaliated by banning Facebook, Instagram, and X in the country.

But, in the context of the Israel-Palestine conflict, platforms have failed to take equally drastic measures to counter State-sponsored narratives.

For instance, the Israeli government recently went after supermodel Gigi Hadid on Instagram.

Hadid, who is half-Palestinian, had said, "There is nothing Jewish about the Israeli government’s treatment of Palestinians," to which Israel's official Instagram handle replied, "Have you been sleeping this past week? Or are you just fine turning a blind eye to Jewish babies being butchered in their homes? Your silence has been very clear about where you stand. We see you."

There was also a tweet by Israel's official handle after India's victory against Pakistan in the 2023 ICC World Cup.

Speaking of propaganda, Politico reported that the Israeli Foreign Affairs Ministry was pushing paid ads on X and YouTube in an effort to shape public opinion around the war.

But in 2022, X (then Twitter) had temporarily paused ads in Russia and Ukraine in order to "ensure critical public safety information is elevated." Russian media couldn't buy ads on YouTube then either.

Additionally, Meta's justified ban on Hamas stands in contrast to its alleged inaction against pro-Russian mercenary outfits like the Wagner Group, which, according to CNN, has been allowed to skirt the company's policy on "Dangerous Organisations".

And while platforms have taken cautious steps to ensure that Israeli hostages are not identified, the same sensitivity wasn't initially extended to Russian prisoners of war (POWs), who were seen denouncing the invasion in viral videos shared by Ukrainian forces.

It was only after observers like Human Rights Watch pointed out the potential violation of Geneva Conventions that platforms like X moved to update their existing policies and take down content depicting POWs in the context of the Ukraine war.

All this is to say that there are inconsistencies in how major social media platforms have responded to two different geopolitical events, the Russia-Ukraine war and the Israel-Palestine conflict. It's also easy for platforms to push back against an aggressor when they have the backing of Western democratic governments. But the question of what platforms will do gets trickier when the 'forces of good and evil' are not clearly delineated.


Difficulties in Content Moderation

The moves that platforms have made in response to the Israel-Palestine conflict may not be perfect. But is there even such a thing as perfect content moderation?

"At times like this, we're really exposed to the limits of content moderation. It is extremely challenging to do," Prateek Waghre, the policy director of digital rights organisation Internet Freedom Foundation (IFF), told The Quint.

"Wielding the technology is the easiest part of running a global social media platform. It's the politics and the other decisions that are the challenging bit and we've seen that with every such incident," he said.

"When a war is happening, the amount of change that content moderation practices can bring about is limited. But that's not an excuse for platforms to shrug off that responsibility," IFF's Prateek Waghre opined.

While acknowledging that content moderation is no cakewalk, Waghre also pointed out that platforms had actively taken steps over the last year or so that left them less prepared to deal with the Israel-Palestine conflict.

"There have been a number of reports about how platforms like X and Meta have been scaling back their trust and safety teams and their election integrity teams," he added.

Notably, in the aftermath of the Russian invasion of Ukraine, Meta had asked its Oversight Board to weigh in on how it should moderate content during wartime. But the tech giant later backpedalled and reportedly withdrew the request.

Then, there's the question of misinformation and how that has spread.

In the last year, Instagram and X have revamped their verification systems to offer paid-for blue ticks. Since these subscribers are also promised boosted engagement as part of the deal, many believe this has allowed fake news about the Israel-Palestine conflict to spread quickly.

"Even platforms themselves don't always know the ground truth. Nor do people verifying the facts at that stage because there is a lag in the process. Information gets out but false information also gets out. How platforms deal with the broader concept of virality remains an unanswered challenge," Waghre said.


Inviting Regulatory Crackdown

Following the recent escalation of the Israel-Palestine conflict, big tech companies have largely acted as governments have asked them to.

For instance, Meta revealed what measures it was taking to curb fake news on its platforms only after European Union (EU) Commissioner Thierry Breton's 24-hour ultimatum.

In a letter addressed to Meta CEO Mark Zuckerberg, Breton said:

"We are seeing a surge of illegal content and disinformation being disseminated in the EU via certain platforms. I would ask you to be very vigilant to ensure strict compliance with the DSA [Digital Service Act] rules on terms of service, on the requirement of timely, diligent and objective action following notices of illegal content in the EU and on the need for proportionate and effective mitigation measures."
Given that Meta promptly responded to the EU commissioner's letter with a slew of measures, could it similarly be asked to comply with government ultimatums in other countries like India? Will it lead to tougher platform regulations?

IFF's Prateek Waghre opined that there was a tendency for governments to introduce "more regulation that brings in more control over the internet and over people's ability to communicate over the internet."

"Anything that can be used to justify or shore up that narrative will happen," he said.

Case in point: When former US President Donald Trump was de-platformed from Twitter in 2021, Lok Sabha MP Tejasvi Surya had said, “If they can do this to POTUS, they can do this to anyone.” Coincidentally, the IT Rules regulating social media platforms and other intermediaries were notified by the central government a few months later.

But Waghre also emphasised that the tendency to try and control digital spaces exists irrespective of which party is in power, citing as examples Left-ruled Kerala's fake news ordinance and Congress-ruled Karnataka's attempts to set up a fact-checking unit despite opposition.


Projecting the Past Onto the Future

Visuals uploaded to social media of the massacre at the Supernova music festival in Israel and the deadly blast at a crowded hospital in Gaza are violent and disturbing. But such visuals could also serve as important evidence of war crimes.

That's another major content moderation dilemma that platforms face. Most of the time, platforms are told to take down violent content, and they build their systems accordingly. But in war scenarios, that very content becomes important documentation, Waghre said.

Interestingly, the need to preserve such vulnerable digital information has inspired non-profit initiatives like Mnemonic, which has built separate digital archives holding over ten million records of human rights violations and other crimes in Syria, Sudan, Yemen, and Ukraine.

Thus, the parameters that platforms use to take down posts are crucial, as they could have lasting impacts across regions and over time.

This responsibility becomes paramount when you consider that AI-powered chatbots like ChatGPT are also drawing from the flood of online posts, visuals, and opinions on the Israel-Palestine conflict.

British journalist Mona Chalabi realised this when she asked ChatGPT two simple questions on justice for Israelis and Palestinians. The answers thrown up by the chatbot were similar, with one major caveat.

"This is why I care about fair journalism, why I care about headlines that say Palestinians “died” and Israelis were “killed” - because we are documenting the present for future artificial intelligence, algorithms and government policies and calculations for how many bombs to buy and where to send them to," Chalabi said.

It's also why we should care about content moderation by big tech platforms.

(At The Quint, we are answerable only to our audience. Play an active role in shaping our journalism by becoming a member. Because the truth is worth it.)
