(Editor's note: The story was first published on 5 December 2020. It is being republished from The Quint's archives in the light of Twitter marking a tweet about an alleged Congress ‘toolkit’ by BJP spokesperson Sambit Patra as ‘manipulated’.)
Is labelling a tweet ‘misleading’ or ‘manipulated’ an effective way to combat misinformation?
In a first for India, Twitter labelled as ‘manipulated media’ a tweet by BJP IT Cell head Amit Malviya, who is notorious for spreading disinformation and had shared a ‘propaganda vs reality’ video from the ongoing farmers’ protest days earlier.
This meant that when a user interacted with the tweet, they were shown a prompt indicating that the post contained information that wasn’t entirely accurate, and pointing them towards verified information.
The move comes amid mounting pressure on tech giants like Facebook and Twitter to take measures to combat mis/disinformation on their platforms.
Twitter has been labelling content on its platform since March 2020, but this was the first time it took such action against an Indian political figure. During the counting of votes in the 2020 presidential election, Twitter stepped up its efforts by labelling the tweets of outgoing US President Donald Trump.
But are these strategies sound enough to tackle the larger issue of misinformation on the platform?
WHAT EXPERTS HAVE TO SAY
Speaking to The Quint, Prateek Waghre, a policy research analyst who actively tracks the disinformation ecosystem, notes that it is difficult to gauge the effectiveness of the labels in the short term, but that it is important to understand the basis for such an action.
“If they do take some action, it is important to understand on what basis are they taking action because if they are responding to public pressure then it is not based on any value system and principle, and it will seem like arbitrary action.”Prateek Waghre, Policy Research Analyst
Malviya’s three-second clip, labelled by Twitter, shows a policeman swinging a baton but not touching a protesting farmer. Malviya used the clip to build the argument that the police “didn’t even touch the farmer”.
The Quint reached out to Twitter for a comment on how the tweet was labelled, but the platform cited its “Synthetic and Manipulated Media policy” and did not offer any comment specific to how and why the label was applied.
Waghre suggests that this ambiguity leaves people guessing, creating room for “conspiracy theories” to be floated, including theories about companies interfering in the election process.
SO, IS THIS THE BEST WAY TO DEAL WITH MISINFORMATION?
Srinivas Kodali, an independent researcher working on data, governance and the internet, says that this is a welcome move, but that there is a lot more the companies need to do to tackle the issue at hand.
“I won’t pat Twitter on the back for the manipulated tags, especially when it was Twitter which promoted these individuals who are known to post fake news and hate speech by giving them verified handles.”Srinivas Kodali
Referring to Twitter’s action in the US against Trump, Kodali said that even though Trump’s tweets were labelled, the labels did not stop them from being shared.
Alluding to the penetration of Facebook and WhatsApp in India and the amount of hate speech and misinformation on the platforms, Kodali added that the measures taken by Facebook are also not enough “given the rate at which the manipulated content is being generated by IT cell and the rate at which it is being classified as manipulated media”.
Platforms like WhatsApp, Facebook and Twitter have long been criticised for not taking adequate action against content on their platforms, which has led not just to polarisation, hate speech and lynchings, but even to genocide in Myanmar.
Users on Facebook have been exposed to these labels and fact-checks through the platform’s third-party fact-checking (3PFC) programme, of which The Quint’s WebQoof is a part. However, a study by researchers at Harvard, Yale, MIT, and the University of Regina suggested that the warning labels could actually backfire by making unlabelled content seem accurate. They called it the “implied truth effect”.
David Rand, an associate professor of management science and cognitive science at the MIT Sloan School of Management and one of the co-authors of the study, was quoted by Quartz as saying that while most people working in this area agree that a warning label makes people believe and share the content less, the limitation of this approach is that most content never gets labelled.
The researcher further notes that it is much easier to produce false content than to debunk it; thus, the benefit of labelling a small subset of misinformation could be outweighed by the legitimacy it lends to the unlabelled content.
THEN, WHAT NEXT?
Kodali suggests that while the onus is on these companies, it is equally important for institutions like Parliament, the courts and even governments to act.
“While saying that platforms do have a large role to play, but what we are witnessing now is that if Twitter starts following the rule of law, suddenly you might see a section of the right-wing say that there is an American company which is trying to censor us... let’s move on to our own platform, like you have the rise of Tooter. Now while people might change the platform, the problem might remain. It’s still in the public sphere. The only way you can regulate this is when Parliament, governments and even courts act.”Srinivas Kodali
Twitter had shared data on how effective these labels are in a blog post on 12 November. As per the platform, quote tweets of the labelled tweets decreased by 29 percent. But Waghre points out that in the absence of regularly published data, such statements have to be taken at face value.