US Elections: Tech Giants Are Trying to Avoid a Repeat of 2016

The world is watching what these tech giants are doing to avoid a repeat of the 2016 disinformation disaster.

Himanshi Dahiya
Image used for representational purposes.
(Photo: Arnica Kala/The Quint)


In an attempt to avoid a repeat of the 2016 election fiasco, tech giants including Facebook and Twitter have introduced a host of measures to curb the spread of fake news and disinformation on their platforms ahead of the 3 November polls in the United States. These steps range from encouraging voter participation to rooting out false information, banning political advertisements and introducing new labelling systems.

We spoke to experts covering cyberspace and the disinformation ecosystem, who suggested that while the efforts by these tech companies to create friction between the sources of misinformation and users, and thereby slow the spread of fake news, should be appreciated, there are legitimate concerns that these efforts are “potentially too little and certainly too late”.

In the subsequent sections of this report, we’ll take a closer look at the measures being implemented by Facebook and Twitter, in particular, to deal with political disinformation, as well as experts’ evaluation of these measures.

But First, A Quick Recap

Popular social media platforms, especially Facebook, came under the scanner after the 2016 Presidential Elections, following charges that fake news and misinformation on these platforms influenced the outcome of the election, which saw Donald Trump becoming the President of the United States.

This put mounting pressure on Facebook and others to tackle the problem of polarisation, hate speech and disinformation. In December 2016, Facebook for the first time toyed with the idea of hiring third-party fact-checkers to combat this menace. Since then, the platform has deployed several methods for stronger detection and verification of misinformation. However, the tech platforms continue to face the heat, with critics demanding stricter policies on content moderation and increased transparency.

So, What’s Different in 2020?

In 2018, cybersecurity experts and intelligence officials in the United States again raised an alarm over Russian disinformation activity aimed at the 2020 Presidential elections. Hence, Facebook and Twitter were under immense pressure to take proactive measures in order to avoid a repeat of 2016.

Here are some measures which have been employed by these tech companies to combat the menace of election-related disinformation this time around:


In August 2020, Facebook unveiled a campaign to encourage voter participation. As a part of this initiative, the platform said it would take down posts promoting voter suppression or content which encourages or intimidates people not to vote. In fact, Facebook took down a post by Donald Trump Jr calling for an “army” of volunteers to protect the polling stations.

In addition to this, Facebook has also decided to place labels on state-controlled media groups and has restricted them from posting advertisements targeting American voters. It will also be labelling speeches by politicians which might contain misleading information but cannot be taken down because of their news value.

Recently, Facebook also launched a crackdown on accounts linked with “QAnon”, a conspiracy theory which proposes, without evidence, that President Trump is secretly working against a global child sex-trafficking ring.


In its efforts to counter political disinformation ahead of the elections, Twitter introduced a new labelling system in May 2020. This allowed the platform to label misleading tweets, as a result of which the social media giant flagged multiple tweets by President Trump, including those containing claims about mail-in voting.

Twitter also said it would place notices at the top of user feeds warning that there may be delays in full election results and that people might see ‘misleading information’ about voting by mail.

Users will also no longer be able to reply to or retweet posts by United States politicians that carry a misleading information label.

Further, unlike Facebook, which has banned political advertisements only in the run-up to the elections, Twitter banned all political ads worldwide in 2019.


In what can be considered a litmus test of Facebook and Twitter’s efforts to counter election disinformation, both platforms moved quickly to limit the spread of an unverified political story about Joe Biden’s son Hunter Biden, published by the New York Post.

In an unprecedented move, both platforms acted against the story published by a mainstream news outlet, which cited unverified emails from Democratic nominee Joe Biden’s son that were reportedly discovered by President Trump’s allies. While Facebook invoked its misinformation policy and showed the post to fewer people till it was fact-checked, Twitter blocked its users from tweeting out the link to the story and from sending it in private messages.

Although the tech companies acted quickly in this situation, their response was met with accusations of censorship.

What Are The Experts Saying?

The Quint reached out to Prateek Waghre, a policy research analyst who actively tracks the disinformation ecosystem in India and across the world, and Sai Krishna Kothapalli, a cybersecurity expert and CEO at Hackrew Infosec, who helped us understand the implications and impact of the aforementioned measures adopted by Facebook and Twitter.

While both Waghre and Kothapalli appreciated the intent of the tech companies to disrupt the flow of misinformation, they strongly felt that these might not be enough to entirely root out the problem.

“What Facebook and Twitter are essentially trying to do is to create friction in the sharing process and reduce the flow of misinformation. There’s no issue with them trying these measures. But there is very real concern that this is potentially too little and certainly too late.”
Prateek Waghre, Policy Research Analyst

Waghre further added that these steps “might not be enough to prevent motivated bad actors from spreading misinformation”.

Sai Krishna Kothapalli, on the other hand, believes that these steps will certainly help maintain integrity in the election process. However, he too doubts how far-reaching an impact these measures will have on curbing the problem of disinformation.

“While there are still ways in which these platforms can be misused, these measures will prompt people to educate themselves before mimicking the views of others, and are a good step towards creating more awareness among the voters.”
Sai Krishna Kothapalli, CEO at Hackrew Infosec

Speaking about the move to ban political advertisements, Prateek Waghre pointed out that while banning political ads can help limit disinformation, it is certainly not a cure. “At this point, everybody is just guessing because there isn’t enough data to suggest what might work and what might not. Banning these ads will definitely create some friction, but there is a possibility that their space will eventually be filled by other types of motivated posts,” he said.

The role of disinformation and propaganda amplified by major tech platforms in the 2016 US presidential elections is indisputable. The entire world, glued as it is to the 2020 elections in the United States, is also watching what these social media giants are doing to avoid a repeat of the 2016 disinformation disaster. While it looks like they’re trying hard, Facebook and Twitter definitely don’t have it all figured out, yet.

(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9643651818, or e-mail it to us at and we'll fact-check it for you. You can also read all our fact-checked stories here.)

(At The Quint, we are answerable only to our audience. Play an active role in shaping our journalism by becoming a member. Because the truth is worth it.)
