
Facebook Row: How To Put Onus On Platforms and Ensure Free Speech?

Concerns over online harm have given rise to questions over what the responsibilities of online platforms should be.


A few days ago, The Wall Street Journal published a report alleging that Facebook India’s public policy director had exhibited ‘favouritism’ towards the BJP on a number of occasions, including by personally opposing the application of Facebook’s hate speech rules to the party’s leaders. Similar allegations of bias have been levelled against other platforms as well. In February 2019, for instance, Twitter CEO Jack Dorsey was summoned by the Parliamentary Committee on Information Technology following concerns that the platform was discriminating against right-wing accounts.

While it is most important to protect our fundamental freedom of speech and expression against any onslaught from the State and even the judiciary, these recent incidents point to the urgent need for a separate conversation on the merits and demerits of private regulation of online content by large platforms like Facebook and Twitter.

This should include a discussion on regulatory constraints, if any, that should apply to such activities.

What Should Online Platforms Like Facebook Be Held Responsible For?

At present, online intermediaries such as WhatsApp and Twitter are exempt from liability arising out of user-generated content, provided they exercise due diligence and take down content upon receiving ‘actual knowledge’ of its illegality. This framework, also known as the ‘safe harbour’ regime, has been an important factor behind the growth of the digital economy because it ensures that platforms do not have to invest in ‘pre-screening’ user content. Yet concerns around a variety of online harms, such as fake news, hate speech, online harassment and obscenity, along with the difficulties faced by law enforcement agencies in investigating and punishing cross-jurisdictional cyber crimes, have given rise to complex policy questions about what the responsibilities of online platforms should be in an age where public discourse and the news cycle are primarily driven by social media.

The inability of platforms to impartially implement their own content moderation policies on many occasions has only strengthened the calls for revisiting the current regulatory framework governing platforms.
(Image: Facebook CEO Mark Zuckerberg testifying before the European Parliament in 2018. Photo: AP)

As a result, the Ministry of Electronics and Information Technology released the draft Information Technology [Intermediaries Guidelines (Amendment)] Rules, 2018 (‘Draft Rules’) for public comments in December 2018. The Draft Rules seek to impose a range of obligations on intermediaries, including requirements for larger intermediaries to incorporate in India and establish a physical office here, put in place mechanisms to trace the originators of information, proactively identify and remove illegal content, and provide information and technical assistance to government agencies.


Facebook Controversy: Relying Only On Voluntary Efforts By Platforms Isn’t The Best Idea

While many acknowledge the need to address online harms, the Draft Rules have been widely criticised by civil society and industry, particularly on the grounds that they could lead to overzealous proactive identification and removal of content, giving rise to unprecedented online censorship; compromise the privacy of users; and place disproportionate costs on businesses by imposing onerous obligations on all intermediaries irrespective of their size and scale. Amidst the rising heat around this issue, Facebook recently announced the membership of its controversial Oversight Board, which will be responsible for deciding some of its most challenging content moderation cases.

The success or failure of Facebook’s new experiment remains to be seen.

However, the experience of relying solely on voluntary efforts by platforms has not been very encouraging, and the recent Facebook row demonstrates that clearly.

Therefore, any proposed regulation in this space has to carefully balance the legitimate concern of not restricting freedom of speech and expression online against the need to hold platforms accountable for their private content moderation practices.

One possible way of achieving this is to design targeted procedural obligations for intermediaries based on their size, nature and the risks they pose.

Problems With A ‘One-Size-Fits-All’ Approach

The current intermediary liability framework adopts a fairly broad definition of the term ‘intermediary’ and applies to all kinds of intermediaries, irrespective of the nature of the intermediary or the service it provides. That is, it imposes the same duties on a large social media or messaging platform like Twitter or WhatsApp as on a local cyber cafe. However, regulatory attention has focused on certain types of platforms in the context of specific online harms. For example, our research indicates that most online harm cases before courts revolve around large user-facing social media, e-commerce and messaging platforms.

There may, therefore, be a need to re-evaluate this legal framework to permit the imposition of specific and targeted obligations, rather than adopting a ‘one-size-fits-all’ approach.

In addition, any attempt to impose new responsibilities on intermediaries should be based on the type of activities being performed by them and the risks and challenges emerging from those activities.


Ensuring Transparency & Accountability In Content Moderation Practices

Under the Information Technology Act, 2000, intermediaries are simply required to publish terms and conditions informing users not to engage in illegal and harmful activities, and to notify users that violating such terms may result in withdrawal of services. However, most major intermediaries publish detailed content moderation policies that outline the types of content users are prohibited from posting on their platforms and the consequences of doing so, and provide some mechanism for users to report content that violates these policies.

In a recent research paper, we highlight that these content moderation policies are often ambiguously framed.

This allows platforms a lot of discretion in regulating user content.

In addition, the standards applied for blocking users or content are not always clearly mentioned even though most platforms allow an affected party to approach the platform for redressal. This lack of transparency can lead to censorship of legitimate speech or inconsistent application of content moderation policies by intermediaries.

Therefore, any proposed regulation around this issue should focus on laying down procedural requirements that ensure transparency and accountability in the implementation of intermediaries’ voluntary policies.

For example, content moderation policies must provide a clear path for raising complaints, ensure appropriate timelines within which user grievances are responded to, provide reasoned decisions to users, and outline an accessible mechanism for appealing wrongful takedowns. Further, intermediaries must be required to publish detailed reports at periodic intervals disclosing the number and nature of accounts and posts taken down, to ensure greater transparency around their content moderation practices. These measures will ensure that important democratic rights such as free speech are protected on platforms that are playing an increasingly large role in our personal and political lives.

(Faiza Rahman is a Research Fellow in the technology policy team at the National Institute of Public Finance and Policy, New Delhi. Faiza has also recently co-authored (along with Varun Sen Bahl and Rishab Bailey) a paper titled: Internet intermediaries and online harms: Regulatory Responses in India. She tweets @rahmanfaiza6. This is an opinion piece and the views expressed are the author’s own. The Quint neither endorses nor is responsible for them.)
