
How Reliable is Facebook's Transparency Report? Experts Weigh in

The Quint spoke to experts to assess how reliable Facebook's transparency report is.


In compliance with the new Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, Facebook said on 2 July that it had taken action against more than 30 million posts across 10 violation categories in India between 15 May and 15 June.

Facebook-owned photo-sharing platform Instagram also said it 'took action' against two million posts during this period.

As per the new IT Rules, Significant Social Media Intermediaries (SSMIs) must publish monthly compliance reports detailing the number of complaints received and the action taken on them.

In a bid to find out how reliable Facebook's transparency report is, The Quint spoke to three experts: Prasanth Sugathan, Legal Director at the Software Freedom Law Centre (SFLC); Yashaswini Basu, Privacy and Right to Information Fellow at the Internet Freedom Foundation (IFF); and Kazim Rizvi, Founder of The Dialogue, a privacy policy think tank.


What Content Was Removed?

According to the report, Facebook removed 3.11 lakh (311,000) posts for hate speech and about 1.8 million posts for content related to adult themes, nudity, and sexual activity between 15 May and 15 June.

Rizvi told The Quint that Facebook's transparency report is a measure of how well its machine learning technology can automatically identify violating content. Moreover, the report details only the takedowns 'proactively' initiated by the platforms.
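
To make the 'proactive' framing concrete, here is a minimal, hypothetical sketch of the arithmetic behind a proactive-detection figure: the share of actioned content a platform finds on its own before any user reports it. The counts below are invented for illustration and are not drawn from Facebook's report.

```python
# Hypothetical illustration of a "proactive rate" calculation: the share
# of actioned content a platform detected itself, before any user report.
# These counts are invented; they are not figures from Facebook's report.

proactively_detected = 950_000  # hypothetical: flagged by automated systems
user_reported = 50_000          # hypothetical: surfaced via user complaints

total_actioned = proactively_detected + user_reported
proactive_rate = proactively_detected / total_actioned

print(f"Total posts actioned: {total_actioned:,}")
print(f"Proactive rate: {proactive_rate:.1%}")  # prints "95.0%"
```

A report built only from the proactive side of this split, as Rizvi describes, says nothing about content that users flagged or that was never detected at all.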

Using AI to Monitor Hate Speech 'Problematic'

Facebook's community standards define the types of content it is likely to remove for potentially violating those standards. One of the categories therein is 'violent and criminal behaviour'.

Hate speech is purportedly governed as per Facebook's understanding of this category. "However, as can be gleaned from the various instances in which the Facebook Oversight Board has overturned such decisions, the categories often prove to be ambiguous and subjective. It also deploys AI-based content monitoring to detect hate speech," said Basu, a fellow at IFF.

"Hate speech is dependent on context, language, region, and the person – making it a very complex problem to solve. For example, a simple misspelling of words can circumvent AI deployed to identify and action hate speech," explains Rizvi.


Accuracy of the Report

It is worth noting that the report itself carries a disclaimer about accuracy, stating that the figures are Facebook's best estimates. The figures merely indicate the type and volume of content actioned under the listed categories.

Contentions regarding the actions initiated are not addressed in the report. Facebook's Transparency Centre does let one check the number of content takedown requests it received from a country in a given quarter, but here again it does not specify what the impugned content was or which government body flagged it.

Basu notes that while disclosing how many posts were taken down in total, and under which broad provisions, is a welcome step towards transparency, individual users whose posts are removed need to be told specifically why the takedown action was taken.

Facebook does send a notice to users before removing their posts, but the report does not link to the posts that were removed.

High Possibility of 'One-Sided' Actions

One-sided action, whether by the government or by the intermediary, is a cause for concern, all the experts believe. It can be effectively addressed only by seeking appropriate levels of transparency from all stakeholders.

"It is important to conduct independent audits of the findings that the platforms publish in their transparency reports to ensure utmost fairness. These efforts are crucial for tackling the concerns around biasedness and it is important to further encourage such practices to ensure utmost conformance with the principles of equality and non-discrimination during the application of content moderation norms"
Kazim Rizvi, Founding Director, The Dialogue

Hate Posts Continue to Be Seen on Facebook

While intermediaries claim they are trying hard to keep their platforms free of illegal or problematic content, a lot of content escapes the moderation systems and tools they put in place.

Sugathan of SFLC asserts that while it is understandable that one hundred percent accuracy in content moderation is a virtually impossible task for any intermediary, intermediaries need to put in a lot more work in this regard.

"While platforms are able to action harmful content in bulk without involving human reviewers using these technologies, we also need to be concerned that the same technology can lead to over-censorship if not responsibly deployed," adds Rizvi.

