
Twitter Policy to Remove Photos Posted Without Consent: Why Experts Are Wary

The 'vagueness' and several unanswered questions on implementation could make it just another policy, say experts.


IIT-Bombay alumnus Parag Agrawal, who took over as the CEO of Twitter, replacing co-founder Jack Dorsey, may have grabbed the headlines for the company through the week.

But amid the buzz over the management shuffle, a significant expansion of Twitter's privacy policy got buried: Twitter can now take action against media that contains no explicitly abusive content, provided it is posted without the consent of the person depicted.

While the policy could directly impact women and vulnerable members of society, who are often at the receiving end of targeted social media attacks, the 'vagueness' on implementation could make it 'just another policy.'

Jhalak M Kakkar, Executive Director, Centre for Communication Governance, National Law University Delhi, told The Quint that while it is a 'move in the right direction', several questions need to be answered before we can celebrate.

  • How will this consent be taken?

  • Will Twitter seek 'proof' of consent?

  • How will Twitter make a determination of what content to take down?

  • How will this be executed – by human moderators or will there be automated systems?

"Social media is often used to intimidate, harass, and share personal information of individuals – hence, it is good to see social media platforms like Twitter actively thinking about this issue, given that intimidation, harassment, involved. "
Jhalak M Kakkar to The Quint

'Implementation Is Key, But A Traditional Problem'

Twitter, in its expanded policy, has said that it will "always try to assess the context in which the content is shared" – and that only the person featured, or a representative, can report it.

"For instance, we would take into consideration whether the image is publicly available and/or is being covered by mainstream/traditional media (newspapers, TV channels, online news sites), or if a particular image and the accompanying tweet text adds value to the public discourse, is being shared in public interest, or is relevant to the community," the statement said.


This, cybersecurity expert Srinivas Kodali asserted, is vague, and it is the underlying problem with implementing such policies – not just by Twitter but by most social media platforms operating in India.

"The person who is taking the call could be someone who is in the US or Europe. Awareness, bias and language come into play. We do not know much about how fair the implementation is. We see everyone complaining about it – because it is not fairly implemented."

"This is a traditional problem. It is not a problem that is there with this rule – but anything that has to do with content removal – sensitive information, impersonation, COVID-19, misinformation – to name a few. This difficulty of enforcement is a larger issue – and that not lies with just Twitter," added Apar Gupta, Executive Director of the Internet Freedom Foundation.


What Are the Exceptions?

  • Media featuring public figures or individuals when media and accompanying tweet text are shared in the public interest or add value to the public discourse.

  • Instances where account holders may share images or videos of private individuals in an effort to help someone involved in a crisis situation, such as in the aftermath of a violent event, or as part of a newsworthy event, where the public interest value might outweigh the safety risks to a person.

Radhika Jhalani, Counsel, Software Freedom Law Centre, explained to The Quint that the policy should also define who a public figure is, and how.


"Twitter has said that it has to be reported by the person concerned or the representative of such a person. But it could so happen that the person does not come across the misused photo at all. Unless, someone mentions me on Twitter, there is no way for me to know that my photo has been shared. This should be resolved," explained Jhalani.

Twitter's privacy policy, which is up on its website, neither defines who a public figure is nor explains the rationale behind not including mainstream media in the policy. The Quint had reached out with specific questions to Twitter, which were not answered at the time of publication.

"Twitter should also clearly define who are the public figure. There are instances where the privacy of a public figure is violated in order to harass them and there is no way that Twitter will know unless the person reports the photo. The reason behind not removing something that mainstream media is running is also questionable. Since that could also threaten one's privacy."
Radhika Jhalani, Software Freedom Law Centre to The Quint

Policy May Be Misused, Warn Experts

The good thing about the policy is that vulnerable sections will now have the chance to question the sharing of their images without due approval.

But there is also another side.


In its statement, Twitter has said that it will look for context. But will that be based on what news organisations are spreading, asks Kodali.

"In a protest-like situation, suddenly the media available for law enforcement agencies to identify people and even when you may not have caused something bad you could get into trouble. Then there are scenarios of LGBTQIA protests – like Pride Walks – where photos have been taken, published or sold, without the person's permission. It should actually be context driven. In fairness, it could be a good policy, if they implement it in the right way."
Srinivas Kodali to The Quint

Kakkar explained that while it is inevitable that automated systems will be used to take such decisions, we must seek more accountability and transparency from social media platforms.

"It is inevitable on large social media platforms that automated tools will be deployed, however, these tools have error rates and we will have to see how this would play out. Automated systems do not always work as intended and there is a void on how these systems work. There is a need to move towards asking social media platforms to be more transparent around the deployment of such tools and their decision making processes," Kakkar said.

Without transparency, experts agree that this will be "just another vague policy."

