In a recent policy change, Twitter has tried to make the platform safer for its users, but the move has raised more doubts than assurances.
Under the updated policy, the social media giant will now take action against users who post photos or videos of private individuals without their permission. The policy also raises questions about the role of human content assessors at Twitter, who are seemingly now the final authority on the intent behind every post on the platform.
In a statement, Twitter said that the misuse of such information can have a “disproportionate effect on women, activists, dissenters, and members of minority communities”.
While a section of the internet has welcomed the policy, others have raised doubts about whether it would be practical to enforce.
What are the foreseeable hurdles in implementing such a policy? And with more than 211 million daily active Twitter users, how do you get a policy like this right at scale?
To discuss this, we spoke to Apar Gupta, Executive Director of the Internet Freedom Foundation; Srinivas Kodali, an independent researcher with the Free Software Movement of India; and Radhika Jhalani, a Counsel at the Software Freedom Law Centre.