Tech Giants Join Hands Post NZ Terror Attack, FB Outlines Efforts

Facebook has shared digital fingerprints of more than 800 “visually-distinct” videos.

People perform a special blessing ceremony on the site of 15 March’s shooting outside the Linwood mosque in Christchurch. Social media giants have been working in close contact to deal with the range of terrorists and violent extremists operating online.

Facebook, which has removed 1.5 million videos of the deadly Christchurch mosque terror attack in New Zealand that left 50 people dead, has now shared digital fingerprints of more than 800 “visually-distinct” videos related to the attack through the industry’s collective database, along with URLs and context on its enforcement approaches.

The social media giant has been working closely with other tech giants like Google, Twitter and Microsoft through the Global Internet Forum to Counter Terrorism (GIFCT) to coordinate an industry-wide response to the range of terrorists and violent extremists operating online.

In a blog post, Facebook laid out its efforts to respond to the attack and to help the New Zealand Police with their investigation. It said the attacker's video was removed within minutes of the police reaching out to the company.

Facebook Shares Details of Efforts

Following are the details Facebook shared about its response and its assistance with the investigation:

  • The video was viewed fewer than 200 times during the live broadcast, and no one reported it while it was live. It was viewed about 4,000 times on Facebook before being removed.
  • The first user report came 29 minutes after the video started and 12 minutes after the live broadcast ended.
  • Before Facebook was alerted, someone on 8chan had posted a link to a copy of the video on a file-sharing site.
  • Facebook categorised the shooting as a terror attack, meaning that any praise, support or representation of the event violates its Community Standards.
  • The attacker's personal accounts were removed from both Facebook and Instagram, and the company continues to remove impostor accounts as they appear.
  • Facebook hashed the video so that visually similar copies are detected and removed automatically from both Facebook and Instagram (a sketch of how such matching works follows this list).
  • Where the visuals were hard to match, it used audio technology as an additional detection layer.
  • It removed 1.5 million videos of the attack globally in the first 24 hours.
  • It shared digital fingerprints of more than 800 “visually-distinct” videos related to the attack via the industry’s collective database.
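Facebook has not described its matching system in detail, but content matching of this kind generally relies on perceptual hashing: reducing each video frame to a compact fingerprint that changes little under re-encoding, resizing or small edits, so near-duplicate copies can be flagged without a byte-for-byte comparison. The short Python sketch below illustrates the idea with a simple average hash; the file names and the match threshold are illustrative assumptions, not part of Facebook's or GIFCT's actual systems.

```python
# A minimal sketch of perceptual ("visual") hashing, the general technique
# behind matching visually similar copies of a frame. This is an illustrative
# average-hash implementation, not Facebook's actual matching system.
from PIL import Image

def average_hash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit average hash of an image frame."""
    # Shrink and convert to greyscale so small edits (re-encoding, cropping,
    # watermarks) change only a few bits of the resulting hash.
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each pixel contributes one bit: 1 if brighter than average, else 0.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(h1 ^ h2).count("1")

if __name__ == "__main__":
    # File names below are placeholders for illustration only.
    known = average_hash("known_violating_frame.png")
    candidate = average_hash("uploaded_frame.png")
    # Frames whose hashes differ by only a few bits are treated as visual
    # matches; the threshold of 5 bits here is an assumption.
    if hamming_distance(known, candidate) <= 5:
        print("Visually similar – flag for removal or review")
```

Fingerprints of this general kind are what platforms can exchange through a shared database such as GIFCT's, which is broadly how the “digital fingerprints” mentioned above allow a copy detected on one service to be blocked on another.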

Global Pressure on Social Media Platforms

Facebook said that it will continue to work on this and provide further updates. This comes after the companies were called out for allowing the live stream of the terror attack to spread on their platforms.

Earlier, telecom companies in New Zealand had written to Twitter, Facebook and Google demanding an urgent solution to the problem of the video’s circulation.

"We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand government level on an enduring solution to this issue," read the letter quoted by news portal Stuff NZ.

Facebook, however, has been the most transparent about what it has done to address the problem.

(With inputs from Stuff New Zealand)

