The Seoul Metropolitan Government recently rolled out AI-enabled CCTV cameras to prevent suicides on bridges.
According to CTV News, the South Korean administration said it had been working on a CCTV-surveillance-and-response system since 2012.
It is worth noting that South Korea, with a population of 52 million, had the highest suicide rate among OECD countries in 2019, with more than 13,700 people ending their lives, a report by the Organisation for Economic Co-operation and Development (OECD) had revealed.
Nearly 500 suicide attempts are reported every year on the 27 bridges spanning the nearly 500-kilometre-long Han River, a press release by the administration stated.
The Quint examines the reliability of AI-enabled CCTV cameras, which use deep learning to identify the 'behavioural patterns' of people in crisis, and whether such a system can be implemented in India.
How Accurate Is This Tech?
The Seoul Institute of Technology said on Wednesday that the CCTV AI system automatically learns patterns of behaviour by analysing data from cameras and sensors.
Prashanth Guruswamy, co-founder of InstaSafe Technologies, a cybersecurity solutions company, explains how the CCTV surveillance system works.
"Behavioral analysis is done after using data collected from an array of sources, including CCTV footage, bridge sensors, previous suicide attempts, information from people who had previously attempted suicide, phone calls, and text messages. This data is then collated to determine a possible hazard. The AI can then forecast a hazardous situation and immediately alert rescue teams," he explained.
It should be noted that South Korea has a high density of public surveillance technology, with over 75,000 cameras in Seoul alone.
Guruswamy said, "The combined data from multiple sources can lead to an accurate assessment of risks and the fact that the surveillance systems consider various environmental factors and adjust readings as per these factors, further increases the accuracy. Since this is an AI-based system, the more quantum of data is collected, the higher the accuracy."
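The data-fusion approach Guruswamy describes can be sketched in rough terms. The snippet below is purely illustrative and is not Seoul's actual system: the signal names, weights, and threshold are all invented for the example, standing in for the camera, bridge-sensor, and incident-history data the article mentions.

```python
from dataclasses import dataclass


@dataclass
class Signals:
    # Hypothetical stand-ins for the data sources described above:
    # camera analysis, bridge sensors, and location history.
    loitering_minutes: float     # how long a person has lingered at one spot
    near_railing: bool           # bridge-sensor proximity reading
    past_incidents_at_spot: int  # prior attempts recorded at this location


def risk_score(s: Signals) -> float:
    """Weighted combination of signals; weights are made up for illustration."""
    score = 0.0
    score += min(s.loitering_minutes / 10.0, 1.0) * 0.5  # capped contribution
    score += 0.3 if s.near_railing else 0.0
    score += min(s.past_incidents_at_spot / 5.0, 1.0) * 0.2
    return score


def should_alert(s: Signals, threshold: float = 0.6) -> bool:
    """Flag the situation for rescue teams when combined risk crosses the threshold."""
    return risk_score(s) >= threshold
```

In a real system the weights would not be hand-set but learned from historical data, which is why Guruswamy notes that accuracy improves as more data is collected; for example, `should_alert(Signals(12, True, 3))` returns `True`, while `should_alert(Signals(2, False, 0))` does not.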
High Potential of Misuse
Any surveillance technology used to analyse behavioural patterns carries a high potential for misuse, and this project is no exception.
"Even though Seoul authorities claim that the video data is discarded within a month as per security regulations, the potential to use this data for purposes other than suicide probability determination, is both an endearing and a worrying prospect," Guruswamy added.
Interestingly, South Korea's privacy protection law, the Personal Information Protection Act (PIPA), lays down very stringent regulations on the collection and identification of such critical data.
The question of ethics and privacy is the most contentious one. Many citizens worry that the use of AI-enabled cameras could restrict individual freedom. "These systems are highly intrusive because they rely on the capturing, extraction, storage, or sharing of people’s biometric facial data – often in the absence of explicit consent or prior notice," read a statement from Privacy International.
Guruswamy noted that the gaping loophole is that government agencies can collect and use any personal data deemed necessary for the public good without obtaining consent. Defining 'public good', however, is inherently ambiguous, which can lead to multiple privacy intrusions.
"The need to store such critical data, and the need to address the concern that this data maybe hacked without proper security measures in place, is paramount. Internet of Things (IoT) devices, such as CCTV cameras, are relatively novel and the concept of IoT security is still in its infancy. As such, special care is necessary while securely transferring the data to secure servers," he asserted.
Countries That Use AI Cameras
Authorities in Russia use AI-based cameras to check for breaches of quarantine rules by potential COVID-19 carriers.
According to Analytics Insight, in Moscow alone, there are over 1,00,000 facial recognition-enabled cameras in operation.
China has the highest ratio of CCTV cameras to citizens in the world – 1 for every 12 people. Several media reports claim that by 2023, China will be the single biggest player in the global facial-recognition market.
San Francisco is the first city in North America to ban facial recognition technology. Several other cities, including Oakland and Northampton, have also voted to ban the technology. Following in their footsteps, France and Sweden recently banned the use of facial recognition in schools.
Is India Ready for Such Tech?
"Implementation of such technology not only requires the necessary infrastructure, the cost of which could be large, but it would also result in a significant discounting of existing rights to privacy in the name of the greater good and may well be termed by critics as the first step towards a surveillance state," Guruswamy said.
Sourajeet Majumder, a cybersecurity expert, told The Quint that if India uses this technology, it will only lead to mass surveillance. "In the absence of personal data privacy laws, it would be impossible to trace how the government is using our personal data. This will lead to mass surveillance, which violates the right to privacy and freedom of expression," he said.
"Implementation of such technology in a country like India needs to look at from various perspectives, especially what purpose would it solve. If we start extending the utility of such a technology from identification of suicidal patterns, to say, identification of robbery or mal intent patterns, it will result in a blatant invasion of privacy rights of individuals, as is happening in China"Prashanth Guruswamy ,Co-Founder & Global Head of Business Development, InstaSafe Technologies