Ultimately, last week’s terror attack in Lower Manhattan was halted by the bravery and skill of one of New York’s finest, Officer Ryan Nash. As authorities investigate new technologies to keep citizens safe, it is unclear whether intelligent machines could have prevented the attack or alerted first responders sooner, reducing the loss of life. Enhancing law enforcement with “robocops” opens a Pandora’s box of civil liberty issues, but that does not seem to be slowing a growing worldwide trend toward mechanized police forces.
Already there are more than 13,000 NYPD video cameras surveilling every street corner of Manhattan. This is on top of tens of thousands of privately owned CCTVs above every storefront, office lobby and apartment entrance (not counting the eight million cameras in people’s pockets). In fact, private security footage from retail stores led to the arrest last year of the terrorist responsible for a bombing in Chelsea. According to Assistant U.S. Attorney Shawn Crowley, “The Rahimi case relies on video from security cameras in storefronts and businesses all over New Jersey and New York. You’ll see video of the defendant in every stage of the attack.”
In the popular television drama “Person of Interest,” crimes are prevented by artificial intelligence software that automatically analyzes thousands of video feeds. Life imitates art: last year China reported a new security solution that uses an AI deep learning platform to automatically screen travelers in airports and train stations. Outsourcing surveillance to AI is part of a growing response to the overabundance of visual data, which will only increase as CCTV shipments are projected to exceed 150 million units by 2020. Last March the leading digital video provider, Dahua Technology, teamed up with chip-maker Nvidia to launch Deep Sense, a new AI-enabled server infrastructure. Utilizing Dahua’s proprietary face recognition technology, Deep Sense can autonomously monitor close to 200 simultaneous video streams for suspicious activity. Human analysts can quickly program the neural network with search parameters, such as gender, ethnicity, age and clothing, to track suspicious behavior and, regardless of image quality, locate suspects even before they commit crimes.
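To make the concept concrete, here is a minimal sketch of what attribute-based video screening might look like in Python. This is an illustrative toy, not Dahua’s actual Deep Sense API: the face detector is OpenCV’s stock Haar cascade, while classify_attributes, matches and screen_stream are hypothetical stand-ins for a trained neural network and its analyst-facing query interface.

```python
# Illustrative sketch of attribute-based video screening.
# NOT Dahua's Deep Sense API; the attribute model is a stub.
import cv2

# OpenCV ships a pretrained Haar-cascade face detector.
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_attributes(face_img):
    """Placeholder for a neural attribute classifier.

    A production system would predict attributes such as an age range
    or clothing description; here we return fixed dummy values so the
    sketch runs end to end.
    """
    return {"age_range": (20, 40), "clothing": "dark jacket"}

def matches(attrs, query):
    """Check whether detected attributes satisfy the analyst's query."""
    lo, hi = attrs["age_range"]
    age_ok = query["age"] is None or lo <= query["age"] <= hi
    clothing_ok = (query["clothing"] is None
                   or query["clothing"] in attrs["clothing"])
    return age_ok and clothing_ok

def screen_stream(source, query):
    """Scan one video stream, yielding frames containing a match."""
    cap = cv2.VideoCapture(source)  # file path, webcam index, or RTSP URL
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(gray, 1.1, 5):
            attrs = classify_attributes(frame[y:y + h, x:x + w])
            if matches(attrs, query):
                yield frame, (x, y, w, h), attrs
    cap.release()

if __name__ == "__main__":
    # Hypothetical analyst search parameters.
    query = {"age": 30, "clothing": "dark"}
    for frame, box, attrs in screen_stream(0, query):
        print("possible match at", box, attrs)
```

A production system would replace the stub classifier with a GPU-backed deep network and fan this loop out across hundreds of simultaneous feeds, which is precisely the workload server platforms like Deep Sense are built to handle.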
Last week Dahua announced a major enhancement to its platform with its strategic partner, Seagate Technology. Seagate’s SkyHawk AI hard drive is the first disk designed specifically for AI-enabled video. According to Sai Varanasi, Seagate’s Vice President, “The use of AI technology in surveillance is steadily increasing – both in the edge and backend installations such as retail fronts and large city traffic management.” The immediate use case for SkyHawk AI is traffic enforcement, but in the long term such next-generation technology could be used to monitor individuals on terror watchlists.
While video surveillance becomes more searchable through deep learning systems, mobile robots aim to take surveillance to the next level. Unlike fixed cameras, these bots offer laser scanning, thermal imaging and 360-degree video, and they can literally follow suspects. Currently, four companies are commercially deploying robots in the sector: Knightscope, Gamma2, SMP and Sharp.
Knightscope has had the most success in the US market, with fifty units operating in ten states. Co-founder Stacy Dean Stephens explains: “We’re about to see a rising of this type of technology. It’s very reasonable to believe that by the end of next year, we’d have a couple of hundred of these out.” The company’s robotic security guard stands five and a half feet tall, measures three feet wide and weighs over four hundred pounds. To fund its rollout, Knightscope has been raising money through public crowdfunding campaigns, complete with national advertisements in the Wall Street Journal and on CNN.
Others might not share Stephens’s enthusiasm, given that there have already been multiple accidents involving robots plowing into toddlers and pedestrians. Professor Michael Froomkin wrestles with the public policy around such mobile machines: “There are a lot of issues of how to stop it from hurting people, accidentally running over their toes, pushing over children and dogs, that kind of thing. If you have a robot with no distinguishing marks, who are you going to call? It’s a very good question and it’s already happened in real life.”
While the clumsiness of security robots will eventually work itself out, the big question is the added benefit to police professionals. Stephens, himself a former Dallas police officer, explains that the purpose of his fleet of robots is to deter crime, not to stop it. “Where we draw a very, very thick red line is the weaponization of the machines, even less than lethal,” explains Stephens. Instead, when criminals attempt to evade Knightscope’s robots, the machines set off a series of alarms to draw greater attention to the perpetrator. China has taken a more extreme approach, with taser-equipped robots patrolling the streets of Shenzhen and Hong Kong. Unveiled last April by the Chinese National Defense University, the AnBot is the first armed security robot operating in the world (even the US military has been hesitant to cross such an ethical threshold). As the Chinese newspaper The People’s Daily extols, “AnBot is able to patrol autonomously and protect against violence or unrest.”
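Stephens’s “thick red line” amounts to a capped escalation policy: no matter what the robot observes, its strongest possible response is summoning humans. A toy sketch of such a policy, my illustration rather than Knightscope’s software, with event counts and response names that are pure assumptions, might look like this:

```python
# Toy sketch of a non-lethal escalation policy: the robot's only
# responses are progressively louder alerts (not Knightscope's code).
from enum import Enum

class Response(Enum):
    SILENT_LOG = 1      # record the event for later review
    AUDIBLE_ALARM = 2   # on-board siren and strobe
    NOTIFY_GUARDS = 3   # page on-site human security
    CALL_POLICE = 4     # escalate to first responders

def escalate(evasion_events: int) -> Response:
    """Map the count of tamper/evasion events to an alert level.

    Note the ceiling: however many events occur, the response never
    exceeds summoning humans -- the "thick red line" against
    weaponization is baked into the policy itself.
    """
    if evasion_events == 0:
        return Response.SILENT_LOG
    if evasion_events == 1:
        return Response.AUDIBLE_ALARM
    if evasion_events == 2:
        return Response.NOTIFY_GUARDS
    return Response.CALL_POLICE

if __name__ == "__main__":
    for n in range(5):
        print(n, "events ->", escalate(n).name)
```

The contrast with AnBot is then a one-line design decision: adding a weaponized response above CALL_POLICE would cross the line Stephens describes.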
This morning, as I commuted to work on my Citibike along the same route as last week’s horrible terror attack, I was met with an obstacle course of concrete barriers protecting New Yorkers from future attacks. This low-tech solution is indicative of government’s ability to respond quickly to disasters, while preventing them invites lengthy political debate. The fear with security robots is that if a catastrophe happens within feet of a machine that could have stopped it had it been weaponized, the public outcry could lead more countries to adopt China’s stance.
This concern was first raised in June 2016, when a leading group of AI researchers sent an open letter to the world warning citizens about the deployment of “Lethal Autonomous Weapons Systems.” In their words: “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing…starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”