To have or to have not?: The great public safety technology debate
A range of technologies is being used by public safety agencies globally to protect our communities. CCTV, body-worn video cameras, and facial and license plate recognition (LPR) systems are among the modern solutions that emergency services operate every day. However, these technologies have also faced fierce opposition over concerns about privacy and fears of their misuse by authorities.
While some concerns are valid, objections to their use are sometimes uninformed or even irrational.
Amnesty International launched Ban the Scan earlier this year, a campaign calling for an end to facial recognition systems because “the technology exacerbates systemic racism and could disproportionately impact people of colour”.
There have also been open letters calling for companies including Amazon to stop selling facial recognition technology to law enforcement. While Amazon did implement a moratorium on sales of facial recognition to law enforcement agencies in the United States, the company remains adamant that the technology should continue to be used with the right oversight from legislative bodies to ensure its appropriate use.
A serious debate is brewing in the United States, the EU and other parts of the world about when it is appropriate for governments to use emerging technologies.
Concerns continue to be raised about whether the algorithms used by facial recognition systems are biased against people with darker skin. In 2018, the American Civil Liberties Union (ACLU) found that Amazon’s software, Rekognition, incorrectly matched 28 members of Congress with criminal mugshots, with many of the misidentified being people of colour.
Yet these challenges are not insurmountable for technology companies. According to the National Institute of Standards and Technology (NIST), the accuracy of facial recognition has improved considerably in recent years, largely due to advances in artificial intelligence and machine learning as the core technology.
Steps are also being taken to reduce errors made by AI and facial recognition systems by training them on more comprehensive and broadly representative data sets – thereby reducing the bias that can lead to false accusations.
Facial recognition technology is already helping to increase accuracy while reducing human error in policing. One of the best uses of AI by emergency services is actually to help public safety personnel to make faster and better decisions at critical times, not to replace people with machines.
Used appropriately, facial recognition systems can actually help to exonerate suspects, increase fairness in our judicial system and improve current policing practices. Consider the fact that mistaken eyewitness identifications contributed to 69 percent of more than 375 wrongful convictions in the United States. Although these convictions were subsequently overturned by DNA evidence, facial recognition and AI technology could have helped prevent them in the first place.
In a real-life example, Juan Catalan, the subject of the Netflix documentary Long Shot, was arrested in 2003 for a murder he didn’t commit. To prove his alibi, his lawyers pored over hundreds of hours of footage to try to find a needle in a haystack – proof that he was among 56,000 people at an LA Dodgers game when the crime was committed. Video evidence ultimately proved Catalan was not the murderer and got him off death row.
Had law enforcement or Catalan’s defence attorneys had access to video recognition software at that time, the trial could have been avoided altogether.
In the article “Should we fear facial recognition?” on Medium.com, the author’s conclusion was: “What’s needed is a rational and considered evaluation of the benefits and drawbacks of such technology, including how it assists law enforcement agencies in protecting law-abiding citizens from criminals.”
Other useful applications for AI in public safety include helping to quickly identify someone in a large crowd who needs medical assistance or is incapacitated, and safely identifying victims of crimes such as human trafficking or abduction.
A personal privacy experiment
Meanwhile, a Malaysian video blogger and political activist raised concerns last year that the trackers and permissions in MySejahtera, Malaysia’s federal COVID-19 contact tracing app, contravened civil and privacy rights. Yet by conducting my own review of the trackers and permissions on my Android phone, I found that permissions would only be granted to MySejahtera if I chose to allow them.
From responsible AI to Video Analytics as a Service
Civil rights and privacy concerns have been raised over the use of video analytics, artificial intelligence (AI) and Automatic Licence Plate Recognition (ALPR) to enhance public safety and crime prevention.
In a Forbes article from February 2020, Paul Steinberg, Senior Vice President and Chief Technology Officer at Motorola Solutions said that AI used for law enforcement purposes must be anchored in (and measured against) widely accepted culturally and ethically appropriate methods. It must also be fair, easy to understand and adhere to strict codes of privacy and security.
In short, having these controls in place means AI can be trusted by its users, the police and society as a whole.
“It’s important to understand that AI is fundamentally amoral. It is not influenced directly by human discriminatory tendencies, emotions, distractions or fatigue. But issues such as bias occur when the output of the AI process results in inconsistent treatment across a group. This is often because, for example, the data used to train the AI was itself biased. For instance, misidentifying faces for one demographic such as race, gender, age and physiology,” said Steinberg.
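Steinberg’s definition of bias – inconsistent treatment across a group – is something that can be measured directly. The sketch below is purely illustrative (the data structure and group names are invented, not any vendor’s actual evaluation API): it compares false-match rates between demographic groups, the kind of audit that reveals whether a model treats groups inconsistently.

```python
from collections import defaultdict

def false_match_rate_by_group(results):
    """Compute the false-match rate for each demographic group.

    `results` is a list of (group, was_false_match) pairs from an
    evaluation run -- a hypothetical format used here for illustration.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_false_match in results:
        totals[group] += 1
        if was_false_match:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation data: 100 trials per group, unequal error counts.
results = [("group_a", False)] * 98 + [("group_a", True)] * 2 \
        + [("group_b", False)] * 92 + [("group_b", True)] * 8

rates = false_match_rate_by_group(results)
# A large gap between the groups' rates signals the inconsistent
# treatment Steinberg describes, often traceable to skewed training data.
```

Audits like this are how the training-data fixes described earlier can be verified: if the gap between groups shrinks after retraining on a more representative data set, the intervention is working.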
Vehicle license plate reading is a highly specialised practice that requires purpose-built cameras and analytics.
Motorola Solutions acquired data and image analytics company VaaS International Holdings, Inc. (VaaS) in 2019. An acronym for ‘Video Analytics as a Service’, the company’s image capture and analytics platform, including fixed and mobile license plate reader cameras powered by machine learning and artificial intelligence, provides vehicle location data to public safety and commercial customers.
VaaS’ platform enables controllable, audited data-sharing across multiple law enforcement agencies.
With vehicle location information, authorities can shorten the time needed to resolve an incident and improve outcomes for public safety agencies, especially when location details are combined with other police records. Law enforcement agencies have successfully used technologies such as those provided by VaaS to quickly identify and apprehend dangerous suspects and to find missing persons.
ALPR in practice
Automatic license plate recognition (ALPR) provides a cost-effective means for law enforcement agencies to manage a variety of tasks – from detecting vehicles belonging to suspected criminals and monitoring the movements of convicts on probation or parole to enforcing outstanding fines.
In Sacramento County, USA, the two biggest local law enforcement agencies have more than tripled the number of vehicle detections they conduct with ALPR, reaching 37 million plate reads last year.
“We’ve got numerous cases where this is used as an investigative resource,” Sheriff’s Sgt. Kyle Hoertsch told the media outlet SN&R. “We use it in every aspect of the job.”
Cameras and the cloud
By uploading images of vehicle plates to the cloud or to on-premises servers, police officers can conduct more comprehensive searches and leverage advanced analytics to uncover valuable insights that enable them to operate more efficiently.
Integrating ALPR technology with other video sources, such as in-car and body-worn camera systems, creates a larger pool of visual evidence that law enforcement users can capture and manage to support their investigations.
A licence plate image captured by an officer in the field can help to determine whether a vehicle is stolen or associated with an outstanding warrant. Having that kind of information could help an officer know what the risks are before approaching a known felon at a routine traffic stop.
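At its core, that kind of check is a lookup of a captured plate against one or more hotlists. The sketch below is a minimal illustration under invented assumptions – the plate numbers, hotlist contents and record fields are hypothetical, and real systems query secured databases rather than an in-memory dictionary – but it shows the essential step of normalising a camera read before matching it.

```python
# Hypothetical hotlist for illustration; real deployments query
# stolen-vehicle and warrant databases, not a local dictionary.
HOTLIST = {
    "8ABC123": {"reason": "reported stolen"},
    "5XYZ789": {"reason": "outstanding warrant"},
}

def normalise(plate: str) -> str:
    """Strip spaces and hyphens and upper-case the read, so
    inconsistently formatted camera reads match hotlist keys."""
    return plate.replace(" ", "").replace("-", "").upper()

def check_plate(plate_read: str):
    """Return the hotlist record for a captured plate, or None."""
    return HOTLIST.get(normalise(plate_read))

# A camera read with stray spacing still resolves to a hotlist hit.
hit = check_plate("8abc 123")
```

The normalisation step matters in practice: plate reads arrive with inconsistent spacing, hyphens and casing, and a naive exact-string match would silently miss genuine hits.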
According to the International Association of Chiefs of Police, 97% of car thieves are charged with additional crimes, often while using stolen vehicles. Therefore, ALPR cameras used to identify stolen vehicles can also help to prevent additional crimes.
In law enforcement, ALPR, facial recognition and AI-based technologies are all helping agencies to work more safely and effectively while maintaining compliance with rules and regulations.
Debate over the ethical use of emerging technologies should continue – it’s an essential part of ensuring these technologies are developed with fairness, accuracy and privacy standards in place.
However, the bottom line is that no technology can ever be guaranteed to be 100 percent accurate. Despite that, these technologies should continue to be developed to help law enforcement authorities protect our communities. With access to the best possible technologies, our emergency services are better equipped to deliver their vital daily work.