Artificial intelligence (AI) technology continues to make its way into every industry, and law enforcement is no exception. Agencies across the country have begun to integrate predictive policing, facial recognition and other AI-driven technologies into their day-to-day work.

To date, the use of facial recognition software to identify Capitol rioters on Jan. 6, 2021, is perhaps the most notable use case in recent history. But the technology has had a number of other use cases, both good and bad.

For example, a recent story published by The Guardian suggests a man was wrongfully arrested as a result of a facial recognition match. Meanwhile, the growing use of body cameras has brought a number of horrific events to light. Many of the protests and riots tied to the Black Lives Matter movement in 2020, for instance, were sparked by released body camera footage.

Retail investors have poured more than $670,000 into a startup called Truleo on the popular startup investing platform StartEngine. The startup uses artificial intelligence to analyze body camera footage and produce “baseball card-like stats” that measure police professionalism.

A North Carolina State University report published in February outlines the potential for AI technology to either bridge or deepen the divide between police and the public. 

Study Results

While there’s no shortage of key takeaways from the study — which was based on 20 semi-structured interviews with law enforcement professionals in North Carolina — the overarching theme is simple: Law enforcement agencies should be involved in the creation of public policies regarding AI and law enforcement.

“Law enforcement agencies have a crucial role to play in implementing public policies related to AI technologies,” said Veljko Dubljević, co-author of the study and an associate professor of science, technology and society at North Carolina State University. “For example, officers will need to know how to proceed if they pull over a vehicle being driven autonomously for a traffic violation. For that matter, they will need to know how to pull over a vehicle being driven autonomously.”

On the plus side, many law enforcement officials believe that AI technology has the potential to improve public safety. But there is growing concern that deploying it in certain situations could do more harm than good when it comes to trust between police and the public.

Ronald Dempsey, the primary author of the study, touched on how AI technology is already changing law enforcement for the better.

“There are a number of AI-powered technologies that are already in use by law enforcement agencies that are designed to help them prevent and respond to crime. These range from facial recognition technologies to technologies designed to detect gunshots and notify relevant law enforcement agencies.”

With all that potential for good come risks and downsides, including the fact that criminals will have access to the same, if not better, technology.

“Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement,” the study concluded.

In the months and years to come, it will be critical for law enforcement agencies to properly train officers on what AI technologies can do, how they work and how to mitigate their risks. That training will help officers use the technology more effectively and make better-informed decisions about its limitations and ethical pitfalls.
