Monday, April 29, 2019

Can We Create Algorithms to Catch Criminals? Is It Ethical?

As technology becomes increasingly capable, new uses keep emerging for existing solutions. Recently, the New York Police Department (NYPD) announced a new tool designed to help officers review police reports for crime patterns that could indicate the same person was involved in a string of offenses.

The program, which has been named Patternizr, could potentially assist in a range of investigations, saving time and valuable resources. While those tasks are currently completed by analysts, the work would be significantly less cumbersome if algorithms could manage much of the process.

However, some fear that such technologies cross a line. Many argue that programs like Patternizr could be unethical or might end up biased. If you are wondering whether we can or should use algorithms to catch criminals, here are some points to consider.

Increased Research Capabilities

When analysts look for patterns, they are often limited to their individual precincts. The sheer volume of data means that finding other crimes related to the one at hand demands substantial time and attention.

When a tool like Patternizr is introduced, not only can an individual precinct's data be reviewed more quickly and accurately, but information from other precincts can easily be factored in as well. Since a criminal may not limit their activities to a single location, this widens the scope of a search without a corresponding increase in the analysts' workload.
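
To make the idea concrete, here is a minimal, hypothetical sketch of how cross-precinct similarity scoring might work. The field names, weights, and decay scales below are illustrative assumptions for this example only; they are not details of the actual Patternizr system.

```python
# A hypothetical sketch of pattern scoring across precincts.
# All fields, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from math import exp, hypot

@dataclass
class CrimeReport:
    report_id: str
    precinct: int
    offense: str                 # e.g. "burglary"
    x: float                     # projected map coordinates, in meters
    y: float
    occurred_at: datetime
    mo_tags: set = field(default_factory=set)  # modus operandi keywords

def similarity(seed: CrimeReport, other: CrimeReport) -> float:
    """Score how plausibly two reports belong to one pattern (0 to 1)."""
    if seed.offense != other.offense:
        return 0.0
    # Reports closer in space and time score higher.
    dist_km = hypot(seed.x - other.x, seed.y - other.y) / 1000.0
    days = abs((seed.occurred_at - other.occurred_at).days)
    spatial = exp(-dist_km / 5.0)    # assumed 5 km decay scale
    temporal = exp(-days / 30.0)     # assumed 30-day decay scale
    # Overlap in modus operandi tags (Jaccard similarity).
    union = seed.mo_tags | other.mo_tags
    mo = len(seed.mo_tags & other.mo_tags) / len(union) if union else 0.0
    return 0.4 * spatial + 0.3 * temporal + 0.3 * mo

def candidate_patterns(seed, reports, threshold=0.6):
    """Rank reports from every precinct, not just the seed's own."""
    matches = [(similarity(seed, r), r.report_id) for r in reports
               if r.report_id != seed.report_id]
    return sorted([m for m in matches if m[0] >= threshold], reverse=True)
```

Because the scoring function treats every report the same way regardless of where it was filed, widening the search from one precinct to all of them is simply a matter of passing in a larger list, which is exactly the workload advantage described above.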

The Potential for Bias

The NYPD asserts that Patternizr is not designed to consider race or gender when identifying potential suspects, and that testing has confirmed the tool shows no racial bias. Even so, some worry that tools like this could worsen existing problems with bias if they are used improperly. Critics fear such solutions could perpetuate inequities within the policing system, arguing that certain racial minorities, already over-policed today, may be unfairly targeted as a result.

Issues of bias have surfaced in a number of artificial intelligence (AI) solutions. Everything from the nature of the source data to the way the technology is programmed can introduce bias, even when no one intends it.

However, the NYPD is allowing independent researchers to audit the technology before it is actively used in the field. The goal is to ensure fairness and promote transparency, hopefully keeping bias out of the equation.
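
As an illustration of what one part of such an audit might involve, the sketch below computes a simple disparate impact ratio: how often the tool flags reports involving each demographic group relative to a reference group. The data, function names, and the choice of this particular metric are assumptions made for the example; they are not the auditors' actual methodology.

```python
# A hypothetical sketch of one check an independent audit might run.
# The records and the metric choice are illustrative assumptions.
from collections import defaultdict

def flag_rates(records):
    """records: iterable of (group_label, was_flagged) pairs."""
    counts = defaultdict(lambda: [0, 0])   # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def disparate_impact(records, reference_group):
    """Ratio of each group's flag rate to the reference group's rate.
    Ratios far from 1.0 suggest the tool merits closer scrutiny."""
    rates = flag_rates(records)
    base = rates[reference_group]
    return {g: rate / base for g, rate in rates.items()}

# Example with made-up numbers:
audit = [("A", True), ("A", False), ("A", False),
         ("B", True), ("B", True), ("B", False)]
print(disparate_impact(audit, reference_group="A"))
# {'A': 1.0, 'B': 2.0}  -> group B is flagged twice as often as group A
```

The appeal of a check like this is that it examines the tool's outputs rather than its internals, so auditors can apply it even when the underlying model is complex or proprietary.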

Human Oversight

At this time, Patternizr identifies possible patterns but by no means has the final say in the matter. All potential matches located by the system are referred to analysts, ensuring a human reviews each finding before any actual action is taken.

Patternizr is being treated as a tool, not an authority. As long as human oversight remains in the equation and analysts make unbiased decisions, the algorithm isn't actually catching criminals; people are.
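
The sketch below illustrates this kind of human-in-the-loop design: the algorithm can only submit suggestions to a queue, and nothing becomes actionable until an analyst confirms it. The class and method names here are hypothetical, not drawn from the NYPD's actual system.

```python
# A minimal, hypothetical sketch of a human-in-the-loop review queue.
from dataclasses import dataclass

@dataclass
class Suggestion:
    pattern_id: str
    score: float
    status: str = "pending"   # pending -> confirmed or rejected

class ReviewQueue:
    """The algorithm only ever *suggests*; an analyst must decide."""
    def __init__(self):
        self.items = []

    def submit(self, suggestion: Suggestion):
        # The model can add items, but no action is taken here.
        self.items.append(suggestion)

    def review(self, pattern_id: str, analyst_confirms: bool):
        # Only an analyst's decision changes a suggestion's status.
        for s in self.items:
            if s.pattern_id == pattern_id:
                s.status = "confirmed" if analyst_confirms else "rejected"

    def actionable(self):
        # Only analyst-confirmed patterns ever move forward.
        return [s for s in self.items if s.status == "confirmed"]
```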

Additionally, the use of tools like Patternizr doesn’t eliminate any basic rights. Any identified suspect would still have the same rights and protections as before, such as due process.

Ultimately, it is fair to say that algorithms can make certain forms of police work significantly, perhaps drastically, more efficient. Only time will tell whether these solutions will be used in an ethical and unbiased manner. However, numerous civil liberties organizations and advocates are watching, and they will likely make themselves heard if they feel anything is awry.

Elevate Your IT Career with the Recruiting Experts at The Armada Group

If you would like to learn more about emerging technologies, the team at The Armada Group can help. Contact us to speak with a member of our experienced staff today and see how our technical knowledge can benefit you.