
Artificial Intelligence Is Deciding How Long People Should Be Jailed For

It is no secret that the U.S. imprisons more of its citizens than any other country in the world, including nations under authoritarian rule or those known to violate human rights. Our judicial system of judges and juries can render biased verdicts because of an array of influencing factors, from a judge's mood on a given day to personal life experiences that shape prejudice. Such circumstances may unfairly determine a person's sentencing. To err is human, but those errors have proven costly to defendants whose lives hinge on the dispositions of juries and judges. So how do we restructure our system to rely less on human judgment and more on the particulars of each case? One proposed answer is to have artificial intelligence arbitrate the sentencing of the defendant.

Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, is one of the most widely used computer programs for risk assessment. The algorithm, distributed by Northpointe, Inc., uses the data fed to it to predict whether a criminal defendant is likely to re-offend. The program then outputs a number on a scale of 1 to 10 as a “risk score.” Although these risk scores were never intended to be the sole decider of a defendant’s fate, they have come to dominate the decision-making process, carrying a weight they were never designed to bear. A judgment that is already sensitive and vulnerable has been reduced to a digital questionnaire scored by artificial intelligence.
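The exact formula is a trade secret, but the basic shape described here can be sketched in a few lines. The Python below is a hypothetical stand-in, with made-up questions and weights, showing only the input-to-output structure: questionnaire answers go in, a single 1-to-10 score comes out.

```python
# Illustrative sketch only: COMPAS itself is proprietary, so the questions,
# weights, and formula below are hypothetical. The point is the shape of the
# tool described above: questionnaire answers in, a 1-10 "risk score" out.

def risk_score(answers: dict[str, float]) -> int:
    """Combine questionnaire answers into a 1-10 risk score (hypothetical weights)."""
    weights = {
        "prior_arrests": 0.8,     # count of prior arrests
        "age_under_25": 2.0,      # 1 if the defendant is under 25, else 0
        "failed_to_appear": 1.5,  # count of prior failures to appear in court
    }
    raw = sum(weights[question] * answers.get(question, 0) for question in weights)
    return max(1, min(10, round(raw)))  # clamp onto the 1-10 scale

# A defendant with two prior arrests, under 25, one failure to appear:
print(risk_score({"prior_arrests": 2, "age_under_25": 1, "failed_to_appear": 1}))  # 5
```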

The algorithm’s inability to emote, sympathize, or empathize has left this practice riddled with complications. One of the most glaring problems is that the way the AI processes data allows it to imitate the biases of the people who design it and the data it is given.

A study found that COMPAS disproportionately labeled Black defendants as likely to re-offend compared with white defendants.

It also meant that white defendants were more likely to receive lower scores, because the program processes cold, hard numbers and uses them to make premature assumptions about each defendant, thereby creating a bias.

If the program is not maintained carefully enough to prevent these premature assumptions, the bigotry it learns becomes further ingrained. The program is not merely mimicking our own human bigotry; it is exaggerating it.
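One way to picture that exaggeration is as a feedback loop: a model trained on skewed arrest data learns the skew, its scores direct more scrutiny toward the higher-scored group, and the next round of data is even more skewed. The toy loop below uses entirely made-up numbers to sketch that dynamic; it is not COMPAS code.

```python
# Hypothetical feedback-loop sketch (not COMPAS code): if historical data
# over-represents arrests in one group, a model trained on those labels
# learns the skew, and using its scores to target enforcement makes the
# next round of data even more skewed.

arrest_rate = {"group_a": 0.30, "group_b": 0.15}  # made-up, already skewed data

for generation in range(3):
    # The "model" here simply learns the observed rates (a stand-in for training).
    learned = dict(arrest_rate)
    # Higher scores draw more scrutiny to that group, surfacing more arrests there,
    # so the gap widens in the next dataset.
    flagged = max(learned, key=learned.get)
    arrest_rate[flagged] = min(1.0, arrest_rate[flagged] * 1.2)
    print(generation, {group: round(rate, 2) for group, rate in arrest_rate.items()})
```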

While a human jury may be swayed by its state of mind on a given day as it weighs a defendant, an AI still cannot replicate our ability to sense the emotions hidden beneath the surface of a defendant’s tone and actions. We, as people, are not summed up by lists of numbers and facts, but by raw emotion.

The AI can also mistakenly weigh some traits in a defendant as less serious than others. It may, for example, assign a serial sexual offender who holds a job a lower risk score than someone prosecuted for petty theft who is homeless. That, combined with the program’s lack of emotion, shows the danger posed to our judicial system if we continue to place so much weight on a risk-assessment program’s decisions.
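To make that scenario concrete, the toy function below uses hypothetical weights (not Northpointe’s) that deliberately count employment and housing more heavily than the seriousness of the offense, which is how an employed repeat offender can come out looking “lower risk” than a homeless petty thief.

```python
# Hypothetical weights for illustration only; not the real tool's formula.
# Offense severity is deliberately underweighted relative to employment and
# housing to reproduce the mis-ranking described above.

def toy_score(offense_severity: int, unemployed: bool, unstable_housing: bool) -> int:
    raw = 0.5 * offense_severity + 3 * unemployed + 3 * unstable_housing
    return max(1, min(10, round(raw)))  # clamp onto a 1-10 scale

# Serial offender (high severity) who is employed and stably housed:
print(toy_score(offense_severity=9, unemployed=False, unstable_housing=False))  # 4
# Petty theft (low severity) by someone unemployed and homeless:
print(toy_score(offense_severity=2, unemployed=True, unstable_housing=True))    # 7
```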
