Automation occupies an ever larger share of today’s productive system, and algorithms are at its core. Algorithms are logical sequences of instructions that enable machines to execute tasks autonomously. Their expansion is largely driven by software that can collect increasingly large amounts of data from different sources. They are embedded in users’ everyday tools such as Google Search, social media networks, the movie and music recommendations of Netflix and Spotify, personal assistants, video games, and surveillance and security systems.
Computers are far more efficient at repeating endless tasks without making mistakes. They connect information faster and follow protocols to log input and output data. They don’t get distracted, tired, or sick. They don’t gossip or miscommunicate with each other. The Post-Accident Review Meeting on the Chernobyl Accident (1986) found that poor team communication and sleep deprivation were among the major causes of the disaster.
In 2018, the Blue Brain Project revealed that brain structures are able to process information in up to 11 dimensions. Computers, on the other hand, can work with vastly more dimensions and uncover patterns that the human brain could never imagine. The concept of big data goes beyond the number of cases in a dataset; it also involves the number of features or variables available to describe a phenomenon.
Of all the advantages of computers, the most important one is their inability to be creative, at least so far. If I go to bed trusting that my phone is not planning revenge or plotting against me with other machines, it is because computers don’t have wills of their own. Computers don’t have an agenda. Human beings do. Public opinion has become more aware of the impact of automation on the global economy. According to a 2019 Pew Research study, 76% of Americans believe that workplace automation is more likely to increase inequality between rich and poor, and 48% believe it has mostly hurt workers. 85% favor limiting machines to jobs that are dangerous or unhealthy for humans.
Computers uncover patterns; they don’t create new ones. Machines use data to find patterns in past events, which means their predictions replicate the current reality. If we rely on algorithms, the world will continue as it is. In “Weapons of Math Destruction” (2016), Cathy O’Neil adds a new layer to this argument, exploring how automation propagates inequality when models are fed biased data. O’Neil introduces the concept of “weapons of math destruction,” referring to big data algorithms that perpetuate existing inequality. She highlights three main characteristics of WMDs: they are opaque, making it hard to understand their inner workings and question their outcomes; they are scalable, so their biases are magnified when applied to large populations; and they are difficult to contest, since they are often used by powerful institutions that hinder individuals from challenging their results. Extending one of her examples: if we based educational decision-making policies on college data from the early 1960s, we would not see the level of female enrollment in colleges that we do today. The models would have been trained primarily on successful men, thus perpetuating gender and racial biases.
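To see why training only on historical data freezes the past in place, consider a deliberately toy sketch in Python. The enrollment numbers below are invented purely to illustrate the mechanism, not drawn from any real dataset.

```python
from collections import Counter

# Invented "1960s-style" admissions records: the historical examples of
# admitted students are overwhelmingly men (illustrative numbers only).
historical_admits = ["man"] * 94 + ["woman"] * 6

# A naive model that "learns" base rates from past outcomes and treats
# them as the probability of future admission.
rates = Counter(historical_admits)
total = sum(rates.values())
learned_probability = {group: count / total for group, count in rates.items()}

print(learned_probability)  # {'man': 0.94, 'woman': 0.06}
# Any policy that ranks applicants by this learned probability simply
# reproduces the historical imbalance instead of correcting it.
```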
This article explores one of the examples she gives in her book: the recidivism algorithm. A case that illustrates it was published in May 2016 by the nonprofit ProPublica. The article “Machine Bias” denounced the impact of biased data used to predict the probability that a convicted person will commit new crimes, as expressed in the risk scores of the commercial software Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). The models used to predict recidivism were logistic regression and survival analysis, the same families of models used, for example, to predict the probability that a medical treatment will succeed among cancer patients.
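To make the two model families concrete, here is a minimal sketch, assuming pandas, scikit-learn, and lifelines are available. The column names, values, and covariates are invented for illustration only; this does not reproduce COMPAS’s proprietary model or ProPublica’s actual analysis.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Hypothetical defendant records: age, number of prior offenses, whether the
# person reoffended within the follow-up window, and days observed.
df = pd.DataFrame({
    "age":           [23, 35, 41, 19, 52, 44, 27, 29],
    "priors":        [ 3,  0,  1,  5,  2,  0,  4,  1],
    "reoffended":    [ 1,  0,  0,  1,  0,  1,  1,  0],
    "days_observed": [180, 730, 730, 90, 730, 400, 240, 730],
})

# Logistic regression: estimates the probability of reoffending at all
# within the follow-up window.
logit = LogisticRegression().fit(df[["age", "priors"]], df["reoffended"])
print(logit.predict_proba(df[["age", "priors"]])[:, 1])

# Survival analysis (Cox proportional hazards): models time until reoffense,
# treating people who did not reoffend as censored observations. The small
# penalizer only stabilizes the fit on this tiny toy dataset.
cox = CoxPHFitter(penalizer=0.1)
cox.fit(df, duration_col="days_observed", event_col="reoffended")
cox.print_summary()
```

Both models learn their coefficients from whatever past outcomes they are given, which is precisely why biased historical data produces biased risk predictions.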
“The question, however, is whether we’ve eliminated human bias or simply camouflaged it with technology. The new recidivism models are complicated and mathematical. But embedded within these models are a host of assumptions, some of them prejudicial. And while Walter Quijano’s words were transcribed for the record, which could later be read and challenged in court, the workings of a recidivism model are tucked away in algorithms, intelligible only to a tiny elite”.
To calculate risk scores, COMPAS analyzes data and variables related to substance abuse, family relationships, criminal history, financial problems, residential instability, and social adjustment. The scores are built from several sources, but mainly from a survey of 137 questions. Some of the questions include “How many of your friends have been arrested?”, “How often have you moved in the last twelve months?”, “In your neighborhood, have some of your friends and family been crime victims?”, “Were you ever suspended or expelled from school?”, “How often do you barely have enough money to get by?”, and the statement “I have never felt sad about things in my life.”
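Exactly how COMPAS combines these answers is proprietary. Purely to illustrate the general idea of turning survey answers into a 1–10 risk score, here is a hypothetical sketch in Python; the questions retained, the weights, and the scaling are all invented.

```python
# Invented answers to a handful of survey-style items (higher = more of it).
ANSWERS = {
    "friends_arrested": 4,       # "How many of your friends have been arrested?"
    "moves_last_year": 2,        # "How often have you moved in the last twelve months?"
    "suspended_or_expelled": 1,  # 1 = yes, 0 = no
    "barely_enough_money": 3,    # 0 (never) .. 4 (very often)
}

# Invented weights; a real scoring model would estimate these from data.
WEIGHTS = {
    "friends_arrested": 0.8,
    "moves_last_year": 0.5,
    "suspended_or_expelled": 1.5,
    "barely_enough_money": 0.6,
}

raw_score = sum(WEIGHTS[q] * ANSWERS[q] for q in WEIGHTS)

# Map the raw score onto a 1-10 scale, the format in which COMPAS risk
# scores are reported (the maximum here is arbitrary, for illustration).
MAX_RAW = 15.0
decile = min(10, max(1, round(raw_score / MAX_RAW * 10)))
print(raw_score, decile)  # 7.5 -> decile 5
```

Even in this toy form, the point is visible: every question about friends’ arrests, moving house, or money troubles shifts the score, so people in poorer, more policed neighborhoods score higher regardless of their own conduct.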
According to the Division of Criminal Justice Services of the State of New York (2012), the “[COMPAS-Probation Risk] Recidivism Scale worked effectively and achieved satisfactory predictive accuracy.” The Board of Parole currently uses the score for decision-making. Data compiled by the nonprofit Vera show that 40% of people were granted parole in New York in 2020. In 2014, Connecticut reached a parole grant rate of 67%, Massachusetts 63%, and Kentucky 52%.
Former U.S. Attorney General Eric Holder commented on the scores that “although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice […] may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”
Race, nationality, and skin color were often used in making such predictions until about the 1970s, when it became politically unacceptable, according to a survey of risk assessment tools by Columbia University law professor Bernard Harcourt. Even so, these tools still target underprivileged communities with little access to welfare. In 2019, the poverty rate was 18.8% for African Americans and 15.7% for people of Hispanic origin, compared with 7.3% for white people.
Assessments of social projects have shown a decrease in violence among vulnerable communities assisted by income-transfer programs in different parts of the world. In the US, the NGO Advance Peace ran an 18-month program in California targeting community members at the highest risk of perpetrating gun violence or of being victimized by it. The program includes trauma-informed therapy, employment, and training. The results show a 55% decrease in firearm violence after the program’s implementation in Richmond. In Stockton, gun homicides and assaults declined by 21%, saving an estimated $42.3M to $110M in city expenses over the two-year program.
In this sense, relying on algorithms will propagate the current system. Predictions reinforce a dual society in which the wealthy are privileged to receive personalized, humane, and regulated attention, while vulnerable groups are condemned to the results of “smart machines.” There is no transparency in those machines, and no effort from companies or governments to educate the public about how the decisions are made. In this way, a scoring system is created to evaluate the vulnerable. Social transformation will come instead from new policies directed at reducing inequality and promoting well-being.