Themes
While technology can seem like a way of eliminating prejudice from a variety of domains, it often encodes human biases and prejudice. One way it does this is by applying a single set of rules to a large group of diverse people, effectively stripping away consideration of their individual needs and circumstances. When many people are forced to conform to one model, outliers and complex situations get lost in the shuffle and suffer the consequences. It may seem fair to apply the same model or equation to everyone, but the lack of consideration for the unique situations people face makes the results anything but balanced.
Beyond scale, machines are also biased in that they can only see what their creators have programmed them to see. In the case of Weapons of Math Destruction, the algorithms that govern so much of how our society functions, the creators are often a select group of wealthy individuals with privileged backgrounds and top-tier educations who work in the financial or Big Data sectors. While their intention isn’t necessarily to create bias, they do seek to yield profits. When people in this elite class are the ones making the rules and applying them to everyone, the machines can’t help but inherit the bias their creators harbor, knowingly or not. After all, “[a] model’s blind spots reflect the judgments and priorities of its creators,” and models themselves are essentially just “opinions embedded in mathematics” (21).
Machines don’t have the tools to question and evaluate fairness, and because their work is grounded in math, the humans creating these algorithms are often blind to their repercussions. O’Neil points out that an ideal model would be transparent and flexible: it would allow for nuance, and its data would be available so that anyone could spot flaws in the machine’s output.
In a world governed by WMDs, inequities flourish because they are perpetuated by systems that seem beyond question and target the most vulnerable in society. O’Neil calls out each inequity and injustice as it pertains to WMDs.
The primary targets of these inequities are marginalized communities, particularly people of color, women, and those who live in poverty. To a reader familiar with history, the fact that marginalized groups are targets likely doesn’t come as a shock. One example O’Neil highlights is the exclusion of “[a]s many as sixty of the two thousand [medical school] applicants [… who] may have been refused an interview purely because of their race, ethnicity, or gender” (117). The same kind of exclusion extends to people of color and women seeking loans and other opportunities historically denied to them. Another example O’Neil offers is that hiring companies’ use of credit checks to determine a worker’s suitability “disproportionately affects low-income applicants of color” (149).
Education is another area where poorer and middle-class Americans are “saddled with debt that will take decades to pay off” (65). O’Neil even points to algorithms that seek out people who are down on their luck and advertise for-profit colleges as a way out of their situation, even though enrolling will only perpetuate the cycle of debt and poverty. In fact, as O’Neil demonstrates throughout the book, there are almost no domains of life where the odds aren’t stacked against marginalized communities.
Of course, the nature of discrimination goes far beyond these barriers of denial and debt, and it’s important to acknowledge that such barriers have always existed. The key difference now is the scale at which communities of color in particular are targeted. Where a single loan officer or medical school once put up barriers, automated systems now guide daily life, from credit scores to policing rates, and they can inflict the prejudiced work of a single model onto millions of people with the execution of a program.
As O’Neil states, “[t]he computer learned from the humans how to discriminate, and it carried out this work with breathtaking efficiency” (116). For the people running these programs, this efficiency looks like success because the toxic feedback loop validates false results and often brings profit. Even when the intentions aren’t nefarious, humans rarely analyze the data properly or use it to reformulate the models in any meaningful way. As a result, these broad-stroke approaches end up isolating and punishing members of marginalized communities.
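To make the feedback-loop idea concrete, here is a minimal, purely hypothetical sketch in Python; the neighborhoods, rates, and weights are invented for illustration and do not come from O’Neil’s book. It shows how a system that assigns patrols wherever crime was previously recorded keeps confirming its own seed data: an arbitrary initial gap between two otherwise identical neighborhoods never closes.

```python
# Hypothetical illustration of a self-confirming feedback loop (not from the book):
# patrols go where crime was recorded, and more patrols mean more recorded crime.

TRUE_CRIME_RATE = {"neighborhood_A": 0.05, "neighborhood_B": 0.05}  # identical underlying rates
recorded_crime = {"neighborhood_A": 12, "neighborhood_B": 8}        # A starts higher by chance

def allocate_patrols(recorded, total_patrols=100):
    """Assign patrols in proportion to past recorded crime (the model's 'prediction')."""
    total = sum(recorded.values())
    return {area: total_patrols * count / total for area, count in recorded.items()}

def observe_crime(patrols, detections_per_patrol=0.5):
    """More patrols in an area means more minor offenses get recorded there."""
    return {area: TRUE_CRIME_RATE[area] * detections_per_patrol * n for area, n in patrols.items()}

for year in range(1, 6):
    patrols = allocate_patrols(recorded_crime)
    recorded_crime = observe_crime(patrols)
    print(year, {area: round(count, 2) for area, count in recorded_crime.items()})

# Neighborhood A keeps logging 50% more recorded crime than B every year, even
# though their underlying rates are identical: the model's own data "validates"
# the initial imbalance, so the disparity never corrects itself.
```

In this toy version the patrol split simply locks in at 60/40 despite a 50/50 reality; in the systems O’Neil describes, the effect compounds because the newly recorded offenses feed back into future risk scores.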
One metric that many algorithms use, directly or indirectly, is location, and O’Neil demonstrates just how critical a zip code can be in predicting the course of someone’s life. Even algorithms that claim not to rely on zip codes collect data that can reveal location by proxy, whether through crime rates, the types of nearby stores, or average income.
For example, poorer areas are policed more heavily, so more crime is recorded there, particularly minor offenses that often go unnoticed in more affluent areas. Individuals who live in these areas are then considered riskier in general, even if they have no criminal record and a strong credit history. O’Neil argues that factoring zip codes into e-scoring models “codif[ies] past injustices [… because] [w]hen they include an attribute such as ‘zip code,’ they are expressing the opinion that the history of human behavior in that patch of real estate should determine, at least in part, what kind of loan a person who lives there should get” (146). In essence, the model makes broad judgments about a person based on a factor that may be unrelated to them, or at least beyond their control. As a result, people in certain zip codes receive loans with higher interest rates and longer terms, which make the loans significantly harder to pay off and push borrowers into debt traps.
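The zip-code-as-proxy mechanism can be sketched the same way. The toy e-score below is hypothetical; the weights, zip codes, and default rates are invented and are not drawn from any real scoring system. It shows two applicants with identical personal records receiving different offers because the score folds in the historical default rate of their neighborhoods.

```python
# Hypothetical toy e-score (invented weights and data, not from the book):
# the zip-code term imports neighborhood history into an individual's score.

NEIGHBORHOOD_DEFAULT_RATE = {"10001": 0.04, "60624": 0.15}  # invented historical rates

def e_score(on_time_payment_rate, years_employed, zip_code):
    """Toy score in which higher is better; the last term penalizes the applicant's zip code."""
    personal = 60 * on_time_payment_rate + 2 * min(years_employed, 10)
    neighborhood_penalty = 100 * NEIGHBORHOOD_DEFAULT_RATE[zip_code]
    return personal - neighborhood_penalty

def offered_rate(score, base_rate=0.05):
    """Lower scores are quoted steeper interest rates."""
    return base_rate + max(0.0, 70 - score) * 0.002

for zip_code in ("10001", "60624"):
    score = e_score(on_time_payment_rate=0.98, years_employed=6, zip_code=zip_code)
    print(zip_code, round(score, 1), f"{offered_rate(score):.1%}")

# Identical personal histories, different outcomes: the applicant in zip 60624
# scores lower and is quoted a higher rate purely because of where they live.
```

Nothing about the individual changes between the two runs; only the neighborhood term does, which is exactly the pattern of judging people by “that patch of real estate” that O’Neil criticizes.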
Location doesn’t only affect immediate financial decisions or policing rates; it also shapes areas that are harder to gauge, like education. Teachers in poorer areas are often pushed out of the system, as seen with educator Sarah Wysocki, a strong instructor removed from a high-needs DC school by an opaque scoring model. When such teachers leave, for whatever reason, they take their valuable skills to schools with the capacity to appreciate their efforts and to rely on more than nebulous evaluation metrics, schools that are typically in wealthier districts. High teacher turnover and a hyper-focus on testing cause schools in impoverished neighborhoods to experience instability, an instability also perpetuated by the precarious nature of employment for many individuals in poorer communities. In the scheme of WMDs, where a person lives determines much of what technology assumes about them and shapes the opportunities afforded to them accordingly.