I started my reading of Meredith Broussard’s Artificial Unintelligence: How Computers Misunderstand the World with a set of haunting questions: “Will AI eventually replace my job?” and “Should we be worried about a future where AI might dominate the world?” The entire reading experience was enlightening and inspiring, and one of the most direct takeaways was that, as a woman, I should not fear studying STEM, despite the notion, common during my childhood, that my brain might not be equipped to process complex logic and math. Broussard opens the book with personal experiences, such as taking apart toys and computers to learn how they work, offers a woman’s perspective on the male-dominated STEM field, and critiques our current definitions of artificial intelligence. Each chapter stands alone as an independent piece, yet together they form a cohesive narrative that guides readers from the limitations of AI to the concept of “technochauvinism” and how this misplaced faith in technological superiority can lead to a harmful future.
The book starts with a critique of technochauvinism, which Broussard describes wonderfully in the first chapter: “The notion that computers are more ‘objective’ or ‘unbiased’ because they distill questions and answers down to mathematical evaluation; and an unwavering faith that if the world just used more computers, and used them properly, social problems would disappear and we’d create a digitally enabled utopia” (8). I suggest juxtaposing this statement with one from Chapter 7, where she writes, “Therefore, in machine learning, sometimes we have to make things up to make the functions run smoothly” (104). Reading the two together, I take Broussard to be arguing that a clean, repeatable, large-scale structure and mathematical reasoning do not guarantee a valid model in every context. Abstract aspects of human life, like emotions, memories, values, and ethics, get reduced to numeric representations along a single dimension, often pointing in the wrong direction; meanwhile, data and ML algorithms are “unreasonably effective” (119) at calculation and can intensify bias, discrimination, and inequity.
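To make that Chapter 7 quotation concrete, here is a minimal sketch of the kind of “making things up” Broussard describes. The data, column names, and values below are my own invented toy example, not the book’s; the technique, mean imputation of a missing value, is a standard one:

```python
import numpy as np
import pandas as pd

# Hypothetical toy survey: respondent "B" left the field blank.
df = pd.DataFrame({
    "respondent": ["A", "B", "C", "D"],
    "years_experience": [2.0, np.nan, 10.0, 4.0],
})

# Fill the hole with the column mean so downstream functions
# "run smoothly." The 5.33 invented for B is a guess, not a
# measurement -- which is exactly Broussard's point.
df["years_experience"] = df["years_experience"].fillna(
    df["years_experience"].mean()
)
print(df)
```

Nothing in the resulting table flags that one of its four values was fabricated; the imputed number carries the same authority downstream as the measured ones.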
This book is quite scathing in its criticism of the technochauvinist male leaders of the tech industry; Chapter 6 contains both direct praise and direct criticism. Broussard recalls the mindset and working methods that tech entrepreneurs and engineers have used to steer the world since the 1950s; bold innovation and disregard for order are two sides of the same coin. What intrigues me is how feminist voices can lead to specific interventions in such an environment. The book was written in 2018, and as of 2023, with the increasingly rapid development of AI, we are witnessing a ‘glass cliff’ phenomenon among female tech leaders: consider the news about newly appointed Twitter CEO Linda Yaccarino alongside the idea that women are often promoted to leadership roles during a crisis and are therefore set up for failure.
In the book’s first chapter, Broussard emphasizes what counts as a failure and why we should not fear failure in scientific learning. I find it fascinating to connect Chapter 6 with the recent “glass cliff” news, which reminds us to think about the definition of failure dialectically. The glass cliff facing women leaders further alerts us that failure can be read one-sidedly from data alone, and women’s interventions in a technochauvinist culture may likewise be labeled failures. It raises the question of how we can move beyond data and computational methods to consider feminist interventions in technological development.
Regarding feminist interventions, the example of self-driving robot cars in Chapter 8 offers a unique insight. In class we discussed Judith Fetterley’s concept of resistant reading, and at the beginning of the course, scalar definitions of gender. A difference between the two kinds of driving algorithms Broussard describes reminds me of both discussions, and I will try to draw the connections here.
1. Both resistant reading and the scalar definitions of gender seek alternative interpretations and respect non-mainstream experiences.
2. A robot car model from CMU draws on the idea of the Karel problem (an educational programming exercise in which a robot moves, like a pawn on a chessboard, around a grid) and challenges the preexisting assumption that a robot car must mimic human perception. Instead, the team proposes using the machine as a machine: collect data to build 3D maps, run the calculations quickly, and feed the results back into driving along a grid-like path (see the sketch after this list). This is just like how humans invented airplanes: we did not build a mechanical bird but discovered another model of flight.
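As a rough illustration of that grid-based, machine-as-machine approach, here is a toy sketch of my own (not CMU’s actual system, and the map is invented): the “car” plans a route purely by discrete search over a precomputed grid, with no attempt to imitate human vision.

```python
from collections import deque

# Hypothetical city map on a grid: 0 = open block, 1 = blocked.
GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_route(start, goal):
    """Breadth-first search over grid cells. Like Karel's robot,
    the car only considers discrete moves (N/S/E/W) on the map."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no open route exists

# From the northwest corner to the southeast corner.
print(shortest_route((0, 0), (3, 3)))
```

The point of the sketch is the design choice, not the algorithm: the route emerges from map data and exhaustive calculation, strengths machines actually have, rather than from a simulated human eye.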
Taking points 1 and 2 together: the workflow described above breaks away from the idea of reconstructing human intelligence in a machine, or of replacing humans with machines, and offers an alternative way of thinking about how we use machines and algorithms. This narrower definition of machine learning/AI is also a restrained use of the machine. I believe this case shows innovation without excessive rule-breaking or technochauvinism: it respects humanism while still achieving innovative results, and it could even serve as an abstract example of feminist intervention.
Finally, let us return to the example of textbook distribution in Chapter 5. Broussard believes we have overestimated the role of algorithms and artificial intelligence in textbook distribution, leaving many students without access to books. No matter how sophisticated the textbook database models are, they cannot prevent chaotic input data, and large models cannot resolve the disorder of textbook distribution and management. So Broussard visited several schools herself and uncovered the management chaos hiding behind the big data. Her visits remind me of the data-cleaning phase mentioned earlier, in which we fill in data to keep the structure controllable; this kind of on-site investigation might be considered another form of data cleaning. Although it seems to bring chaos into the big-data model, her fieldwork accurately identified the root cause. If the human problems are not solved, technology ultimately cannot solve society’s problems.
Overall, Broussard raises many challenging questions, shares her perspective as a woman in STEM, and presents an argument for a balanced approach to technology.