Daily Archives: May 15, 2023

Roundtable 2 Abstract: Feminist Theories and Humanistic Computing

Our roundtable today loosely follows several themes within the larger question of what a feminist Text Analysis might look like for disciplines in the Humanities.

We begin with Atilio Barreda’s argument for a more strategic application of transfer learning as a model of a self-consciously feminist textual analysis that would recognize and account for the situated and contingent status of Machine Learning.  

Similarly, Zico Abhi Dey, in his recent examination of OpenAI, implies that open-source language models, with their shift in scale and attention to computational cost over efficiency, might provide a feminist alternative to flawed Large Language Models.

Our other panelists, Livia Clarete, Elliot Suhr, and Miaoling Xue, bring a much-needed multi-lingual perspective to current Text Analysis and the applications of Feminist Data principles. 

Clarete examines communication styles in English- and Portuguese-language healthcare systems and asks how feminist notions of care and power-relations might inform our use of linguistic analysis and corpus analysis studies. 

Suhr explores how the biases in data collection, language models, and algorithmic functions can exacerbate disproportions of power in dominant and minoritized languages and suggests that an intersectional feminist framework is essential to unpacking these issues. 

Xue approaches another aspect of Humanistic computing—the construction of the historical past—by looking specifically at what is lost and what can be gained by applying current Western feminist models of digital archival reconstruction when approaching a corpus that differs in space, time, and, significantly, language. She concludes by considering the implications for representations of women in narrative history and the occluded labor of women in the production of texts, particularly in the so-called “invisible” work of editorial notation and translation.

Taken together, these discussions animate and ground what Sara Ahmed calls “the scene of feminist instruction” which she identifies as “hear[ing] histories in words; . . . reassembl[ing] histories in words . . . attending to the same words across different contexts” (Ahmed 2016) and which could equally be a description of a responsible and informed feminist text analysis itself.

Participants: Atilio Barreda, Bianca Calabresi, Livia Clarete, Zico Abhi Dey, Elliot Suhr, Miaoling Xue

Book Review: Meredith Broussard’s Artificial Unintelligence: How Computers Misunderstand the World

I started my reading of Meredith Broussard’s Artificial Unintelligence: How Computers Misunderstand the World with a set of haunting questions: “Will AI eventually replace my job?” and “Should we be worried about a future where AI might dominate the world?” The entire reading experience was enlightening and inspiring, and one of the most direct takeaways was that, as a woman, I should not fear studying STEM, despite societal notions during my childhood that my brain might not be suited to processing complex logic and math. Broussard opens the book with personal experiences, like taking apart toys and computers to learn how they work, offering a woman’s perspective on the male-dominated STEM field and criticizing our current definitions of artificial intelligence. Each chapter stands alone as an independent piece, yet together they form a cohesive narrative that guides readers from the limitations of AI to the concept of “technochauvinism” and how this misplaced faith in tech superiority can lead to a harmful future.

The book starts with a critique of technochauvinism, which Broussard describes wonderfully in the first chapter: “The notion that computers are more ‘objective’ or ‘unbiased’ because they distill questions and answers down to mathematical evaluation; and an unwavering faith that if the world just used more computers, and used them properly, social problems would disappear and we’d create a digitally enabled utopia.” (8) I suggest juxtaposing this statement with one from Chapter 7, where she writes, “Therefore, in machine learning, sometimes we have to make things up to make the functions run smoothly.” (104) Comparing the two, I take Broussard to be arguing that a clean, repeatable, large-scale structure and mathematical reasoning do not guarantee a valid model in every context. When abstract aspects of human life, like emotions, memories, values, and ethics, are reduced to one-dimensional numeric representations that point in the wrong direction, data and ML algorithms that are “unreasonably effective” (119) at calculation will only intensify bias, discrimination, and inequity.

This book is quite scathing in its criticism of the technochauvinist male leaders of the tech industry; the sixth chapter offers both direct praise and direct criticism. She recalls the mindset and working methods that tech entrepreneurs and engineers have used to guide the world since the 1950s: bold innovation and disregard for order are two sides of the same coin. What intrigues me is how feminist voices can make specific interventions in such an environment. The book was written in 2018, and as of 2023, with the increasingly rapid development of AI, we are witnessing a ‘glass cliff’ phenomenon among female tech leaders. Consider the news about newly appointed Twitter CEO Linda Yaccarino and the concept of the glass cliff: women are often promoted to leadership roles during a crisis and are therefore set up for failure.

In the book’s first chapter, Broussard emphasizes what counts as failure and why we should not fear failure in scientific learning. I find it fascinating to connect Chapter 6 with the recent news about the “glass cliff,” which reminds us to consider the definition of failure dialectically. The glass cliff faced by women leaders further alerts us that failure can be interpreted one-sidedly from data, and that women’s interventions in a technochauvinist environment might themselves be read as failures. It raises the question of how we can move beyond data and computational methods to consider feminist interventions in technological development.

Regarding feminist interventions, the example of self-driving robot cars in Chapter 8 provides a unique insight. In class we discussed Judith Fetterley’s concept of resisting reading, and, at the beginning of the course, scalar definitions of gender. A difference between the two kinds of robot-driving algorithms that Broussard describes reminds me of those discussions, and I will try to draw the connections here.

  1. Both resisting reading and scalar definitions of gender seek alternative interpretations and respect non-mainstream experiences.
  2. A CMU robot-car model that draws on the idea of the Karel problem (an educational programming exercise in which a robot navigates a grid) challenges the preexisting assumption that we need to build a robot car that mimics human perception. Instead, the team proposes using the machine as a machine: collect data to build 3D maps, do the calculations quickly, and feed the results back into driving along a grid-like path. This is much like how humans invented airplanes: we did not build a mechanical bird but discovered another model of flying.

If you think about 1 and 2 together: the workflow described above breaks away from the idea of reconstructing human intelligence in a machine, or of replacing humans with machines, and offers an alternative way of thinking about how machines and algorithms can be used. This narrower definition of machine learning and AI is also a more restrained use of the machine. I believe this case represents an instance of innovating without excessively breaking the rules or advocating technochauvinism: it respects humanism while still achieving innovative results. It could also serve as an abstract example of feminist intervention.
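To make the machine-as-machine contrast concrete, here is a minimal, illustrative sketch of the Karel-style idea: the robot does not attempt human-like perception at all; it simply tracks exact coordinates on a known grid and moves deterministically. (The class and method names below are my own illustration, not code from Broussard’s book or CMU’s system.)

```python
# A Karel-style grid world: no cameras, no human-like perception.
# The robot "knows" the grid and its own coordinates exactly,
# so navigation reduces to simple, reliable arithmetic.
class GridRobot:
    MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def move(self, direction):
        # Deterministic step along the grid; nothing is perceived or guessed.
        dx, dy = self.MOVES[direction]
        self.x += dx
        self.y += dy

    def position(self):
        return (self.x, self.y)


robot = GridRobot()
for step in "EENN":   # two steps east, two steps north
    robot.move(step)
print(robot.position())  # (2, 2)
```

The point of the sketch is the design choice, not the code: like the airplane that is not a mechanical bird, the grid robot succeeds precisely because it does not try to reconstruct human intelligence.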

And last, let us return to the example of textbook distribution in Chapter 5. Broussard argues that we have overestimated the role of algorithms and artificial intelligence in textbook distribution, leaving many students without access to books. The sophistication of a textbook database model cannot prevent chaotic input data, and the disorder of textbook distribution and management cannot be solved by large models. So she visited several schools and uncovered the management chaos behind the big data. Her fieldwork reminds me of the data-cleaning phase we discussed earlier, in which we fill in data to maintain a controllable structure; on-site investigation to identify problems might be considered another form of data cleaning. Although it seems to bring chaos into the big-data model, her visits accurately identified the root cause. If the underlying human problems are not solved, technology ultimately cannot solve societal problems.

Overall, Broussard raises many challenging questions, shares her perspective as a woman in STEM, and presents an argument for a balanced approach to technology.