
Abstract for Roundtable

“Data Feminism” by D’Ignazio and Klein argues that data is not objective and can reinforce existing social inequalities. Consequently, studying the hidden biases within a text is an important step in building a feminist analysis.

Intersectional feminist theories inform us that social inequalities are better reflected “not by a single axis of social division, but by many axes that work together and influence each other” (Collins & Bilge 2016, p. 2).

Following this idea, a properly feminist model of text analysis should be bi-directional and multi-dimensional (spatial rather than scalar), encoding within itself the context in which words are used so that it can be related to the social divisions at play.

Models like LDA can surface topics but are neither context-aware nor spatial. Earlier word-embedding models like word2vec are spatial but not context-aware.

The word-embedding model BERT is well suited to this task. BERT is context-aware and can capture the sense in which a word is used, and as a multi-dimensional model it allows intersectional analysis, uncovering the relationships between different contexts of word use. With its sentiment-analysis and opinion-mining capabilities, we can surface the sentiments expressed in a text concerning different social identities. Given the model’s customizability, it can also be fine-tuned for a specific domain.
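To make context-awareness concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is specified above). It embeds the same word in two different sentences; a static model like word2vec would return the identical vector both times.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    # Locate the target word's token (ignoring subword splits for
    # simplicity; fine for words already in BERT's vocabulary).
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v1 = embed_word("she deposited the check at the bank", "bank")
v2 = embed_word("they had a picnic on the river bank", "bank")
# The two vectors differ because BERT encodes the surrounding context.
print(torch.cosine_similarity(v1, v2, dim=0).item())
```

The same mechanism underlies the intersectional use case above: how a word like “mother” or “worker” is embedded shifts with the social context of the sentence around it.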

However, BERT is notoriously computationally intensive. For feminist scholars this is an issue of both environmental cost and accessibility. To compress the model, we can use BERT to create context-aware word embeddings but apply knowledge distillation and pruning to reduce the computational burden while preserving as much accuracy as possible.
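As a hedged sketch of what that compression could look like in PyTorch: the loss below follows the standard knowledge-distillation formulation (a smaller student model matches the softened outputs of the BERT teacher), and the last lines show magnitude pruning via torch.nn.utils.prune. The temperature and weighting values are illustrative, not prescribed by the abstract.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between the softened distributions
    # of student and teacher (scaled by T^2 to keep gradients stable).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Magnitude pruning: zero out the 30% smallest weights of a layer.
layer = torch.nn.Linear(768, 768)
prune.l1_unstructured(layer, name="weight", amount=0.3)
```

In practice, a ready-made distilled checkpoint such as DistilBERT offers a similar accuracy-for-cost trade-off without training a student from scratch.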

Abstract for Roundtable – Co-opting Feminism in Politics

The increased participation of women in politics, not only in government but as active voters, has helped tip the balance in favor of candidates who best represent their interests. The last two Democratic presidential winners, for example, saw a higher turnout of female voters than male voters, contributing to their success. This trend, however, has been captured by those crafting narratives around political campaigns and used dishonestly, co-opting the language of feminist movements to further agendas that directly or indirectly promote gender inequality, a phenomenon referred to as “purplewashing.” Addressing this problem requires implementing language-recognition models able to identify the co-opting of feminist language in political discourse, allowing for the dissemination of a more transparent ideology. It also requires ensuring diversity in the datasets, utilizing both quantitative and qualitative methods, and examining intersectionality across gender, race, class, and sexuality, in order to reduce bias and ensure that data is understood within a contextual framework. While the risks of perpetuating biases with computational tools need to be acknowledged, there are also opportunities that may prove crucial for cutting ties with traditional politics and for challenging male-dominated structures.
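As a minimal, hedged sketch of what such a language-recognition model could look like (the abstract specifies no architecture, so this uses a simple TF-IDF plus logistic-regression baseline from scikit-learn, and the tiny labeled corpus is entirely hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 0 = substantive feminist language,
# 1 = co-opted ("purplewashed") feminist language.
texts = [
    "our platform centers reproductive justice and equal pay",
    "empowered women choose our deregulation agenda",
    # a real corpus would need thousands of expert-annotated examples
]
labels = [0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)
print(model.predict(["women's empowerment through tax cuts"]))
```

A production system would need a large, diversely annotated corpus and would likely benefit from a context-aware model such as BERT, as the first abstract above argues.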

Abstract for Roundtable

Artificial Intelligence (AI) systems are developing rapidly, with advanced Deep Neural Networks (DNNs) that act like biological neurons. Similarities between humans and AI are therefore to be expected. For this to happen, AI needs to be trained on and exposed to real-world data, and this is where the problem of bias arises. The datasets on which these systems are trained are not sufficiently diverse (in facial recognition systems, for example), and they are gender-biased as well. The worst part is that a model can show high overall accuracy while still being biased, which usually goes unnoticed because of the prevailing dominance of certain groups of people. Data Feminism’s “Data is Power” chapter likewise discusses failed systems in the computational world that result from an unequal distribution of power, benefiting a small group of people at the expense of everyone else.
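To make the “high accuracy, still biased” point concrete, here is a minimal sketch of disaggregated evaluation; the numbers are illustrative, not drawn from any real system. Overall accuracy looks strong while the model fails entirely on the minority group.

```python
import numpy as np

# Illustrative predictions for a majority group "a" (8 samples)
# and a minority group "b" (2 samples).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
group  = np.array(["a"] * 8 + ["b"] * 2)

print("overall accuracy:", (y_true == y_pred).mean())  # 0.8 -- looks fine
for g in np.unique(group):
    mask = group == g
    # Disaggregating by group exposes the hidden failure.
    print(f"group {g}:", (y_true[mask] == y_pred[mask]).mean())
# group a: 1.0, group b: 0.0
```

This is why audits of facial recognition systems report error rates per demographic group rather than a single aggregate score.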

We can see various examples of AI models adopting gender bias and replicating outdated views (at least, not the direction in which we want our society to progress). For example, if the training dataset does not include enough contributions from women, then there will be corresponding holes in the AI’s knowledge. So if an AI wired with such bias becomes standardized, that is a big problem. If AI fails to understand the fundamental power differentials between women and men, then is feminist text analysis possible using deep neural networks without bias? Maybe, if feminist approaches are introduced at the initial phase of training an AI model, there is still some hope. However, I also see the opposite side: if biases are unavoidable in real life, then how is it possible to keep them from becoming an unavoidable aspect of new technologies? After all, AI is created by humans, modeled on the human brain, and trained on data created by humans, which makes the problem even more complex. The solution I see here is a need for diverse data, for which the binary system needs to be dismantled.

Abstract for the Roundtable

Utilizing Feminist Text Analysis in Historical Imagination and Cultural Narrative

Is history a factual account of the past? How can we understand the interpretive and narrative frameworks constructed by historians? This presentation explores the potential of a feminist approach to historical imagination, re-enactment, and cultural narratives, challenging the notion of objectivity in history. Drawing on R.G. Collingwood’s seminal book The Idea of History and Joan Wallach Scott’s landmark work Gender and the Politics of History, I examine how feminist perspectives enrich our understanding of the past.

Moving to the realm of text analysis and computational studies, I aim to examine how we reconstruct the past through the discovery and presentation of patterns, trends, themes, and topics, particularly when faced with scant documentary evidence. Reflecting on the works of scholars such as Catherine D’Ignazio, Judith Fetterley, Lauren Klein, and Laura Mandell, I discuss the definition of “evidence” when implementing computational methods for inquiries into women’s history. I consider questions such as: What counts as evidence from textual or other forms of historical data? Could distorted understandings arise in constructing the past, and how might computational methods intensify or mitigate this issue? Should we project modern values onto the past when designing computational approaches, or should we avoid doing so?

  • Collingwood, R. G. The Idea of History. Oxford: Oxford University Press, 1946.
  • D’Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge, MA: The MIT Press, 2020.
  • Fetterley, Judith. The Resisting Reader: A Feminist Approach to American Fiction. Bloomington: Indiana University Press, 1978.
  • Mandell, Laura. “Gender and Cultural Analytics: Finding or Making Stereotypes?” In Debates in the Digital Humanities 2019, edited by Matthew K. Gold and Lauren F. Klein. Minneapolis: University of Minnesota Press, 2019.
  • Scott, Joan Wallach. Gender and the Politics of History. New York: Columbia University Press, 1988.