Author Archives: Yunxia Wei

Abstract for roundtable

Data-Driven Feminist Text Analysis: Exploring the Significance of Computational Methods and Digital Humanities Tools in Literary and Cultural Studies

This roundtable examines the role of feminist text analysis in literary and cultural studies, with a particular focus on the use of data and code-based tools to support this approach. Drawing on established feminist theories and practices of text analysis, we argue that feminist text analysis is a crucial lens for understanding how gender and power dynamics shape the production and reception of literature and other cultural artifacts.

We explore the ways in which computational methods and digital humanities tools can support feminist text analysis, including text mining, machine learning, and other data-driven approaches. Machine learning algorithms, for example, can be trained to identify and classify gendered language and stereotypes in texts, making it possible to quantify and analyze patterns of gender bias and discrimination. This enables feminist text analysts to identify and critique problematic representations of gender in literature and other cultural artifacts more efficiently and effectively.
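To make that pipeline concrete, below is a minimal sketch in Python of how such a classifier might be trained. The tiny hand-labeled dataset, its labels, and the bag-of-words features are all hypothetical placeholders chosen for illustration; an actual feminist text analysis would rely on a carefully constructed and critically examined corpus.

```python
# Minimal sketch: a bag-of-words classifier for flagging gendered
# stereotypes in short text snippets. The hand-labeled examples below
# are hypothetical placeholders, not a real research corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 1 = contains a gender stereotype, 0 = neutral.
texts = [
    "She was too emotional to lead the team.",
    "The committee reviewed the budget proposal.",
    "A woman's place is in the home.",
    "The engineers presented their findings.",
    "He was praised for being assertive; she was called bossy.",
    "The report summarizes quarterly enrollment figures.",
]
labels = [1, 0, 1, 0, 1, 0]

# Unigram and bigram counts feeding a logistic regression classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new passages; high probabilities flag candidates for close reading.
new_texts = ["Girls are naturally bad at math.", "The library opens at nine."]
for text, prob in zip(new_texts, model.predict_proba(new_texts)[:, 1]):
    print(f"{prob:.2f}  {text}")
```

The value of a model like this is not that a handful of examples can capture gender bias, but that, once trained on a credible corpus, it can rank large collections of texts so that analysts can spend their close-reading time on the most suspect passages.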

Considering the challenges and limitations of these tools is also crucial, including their potential for bias and the need for critical awareness of what they cannot do. To support this argument, we present examples of feminist text analyses that have successfully navigated these challenges, including studies on the representation of gender in children’s books, the use of the word “hysterical” on Twitter, and the gendering of job titles in academia. These examples demonstrate the potential of feminist text analysis to uncover patterns of gender bias and inequality, and to contribute to the promotion of gender equality and social justice. Ultimately, we argue that feminist text analysis is an essential approach to literary and cultural studies that can help us create more inclusive and equitable representations of gender in our culture.

Book Report & Review Blog Post_YW

Weapons of Math Destruction – Cathy O’Neil

a.) summarizes the main takeaways of the book for classmates who have not had the opportunity to read it

Cathy O’Neil’s book “Weapons of Math Destruction” examines how mathematical models and algorithms are frequently employed to support and maintain systemic injustice and inequality. According to O’Neil, these “Weapons of Math Destruction” (WMDs) have serious adverse effects on people and society as a whole, especially on vulnerable and marginalized groups. The book offers numerous examples of WMDs being used in various fields, including hiring, advertising, education, and criminal justice. O’Neil, for instance, demonstrates how discriminatory and unfair predictive models may be when used to assess a teacher’s performance or a person’s creditworthiness.

The key lessons of the book include the necessity for increased accountability and transparency in the design and use of algorithms, as well as the importance of incorporating ethical considerations into algorithmic decision-making. The book also emphasizes the potential for algorithms to reinforce and magnify prejudices, highlighting the significance of diversity and inclusion in the technology sector.

In the Introduction to the book, the author sets the stage for her argument by describing how algorithms are increasingly being used to make decisions that have significant consequences for people’s lives, such as who gets hired or fired, who gets a loan or a mortgage, and who gets sent to prison. She notes that these algorithms are often proprietary and secret, meaning that the people affected by them have no way of knowing how they work or of challenging their decisions.

There’s a popular saying that “men lie, women lie, but numbers don’t.” Because people tend to believe that numbers don’t lie, many cow into submission to anything based on numbers. In “Weapons of Math Destruction,” however, the author shows how warped mathematical and statistical models embedded in algorithms are used against ordinary people. These algorithmic decisions tend to entrench existing inequalities by empowering the rich and powerful against the helpless masses. She debunks the notion of algorithmic neutrality with the argument that algorithms are built on data obtained from people’s recorded behaviors and choices, much of which is flawed.

The author confirms Kate Crawford’s observation that mystification is used to conceal the truth from the people affected. When confronted, computer scientists tend to answer that the internal operations of the algorithms are ‘unknowable,’ thereby slamming the door on all questioning. In line with the theory of political economy, the author observes that the effectiveness of algorithms is evaluated on their ability to bring in the relevant currency (political power for politicians, money for businesses), never on their effects on the people involved. Examples include the use of value-added modeling against teachers, and scheduling software that optimizes profits while exploiting desperate people and worsening workers’ conditions and social lives. Another example is political microtargeting, which undermines democracy and gives politicians an avenue to be elusive by being ‘many things to many people.’

b.) connects the book to our class conversations:

The book makes various connections to our class discussions on feminism and feminist text analysis. First, it draws attention to the ways in which algorithms can perpetuate systemic prejudices and discrimination, with serious adverse effects on disadvantaged and vulnerable communities, including women. Second, the book emphasizes the significance of including diverse viewpoints and voices in algorithmic decision-making processes, which is consistent with the feminist tenet of intersectionality. Finally, the book advocates for algorithmic decision-making to be more open and accountable, which is crucial for guaranteeing fairness and equity for all people, particularly women and other underrepresented groups.

c.) suggests what perspectives or new avenues of research and thought the book adds to the landscape of computational text analysis.

The book expands the field of computational text analysis by introducing a number of fresh viewpoints and lines of inquiry. One of its major contributions is shedding light on the unfair application of mathematical models and algorithms in decision-making procedures such as hiring, lending, and criminal justice, which can have a significant impact on people’s lives. The book casts doubt on the idea of algorithmic neutrality by demonstrating how algorithms are built on inaccurate data derived from observed human actions and decisions, producing biased results that frequently worsen already-existing disparities.

Moreover, the book highlights the impact of algorithmic decision-making on people, which reduces them to insignificant numbers and ignores their personal histories, psychological conditions, and interpersonal relationships. It exposes the potential biases and inequities inherent in algorithmic judgments and emphasizes the need to address the ethical implications of relying on algorithms alone to analyze human stories.

Because many algorithms employed in significant decision-making processes are proprietary and secret, it can be challenging for those affected by these decisions to understand how they operate or to contest them. This is why the book examines the topic of transparency and accountability in algorithmic decision-making. It highlights the need for greater accountability and transparency in the creation and use of algorithms and urges readers to evaluate these tools’ effects on society with greater knowledge and critical thought. The book also raises the use of computational text analysis in domains like education, where algorithms are employed to evaluate professors and lecturers, along with the potential biases and limitations of such evaluations. It promotes deeper study and reflection on the creation of ethical and just algorithms that take into account the intricate social and cultural influences on text data and analysis.

d.) Own critical reflections

In Chapter 7, “Sweating Bullets,” the author highlights an important issue: the unfair use of past records and WMDs (weapons of math destruction) to screen job candidates, resulting in the blacklisting of some and the disregard of many others. When we rely on an algorithmic product to analyze human stories, individuals become mere numbers. For instance, a hard-working teacher’s efforts for a day are reduced to eight hours in the database. The practice of “clopening,” in which the same worker closes late at night and opens again the next morning, operates on the same principle: the machine does not care about an individual’s mental stress, personal preferences, or relationships; it only considers the additional hours worked. The Cataphora software system operates in the same manner. During the 2008 recession, companies used the software’s output to lay off employees who appeared as small, dim circles on its charts. While I agree with most of the author’s statements, I remain optimistic that with advancements in AI, the damage caused by WMDs can be reduced. Although I am unsure how this can be achieved, the author has identified many of the problems, and solutions may exist.

The chapter’s example of Tim Clifford’s teacher-evaluation case reminded me of the Student Evaluation of Teaching conducted every semester at City Tech, as well as at all other CUNY undergraduate colleges. These evaluations allow students to provide feedback on their classes before the final exams to eliminate potential bias. The feedback is then gathered and analyzed to help instructors improve their teaching. Prior to the pandemic, City Tech used a paper version of the evaluations: professors would receive forms for each class and ask students to fill them out in class. Instructors had to leave the room while students completed the forms, and a student would then deliver the completed forms to the Assessment and Research office. However, this evaluation process put pressure on some instructors, particularly adjunct professors and those who had not yet received tenure; some chose not to distribute the forms at all, or filled out and submitted the forms themselves. Despite the potential for bias from students, I believe that the Student Evaluation of Teaching questions are reasonable and can help instructors improve their teaching methods. At the same time, I recognize that the evaluation process may not be entirely fair to instructors, and that algorithms used to evaluate teaching may also be subject to biases and inequalities. Therefore, it is crucial to prioritize the development of ethical and fair algorithms that account for the biases and inequalities present in our society.

Response blog post_Week 4_2.27.23_YWei

Post-feminist text analysis by Sara Mills

Speaking in Tongues: Dialogics, Dialectics, and the Black Woman Writer’s Literary Tradition

The author analyzes popular cultural texts using post-feminist theories. She investigates how gender and power are represented in these texts and how they reflect and influence societal beliefs and values about gender.

Sara Mills’ post-feminist text analysis work is relevant to the course goal of learning feminist text analysis because it highlights the ongoing struggle for gender equality in language and discourse. By analyzing popular media texts such as advertisements, she also demonstrates how language is used to subtly undermine feminist goals and promote traditional gender roles. She shows, for example, how women are frequently objectified and reduced to their physical appearance, whereas men are portrayed as powerful and dominant. Many scholars in feminist text analysis have investigated how language constructs and reinforces gender roles, stereotypes, and power dynamics. Mills’ research expands on this foundation by investigating how postfeminist discourses that claim to have achieved gender equality actually perpetuate sexist attitudes and limit women’s agency.

By contrast, Mae G. Henderson’s article emphasizes how black women writers use language to challenge dominant cultural narratives and give voice to marginalized perspectives. Though Henderson’s article is not explicitly feminist, it can be viewed as part of a broader feminist project that seeks to amplify marginalized voices and challenge dominant cultural discourses.

Both articles stress the importance of examining how language and media representations perpetuate stereotypes and power imbalances, and both emphasize the need for diverse representations that reflect marginalized perspectives and experiences. Each engages in feminist text analysis by critiquing and challenging dominant cultural narratives and by promoting greater diversity and inclusivity in media and literature.

Response blog post_Week 2 _2.6.23_YW

Sex and gender are often separated because they refer to distinct aspects of a person’s identity: sex refers to biological characteristics, while gender is a social construct that can vary across cultures and change over time. By separating the two concepts, it is possible to understand and address the ways in which gender and sex intersect and how they impact an individual’s experiences and opportunities.

As we discussed in class today, gender is a question we are constantly asked when filling out forms or signing up for something. This reminds me of all the data analysis we did at City Tech for student enrollment, graduation, and retention, and of the surveys we hand out to gather data; we always included the variable of gender in our analyses. I thought it was interesting that the most recent Enrollment Dashboard, which we just updated for Spring 2022, had five demographic categories under gender: Men, Women, Non-binary Persons, Gender Nonconforming Persons, and Unspecified. Prior to Spring 2022, the data we received from the CUNY IRDB database included only the gender categories of Men and Women. This shift toward a more inclusive understanding of gender has increased the number of gender variables in data analysis, allowing for a more nuanced and accurate representation of gender identities. I believe this is important for ensuring that data analysis is inclusive and respectful of all gender identities, and for providing a more complete picture of the experiences and perspectives of individuals who identify outside of the male/female binary.
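As a small illustration of what this shift looks like in practice, here is a hedged Python/pandas sketch of tabulating enrollment by the expanded gender categories. The column names and counts are invented for the example; the actual CUNY IRDB extracts and City Tech dashboard schemas will differ.

```python
# Hypothetical sketch: summarizing enrollment by expanded gender categories.
# Column names and counts are invented for illustration; real IRDB data differ.
import pandas as pd

enrollment = pd.DataFrame({
    "semester": ["Spring 2022"] * 5,
    "gender": [
        "Men",
        "Women",
        "Non-binary Persons",
        "Gender Nonconforming Persons",
        "Unspecified",
    ],
    "students": [6200, 7400, 85, 40, 310],  # made-up figures
})

# Aggregate and report each category's share of total enrollment.
summary = enrollment.groupby("gender", as_index=False)["students"].sum()
summary["share"] = (summary["students"] / summary["students"].sum()).round(3)
print(summary)
```

Notably, moving from a two-value gender field to five categories changes nothing in the aggregation code itself, which is part of the point: once the data collection becomes more inclusive, the same analysis automatically reflects it.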