Book Review: More Than a Glitch

In Feminist Text Analysis this semester I had the opportunity to read “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech” by Meredith Broussard. I thought I knew a fair amount about bias in the technology space, but my eyes were opened to problems I didn’t even know existed. Many people think the fix for these biases is simply more and better technology, as if it were the only answer. This mindset has been coined “technochauvinism”. It is akin to what Catherine D’Ignazio and Lauren Klein, authors of Data Feminism, call “Big Dick Data”, the belief that more data is always better and that data can never be wrong1. Broussard, however, argues that while technology, namely AI (artificial intelligence), can be helpful, it can also be detrimental to society. To promote equality and equity, the right answer to these issues may sometimes be to not use technology at all. She asks, “Why use inferior technology to replace capable humans when humans are doing a good job?”

Societal issues cannot be solved by tech alone, and tech can even deepen those issues. These issues, or “glitches”, within software, AI, and the like come off as simple fixes, but that is rarely the case. Broussard explains this as the difference between social and technological fairness. To put it simply, computers are just “machines that can do math”2. While they can compute at a high level to produce an answer, they do not have feelings or experiences and therefore cannot be the entire solution to these highly complex problems. We can align this idea with the concept of “resistant reading” we spoke about this semester. What are the alternatives? What can we do to challenge the norm and provide better results for our communities?

Humans write code, and code may contain faults, so AI is not a neutral technology. These faults often come at the expense of already marginalized groups. Examples mentioned include, but are not limited to, predictive policing, AI facial recognition software, Google search results, testing technology for schools, lack of accessibility, the reinforcement of gender binaries, and even automated soap dispensers.
The intersection of race, gender, and technological advances is the main theme when being critical of technologies. Technologies that need, or claim to need, to accept race or gender as a data point have traditionally been built around a boolean or a select-one fixed list within a user interface. We know that gender, and even so-called biological sex, is socially constructed. Before this was in the cultural zeitgeist, people were not aware that you could change your gender, and many databases made these fields uneditable. These assumptions still persist today in new code as well as in legacy systems. Programmers are taught to optimize code in order to save memory when building programs, and a boolean is cheaper in memory than a string of text. The concept of elegant code can therefore end up enforcing the gender binary and promoting cis-heteronormativity. Even the biggest names in tech, like Microsoft and Google, which promote themselves as LGBTQIA+ allies3, sometimes fail to recognize ze, hir, xie, and similar pronouns as acceptable words, or yield no results for them in the dictionaries of their respective word processing software.
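To make that design choice concrete, here is a minimal sketch in Python (the field names and classes are hypothetical, not drawn from any system Broussard describes) contrasting a memory-cheap boolean gender field with a more flexible, self-described one:

```python
from dataclasses import dataclass
from typing import Optional

# The "memory-efficient" legacy pattern: gender stored as a boolean,
# which hard-codes a binary and leaves no room for anything else.
@dataclass
class LegacyUser:
    name: str
    is_female: bool  # True/False only; non-binary users cannot be represented

# A more flexible pattern: gender is optional, self-described free text,
# at the cost of a few extra bytes per record.
@dataclass
class InclusiveUser:
    name: str
    gender: Optional[str] = None  # e.g. "non-binary", "agender", or left blank

user = InclusiveUser(name="Sam", gender="non-binary")
print(user)
```

The second version costs a few more bytes per record, which is exactly the kind of “inefficiency” that an optimization-first mindset trains programmers to avoid.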

Race, medicine, and technology are yet another area where these glitches take place. As mentioned previously, many software systems only allow a user to check off one race from a list when identifying themselves. However, multiracial people exist! What are they to do? How are they supposed to identify in these scenarios? People don’t fit into the neat little boxes decided on and created by software engineers. One example of this is electronic medical records, or EMRs. As soon as race is entered into these charts, the type of care a patient receives is often linked to the color of their skin. Historically, complaints of pain from Black women have often been ignored, whether from conscious or unconscious bias4. Social factors are also at play, which is why so many more Black women die from birth-related events than other groups5; it is not just the prejudice of doctors. Not all technology works equally, either. Pulse oximeters, very common devices that measure a person’s blood oxygen level, often give false readings for those with darker skin tones6. Why would the FDA or any governing body decide it’s okay to sell and distribute this tech? Most likely it wasn’t tested on these underserved populations, so no issue showed up in the compliance process. You can’t find problems in populations you never test. The same can be said for AI technology.

At its core, AI is a way to do high-level statistics. Data scientists train algorithmic models on datasets, and from those datasets the model learns to assign predictions or probabilities to new data it is fed. What happens when the training data is missing important information? The model will be flawed and can hurt those affected by it. As an example, Broussard ran her own breast cancer scans through an AI to see if she could detect the cancer herself rather than rely on a doctor. While she was able to, it took immense trial and error, outside help, and hundreds of hours to get the right answer. Her doctor, on the other hand, was able to tell her in minutes from looking at a simple scan. Sometimes it just doesn’t make sense to use these predictive technologies to replace expert humans.
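As an illustration of that failure mode, here is a minimal sketch using synthetic data and scikit-learn (the groups, thresholds, and model are my own assumptions for demonstration, not anything from the book): a classifier trained on data that is missing one group performs noticeably worse for that group.

```python
# Train on data from group A only, then test on a group the model never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """Label is 1 when the measurement exceeds a group-specific threshold."""
    x = rng.normal(loc=0.0, scale=1.0, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Group A dominates the training data; group B is absent from it.
x_a, y_a = make_group(2000, threshold=0.0)
x_b, y_b = make_group(2000, threshold=1.0)  # same feature, different relationship

model = LogisticRegression().fit(x_a, y_a)

print("accuracy on group A:", model.score(x_a, y_a))
print("accuracy on group B:", model.score(x_b, y_b))  # noticeably lower
```

Because group B never appears in the training set, the decision boundary the model learns simply does not fit that group, which is the kind of silent failure the book warns about.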

Finally, we need to be more careful about how AI is created and more transparent about how it works. AI is often described as a black box, so Broussard suggests further governmental action as well as action by individuals. Even as a single person, it is possible to be critical of these systems. Broussard calls this “Bullshit Detection”7. We can use the following three questions to be critical of AI or whatever software is being advertised to us:

Who is telling me this?

How do they know it is possible to achieve results?

What are they trying to sell me?

Additionally, all tech companies need to be held accountable for their actions, which is where algorithmic auditing comes in. Similar to accounting audits, there are organizations dedicated to understanding algorithms and providing feedback so companies can manage their risk. Major players in that field include, but are not limited to, Cathy O’Neil and Julia Angwin.

O’Neil is the author of Weapons of Math Destruction and founded ORCAA8, a more traditional auditing and advisory company focused on understanding big tech and its algorithms in order to help companies mitigate risk. Angwin founded The Markup9, a news organization dedicated to watching and investigating big tech. What I found most interesting about The Markup is that it provides documentation showing readers how to replicate its studies. This is exactly what is meant by increasing transparency in the tech space, especially for algorithmic issues.

Ultimately, I agree with Broussard in challenging new technologies to make sure they are suited for the common good. She sums this up nicely by stating, “And if we must use inferior technology, let’s make sure to also have a parallel track of expert humans that is accessible to everyone regardless of economic means”10.


Footnotes

1 D’Ignazio, C., & Klein, L. (2020). 6. The Numbers Don’t Speak for Themselves. In Data Feminism. Retrieved from https://data-feminism.mitpress.mit.edu/pub/czq9dfs5

2 Broussard, Meredith. More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. MIT Press, 2023.

3 https://unlocked.microsoft.com/pride/, https://pride.google/

4 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4843483/

5 https://www.cdc.gov/healthequity/features/maternal-mortality/index.html

6 https://hms.harvard.edu/news/skin-tone-pulse-oximetry

7 Broussard, Meredith. More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. MIT Press, 2023.

8 https://orcaarisk.com/

9 https://themarkup.org/

10 Broussard, Meredith. More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. MIT Press, 2023.

Response Blog Post – originally posted 4/8/23

I’ve often wondered why anyone would need to build an algorithm that produces the gender or ethnicity of the author of a text. To me, it feels a bit creepy and teeters on the verge of a Big Brother reality. One of the assigned readings that relates to this was a Medium article entitled “AI Ethics Identifying Your Ethnicity and Gender” by Allen Jiang.

Blog articles are meant to be approachable to all audiences, and even though this is a highly divisive topic, at first I thought the author did a good job of explaining AI and how it can be used to infer ethnicity and gender. Jiang gave a few business case examples of why one would do this, including, but not limited to, better customer experience and better customer segmentation.

However, one of the first sentences, which we discussed in class, reads as follows: “This is an analogous question to: if we had complete discretion, would we teach our children to recognize someone’s ethnicity.” The author uses this comparison as a rationale for this type of AI model. On the surface, one could glance at this and continue reading without question. But after taking a minute to think, the analogy is not valid.

The author is equating the human experience with that of computers. We teach children to be accepting of all people, even when they may look different from themselves. We don’t ask a child to point out their friend’s race or ethnicity. While we need to be aware of our surroundings in order to be sensitive at a higher level, that is not what this AI model is being used for. Additionally, humans learn in a completely different manner than computers do. Human learning is based on experiences and emotions; computers don’t have emotions or experiences. Computers are programmed to make decisions, which the author equates with learning, based on decision trees, dictionaries, and other methods. This is procedural knowledge, not experiential.

Even if the author were to use the collected data (gender assigned to text from celebrity tweets) for business purposes, what would be the impact? One impact could be reinforcing stereotypes and gender binaries. The results from the experiment could mislead the business into misunderstanding its customers’ needs, wants, and preferences. Additionally, looking at the results of the experiment, the accuracy rate is only 72%, just 22 percentage points better than simply guessing whether a tweet was written by a man or a woman (50%). Ultimately, a poor model makes for a poor proxy.
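To put that accuracy number in perspective, here is a minimal back-of-the-envelope sketch in Python (the 72% and 50% figures come from the article; the way of framing the gain is my own):

```python
# Compare the article's reported 72% accuracy against a 50% coin-flip baseline.
model_acc = 0.72      # accuracy reported in the article
baseline_acc = 0.50   # random guess between two classes

absolute_gain = model_acc - baseline_acc                        # 0.22
relative_gain = (model_acc - baseline_acc) / baseline_acc       # 0.44
# How much of the gap between guessing and a perfect classifier is closed.
gap_closed = (model_acc - baseline_acc) / (1.0 - baseline_acc)  # 0.44

print(f"Absolute gain over guessing: {absolute_gain * 100:.0f} percentage points")
print(f"Relative gain over guessing: {relative_gain:.0%}")
print(f"Share of the guess-to-perfect gap closed: {gap_closed:.0%}")
```

Framed this way, the model closes less than half the distance between a coin flip and a perfect classifier, which is why treating its output as a reliable proxy for gender is risky.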

Perhaps one day I’ll come across a compelling argument for why AI would be helpful in detecting gender or ethnicity, but today is not that day.

Abstract: Is analyzing gender using computational text analysis ethical?

Often in the tech world, we hear of algorithms that can predict, with some accuracy, the gender of the person who wrote a particular document, tweet, or other text. Is this inherently unethical? To find an answer, we can use the following questions as a jumping-off point.

In every project, there is the potential for biases to be introduced. Some may ask how this could be possible if an algorithm is doing all the work, but that idea is inaccurate. There are people behind every algorithm, and each one is trained on data provided by people whose thoughts, feelings, and opinions can be carried into the training material. Does the training data perpetuate gender stereotypes or other biases?

Another element to consider is privacy. When collecting information about the genders of authors, how is that data being used within the project? Was consent obtained from the individuals providing the data, and was this communicated to the participants? If the data were exposed, would it cause harm? Would it be possible to anonymize the data and still produce significant results?

It is also important to consider social and political context when attempting to analyze gender using computational text analysis. Do the results perpetuate power dynamics between socially constructed gender roles? If so, they could reinforce what has already been ingrained in our society. Constructs also change over time: have historical and cultural context been taken into account to avoid misreadings of the results? Since gender does not stand on its own, was an intersectional approach taken within the experiment? Other social categories, such as race, social class, and sexuality, are highly intertwined with it.