Artificial Intelligence (AI) systems are developing at a remarkable pace, driven by advanced Deep Neural Networks (DNNs) that behave in ways loosely modeled on biological neurons. Similarities between humans and AI are therefore to be expected. For this to happen, AI needs to be trained on and exposed to real-world data, and this is where the problem of bias arises. The datasets on which these systems are trained are often not sufficiently diverse (for example, in facial recognition systems), and they carry gender bias as well. The worst part is that a model can report high overall accuracy and still be biased, which usually goes unnoticed because the prevailing dominance of certain groups of people inflates the aggregate numbers. Data Feminism's "Data is Power" chapter likewise discusses failed systems in the computational world caused by an unequal distribution of power that benefits a small group of people at the expense of everyone else.
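To make the "high accuracy, but still biased" point concrete, here is a minimal sketch with invented numbers (not from any real system): when one group dominates the evaluation data, the overall accuracy looks strong even though the model performs badly for the underrepresented group.

```python
# Minimal sketch with hypothetical data: overall accuracy can hide group-level bias.
# Group "A" is heavily overrepresented, so the aggregate metric is dominated by it.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

# (prediction, label, group) triples; values are invented purely for illustration.
results = (
    [(1, 1, "A")] * 90 + [(0, 1, "A")] * 5 +   # group A: 90/95 correct
    [(0, 1, "B")] * 4 + [(1, 1, "B")] * 1      # group B: 1/5 correct
)

overall = accuracy([(p, y) for p, y, _ in results])
per_group = {
    g: accuracy([(p, y) for p, y, grp in results if grp == g])
    for g in {"A", "B"}
}

print(f"overall accuracy: {overall:.2f}")    # ~0.91 -- looks fine in aggregate
print(f"per-group accuracy: {per_group}")    # A ~0.95, B ~0.20 -- the hidden bias
```

Reporting the metric disaggregated by group, as in the last line, is what makes the disparity visible at all.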
We can see many examples of AI models absorbing gender bias and replicating outdated views (not, at least, how we want our society to progress). For example, if the training dataset does not include enough contributions from women, there will be corresponding holes in the AI's knowledge, as sketched below. If an AI wired with such biases becomes the standard, that is a serious problem. If AI fails to understand the fundamental power differentials between women and men, is feminist text analysis even possible using a deep neural network without bias? Perhaps, if feminist approaches are introduced at the initial phase of training a model, there is still some hope. However, my stance also leans the other way: if biases are unavoidable in real life, how can they not become an unavoidable aspect of new technologies? After all, AI is created by humans, modeled on the human brain, and trained on data created by humans, which makes the problem even more complex. The solution I see here is the need for more diverse data, which requires moving beyond the binary system.
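As one way of picturing those "holes in the AI's knowledge": when a corpus underrepresents women in certain roles, the word vectors a model learns tend to associate those roles with male terms. The sketch below uses invented toy vectors (not output from any real model) to show how such an association could be measured.

```python
# Minimal sketch with hypothetical toy vectors: skewed training data can surface
# as gendered associations in a model's learned word embeddings.
import math

# Invented 3-dimensional embeddings standing in for vectors a real model would learn.
embeddings = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],   # drifts toward "he" if men dominate the corpus
    "nurse":    [0.2, 0.8, 0.3],   # drifts toward "she" for the same reason
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

for word in ("engineer", "nurse"):
    bias = cosine(embeddings[word], embeddings["he"]) - cosine(embeddings[word], embeddings["she"])
    print(f"{word}: he-vs-she association = {bias:+.2f}")  # positive leans "he", negative leans "she"
```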