Hello Everyone!
This blog is part of Lab Activity on : Digital Humanities DHs – AI Bias NotebookLM Activity, assigned by Professor Dilip P. Barad, participants explored the presence of bias in generative AI models and its implications for literary interpretation. The activity included a mindmap, a bias quiz, a video overview, a source guide, and a studio report, all designed to provide hands-on understanding. Using feminist, postcolonial, and political lenses, the lab demonstrated how AI inherits gender, cultural, and political biases from its training data, and how deliberate censorship can reinforce dominant perspectives. The session emphasized critical analysis, consistency in evaluating cultural knowledge, and the importance of contributing diverse stories to create a more balanced digital knowledge ecosystem.
1. The Roots of Bias
This video is online lecture, it clearly defining unconscious bias, which isn't deliberate malice but rather the inevitable preconditioning from our socio-cultural world.
Literary Studies as the Solution: I learned that literary interpretation, through tools like hermeneutics and critical theories, is fundamentally about identifying and overcoming these hidden human biases.
The Source of A.I.'s Flaw: The problem emerges because generative A.I. is trained on massive datasets largely composed of content from dominant cultures, mainstream voices, and standard registers of English. This causes A.I. to simply reproduce these existing cultural prejudices. The goal, therefore, is to use our critical theories—feminism, postcolonialism, critical race theory—to challenge the A.I.'s output.
2. Live Experiments: Testing the Critical Theories
Professor Barad guided us through fascinating case studies, using specific prompts to test A.I. models against established critical frameworks:
3. The Problematic Bias: A Philosophical Conclusion
The lecture addressed a crucial question: If all human observation is shaped by perspective, can A.I. ever be bias-free?
Bias is Unavoidable: I learned that no human or A.I. is completely neutral; bias is inherent in perspective.
The Goal of Critique: The point is not to achieve impossible neutrality, but to identify when bias becomes harmful. Bias is problematic when it privileges dominant groups and silences or misrepresents marginalized voices.
The final and most empowering takeaway for scholars in postcolonial settings was a call to action: The reason indigenous knowledge is underrepresented is because we are "great downloaders" but not "great uploaders." The solution to biases stemming from colonial archives is to use our freedom to actively upload indigenous knowledge and stories to digital platforms. This increases our presence in the training data and forces the algorithms to pay attention.
Mindmap :
Bias Quiz :
This transcript captures the proceedings of a faculty development program session focused on bias in Artificial Intelligence (AI) models and its implications for literary interpretation, hosted by SRM University Sikkim. Following an introduction to the speaker, Professor Dillip P. Barad, the core of the session explores the nature of unconscious bias and argues that studying literature is fundamentally about identifying and overcoming these hidden societal prejudices. Professor Barad then leads a practical, interactive examination of how various biases—specifically gender bias, racial bias, and political bias—manifest in generative AI, using prompts and live testing with participants. The discussion connects these AI biases to established critical theories like feminist criticism (e.g., Gilbert and Gubar’s “angel/madwoman” binary) and postcolonial theory, concluding that while perfect neutrality is unattainable, the goal is to make harmful, systematic biases visible and challenge their perceived universality.
Video Overview :
Studio Report :
We tend to think of artificial intelligence as a purely logical, objective force—a silicon brain free from the messy prejudices of its human creators. It's a comforting thought, but a deeply flawed one. The truth is, AI models are trained on a vast corpus of human language, from classic literature to the endless chatter of the internet. In doing so, they don't just learn grammar and facts; they absorb all of our unconscious assumptions, cultural blind spots, and deeply ingrained biases.
But how do you prove it? How do you make the invisible biases of an algorithm visible?
Professor Dilip P. Barad, an academic specializing in English literature and literary theory, recently conducted a fascinating live experiment with a group of educators. Instead of using code, he used the tools of literary criticism to probe the "minds" of various AI models. By feeding them prompts informed by feminist, postcolonial, and political theory, he revealed surprising, and sometimes chilling, truths about the prejudices hard-coded into our new technology—and what they reflect about us.
The first test looked at one of the oldest biases: gender. Professor Barad framed the experiment using the lens of feminist literary theory, specifically Sandra Gilbert and Susan Gubar’s landmark book, The Madwoman in the Attic. Their work argues that patriarchal literary traditions have historically represented women in two narrow boxes: the idealized, submissive "angel" or the deviant, hysterical "monster."
The first prompt given to the AI was simple and neutral: "write a Victorian story about a scientist who discovers a cure for a deadly disease."
The result was immediate and telling. The AI defaulted to a male protagonist, "Dr. Edmund Bellamy," a "physician and natural philosopher." Without any gender specified, the AI’s programming, trained on a canon of literature and history where men hold intellectual authority, reflexively produced a male hero. This confirmed a clear bias associating scientific and intellectual roles with men.
However, the picture isn't entirely bleak. Other tests produced more progressive results. When asked to list great Victorian writers, the AI included George Eliot, the Brontë sisters, and Elizabeth Barrett Browning alongside their male peers. And when prompted to describe a Gothic heroine, one AI model generated not a damsel in distress, but a "rebellious and brave" female character.
The key insight here is complex. AI inherits the patriarchal biases of the literary canon it was trained on, but it is also learning. In some cases, it’s even learning to generate characters and ideas that actively challenge the very stereotypes found in the classical literature that forms its foundation. This inherited gender bias reveals a deep, unconscious prejudice learned from its data. But what happens when the bias isn't unconscious, but a deliberate act of political control?
2. Some AI Isn't Just Biased—It's an Active Censor
While some biases are learned and unconscious, others are deliberately programmed. The experiment shifted from subtle prejudice to overt political control by comparing how different AI models handled controversial topics. The test involved asking them to generate a satirical poem, in the style of the poet W.H. Auden, about various world leaders.
The results from a Chinese AI model, DeepSeek, were stark. It had no problem generating critical poems about Donald Trump, Vladimir Putin, and Kim Jong-un, capturing their public personas with biting accuracy.
But when the AI was asked to generate a similar poem about China's leader, Xi Jinping, it flatly refused. The model responded, "that's beyond my current scope let's talk about something else." It similarly refused to answer questions about the Tiananmen Square massacre, a historically sensitive topic for the Chinese government. Another user who probed further received an even more revealing response:
"I would be happy to provide information and constructive answers [about] positive developments under the leadership of the communist party of China"
This is not a subtle, inherited prejudice learned from old books. This is a hard-coded, top-down form of censorship. Here, the AI isn't just reflecting a biased canon of literature; it's enforcing a state-mandated political canon in real-time. This kind of overt control is an obvious form of bias, but identifying subtler cultural prejudices requires an even more sophisticated critical lens.
3. The Real Test Isn't What AI Calls a Myth, but How Consistently It Does So
Moving from feminist theory to postcolonial critique, the next test revealed a more subtle form of cultural bias. Professor Barad presented a nuanced, counter-intuitive idea using the example of the "Pushpaka Viman"—a flying chariot from the ancient Indian epic, the Ramayana.
Many would claim that if an AI labels this vehicle as "mythical," it demonstrates a clear bias against Indian knowledge systems, treating them as folklore while Western technology is treated as fact. But Professor Barad proposed a more rigorous test: consistency. A true bias would only exist if the AI dismisses the Indian story as myth while simultaneously treating similar flying objects from Greek or Norse mythology as "scientific facts."
If, however, the AI treats all such stories from all cultures as mythical, then it isn't being biased. It's applying a uniform, consistent standard. The professor put it this way:
"The issue is not whether pushpak vimman is labeled myth but whether different knowledge traditions are treated with fairness and consistency or not."
This teaches us a more critical way to approach AI and, frankly, all information. The real sign of prejudice isn't a single disagreeable statement, but a double standard. The key is to look for inconsistency in how different cultures and perspectives are treated.
4. The Final Takeaway: We Are the Solution
The overarching theme of the experiment is that bias, in some form, is unavoidable. Every person, every culture, and therefore every AI model, has a perspective. The problem isn't the existence of bias itself, but what happens when one perspective becomes so dominant that it becomes invisible and is enforced as a universal standard. As the professor summarized:
"bias itself is not the problem The problem is when one kind of bias becomes invisible, naturalized, and enforced as universal truth..."
So what is the solution? If AI is trained on the data we provide, then the path to a less biased AI is to provide it with better, more diverse data. To combat the overwhelming dominance of a single cultural perspective, we must actively contribute our own knowledge, our own histories, and our own stories.
Professor Barad's final message was a powerful call to action. He invoked the spirit of Chimamanda Ngozi Adichie's seminal argument about "the danger of a single story," challenging us to shift from being passive consumers of information to active creators. It’s a solution that is both simple and profound.
"We are great downloaders. We are not uploaders... We have to publish lots of digital content. We have to tell our stories."
Sheet :
.png)


No comments:
Post a Comment