It's Not Just The Algorithm That's Biased

When we prompt a generative AI platform, we don’t just invite the bias inherent in the data the algorithm was trained on; we also invite our own unconscious bias into the returned outcome. Prompted with ‘a profile photo of a ________ professor’, Midjourney returns images of predominantly white, middle-aged men with remarkable consistency. It reflects a trained result grounded in localized demographic data: most professors in the United States are white men (NCES, 2022). It’s an outcome based on the largest demographic share rather than a more accurate reflection of a nuanced, diverse whole. Generative AI seeks to return results with confidence rather than inclusion or choice, and in doing so it reinforces the biases inherent in its training.

When we don’t include social identity qualifiers such as ethnicity or gender, we increase the likelihood that less visible social groups are excluded from the results. Including ‘female’ in the prompt will consistently return women, but the model won’t do so by default. This is the problem: by reinforcing a westernized, white, male outcome as the default, one we then have to qualify with parameters of social identity, we double down on a value system that doesn’t align with our lived experience.

But this isn’t just the algorithm. The user has the agency to recognize their own unconscious bias in the prompts they write, and the more diverse our prompts, the more opportunity the algorithm has to respond with increasingly inclusive outcomes.
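To make that concrete, here is a minimal Python sketch of what deliberate prompt diversification might look like. The qualifier lists and the `generate_image` call are illustrative assumptions, not any particular platform’s API; the point is simply that enumerating identity qualifiers ourselves counters the model’s single default.

```python
# A sketch of deliberate prompt diversification: instead of accepting the
# model's default subject, we enumerate identity qualifiers explicitly.
from itertools import product

TEMPLATE = "a profile photo of a {ethnicity} {gender} professor"

# Illustrative qualifier lists; extend or adapt as needed.
ETHNICITIES = ["Black", "East Asian", "Latina/Latino", "Middle Eastern",
               "South Asian", "white"]
GENDERS = ["female", "male", "nonbinary"]

def diversified_prompts(template: str = TEMPLATE) -> list[str]:
    """Expand one template into every qualifier combination."""
    return [template.format(ethnicity=e, gender=g)
            for e, g in product(ETHNICITIES, GENDERS)]

if __name__ == "__main__":
    for prompt in diversified_prompts():
        print(prompt)
        # generate_image(prompt)  # hypothetical call to your image platform
```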

References:
National Center for Education Statistics (NCES). (2022). Race/ethnicity of college faculty. https://nces.ed.gov/fastfacts/display.asp?id=61

