This Week in AI: Tackling racism in AI image generators

Staying abreast of developments in such a rapidly evolving field as AI can be quite demanding. So, until an AI can handle it for you, here’s a convenient roundup of recent stories from the world of machine learning, along with notable research and experiments that haven’t received individual coverage.

This week in AI, Google made headlines by temporarily halting its AI chatbot Gemini’s ability to generate images of people, following complaints about historical inaccuracies. For instance, when prompted to depict “a Roman legion,” Gemini would present an anachronistic, cartoonish group of racially diverse foot soldiers, while rendering “Zulu warriors” as uniformly Black.

It seems that Google, like some other AI vendors including OpenAI, had employed clumsy hardcoding techniques to try to “correct” biases in its model. When prompted with requests like “show me images of only women” or “show me images of only men,” Gemini would refuse, citing concerns that such images could “contribute to the exclusion and marginalization of other genders.” Additionally, Gemini was hesitant to generate images of individuals identified solely by their race, such as “white people” or “black people,” purportedly due to concerns about “reducing individuals to their physical characteristics.”
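
To make that failure mode concrete, here is a minimal, hypothetical sketch of what a hardcoded, keyword-based guardrail of this kind could look like. It is not Google’s actual implementation, and every name in it (BLOCKED_PHRASES, generate_image, the refusal message) is invented for illustration; the point is simply that crude substring matching on the prompt yields the blanket refusals described above.

```python
# Hypothetical sketch of a crude, hardcoded prompt-level guardrail.
# This does NOT reflect Gemini's real code; it only illustrates the kind of
# keyword-matching "fix" described above and why it refuses so broadly.

REFUSAL_MESSAGE = (
    "I can't generate this image, because it could contribute to the "
    "exclusion and marginalization of other groups."
)

# Hardcoded trigger phrases, checked with simple substring matching.
BLOCKED_PHRASES = [
    "only women",
    "only men",
    "white people",
    "black people",
]


def generate_image(prompt: str) -> str:
    """Return either a refusal or a placeholder 'image' for the prompt."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return REFUSAL_MESSAGE
    # In a real system this would call the image model; here it's a stub.
    return f"[generated image for: {prompt}]"


if __name__ == "__main__":
    print(generate_image("show me images of only women"))    # blanket refusal
    print(generate_image("a Roman legion marching at dawn"))  # passes through
```

Because a filter like this reacts to surface wording rather than intent, innocuous prompts that happen to contain a trigger phrase get refused while plenty of genuinely problematic prompts sail through, which is part of why this style of patch tends to backfire.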

Some critics on the right have seized upon these flaws as evidence of a “woke” agenda being perpetuated by the tech elite. However, a simpler explanation is apparent: Google, having faced criticism in the past for biases in its tools (e.g., classifying Black men as gorillas and mistaking thermal guns in Black people’s hands for weapons), is so determined to avoid repeating history that it’s attempting to bake a less biased worldview into its image-generating models, albeit with flawed results.


In her best-selling book “White Fragility,” anti-racist educator Robin DiAngelo discusses how the erasure of race — often referred to as “color blindness” — actually exacerbates systemic racial power imbalances rather than alleviating them. By promoting the idea of “not seeing color” or suggesting that merely recognizing the struggles of people of other races is enough to be considered “woke,” individuals contribute to harm by avoiding meaningful conversations on the topic, according to DiAngelo.

Google’s cautious approach to race-based prompts in Gemini didn’t directly address the issue; instead, it attempted to obscure the model’s biases. Many argue that these biases should not be ignored or minimized but rather confronted within the broader context of the training data from which they originate — namely, society on the internet.