See for yourself how biased AI image models are with these new tools


One theory as to why this might be is that nonbinary brown people may have had more visibility in the press recently, meaning their images end up in the data sets that AI models use for training, Jernite says.

OpenAI and Stability.AI, the company that created Stable Diffusion, say they have implemented fixes to mitigate the biases embedded in their systems, such as blocking certain prompts that appear likely to generate offensive images. However, these new tools from Hugging Face show just how limited those adjustments are.

A spokesperson for Stability.AI told us that the company trains its models on “datasets specific to different countries and cultures,” adding that this should “serve to mitigate biases caused by overrepresentation in general datasets.”

An OpenAI spokesperson wouldn’t comment specifically on the tools, but pointed us to a blog post explaining how the company added various techniques to DALL-E 2 to filter out bias and sexual and violent images.

These biases are becoming an increasingly pressing issue as AI models gain wider adoption and produce ever more realistic images. The models are already being deployed in products such as stock photography. Luccioni says she worries that they risk reinforcing harmful biases on a large scale. She hopes the tools she and her team have created will bring more transparency to image-generating AI systems and underscore the importance of making them less biased.

Part of the problem is that these models are trained on mostly US-centric data, which means they largely reflect American associations, biases, values, and culture, says Aylin Caliskan, an associate professor at the University of Washington who studies bias in AI systems and was not involved in the research.

“What ends up happening is the imprint of this online American culture … that’s been perpetuated around the world,” Caliskan says.

Caliskan says Hugging Face’s tools will help AI developers better understand and reduce bias in their models. “When people see these examples firsthand, I believe they will be able to better understand the significance of these biases,” she says.


