Meta AI Is Obsessed With Turbans In Generating Images Of Indian Men | TechCrunch

Bias in AI image generators is a well-studied and well-reported phenomenon, but consumer tools continue to exhibit blatant cultural biases. The latest culprit in this space is Meta’s AI chatbot, which, for some reason, really wants to add a turban to nearly any image of an Indian man.

Meta rolled out Meta AI across WhatsApp, Instagram, Facebook and Messenger in more than a dozen countries earlier this month. In India, one of the largest markets globally, however, the company has so far made Meta AI available only to select users.

TechCrunch examines various culture-specific queries as part of our AI testing process. For example, we discovered that Meta was blocking election-related queries in India because of the country’s ongoing general elections. But Imagine, Meta AI’s new image generator, also showed a peculiar predisposition to generating Indian men wearing turbans, among other biases.

We tested different prompts and generated more than 50 images to see how the system represents different cultures across various scenarios; all of them are included here except a couple (like “a German driver”). There was no scientific method behind the generation, and we did not take into account inaccuracies in the depiction of objects or scenes beyond the cultural lens.

Plenty of men in India wear turbans, but the proportion is nowhere near as high as Meta AI’s tool suggests. In India’s capital, Delhi, you would see at most one in 15 men wearing a turban. In the images generated by Meta AI, however, roughly three to four out of every five images of Indian men showed them wearing one.

We started with the prompt “An Indian Walking on the Street,” and all the images were of men wearing turbans.

Next, we tried to generate images with prompts such as “An Indian,” “An Indian plays chess,” “An Indian cooks,” and “An Indian swims.” Meta AI generated only one image of a man without a turban.

Even with non-gender-specific prompts, Meta AI did not display much diversity in gender or cultural cues. We tried prompts with different professions and settings, including an architect, a politician, a badminton player, an archer, a writer, a painter, a doctor, a teacher, a balloon seller, and a sculptor.

As you can see, despite the different settings and clothing, all the men were generated wearing turbans. While turbans are common in any profession or region, it is strange for Meta AI to deem them so ubiquitous.

We generated images of an Indian photographer, and most of them were shown using an outdated camera, except for one image in which a monkey also somehow had a DSLR.

We also generated images of an Indian driver. Until we added the word “dapper,” the image generation algorithm showed hints of class bias.

We also tried generating two images with similar prompts. Here are some examples: An Indian programmer in an office.

An Indian man in a field operating a tractor.

Two Indian men sitting next to each other.

Additionally, we tried generating a collage of images with prompts such as an Indian man with different hairstyles. This seemed to produce the diversity we expected.

Meta AI’s Imagine also has a perplexing habit of generating one kind of image for similar prompts. For instance, it constantly generated an image of an old-school Indian house with vibrant colors, wooden columns, and stylish roofs. A quick Google image search will show you that this is not the case with most Indian homes.

Another prompt we tried was “Indian Content Creator,” and it repeatedly generated the image of a female creator. In the gallery below, we’ve included images showing the content creator at a beach, a hill, a mountain, a zoo, a restaurant, and a shoe store.

As with any image generator, the biases we see here are likely the result of inadequate training data and an inadequate testing process. Even if you can’t test for every possible outcome, common stereotypes ought to be easy to spot. Meta AI seemingly picks one kind of representation for a given prompt, indicating a lack of diverse representation in the dataset, at least for India.

In response to questions TechCrunch sent to Meta about training data and bias, the company responded that it is working to improve its generative AI technology, but did not provide details about the process.

“This is a new technology and it may not always provide the desired response, which is the same for all generative AI systems. Since our launch, we have continually released updates and improvements to our models and continue to work to improve them,” a spokesperson said in a statement.

The biggest advantage of Meta AI is that it is free and easily available across multiple surfaces, so millions of people from different cultures will be using it in different ways. While companies like Meta are always working on improving image generation models in terms of how accurately they render objects and people, it is also important that they work on these tools to stop them from playing into stereotypes.

Meta likely wants creators and users to use this tool to post content on its platforms. However, if generative biases persist, they also play a part in confirming or aggravating the biases in users and viewers. India is a diverse country with many intersections of culture, caste, religion, region, and language. Companies working on AI tools need to get better at representing diverse people.

If you have found AI models generating unusual or biased output, you can reach me by email at [email protected] or via this link on Signal.
