Mark Zuckerberg is married to a Chinese-American woman, but Meta’s AI image generator can’t imagine an Asian man with a white woman


  • Meta’s AI image generator, Imagine, has been accused of racial bias.
  • The tool was unable to produce pictures of an Asian man with a white woman.

Meta’s AI image generator has been accused of racial bias after users discovered it was unable to create a picture of an Asian man with a white woman.

The AI-powered image generator, Imagine, was launched late last year. It is able to take almost any written prompt and instantaneously transform it into a realistic picture.

But users found the AI was unable to create images showing mixed-race couples. When Business Insider asked the tool to produce an image of an Asian man with a white wife, only pictures of Asian couples were shown.

The AI’s apparent bias is surprising given that Mark Zuckerberg, Meta’s CEO, is married to a woman of East Asian heritage.

Priscilla Chan, the daughter of Chinese immigrants to America, met Zuckerberg whilst studying at Harvard. The couple married in 2012.

Some users took to X to share pictures of Zuckerberg and Chan, joking that they had successfully managed to create the images using Imagine.

The Verge first reported the issue on Wednesday, with reporter Mia Sato saying she had tried “dozens of times” to create images of Asian men and women with white partners and friends.

Sato said the image generator was only able to return one accurate image of the races specified in her prompts.

Meta did not immediately respond to a request for comment from BI, made outside normal working hours.

Meta is by no means the first major tech company to be blasted for “racist” AI.

In February, Google was forced to pause its Gemini image generator after users found it was creating historically inaccurate images.

Users found that the image generator would produce pictures of Asian Nazis in 1940s Germany, Black Vikings, and even female medieval knights.

The tech company was accused of being overly “woke” as a result.

At the time, Google said “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

But AI’s racial prejudices have long been a cause for concern.

Dr Nakeema Stefflbauer, a specialist in AI ethics and CEO of the women-in-tech network Frauenloop, previously told Business Insider: “When predictive algorithms or so-called ‘AI’ are so widely used, it can be difficult to recognise that these predictions are often based on little more than rapid regurgitation of crowdsourced opinions, stereotypes, or lies.”

“Algorithmic predictions are excluding, stereotyping, and unfairly targeting individuals and communities based on data pulled from, say, Reddit,” she said.

Generative AIs like Gemini and Imagine are trained on massive amounts of data taken from society at large.

If mixed-race couples are underrepresented in the data used to train the model, that imbalance could explain why the AI struggles to generate these types of images.
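To illustrate the underlying idea in the simplest possible terms (a hypothetical sketch, not a description of Meta’s actual system): a generative model that merely reproduces the frequencies it sees in its training data will rarely produce categories that the data underrepresents.

```python
import random

# Hypothetical, heavily simplified illustration of training-data imbalance.
# Real image generators are vastly more complex; this only shows how skewed
# category frequencies in training data can skew what a model outputs.
training_data = (
    ["asian_couple"] * 950              # heavily represented category
    + ["asian_man_white_woman"] * 50    # underrepresented category
)

def naive_generator(data, n_samples=10):
    """Sample outputs in proportion to their frequency in the training data."""
    return [random.choice(data) for _ in range(n_samples)]

print(naive_generator(training_data))
# Most outputs will be "asian_couple"; the underrepresented pairing appears
# only occasionally, mirroring how data imbalance can bias generated results.
```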

