AI faces are 'more real' than human faces — but only if they're white

Artificial intelligence (AI) can generate faces that look more "real" to people than photos of actual human faces, a study published Nov. 13 in the journal Psychological Science found.

The study's senior author, Amy Dawel, a clinical psychologist and lecturer at the Australian National University, called this phenomenon "hyperrealism": artificially generated objects that humans perceive as more "real" than their actual real-world counterparts. The finding is particularly worrying in light of the rise of deepfakes, artificially generated material designed to impersonate real individuals.

But there's a catch: AI achieved hyperrealism only when it generated white faces; AI-generated faces of color still fell into the uncanny valley. This could carry implications not just for how these tools are built but also for how people of color are perceived online, Dawel said.

The implications of biased AI

Dawel was inspired by research published in the journal PNAS in February 2022. Authors Sophie Nightingale, a lecturer in psychology at Lancaster University in the U.K., and Hany Farid, a professor of electrical engineering and computer sciences at the University of California, Berkeley, found that participants couldn't tell the difference between AI-generated and human faces. But Dawel wanted to go a step further to assess whether there might be a racial element to how people perceive AI faces. 

In the new study, the participants, all of whom were white, were shown 100 faces, some of them photographs of real humans and some generated with the StyleGAN2 image-generation tool. After deciding whether each face was AI-generated or human, the participants rated their confidence in their choice on a scale from zero to 100.
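
For readers curious what "generated with StyleGAN2" looks like in practice, here is a minimal sketch of sampling one face from a pretrained generator. It follows the usage pattern documented in NVIDIA's stylegan2-ada-pytorch repository; the checkpoint filename is a placeholder, and the code assumes that repository is on the Python path (its classes are needed to unpickle the network) and that a CUDA device is available.

```python
# Sketch only: sample one face from a pretrained StyleGAN2 generator,
# per the usage documented in NVIDIA's stylegan2-ada-pytorch README.
import pickle

import torch

# 'ffhq.pkl' is a placeholder for a downloaded checkpoint file.
with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # generator, a torch.nn.Module

z = torch.randn([1, G.z_dim]).cuda()    # random latent code
c = None                                # class labels (unused for face models)
img = G(z, c)                           # output: NCHW float32 in [-1, 1]

# Map to 8-bit RGB for viewing or saving.
img = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
img = img[0].permute(1, 2, 0).cpu().numpy()
```

The point of the sketch is that, once a network is trained, synthesizing a face is a single sampling step; what comes out depends entirely on the images the network was trained on, which is where the bias discussed below enters.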

"We were so surprised to find that some AI faces were perceived as hyperreal that our next step was to attempt to replicate our finding from reanalyzing Nightingale & Farid's data in a new sample of participants," Dawel told Live Science in an email. 

The reason for the racial discrepancy is simple: AI algorithms, including StyleGAN2, are disproportionately trained on white faces, she said. This training bias led to white faces that were "extra-real," as Dawel put it.

Another example of racial bias in AI systems is tools that turn ordinary photos into professional headshots, Dawel said. For people of color, such tools alter skin tone and eye color. AI-generated images are also increasingly used in areas such as marketing, advertising and illustration. If these tools are built with biases, their use may reinforce racial prejudices in the media people consume, which will have deep consequences on a social level, said Frank Buytendijk, chief of research at Gartner Futures Lab and an AI expert.

"Already, teenagers feel the peer pressure of having to look like the ideal that is set by their peers," he told Live Science in an email. "In this case, if we want our faces to be picked up, accepted, by the algorithm, we need to look like what the machine generates."

Mitigating risks

But there's another finding that worries Dawel and could further exacerbate social problems. The people who made the most mistakes, identifying AI-generated faces as real, were also the most confident in their choices. In other words, the people most often fooled by AI are the least aware that they are being duped.
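
To make that pattern concrete, here is a small, purely illustrative Python sketch, using toy data rather than the study's, of how such a confidence-accuracy dissociation can be checked: summarize each participant's proportion of correct judgments and mean confidence, then correlate the two across participants. All names and numbers below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trial-level data (not the study's): each of 20 hypothetical
# participants judges 100 faces and rates confidence on a 0-100 scale.
n_participants, n_trials = 20, 100
correct = rng.random((n_participants, n_trials)) < 0.6          # True = correct judgment
confidence = rng.integers(0, 101, (n_participants, n_trials))   # 0-100 rating

# Per-participant summary: proportion correct and mean confidence.
accuracy = correct.mean(axis=1)
mean_conf = confidence.mean(axis=1)

# The pattern described in the article would appear as a *negative*
# correlation: lower accuracy paired with higher confidence. With this
# random toy data, r will sit near zero.
r = np.corrcoef(accuracy, mean_conf)[0, 1]
print(f"accuracy-confidence correlation: r = {r:.2f}")
```

A negative value of r on real response data would correspond to the inverted relationship described above; the sketch just shows how little machinery is needed to test for it in any similar dataset.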

Dawel argues that her research shows that generative AI needs to be developed in ways that are transparent to the public and that it should be monitored by independent bodies.

"In this case, we had access to the pictures the AI algorithm was trained on, so we were able to identify the White bias in the training data," Dawel said. "Much of the new AI is not transparent like this though, and the investment in [the] AI industry is enormous while the funding for science to monitor it is minuscule, hard to get, and slow."

Mitigating the risks will be difficult, but new technologies normally follow a similar pathway, Buytendijk said: when a technology hits the market, nobody fully understands its implications; those implications are realized gradually; and regulations slowly kick in to address them, which then feeds back into the technology's development.

This process isn't quick enough for Dawel, because AI is developing rapidly and already making a huge impact. As a result, "research on AI requires significant resources," she said. "While governments can contribute to this, I believe that the companies creating AI should be required to direct some of their profit to independent research. If we truly want AI to benefit rather than harm our next generation, the time for this action is now."

Keumars Afifi-Sabet
Channel Editor, Technology

Keumars is the technology editor at Live Science. He has written for a variety of publications including ITPro, The Week Digital, ComputerActive, The Independent, The Observer, Metro and TechRadar Pro. He has worked as a technology journalist for more than five years, having previously held the role of features editor with ITPro. He is an NCTJ-qualified journalist and has a degree in biomedical sciences from Queen Mary, University of London. He's also registered as a foundational chartered manager with the Chartered Management Institute (CMI), having qualified as a Level 3 Team leader with distinction in 2023.