Artificial intelligence has come a long way in recent years. It has evolved from merely helping analyze data to generating text, images, and even video. One fascinating application is the creation of human faces.
Human brains are generally excellent at telling the real from the fake. But in this domain, AI has become extremely good at generating photorealistic images of human faces.
In a recent study, Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley ran experiments to evaluate whether participants could distinguish state-of-the-art AI-synthesized faces from real faces, and how much they trusted them.
The research, published in the journal Proceedings of the National Academy of Sciences, confirmed just how convincing artificial "faces" can be. Not only are people unable to tell real faces from AI-generated ones, they also seem to trust the AI-generated faces more, and that could be a problem.
“Our assessment of the photorealism of AI-generated faces shows that artificial intelligence has crossed the uncanny valley and are capable of producing faces that are indistinguishable—and more credible than real faces,” the researchers write.
In that study, over 300 research participants were asked to judge whether a given photo showed a real person or a face generated by an AI. They got it right only about half the time, slightly worse than a coin toss.
The results mark a tipping point that should unsettle anyone who believes they are savvy enough to spot a deepfake next to a genuine image. AI-generated human faces now look just as real as photos of actual people.
The good and the ugly side of AI-synthesized faces
The researchers note that this engineering feat "should be considered a victory for vision and computer graphics fields." For example, it democratizes access to expensive resources by allowing low-budget enterprises to generate realistic images for ads and commercials. But at the same time, AI-synthesized faces can be used for disinformation, fraud, propaganda, and even the non-consensual creation of synthetic porn. Hence, the researchers "encourage those developing these technologies to consider whether the associated risks are greater than their benefits." "We discourage the development of technology simply because it is possible," they contend.
Neural networks are getting incredibly good
As a starting point for their investigation, the scientists used 400 artificial faces created with an open-source AI tool developed by NVIDIA. The program is a generative adversarial network, meaning it creates images using two competing neural networks. The "generator" starts by producing an essentially random image. The "discriminator," drawing on an extensive collection of natural images, judges the result and provides feedback to the generator. With each round of this back-and-forth, the generator improves, until the discriminator can no longer tell the real images from the fakes. As it turns out, humans aren't much better.
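The adversarial back-and-forth described above can be sketched in miniature. The toy example below is not NVIDIA's face model (that is a far larger network trained on face photographs); it pits a two-parameter generator against a logistic discriminator on one-dimensional Gaussian "data," with all parameters and learning rates chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps random noise z to a "sample", g(z) = a*z + b
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c),
# outputting the probability that x came from the real data
w, c = 0.1, 0.0

lr = 0.01
real_mean, real_std = 4.0, 1.0  # stand-in for the "natural image" distribution

for step in range(2000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = rng.normal(real_mean, real_std)
    z = rng.normal()
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # gradient ascent on log d(real) + log(1 - d(fake))
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- Generator update: push d(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal()
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w  # gradient of log d(fake) w.r.t. x_fake
    a += lr * grad * z
    b += lr * grad

# Samples from the trained generator
fakes = a * rng.normal(size=5000) + b
```

Real systems replace the two linear models with deep convolutional networks and train on minibatches, but the alternating update structure is the same.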
For the study, the psychologists assembled a sample of 400 artificial images created by NVIDIA's AI, balanced across gender, age, and race. It featured 200 male and 200 female faces, with 100 faces from each of four racial groups: white, Black, South Asian, and East Asian. For each artificial face, the scientists selected a demographically matched real image from the discriminator's training set.
In the first experiment, over 300 people examined a selection of 128 faces and judged whether each was real or synthetic. They were right only 48.2 percent of the time.
The participants did not find all faces equally difficult. They did worse on white faces, probably because the AI's training data included far more photos of white people, making those synthetic faces more convincing.
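A quick back-of-the-envelope check shows why 48.2 percent is meaningfully below a coin toss rather than noise. The trial count below is a hypothetical round number (not the study's actual total of judgments), used only to illustrate a normal-approximation test of a proportion against chance.

```python
import math

n_trials = 10_000   # hypothetical total number of real-vs-fake judgments
accuracy = 0.482    # observed proportion correct, from the study
p0 = 0.5            # chance level (coin toss)

# Normal-approximation z-test for a proportion
se = math.sqrt(p0 * (1 - p0) / n_trials)   # standard error under chance
z = (accuracy - p0) / se
# two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With that many judgments, a 48.2 percent hit rate sits several standard errors below chance, so participants were not merely guessing badly at random; the fakes systematically looked real.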
In the second experiment, a new group of participants received help: before judging the images, they were given brief training on how to recognize a computer-generated face, and they received feedback on each answer.
These participants performed slightly better, with an average accuracy of 59 percent. Surprisingly, the improvement seemed to come from the tutorial rather than from learning through feedback: participants did slightly worse in the second half of the experiment than in the first.
In the final experiment, participants rated how trustworthy they found each of the 128 faces on a scale of 1 to 7. In a stunning result, artificial faces were rated, on average, 7.7 percent more trustworthy than real faces.
Taken together, these results lead to the striking conclusion that AIs are "capable of producing captivating and more trustworthy faces than real faces," the researchers say.
The implications could be huge
The results point to a future in which strange events involving recognition, memory, and a complete flyover of the uncanny valley are possible. They also mean that "anyone can create synthetic content without specialized knowledge of Photoshop or CGI," says Sophie Nightingale.
Though relatively small, the study suggests that nefarious actors could use AI-generated faces to deceive people. The same approach works for audio and video, which means it could also be used to spread disinformation.
The researchers theorized that someone could take advantage of these deepfakes. Take, for example, the current situation in Ukraine. Consider how quickly a fabricated video of Vladimir Putin (or Joe Biden, for that matter) declaring war on a long-time foe would go viral on social media. It could be tough to convince people that what they saw with their own eyes wasn't real.
The technology also has significant implications for real photos
Another primary concern is synthetic pornography, which shows a person performing intimate acts that they never actually did.
“Perhaps the most harmful consequence is that in a digital environment where a picture or video can be manipulated, people might question the validity of any uncomfortable or unwelcome recording,” the researchers write.
Because the sample size in the study was relatively small, the findings need to be replicated at a larger scale. Even so, the results are troubling, especially given how fast the technology has progressed. The researchers say that if we want to protect the public from deepfakes, there should be guidelines on how synthesized images are created and distributed.