New research from Australia suggests that our brains are adept at recognizing sophisticated deepfakes, even when we consciously believe the images we see are real.
The discovery further implies the possibility of using people’s neural responses to deepfake faces (rather than their stated opinions) to train automated deepfake detection systems. Such systems would learn the telltale features of deepfake images not from our confused conscious estimates of plausibility, but from our instinctive perceptual mechanisms for facial identity recognition.
“[A]lthough the brain can ‘recognize’ the difference between realistic and real faces, observers cannot consciously differentiate between them. Our findings about the dissociation between brain response and behavior have implications for how we study fake face perception, the questions we ask about fake image identification, and possible ways to establish standards for protection against the misuse of fake images.”
The results emerged in a series of tests designed to gauge how people react to fake images, including images of obviously fake faces, cars, interior spaces, and inverted faces (i.e., upside down).
The paper states:
“Our results demonstrate that with only a brief glimpse, observers may be able to spot unrealistic fake faces. However, they have a harder time distinguishing realistic fake faces from real faces, and in some cases they judge fake faces to be more real than real faces.
“However, using time-resolved EEG and multivariate pattern classification methods, we found that it was possible to decode both unrealistic and realistic fake faces from real faces using brain activity.
“This dissociation between behavior and neural responses for realistic faces provides important new evidence on fake face perception, as well as implications involving the increasingly realistic class of GAN-generated faces.”
The article suggests that the new work has “several implications” for applied cybersecurity, and that the development of deep learning classifiers should perhaps be driven by subconscious responses, as measured in EEG readings taken while viewers are exposed to many fake images, rather than by viewers’ conscious estimation of the veracity of an image.
The authors comment*:
“This is reminiscent of findings that people with prosopagnosia, who cannot behaviorally classify or recognize faces as familiar or unfamiliar, nevertheless display stronger autonomic responses to familiar faces than to unfamiliar faces.
“Similarly, what we showed in this study is that although we could accurately decode the difference between realistic and real faces from neural activity, this difference was not perceived behaviorally. Instead, observers misidentified 69% of real faces as fake.”
The new work is titled Are You For Real? Decoding Realistic AI-Generated Faces From Neural Activity, and comes from four researchers at the University of Sydney, Macquarie University, Western Sydney University and the University of Queensland.
The results emerged from a broader examination of the human ability to distinguish between obviously fake, hyperrealistic (but still fake) and real images, conducted over two rounds of testing.
The researchers used images created by generative adversarial networks (GANs) and shared by Nvidia.
The data included 25 faces, cars and rooms at rendering levels ranging from “unrealistic” to “realistic”. For the face comparisons (i.e., for genuine, non-faked material), the authors used selections from Nvidia’s Flickr-Faces-HQ (FFHQ) source database. For the other scenarios, they used material from the LSUN database.
Images would ultimately be presented to the test subjects either right-side up or inverted, and at a range of presentation frequencies, with all images resized to 256×256 pixels.
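As a rough illustration of that preparation step (this is not the researchers’ own code; a minimal sketch assuming the Pillow library, with a hypothetical prepare_stimulus helper):

```python
from PIL import Image, ImageOps

def prepare_stimulus(path: str, inverted: bool = False) -> Image.Image:
    """Load a stimulus image, resize it to 256x256 pixels, and optionally
    flip it top-to-bottom for the upside-down presentation condition."""
    img = Image.open(path).convert("RGB").resize((256, 256))
    if inverted:
        img = ImageOps.flip(img)  # vertical flip: top becomes bottom
    return img
```

Inverted-face conditions of this kind are standard in face-perception research, since turning a face upside down disrupts holistic face processing while leaving low-level image statistics intact.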
Once all the material was assembled, 450 stimuli images were selected for testing.
The tests themselves were first carried out online, via jsPsych on pavlovia.org, with 200 participants judging various subsets of the total collected test data. Images were presented for 200ms, followed by a blank screen that persisted until the viewer decided whether the flashed image was real or fake. Each image was shown only once, and the entire test took 3-5 minutes.
The second, more revealing round used in-person subjects fitted with EEG monitors, with stimuli presented on the PsychoPy2 platform. Each of the twenty sequences contained 40 images, with 18,000 images presented across the entire slice of test data.
A linear discriminant analysis (LDA) classifier was the component used to decode, from brain activity alone, whether a presented image was fake, independently of the subject’s own opinion as to whether it was fake.
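The paper’s analysis code is not reproduced here, but time-resolved decoding of this kind is commonly done by training a separate classifier at each EEG time point. A minimal sketch, assuming scikit-learn and purely synthetic data standing in for real EEG epochs (the array shapes, the injected signal window and all parameter values below are illustrative, not the study’s):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG epochs: (trials, channels, time points).
n_trials, n_channels, n_times = 200, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # 0 = real face, 1 = fake face

# Inject a weak class-dependent signal in a short time window,
# mimicking a brief processing stage that carries the distinction.
X[y == 1, :, 40:55] += 0.3

# Time-resolved decoding: cross-validate a separate LDA per time point.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(f"peak accuracy {accuracy.max():.2f} at sample {accuracy.argmax()}")
```

Accuracy hovers around chance (0.5) outside the signal window and rises above it inside, which is the signature pattern the paper reports: a decodable real-versus-fake distinction confined to a short stretch of the neural response.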
Interested in seeing if EEG test subjects could distinguish between fake and real faces, the researchers aggregated and processed the results, finding that participants could easily distinguish real faces from unrealistic ones, but apparently had difficulty identifying the realistic fake faces generated by the GAN. Whether the image was upside down or not seemed to make little difference.
However, the EEG data told a different story.
The paper states:
“Although the observers had difficulty distinguishing real from fake faces and tended to misclassify the fake faces as real, the EEG data contained decodable information relevant to this distinction that differed significantly between realistic and unrealistic faces, and this signal seemed to be limited to a relatively short processing stage.”
The researchers conclude that although observers may have difficulty explicitly identifying fake faces, these faces have “distinct representations in the human visual system”.
The disparity found led the researchers to speculate on the potential applicability of their findings for future security mechanisms:
“In an applied setting such as cybersecurity or deepfakes, examining the ability to detect realistic faces might be better pursued using machine learning classifiers applied to neuroimaging data rather than targeting behavioral performance.”
“Understanding the dissociation between brain and behavior for false face detection will have practical implications for how we address the potentially detrimental and universal spread of artificially generated information.”
* My conversion of inline citations to hyperlinks.
First published July 11, 2022.