Our subconscious skill at deepfake detection could power future automated systems

New research from Australia suggests that our brains are adept at recognizing sophisticated deepfakes, even when we consciously believe the images we see are real.

The discovery further implies the possibility of using people’s neural responses to deepfake faces (rather than their stated opinions) to train automated deepfake detection systems. Such systems would learn the telltale features of deepfaked images not from our confused conscious estimates of plausibility, but from our instinctive perceptual mechanisms for facial identity recognition.

“[A]lthough the brain can ‘recognize’ the difference between real and realistic faces, observers cannot consciously differentiate between them. Our findings on the dissociation between brain response and behavior have implications for how we study fake face perception, the questions we ask about fake image identification, and possible ways to establish protective standards against the misuse of fake images.”

The results emerged in a series of tests designed to gauge how people react to fake images, including images of obviously fake faces, cars and interior spaces, as well as inverted (i.e. upside-down) faces.

Various iterations of and approaches to the experiments, in which two groups of test subjects had to classify briefly shown images as “fake” or “real”. The first round took place on Amazon Mechanical Turk, with 200 volunteers, while the second involved a smaller number of volunteers responding to the tests while hooked up to EEG machines. Source: https://tijl.github.io/tijl-grootswagers-pdf/Moshel_et_al_-_2022_-_Are_you_for_real_Decoding_realistic_AI-generated_.pdf

The paper states:

“Our results demonstrate that with only a brief glimpse, observers may be able to spot fake faces. However, they have a harder time distinguishing real faces from fake ones, and in some cases they perceive fake faces as more real than real faces.

“However, using time-resolved EEG and multivariate pattern classification methods, we found that it was possible to decode both unrealistic and realistic faces from real faces using brain activity.

“This dissociation between behavior and neural responses for realistic faces provides important new evidence on fake face perception, as well as implications for the increasingly realistic class of GAN-generated faces.”

The article suggests that the new work has “several implications” for applied cybersecurity, and that the development of deepfake classifiers should perhaps be driven by the subconscious response, as measured in EEG readings taken in response to many fake images, rather than by viewers’ conscious estimations of the veracity of an image.
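
To make that distinction concrete, the sketch below contrasts the two possible labeling schemes for such a classifier in Python with scikit-learn. It is a conceptual sketch only: the array shapes, the label-flip rate and the choice of logistic regression are illustrative assumptions, not details from the paper.

```python
# Conceptual sketch (not from the paper): two labeling schemes for training
# a detector on neural responses to real and fake face images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))        # stand-in for per-trial EEG features
y_truth = rng.integers(0, 2, 200)     # ground truth: 0 = real, 1 = fake

# Observers' stated judgments are unreliable (the paper reports 69% of real
# faces misidentified as fake), so here ~40% of labels are randomly flipped.
y_stated = np.where(rng.random(200) < 0.4, 1 - y_truth, y_truth)

# Proposed approach: supervise with ground truth, features from brain data.
detector = LogisticRegression().fit(X, y_truth)

# Naive approach: supervise with conscious judgments, inheriting their errors.
naive_detector = LogisticRegression().fit(X, y_stated)
```

The point is simply that the supervision signal comes from the stimulus’s ground truth, which the EEG evidently tracks, rather than from conscious reports, which do not.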

The authors comment*:

“This is reminiscent of findings that people with prosopagnosia, who cannot behaviorally classify or recognize faces as familiar or unfamiliar, nevertheless display stronger autonomic responses to familiar faces than to unfamiliar ones.

“Similarly, what we showed in this study is that although we could accurately decode the difference between real and realistic faces from neural activity, this difference was not perceived behaviorally. Instead, observers misidentified 69% of real faces as fake.”

The new paper is titled Are you for real? Decoding realistic AI-generated faces from neural activity, and comes from four researchers at the University of Sydney, Macquarie University, Western Sydney University and the University of Queensland.

Data

The results emerged from a broader examination of the human ability to distinguish between obviously fake, hyperrealistic (but still fake) and real images, conducted over two rounds of testing.

The researchers used images created by generative adversarial networks (GANs), shared by NVIDIA.

GAN-generated human face images made available by NVIDIA. Source: https://drive.google.com/drive/folders/1EDYEYR3IB71-5BbTARQkhg73leVB9tam

The data included 25 faces, cars and rooms, at rendering levels ranging from “unrealistic” to “realistic”. For face comparison (i.e., for genuine, non-falsified material), the authors used selections from NVIDIA’s Flickr-Faces-HQ (FFHQ) source database. For the other scenarios, they used material from the LSUN database.

Images would ultimately be presented to the test subjects either right-side up or inverted, and at a range of frequencies, with all images resized to 256×256 pixels.
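
A preprocessing step like this can be sketched in a few lines of Python. The study does not specify its tooling, so the use of Pillow and the filenames below are assumptions for illustration.

```python
# Minimal sketch (assumed tooling, not the study's pipeline): resize a
# stimulus to 256x256 pixels and optionally rotate it for inverted trials.
from PIL import Image

def prepare_stimulus(path: str, inverted: bool = False) -> Image.Image:
    img = Image.open(path).convert("RGB")
    img = img.resize((256, 256))  # all stimuli were resized to 256x256 pixels
    if inverted:
        # 180-degree rotation gives the upside-down presentation condition
        img = img.rotate(180)
    return img

# Hypothetical usage with a placeholder filename:
# prepare_stimulus("face_001.png", inverted=True).save("face_001_inv.png")
```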

Once all the material was assembled, 450 stimulus images were selected for testing.

Representative examples of test data.

Trials

The tests themselves were first carried out online, via jsPsych on pavlovia.org, with 200 participants judging various subsets of the total test data collected. Images were presented for 200ms, followed by a blank screen that persisted until the viewer decided whether the flashed image was real or fake. Each image was presented only once and the entire test took 3-5 minutes.

The second, more revealing round used in-person subjects fitted with EEG monitors, with stimuli presented via the PsychoPy2 platform. Each of the twenty sequences contained 40 images, with 18,000 stimuli presented across the entire run of test data.
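
PsychoPy is itself a Python framework, so the trial logic may have loosely resembled the sketch below. The window settings, image file and key mapping are illustrative assumptions, and the 200ms presentation time is carried over from the online round described above.

```python
# Minimal sketch (assumed parameters): flash a stimulus briefly in PsychoPy,
# then wait for a "real"/"fake" keypress, as in the behavioral task.
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="grey", units="pix")
stim = visual.ImageStim(win, image="face_001.png", size=(256, 256))  # hypothetical file

stim.draw()
win.flip()
core.wait(0.2)   # 200 ms presentation, as in the online round
win.flip()       # blank screen until the subject responds

keys = event.waitKeys(keyList=["r", "f"])  # hypothetical mapping: r = real, f = fake
print("response:", keys[0])

win.close()
core.quit()
```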

The collected EEG data was decoded in MATLAB with the CoSMoMVPA toolbox, using a leave-one-out cross-validation scheme under linear discriminant analysis (LDA).
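
The study’s decoding pipeline was MATLAB-based, but the same time-resolved, leave-one-out cross-validated LDA analysis can be sketched in Python with scikit-learn; the synthetic data and dimensions below are placeholders rather than the study’s parameters.

```python
# Minimal Python analogue (the study used MATLAB + CoSMoMVPA): time-resolved
# decoding of real vs. fake stimuli with leave-one-out cross-validated LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n_trials, n_channels, n_times = 40, 64, 100   # placeholder dimensions
X = rng.normal(size=(n_trials, n_channels, n_times))  # stand-in EEG epochs
y = rng.integers(0, 2, n_trials)              # 0 = real face, 1 = fake face

# Train and test one classifier per time point, leaving one trial out per fold.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=LeaveOneOut()).mean()

print("peak decoding accuracy:", accuracy.max())
```

Fitting one classifier per time point is what allows the analysis to localize the real-versus-fake signal to a short window of processing.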

The LDA classifier was the component that exposed the gap between the brain’s response to fake stimuli and the subject’s own opinion as to whether an image was fake.

Results

Interested in seeing if EEG test subjects could distinguish between fake and real faces, the researchers aggregated and processed the results, finding that participants could easily distinguish real faces from unrealistic ones, but apparently had difficulty identifying the realistic fake faces generated by the GAN. Whether the image was upside down or not seemed to make little difference.

Behavioral discrimination of real and synthetic faces, in the second round.

However, the EEG data told a different story.

The paper states:

“Although the observers had difficulty distinguishing real from fake faces and tended to classify the fake faces as real, the EEG data contained signal information relevant to this distinction that differed significantly between realistic and unrealistic faces, and this signal appeared to be confined to a relatively short stage of processing.”

The disparity between the accuracy of the EEG decoding and the subjects’ reported opinions (i.e., whether the facial images were fake or not), with the EEG captures coming closer to the truth than the manifest perceptions of those involved.

The researchers conclude that although observers may have difficulty explicitly identifying fake faces, these faces nevertheless have “distinct representations in the human visual system”.

The disparity found led the researchers to speculate on the potential applicability of their findings for future security mechanisms:

“In an applied setting such as cybersecurity or Deepfakes, examining the ability to detect realistic faces might be better pursued using machine learning classifiers applied to neuroimaging data rather than targeting behavioral performance.”

They conclude:

“Understanding the dissociation between brain and behavior for false face detection will have practical implications for how we address the potentially detrimental and universal spread of artificially generated information.”

* My conversion of inline citations to hyperlinks.

First published July 11, 2022.