Something Fascinating Is Wrong With the Eyes in Deepfakes

Did researchers just find the smoking gun?

Upon Reflection

Researchers have figured out a fascinating new way to tell whether a portrait of a human was AI-generated: by using the same techniques astronomers use to analyze observations of galaxies.

As detailed in research presented at this year’s Royal Astronomical Society National Astronomy Meeting, a team led by University of Hull master’s student Adejumoke Owolabi found that light reflections in the eyes of deepfaked humans simply don’t line up.

“The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person,” said University of Hull professor of astrophysics Kevin Pimbblet in a statement.

It’s an ingenious and unorthodox application of scientific research that could have useful implications as AI image generators become eerily good at generating photorealistic images of people who don’t exist, blurring the lines between reality and a deepfaked alternative universe.

Image Credit: Adejumoke Owolabi

Square-Eyed

By using methods conventionally used to “measure the shapes of galaxies,” according to Pimbblet, his team found that deepfake images don’t have the same consistency in reflections across both eyes.

“We detect the reflections in an automated way and run their morphological features through the CAS [concentration, asymmetry, smoothness] and Gini indices to compare similarity between left and right eyeballs,” Pimbblet explained. “The findings show that deepfakes have some differences between the pair.”

The Gini coefficient measures the distribution of light in any given image of a galaxy. It orders the pixels by their brightness and compares the results to a perfectly even distribution.
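The researchers haven’t published their code, but the idea is straightforward to sketch. Below is a minimal, illustrative implementation of the Gini coefficient over pixel brightness, assuming each eye region arrives as a flat list of grayscale values (the function name and input format are ours, not the study’s):

```python
# Illustrative sketch, not the authors' code: the Gini coefficient of
# pixel brightness for an eye region, given as a flat list of
# grayscale values. A value of 0 means light is spread perfectly
# evenly; values near 1 mean it is concentrated in a few pixels
# (e.g. a single specular highlight).

def gini(pixels):
    xs = sorted(pixels)          # order pixels by brightness
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over sorted values:
    #   G = sum_i (2i - n - 1) * x_i / (n * sum(x)),  i = 1..n
    return sum((2 * i - n - 1) * x
               for i, x in enumerate(xs, start=1)) / (n * total)

uniform = [10] * 100             # perfectly even patch
spike = [0] * 99 + [1000]        # one bright highlight
print(gini(uniform))  # 0.0
print(gini(spike))    # 0.99
```

Comparing this score (alongside the CAS measures) between the left and right eye of the same face is what flags deepfakes: a real photo lights both eyes from the same environment, so the scores should roughly agree.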

Being able to reliably tell deepfaked images from real photos is more important than ever, given the potential to mislead, spread disinformation, and further political agendas.

Unfortunately, the researchers’ latest method isn’t foolproof.

“It’s important to note that this is not a silver bullet for detecting fake images,” Pimbblet said. “There are false positives and false negatives; it’s not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

More on deepfakes: YouTube Now Lets You Request the Removal of AI Content That Impersonates You




By stp2y
