Detecting Deepfakes in 2026: A Guide to Spotting “Perfect” Nano Banana Images 

By a Digital Forensics Analyst 

I spent this morning analyzing a “leaked” photo of a Senator taking a bribe in a dim parking garage. 

Three years ago, this would have been easy. I would have looked for the hands. In 2023, AI couldn’t count to five. We used to laugh at the “spaghetti fingers” and the eyes that looked in two different directions. 

But this photo? The hands were perfect. The skin texture had pores. The text on the “bribe” envelope was legible. It was generated by Nano Banana Pro, Google’s latest model, and to the naked eye, it was flawless. 

But to me, it was obviously fake. 

I didn’t look at the face. I didn’t look at the fingers. I looked at the wall behind him. 

The shadow of the Senator was falling at a 45-degree angle. But the shadow of the concrete pillar next to him was falling at a 30-degree angle. 

Physics doesn’t lie. AI does. 

We have entered the “Post-Artifact” era of Deepfakes. The obvious glitches are gone. The models have conquered anatomy. But they still haven’t conquered Light Transport.

If you want to spot a fake in 2026, you have to stop looking for broken pixels and start looking for broken physics. Here is my guide to spotting the “Perfect” fake. 

1. The Shadow Discordance (The Dead Giveaway) 

Generative AI models like Nano Banana or Flux are not 3D rendering engines. They don’t calculate light rays; they calculate pixel probability. 

They know that if there is a face, there is usually a shadow. But they often forget to check whether all the shadows in the scene agree on the same light source.

The “Sundial Test”: 

Find two objects in the image. Draw a line from the top of each object to the tip of its shadow. Extended, those lines should meet at the light source (the sun or a lamp).

In AI images, these lines often point to two completely different suns. 
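
If you want to go beyond eyeballing it, the test is easy to automate. Below is a minimal sketch in Python (mine, not part of any forensic toolkit): you click the top of each object and the tip of its shadow by hand, and the script checks whether the resulting lines share a common intersection point. The 40-pixel tolerance is an arbitrary choice, and the math gets shaky when the shadow lines are nearly parallel.

```python
# A minimal sketch of the Sundial Test, assuming you have clicked the pixel
# coordinates of each object's top and its shadow's tip by hand. The 40 px
# tolerance is arbitrary; the fit is ill-conditioned if the lines are
# nearly parallel.
import numpy as np

def light_source_consistency(pairs, tol=40.0):
    """pairs: list of ((x_top, y_top), (x_shadow_tip, y_shadow_tip)).
    Each pair defines a line that should pass through the light source's
    position in the image. Returns the best common intersection point,
    the worst line-to-point distance in pixels, and a pass/fail flag."""
    A, b = [], []
    for top, tip in pairs:
        top, tip = np.asarray(top, float), np.asarray(tip, float)
        d = tip - top
        n = np.array([-d[1], d[0]])
        n /= np.linalg.norm(n)        # unit normal of the shadow line
        A.append(n)                   # every point p on the line satisfies n.p = n.top
        b.append(n @ top)
    A, b = np.array(A), np.array(b)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares "common sun"
    dists = np.abs(A @ p - b)                   # each line's distance from that point
    return p, dists.max(), bool(dists.max() <= tol)

# Two shadows that agree on a light source, and one that points somewhere else.
pairs = [((120, 300), (180, 420)),
         ((560, 280), (620, 400)),
         ((900, 310), (870, 430))]
point, worst, consistent = light_source_consistency(pairs)
print(point, worst, consistent)
```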

In the Senator’s photo, the light hitting his face suggested a ceiling light to his left. The shadow on the floor suggested a light to his right. 

It’s a subtle cognitive dissonance. Your brain knows something is wrong (“The Uncanny Valley”), but you can’t place it. Usually, it’s the shadows fighting each other. 

2. The “Netflix Gloss” (Texture Repetition) 

AI has a specific aesthetic I call the “Netflix Gloss.” 

It’s too clean. It looks like a high-budget TV drama where the dirt has been designed by an art director. 

Real cameras have Sensor Noise. If you take a photo in a dim parking garage with an iPhone or a DSLR, there will be grain. That grain is random, chaotic, and specific to the camera’s ISO setting. 

Nano Banana simulates grain, but it simulates it too evenly. 

Zoom in 400%. 

  • In a real photo, the noise in the dark shadows is different from the noise in the bright highlights. 
  • In an AI photo, the “grain” is often a uniform filter applied over the whole image. It looks like a texture overlay, not a physical byproduct of a photon hitting a sensor. 

If the dirt on the floor looks “aesthetically pleasing,” it’s fake. Real dirt is ugly. 
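
Here is a crude way to put numbers on that check, sketched under a few assumptions: the crop is loaded with Pillow, a Gaussian blur stands in for the “clean” image, and the 25th/75th-percentile brightness cutoffs are arbitrary. JPEG compression will muddy the result, so treat it as a hint, not a verdict.

```python
# Rough shadow-vs-highlight noise comparison. In a real photo the noise is
# signal-dependent, so the two numbers usually differ; a flat overlaid
# "film grain" tends to give near-equal values.
import numpy as np
from PIL import Image, ImageFilter

def noise_by_brightness(path):
    img = Image.open(path).convert("L")
    arr = np.asarray(img, dtype=float)
    smooth = np.asarray(img.filter(ImageFilter.GaussianBlur(radius=2)), dtype=float)
    residual = arr - smooth                      # crude high-frequency "grain" estimate

    dark = arr < np.percentile(arr, 25)          # shadow pixels
    bright = arr > np.percentile(arr, 75)        # highlight pixels
    return residual[dark].std(), residual[bright].std()

shadow_noise, highlight_noise = noise_by_brightness("suspect.jpg")
print(f"shadow grain: {shadow_noise:.2f}  highlight grain: {highlight_noise:.2f}")
```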

3. The “Subsurface” Tell (The Waxy Ear) 

Hold a flashlight up to your hand. See how your fingers glow red? That is Subsurface Scattering—light entering your skin, bouncing around, and coming out. 

AI struggles with this. It treats skin like plastic. 

Look at the ears. Look at the nostrils. 

In a real backlit photo, the ears should glow slightly red. 

In many Nano Banana generations, the ears look opaque, like wax fruit. Or, conversely, they glow too much, like a gummy bear. 

The model understands “Skin” as a surface texture, but it doesn’t fully understand “Flesh” as a translucent medium. If the person looks like they belong in Madame Tussauds, check the ears. 
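
There is no clean formula for “looks like wax,” but you can at least compare how red the thin parts of the face are against the thick parts. The sketch below assumes you crop an ear and a nearby cheek by hand; the crop boxes and file name are made up, and this is a rough heuristic, not a forensic test.

```python
# Crude "waxy ear" heuristic: in a strongly backlit real photo, the thin ear
# usually reads redder than the thick cheek. The crop boxes are hypothetical.
import numpy as np
from PIL import Image

def red_fraction(img, box):
    """Mean share of the red channel inside a (left, top, right, bottom) crop."""
    crop = np.asarray(img.crop(box), dtype=float)
    r, g, b = crop[..., 0], crop[..., 1], crop[..., 2]
    return (r / (r + g + b + 1e-6)).mean()

img = Image.open("backlit_portrait.jpg").convert("RGB")
ear = red_fraction(img, (820, 410, 880, 500))     # hypothetical ear crop
cheek = red_fraction(img, (700, 430, 760, 520))   # hypothetical cheek crop
print(f"ear red share: {ear:.3f}  cheek red share: {cheek:.3f}")
```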

4. The “Background Logic” Failure 

The foreground is where the user focused their prompt. 

“Senator taking a bribe, realistic, 8k.” 

The AI pours most of its effort into the Senator. 

The background gets the leftovers. And that is where it gets lazy. 

Look at the architecture. 

  • Do the lines of the bricks meet? 
  • Is that door handle at a normal height? 
  • The “MC Escher” Test: Follow a staircase or a railing in the background. Does it go nowhere? Does it merge into a wall? 

In the Senator photo, there was a car in the deep background. It looked like a car… until you zoomed in. It was a “Dream Car.” It had the front of a Ford and the back of a Toyota, and one headlight was square while the other was round. 

The AI hallucinated a “Car-shaped object” to fill the space, but it didn’t bother to engineer a coherent vehicle. 

5. The “Infinite Focus” Problem 

Real cameras have a Focal Plane. 

If the Senator’s eyes are in focus, his ears might be slightly blurry. The wall behind him should be blurry (Bokeh). 

AI often creates images with Infinite Depth of Field. Everything is sharp. The button on the shirt is sharp. The car 50 feet away is sharp. 

This is physically impossible for a camera lens in a dark room (which would need a wide aperture, creating a shallow depth of field). 
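
You can sanity-check that with the standard thin-lens blur-circle formula. The numbers below are assumptions, not measurements from the image: a full-frame camera with a 50 mm lens wide open at f/1.8, the Senator at 3 m, the background car at roughly 15 m.

```python
# Back-of-the-envelope depth-of-field check using the thin-lens blur-circle
# formula. All camera parameters here are assumed, not read from the image.
def blur_circle_mm(focal_mm, f_number, focus_m, subject_m):
    """Diameter (mm) of the blur disk for a point at subject_m metres
    when the lens is focused at focus_m metres."""
    f = focal_mm / 1000.0
    c = (f * f / f_number) * abs(subject_m - focus_m) / (subject_m * (focus_m - f))
    return c * 1000.0

# Senator in focus at 3 m, background car ~15 m (about 50 feet) away,
# 50 mm lens at f/1.8 to cope with the dark garage.
blur = blur_circle_mm(focal_mm=50, f_number=1.8, focus_m=3.0, subject_m=15.0)
print(f"background blur disk ~ {blur:.2f} mm on the sensor")
# ~0.38 mm on a 36 mm-wide sensor, i.e. dozens of pixels of blur in a typical
# frame. That background cannot be tack-sharp under these conditions.
```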

If the image looks “crunchy” and sharp from front to back, be suspicious. 

6. Conclusion: Trust Your Gut (and the Metadata) 

We are entering a dangerous time. 

Tools like Google’s SynthID embed invisible watermarks into these images, but bad actors can scrub them. 

Metadata (EXIF data) is easily faked. 
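
Still, it takes thirty seconds to look. An image with no EXIF at all, or EXIF claiming a camera that doesn’t match the look of the file, is worth noting. A quick dump with Pillow, as first-pass triage only:

```python
# Dump whatever EXIF the file carries. Treat it as triage only: metadata can
# be stripped by messaging apps or forged outright, so it proves nothing alone.
from PIL import Image, ExifTags

img = Image.open("suspect.jpg")
exif = img.getexif()
if not exif:
    print("No EXIF at all - common for AI output, but also for screenshots.")
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```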

The only defense left is Visual Literacy. 

You have to train your eyes to see the physics. 

  • Light travels in straight lines. 
  • Shadows must converge. 
  • Real skin is translucent. 
  • Real cameras have noise. 

The “Perfect” image is the lie. The imperfection is the proof of humanity. 

If it looks too good to be true, it’s probably a Banana.