Why This Person Is Not Real Is Actually Changing How We Trust The Internet

You’ve probably seen the face before. It looks like a normal guy from suburban Ohio or a woman waiting for a bus in London. The skin has that slightly oily sheen, the hair is a bit messy, and the eyes look... well, they look alive. But they aren't. Not even a little bit. That’s the chilling, fascinating reality of This Person Is Not Real, a concept that started as a viral website and blossomed into a massive debate about the death of digital evidence.

It’s weird.

If you go to the site, you get a fresh face every time you refresh. These aren’t composites made by a human artist in Photoshop. They are the output of StyleGAN, a generative adversarial network (GAN) architecture developed by Nvidia researchers. Essentially, two neural networks are fighting each other: the generator tries to create a fake face, and the discriminator tries to spot the fake. They do this millions of times until the "faker" gets so good that even the "judge" can't tell the difference.
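To make the two-network fight concrete, here’s a deliberately tiny sketch of that adversarial loop, written in PyTorch with 8-number toy "faces" instead of images so it runs anywhere. The real StyleGAN stacks deep convolutional networks and years of engineering tricks on top of this same basic game.

```python
import torch
import torch.nn as nn

# A deliberately tiny version of the adversarial game described above.
# "Faces" here are just 8 numbers so the loop runs anywhere; real StyleGAN
# uses deep convolutional networks plus many tricks on top of this same game.

latent_dim = 16
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 8))
discriminator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for a batch of real face data (samples from a fixed Gaussian).
    return torch.randn(n, 8) * 0.5 + 1.0

for step in range(1000):
    # 1. Train the "judge": real samples should score 1, fakes should score 0.
    real = real_batch()
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (bce(discriminator(real), torch.ones(32, 1))
              + bce(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the "faker": produce samples the judge scores as real.
    fake = generator(torch.randn(32, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```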

We’ve reached a point where the "uncanny valley"—that creepy feeling we get when robots look almost-but-not-quite human—is basically gone for still images.

The Tech Behind the Illusion

Philip Wang, a software engineer, launched the original site back in 2019 to show people just how powerful AI had become. He used the StyleGAN code released by Nvidia researchers like Tero Karras. It’s funny because back then, the tech was a novelty. People would spend hours clicking refresh just to see if they could find a glitch. You’d find a "monster" every now and then: a floating ear, or a second face in the background that looked like a melting Salvador Dalí painting.

But things changed fast.

The underlying architecture uses a latent space. Think of it as a massive, invisible map of "human-ness." Move in one direction on that map and eye color shifts; move in another and the chin reshapes, or the texture of a sweater changes. By picking a random point on the map, the AI can render a unique person who has never existed in the history of the universe.
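Here’s roughly what "picking a random point on the map" looks like in code. The 512 dimensions match Nvidia’s published StyleGAN models; the `generator(z)` call is a hypothetical stand-in for loading a real pretrained network.

```python
import torch

# Two random points on the "map of human-ness." With a real pretrained model,
# each one would render as a person who has never existed.
latent_dim = 512                # Nvidia's published StyleGAN models use 512-d z
z_a = torch.randn(latent_dim)   # "person A"
z_b = torch.randn(latent_dim)   # "person B"

# Walking the straight line between the two points morphs one face smoothly
# into the other, which shows the map is continuous, not a lookup table.
for t in torch.linspace(0.0, 1.0, steps=8):
    z = (1 - t) * z_a + t * z_b
    # image = generator(z)      # hypothetical call; substitute a real model here
```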

It Isn't Just For Fun Anymore

Businesses are obsessed with this. Think about it from a corporate perspective. If you need a diverse group of people for your website's landing page, you usually have to hire a photographer, rent a studio, pay models, and deal with usage rights. Or, you can just go to a site like Generated Photos and buy 10,000 "people" for a fraction of the cost.

No royalties. No expiration dates. No human drama.

Honestly, it’s a bit depressing for struggling actors and models. We are seeing the "democratization" of stock photography, but it comes at the cost of actual human employment. Some companies are even using these faces to create "synthetic influencers." These are Instagram accounts with hundreds of thousands of followers, but the person behind the brand is just a server in a cooling room in Northern Virginia.

The Dark Side: Scams and Synthetic Identities

Here is where things get genuinely scary.

The FBI and various cybersecurity firms have been sounding the alarm on "synthetic identity fraud." Scammers don't need to steal your photo anymore. If they use your photo, you might find it and report them. If they use a face from This Person Is Not Real, there is no real person to notice and complain.

They use these AI-generated faces to build incredibly convincing LinkedIn profiles. They look professional. They have a suit, a nice smile, and a blurred office background. These "ghost" employees then reach out to real workers at tech companies to phish for data or install malware. It works because we are hardwired to trust a human face. We see a person, and our brain says, "Okay, this is a peer."

LinkedIn has had to delete tens of millions of fake accounts over the last few years. Many of them were using StyleGAN-generated profile pictures. It's a constant arms race between the AI that creates the faces and the AI that tries to detect the specific pixel patterns left behind by the generator.
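What does detecting those pixel patterns look like? One idea from the research literature: a GAN’s upsampling layers tend to leave periodic artifacts in an image’s frequency spectrum. The toy check below just measures how much energy sits away from the low frequencies; production detectors (including whatever LinkedIn actually runs, which isn’t public) are trained classifiers, not a one-line threshold.

```python
import numpy as np
from PIL import Image

# Toy version of one published detection idea: GAN upsampling layers tend to
# leave periodic artifacts in an image's frequency spectrum, so generated
# images often carry unusually much energy away from the low frequencies.

def high_freq_ratio(path: str) -> float:
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    r = min(h, w) // 8                        # small window around the DC term
    low = spectrum[h//2 - r:h//2 + r, w//2 - r:w//2 + r].sum()
    return 1.0 - low / spectrum.sum()         # share of energy outside the center

# Usage (the threshold is illustrative, not calibrated):
# if high_freq_ratio("profile.jpg") > 0.5:
#     print("worth a closer look")
```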

How To Spot The Fake (For Now)

Even though the tech is good, it’s not perfect. If you’re looking at a profile and you suspect This Person Is Not Real, check the ears. GANs have no built-in concept of bilateral symmetry, so they really struggle to make earlobes match. One ear might have a piercing, and the other might be a weird, fleshy blob.

Look at the background too.

The AI is great at faces but terrible at logic, partly because its training photos were cropped and aligned around the face, so it never really learned what a coherent background looks like. The background usually looks like a chaotic dreamscape of blurred colors and nonsensical shapes. If the person is wearing glasses, look at the frames. Often, the frames won’t quite match on both sides, or they’ll melt into the skin.

Also, look at the teeth. Early versions of this tech often gave people "unitooth"—a single, long row of white without clear gaps between the incisors. It’s subtle, but once you see it, you can’t unsee it.

The Philosophical Mess

What does this do to our collective psyche?

If you can't trust a photo, what can you trust? We are moving into an era of "zero trust" media. We used to say "seeing is believing," but that's a dead concept now. This has massive implications for journalism and the legal system. If a photo can be generated in milliseconds, its value as "proof" drops to zero.

We are seeing a push for "content provenance"—sort of like a digital watermark or a "nutrition label" for images. Organizations like the C2PA are trying to create standards where your camera stamps a cryptographically signed bit of data into the photo the moment you take it. This would prove the photo came from a real lens and a real sensor, not a math equation.
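In spirit, the scheme works like the sketch below: hash the image bytes the moment they leave the sensor, then sign the hash so any later edit breaks the seal. Real C2PA manifests use X.509 certificate chains and embed a structured, signed claim inside the file itself; the symmetric HMAC here is only a stand-in for the concept.

```python
import hashlib
import hmac
import secrets

# Toy version of the provenance idea: hash the image bytes at capture time
# and sign the hash, so any later edit breaks the seal.

camera_key = secrets.token_bytes(32)   # stand-in for a key fused into the sensor

def sign_at_capture(image_bytes: bytes) -> bytes:
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(camera_key, digest, hashlib.sha256).digest()

def verify(image_bytes: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).digest()
    expected = hmac.new(camera_key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"...raw sensor data..."        # placeholder bytes
sig = sign_at_capture(photo)
assert verify(photo, sig)               # the untouched photo checks out
assert not verify(photo + b"x", sig)    # any tampering breaks the seal
```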


Actionable Steps for Staying Safe Online

The reality is that these "non-existent" people are already among us. You've likely scrolled past dozens of them today without realizing it. Staying safe isn't about being paranoid; it's about being observant.

  • Reverse Image Search is your friend: If someone reaches out to you with a business proposal and their face looks "too perfect," run it through Google Lens or Yandex. If it’s a fake, it might not show up anywhere else, or it might show up on a "GAN-detect" database.
  • Check the metadata: If you're suspicious of an image file, use an online EXIF viewer. AI-generated images often lack the standard camera metadata (like ISO, aperture, or camera model) that a real photo would have; a quick way to check is shown in the sketch after this list.
  • Look for "The Glitch": Check the edges of the hair. AI often struggles to blend fine strands of hair with a complex background, resulting in a strange "halo" effect or hair that seems to grow out of the forehead.
  • Verify through other channels: If you're talking to someone online, ask for a quick video call. While deepfake video is getting better, it’s much harder to pull off in real-time than a static profile picture.
  • Be skeptical of "New" Experts: If a profile was created last month, has 500+ connections, and uses a high-resolution, perfectly lit headshot, be wary. Scammers use these "non-real" people to build instant authority.
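
Here’s the metadata check from the list above as a short Python sketch using Pillow. The file name is hypothetical, and remember: missing EXIF is a signal, not proof, since screenshots and many upload pipelines strip metadata anyway.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Real camera JPEGs usually carry tags like Make, Model, and ExposureTime;
# generated images usually carry none. Absence is a signal, not proof --
# metadata is easy to strip or forge.

def camera_tags(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = camera_tags("suspect_headshot.jpg")   # hypothetical file name
if not tags:
    print("No EXIF data: consistent with an AI-generated or scrubbed image.")
else:
    print(f"Camera reported: {tags.get('Model', 'unknown model')}")
```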

This technology isn't going away. If anything, it’s going to get more integrated into our lives. We’ll see it in video games, where every NPC (non-player character) has a unique, photorealistic face. We’ll see it in movies, where "extras" are just digital files. The key is recognizing that the line between the physical world and the latent space has finally, permanently, blurred.