This Instagram Experiment Proves We Can No Longer Distinguish Between What Is Real and What Is Fake
It's official: we can no longer tell the difference between real people and fake ones generated by machine learning, and it's scary.
Last week I ran an experiment on my Instagram Stories after seeing Jamie Bartlett's Twitter post about thispersondoesnotexist.com, a website that endlessly generates new faces of people who do not exist.
The experiment was a very simple game: I put two faces side by side and my followers had to choose which person was real and which person was fake.
The little game proved very popular, with more than 25% of all story viewers participating. And, not to my surprise, my followers correctly identified only two of the five real people; in the other three rounds, they labeled the fake person as real.
Now, someone well versed in this field would have noticed some of the obvious idiosyncrasies in the images.
But that wasn't the point. The point was to see whether the average person could tell the difference between what is real and what is fake. And the result is now obvious: we can no longer distinguish between the real and the fake.
Less than 3% of those who participated answered all five correctly (the people you want to be friends with, just in case there is an AI apocalypse), while more than 7% got all five wrong.
How does this work?
These faces are created using a type of machine learning called a Generative Adversarial Network (GAN). It works by having two neural networks play a game: a generative network, trained on hundreds of thousands of real images, creates fake images, while a discriminative network evaluates whether each image is real. The generative network then adjusts based on the feedback from the discriminative network, and the cycle continues until the generator can trick the discriminator into accepting a fake image as real.
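That adversarial loop can be sketched with a toy example. Instead of images, a tiny linear "generator" below learns to mimic a simple 1D Gaussian distribution, and a logistic-regression "discriminator" scores how likely each sample is to be real; both are updated in alternation, just as described above. This is a minimal illustrative sketch of the GAN idea, not the StyleGAN architecture behind the website, and all names and hyperparameters here are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real data": samples from N(4.0, 0.5) stand in for real images.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a tiny linear model mapping noise z to a sample.
gen_w, gen_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression estimating P(input is real).
dis_w, dis_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ gen_w + gen_b

def discriminate(x):
    return sigmoid(x @ dis_w + dis_b)

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=(batch, 1))
    x_real, x_fake = real_batch(batch), generate(z)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real, p_fake = discriminate(x_real), discriminate(x_fake)
    g_real, g_fake = p_real - 1.0, p_fake  # d(cross-entropy)/d(logit)
    dis_w -= lr * (x_real.T @ g_real + x_fake.T @ g_fake) / batch
    dis_b -= lr * (g_real.sum() + g_fake.sum()) / batch

    # Generator update: adjust so D(G(z)) moves toward 1, i.e. learn
    # to trick the discriminator based on its feedback.
    p_fake = discriminate(generate(z))
    g_logit = (p_fake - 1.0) * dis_w[0, 0]  # chain rule through D
    gen_w -= lr * (z.T @ g_logit) / batch
    gen_b -= lr * g_logit.mean()

fake_mean = generate(rng.normal(size=(1000, 1))).mean()
print(f"generated mean after training: {fake_mean:.2f} (real mean is 4.0)")
```

Real GANs replace both linear models with deep convolutional networks and backpropagate the same two gradients automatically, but the alternating "critic scores, generator adapts" structure is identical.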
This specific type of GAN, called StyleGAN, is based on the research of NVIDIA principal research scientist Tero Karras, while the website itself was built by Uber's Philip Wang. Karras' published research covers the science behind this specific machine learning process in more depth.
Why is this important?
Technological advancements can always be used in two ways: for good and for evil. Imagine being able to instantly generate original icons, graphics, or photographs for your website. Or being able to quickly turn a picture of a chair or a video of a road into a 3D asset or environment. Or being able to automatically generate new rap instrumentals. Now imagine being able to instantly generate photos or videos that incriminate an adversary. Or being able to generate millions of fake Tinder/LinkedIn/Facebook/Twitter profiles. Or being able to forge documents. As always, innovators will vie for the former while learning to identify and combat the latter.
Now that that's over with, let's enjoy these fake images of cats generated by https://thiscatdoesnotexist.com/
Enjoyed this? Follow Me Here On Instagram to be part of more fun experiments