We already know DeepFakes can be quite believable, but just how believable are they? Kaggle's Deepfake Detection Challenge (DFDC) recently sought an algorithmic answer to this question. The description on the Kaggle website explains, "AWS, Facebook, Microsoft, the Partnership on AI’s Media Integrity Steering Committee, and academics have come together to build the Deepfake Detection Challenge (DFDC). The goal of the challenge is to spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media." The winners of the competition will be awarded $1,000,000 in total.
Rather than fine-tune the best machine learning model for this Kaggle competition, we are curious about strategies and techniques for building public awareness of DeepFake technology and helping ordinary people think critically about the media that they consume.
From the 100,000 DeepFake videos and 19,154 real videos hosted in the public Kaggle competition, we trained a series of neural networks to detect DeepFakes. These videos show actors who consented to the DFDC's manipulation of their likenesses. Based on our machine learning model's performance, we filtered the collection down to the 3,000 videos on which the model was most confidently wrong. In other words, we chose the videos that the model predicted with high certainty were real when in fact they had been manipulated by AI. These videos are hard not only for a machine learning model but also for many people to classify as real or fake. They are particularly useful for highlighting the nuances of DeepFakes because they showcase a diversity of people, lighting conditions, and algorithmic manipulation techniques. All manipulations are facial or audio manipulations, and many are very subtle.
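The curation step above — keeping the fake videos that the model most confidently scored as real — can be sketched as a simple ranking. This is a minimal illustration, not the project's actual code: the function and variable names (`hardest_fakes`, `predict_real_prob`, the toy filenames) are hypothetical.

```python
def hardest_fakes(fake_videos, predict_real_prob, k=3000):
    """Return the k fake videos the model is most confidently wrong about.

    fake_videos: identifiers of videos known to be DeepFakes.
    predict_real_prob: callable mapping a video id to the model's
        predicted probability (0.0-1.0) that the video is real.
    """
    scored = [(predict_real_prob(v), v) for v in fake_videos]
    # A high "real" probability on a known fake means the model is
    # confidently wrong; sort descending and keep the top k.
    scored.sort(reverse=True)
    return [v for _, v in scored[:k]]

# Toy usage with a stand-in scoring function (a dict lookup):
probs = {"a.mp4": 0.95, "b.mp4": 0.10, "c.mp4": 0.80}
selected = hardest_fakes(list(probs), probs.get, k=2)
print(selected)  # ['a.mp4', 'c.mp4']
```

In practice the predicted probabilities would come from the trained detector's output on each held-out fake video; the ranking itself is the only logic the curation step needs.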
We hypothesize that exposure to how DeepFakes look and the experience of detecting subtle computational manipulations will increase people's ability to discern a wide range of video manipulations in the future. As such, we hosted a website called Detect Fakes to publicly display thousands of these curated, high-quality DeepFake and real videos.
The Detect Fakes experiment offers the opportunity to learn more about DeepFakes and see how well you can discern real from fake. When it comes to AI-manipulated media, there's no single tell-tale sign of a fake. Nonetheless, there are several DeepFake artifacts that you can be on the lookout for.
- Pay attention to the face. High-end DeepFake manipulations are almost always facial transformations.
- Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Does the apparent age of the skin match the apparent age of the hair and eyes? DeepFakes are often incongruent on some dimensions.
- Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? DeepFakes often fail to fully represent the natural physics of a scene.
- Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, DeepFakes often fail to fully represent the natural physics of lighting.
- Pay attention to the facial hair or lack thereof. Does the facial hair look real? DeepFakes might add or remove a mustache, sideburns, or beard. But DeepFakes often fail to make facial hair transformations fully natural.
- Pay attention to facial moles. Does the mole look real?
- Pay attention to blinking. Does the person blink enough or too much?
- Pay attention to the size and color of the lips. Does the size and color match the rest of the person's face?
These eight questions are intended to help guide people looking through DeepFakes. High-quality DeepFakes are not easy to discern, but with practice, people can build intuition for identifying what is fake and what is real. You can practice detecting DeepFakes at Detect Fakes.