Are humans better than AI at detecting deepfakes? It’s complicated.

By Annie Rauwerda

Within the past decade, image-altering technology has thrust us into the unnerving world of deepfakes. The forged videos, named after a Redditor called ‘deepfakes’ who popularized the practice, use machine learning tools to create startlingly convincing face-swap videos. Gone are the days when advanced forgery was limited to big movies with huge CGI budgets: now, anyone with a working knowledge of neural networks and a consumer-grade GPU can take part.

Some deepfakes are relatively harmless, such as Donald Trump’s face plastered onto Kevin’s character in The Office, an alternate Game of Thrones ending, or the very inexplicable “Dr. Phil but everyone is Dr. Phil.” But others are ethical catastrophes that amount to financial fraud, national security threats, fake celebrity porn, and more.

While industry and governments have made efforts to limit deepfake use, bans are almost impossible to enforce because, at this time, no one can reliably determine whether a video is real or not. The best deepfakes leave no pixelated evidence of a messy edit; their artificiality is virtually undetectable.
