Deepfakes can be used in ways that are highly disturbing. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election. Deepfakes are also being used to place people in pornographic videos that they in fact had no part in filming.

Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: they undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.

What can be done? There’s no perfect solution, but there are at least three avenues that can be used to address deepfakes: technology, legal remedies, and improved public awareness.

While AI can be used to make deepfakes, it can also be used to detect them. Creating a deepfake involves manipulation of video data, a process that leaves telltale signs that might not be discernible to a human viewer but that sufficiently sophisticated detection algorithms can aim to identify.

As research led by professor Siwei Lyu of the University at Albany has shown, face-swapping (editing one person’s face onto another person’s head) creates resolution inconsistencies in the composite image that can be identified using deep learning techniques. Professor Edward Delp and his colleagues at Purdue University are using neural networks to detect the inconsistencies across the multiple frames in a video sequence that often result from face-swapping. A team including researchers from UC Riverside and UC Santa Barbara has developed methods to detect “digital manipulations such as scaling, rotation or splicing” that are commonly employed in deepfakes. The number of researchers focusing on deepfake detection has been growing, thanks in significant part to DARPA’s Media Forensics program, which is supporting the development of “technologies for the automated assessment of the integrity of an image or video.”

However, regardless of how far technological approaches for combating deepfakes advance, challenges will remain. Deepfake detection techniques will never be perfect. As a result, in the deepfakes arms race, even the best detection methods will often lag behind the most advanced creation methods. Another challenge is that technological solutions will have no impact when they aren’t used. Given the distributed nature of the contemporary ecosystem for sharing content on the internet, some deepfakes will inevitably reach their intended audience without going through detection software. More fundamentally, will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? And what should people believe when different detection algorithms, or different people, render conflicting verdicts regarding whether a video is genuine?

Legal and Legislative Remedies

The legal landscape related to deepfakes is complex. Frameworks that can potentially be asserted to combat deepfakes include copyright, the right of publicity, section 43(a) of the Lanham Act, and the torts of defamation, false light, and intentional infliction of emotional distress. On the other side of the ledger are the protections conferred by the First Amendment and the “fair use” doctrine in copyright law, as well as (for social networking services and other web sites that host third-party content) section 230 of the Communications Decency Act (CDA).

It won’t be easy for courts to find the right balance. Rulings that are insufficiently protective of deepfake targets could leave people without a mechanism to combat deepfakes that could be extraordinarily harmful. Rulings that confer overly broad protection to people targeted by deepfakes risk running afoul of the First Amendment and being struck down on appeal. And attempts to weaken section 230 of the CDA in the name of addressing the threat posed by deepfakes would create a whole cascade of unintended and damaging consequences to the online ecosystem.
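To make the resolution-inconsistency idea from the detection discussion above more concrete, here is a deliberately simplified sketch. It is not the method used by Lyu, Delp, or any other research group mentioned in this article; it is a toy heuristic that assumes a face bounding box is supplied by some external face detector, and flags a frame when the face region is markedly blurrier than a reference region of the same frame (a common artifact when a generated face is down-scaled and spliced in). The function names and the threshold are invented for illustration.

```python
import numpy as np

def sharpness(patch):
    """Variance of a 4-neighbour Laplacian response: a crude proxy for the
    effective resolution (high-frequency detail) of a grayscale patch."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def looks_spliced(frame, face_box, ref_box, ratio=4.0):
    """Flag a frame when the (externally supplied) face region is much
    blurrier than a same-size reference region: a crude stand-in for the
    resolution inconsistencies that face-swapping can leave behind.
    Boxes are (top, left, bottom, right) pixel coordinates."""
    t, l, b, r = face_box
    rt, rl, rb, rr = ref_box
    return sharpness(frame[rt:rb, rl:rr]) > ratio * sharpness(frame[t:b, l:r])

# Toy demonstration on synthetic frames: one whose "face" region was
# down-scaled and re-upsampled (simulating a re-scaled splice), one untouched.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
face = frame[10:30, 10:30]
blurred = face.reshape(10, 2, 10, 2).mean(axis=(1, 3)).repeat(2, axis=0).repeat(2, axis=1)
spliced = frame.copy()
spliced[10:30, 10:30] = blurred

flag_spliced = looks_spliced(spliced, (10, 10, 30, 30), (40, 40, 60, 60))
flag_clean = looks_spliced(frame, (10, 10, 30, 30), (40, 40, 60, 60))
```

Real detectors, including the deep-learning approaches described above, learn far subtler cues than a single blur ratio, and an adversary can defeat a fixed heuristic simply by re-sharpening the spliced region — which is one reason the arms-race dynamic described earlier tends to favor creators.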