Deepfakes are becoming remarkably easy to create. As researchers from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research have shown, talking-head videos can be tailored to audiences of different backgrounds simply by editing the transcript, automating previously tedious manual work.
The authors themselves note:
However, the availability of such technology — at a quality that some might find indistinguishable from source material — also raises important and valid concerns about the potential for misuse. Although methods for image and video manipulation are as old as the media themselves, the risks of abuse are heightened when applied to a mode of communication that is sometimes considered to be authoritative evidence of thoughts and intents. We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse.
The technology at this early stage has limitations (e.g., occlusion of the face breaks the algorithm), but these will probably be overcome quite soon. We’ll have to deal with the consequences of software that lets anybody change what people say in videos.