Have you ever wondered how Hollywood makes a ‘fake’ video look real? They do it with computer-generated imagery (CGI). Today, anyone can create fake videos quickly, without any technical background or knowledge. All it takes is a PC and access to the internet. This advancement has made things easier for some people but harder for others. So much so that even professionals are now being fooled by fake content such as photos and videos.
Yes, you heard that right. Professionals often have no clue when they come across something that seems real but isn’t. This confusion arises because of deepfakes – digital forgeries created with machine learning algorithms that let editors realistically imitate a person’s face in video.
So, What Exactly Are Deepfakes?
A deepfake is produced by a deep learning algorithm that uses neural networks to generate photorealistic images and videos.
Deepfake techniques trace back to deep learning research published on arXiv in 2017, in which Nvidia researchers generated synthetic images of humans based on actual photographs. The deepfake process begins with gathering a large set of photos of the subject’s face, along with corresponding video clips. Deep learning algorithms are then trained on this data to learn how different facial movements change the person’s appearance. With that knowledge, the deepfake algorithm generates new video clips by adjusting pixels in each frame using what it has learned.
The technology can also create entirely fictional people, such as a non-existent Bloomberg journalist named “Maisy Kinsley”, who had active LinkedIn and Twitter profiles. Another example is “Katie Jones”, a persona probably created for foreign spying operations to provide a plausible cover story on social networks.
Once a fake image has been created, the deepfake algorithm can generate new ones from a vast range of source material, including:
- YouTube videos uploaded by different users and artists;
- Facebook profile pictures; and
- Hollywood movies featuring celebrities with digitally enhanced looks.
The Processes Involved In Creating Deepfakes
To produce deepfakes, machine learning is used to train a neural network on many hours of real video footage, so the computer learns what someone looks like from different angles and under various lighting conditions. Once training is complete, these networks can be combined with other techniques to superimpose one person’s face onto another’s, with results that are virtually impossible to tell apart from the real thing.
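The final superimposition step can be sketched as a simple alpha-blend of a generated face crop onto a video frame. This is a toy illustration under stated assumptions: real pipelines first warp the crop to detected facial landmarks and color-match it, and the `superimpose` helper, coordinates, and uniform mask here are all hypothetical.

```python
import numpy as np

def superimpose(frame, face, mask, top, left):
    """Alpha-blend a generated face crop onto a frame (toy sketch).

    mask holds per-pixel weights in [0, 1]: 1 takes the generated pixel,
    0 keeps the original frame pixel.
    """
    h, w = face.shape[:2]
    region = frame[top:top + h, left:left + w].astype(float)
    blended = mask[..., None] * face + (1 - mask[..., None]) * region
    frame[top:top + h, left:left + w] = blended.astype(frame.dtype)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in video frame
face = np.full((64, 64, 3), 200.0)                # stand-in decoder output
mask = np.ones((64, 64))                          # fully opaque crop
out = superimpose(frame, face, mask, 100, 200)
print(out[132, 232])  # [200 200 200]
```

In practice the mask is feathered toward the edges of the crop; hard mask boundaries are exactly what produces the flickering seams that give poor deepfakes away.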
Some Of The Alarming Uses
Today, deepfakes are used for both legitimate and illegal purposes:
- Legitimate purposes: deepfake tech is used by researchers, artists, and developers for things like music-video parodies (can you spot which one is real and which is a deepfake?) and realistic face-swapping. Deepfakes can also be employed in education, for example by applying the technology to historical videos and images of our ancestors to make them look realistic. The result is more engaging content that helps us learn about historical events.
- Illegal uses involve deepfake porn, media manipulation, and deepfaked terrorist videos.
Deepfake porn superimposes a celebrity’s face on another person’s body. Deepfaked terrorist videos are false clips that appear authentic but are designed to deceptively endorse certain ideas and ideologies, or to spread propaganda.
Deepfake technology has caused an uproar because fake videos of someone are now easy to create and distribute, but these fakes are not always convincing on close inspection. Poorly made deepfakes may show bad lip-syncing, patchy skin tones, or flickering around transposed areas such as the hairline. Fine details like these are easy to spot if you look closely, which makes such videos less believable than high-quality footage.
As the technology improves, detection is becoming harder. Still, researchers in the US recently found that deepfake faces do not blink properly: because most training photos show open eyes, the algorithms never learn to reproduce blinking. However, as soon as this weakness was published, users quickly began producing fake videos in which people do blink, showing how hard it is for detection methods to keep up with AI’s evolution, and how human ingenuity can work around technical problems.