Deepfake technology is now being used to create video effects, AR, and VR productions, raising great hopes for the development of a new industry. However, with these hopes also comes a great deal of concern about how fake videos can be used for crimes. This article takes a look at deepfake technology—whether good or bad—and how this cutting-edge technology can be used in diverse ways in the future.
Seeing is believing? These days, you can't believe it even when you see it!
Often, when we hear about something, we demand evidence: "Show me!" or "Prove it!" Yet when we see a clip from a TV program or a YouTube video, we usually believe it without a second thought. Humans are said to perceive the outside world through roughly 80% visual information, 10% auditory information, and 10% everything else. Since video and audio together account for nearly 90% of the sensory information we take in, we instinctively place a high level of trust in these sources.
However, we are now entering an era in which the credibility of video is under threat. The cause is deepfake technology, an AI technique that synthesizes video by replacing the original footage and audio with another person's facial expressions and voice. To synthesize a deepfake video, the AI program must identify and reconstruct thousands of image patterns. The more sophisticated the training, the harder it becomes to distinguish the deepfake from the original video and audio.
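A common way this pattern learning is implemented is an autoencoder that shares one encoder between two decoders, one per person: the encoder learns features common to both faces, and swapping decoders at playback time produces the fake. The sketch below is a minimal, hedged illustration of that data flow in plain NumPy with random weights standing in for trained ones; the layer sizes and function names are illustrative assumptions, not a production model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 grayscale "face" compressed to 16 latent values.
FACE_DIM, LATENT_DIM = 64, 16

# The shared encoder captures features common to both faces; each decoder
# reconstructs one specific person. Random weights stand in for training.
W_enc = rng.normal(size=(FACE_DIM, LATENT_DIM))
W_dec_a = rng.normal(size=(LATENT_DIM, FACE_DIM))  # decoder for person A
W_dec_b = rng.normal(size=(LATENT_DIM, FACE_DIM))  # decoder for person B

def encode(face):
    return np.tanh(face @ W_enc)

def face_swap(face_a):
    """Encode person A's face, then decode with person B's decoder --
    the core trick behind deepfake face swapping."""
    latent = encode(face_a)
    return latent @ W_dec_b

fake = face_swap(rng.normal(size=FACE_DIM))
print(fake.shape)  # (64,): the swapped face has the same dimensions as the input
```

With trained weights, the same swap would emit person B's likeness performing person A's expressions; here it only demonstrates the architecture's shape.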
How did deepfake come about?
In 2017, an online user with the username "deepfakes" used TensorFlow, an open-source machine learning library, to synthesize a pornographic video featuring a celebrity. The video, posted on Reddit, a huge online community in the US, is widely viewed as the beginning of deepfakes. A short time later, similar technology was used to bring the late actress Carrie Fisher back to the screen as a young Princess Leia in a Star Wars movie, giving diehard fans an impressive and moving experience. The video "Iron Man Tom Cruise" also drew a lot of interest when it was released: it grafted Tom Cruise's face and voice onto the Avengers' Iron Man so convincingly that the result was difficult to distinguish from the real Tom Cruise.
Even though many people think deepfake technology is used only in special circumstances like the examples above, anyone can now easily create deepfake content using a program called "FakeApp," even without programming expertise. In the technology's early stages, thousands of images were needed to make a deepfake video, but with advances in AI, a single image can now be enough.
ⓒ Collider Extras YouTube / DeepFake Theater "Tom Cruise as Iron Man in the MCU" capture
As evidenced by the deepfake videos of familiar celebrities like Kang Ho-dong appearing in TV commercials, this technology is already being used in diverse fields such as movies, music, and marketing. However, negative uses of the technology, such as deepfake pornography and fabricated news, are also rapidly becoming a social problem.
Concerns regarding deepfake technology
Since the deepfake phenomenon began with synthesized fake porn videos, it comes as no surprise that pornographic websites are among the greatest contributors to the spread and misuse of deepfake technology. According to "The State of Deepfakes," a 2019 report published by Deeptrace, a Dutch cybersecurity research company, 96% of the 14,698 deepfake videos found online were pornographic. Of these, 53% featured British and American actors, and a further 25% featured Korean stars.
Famous actors and celebrities are not the only ones harmed and exploited by deepfake videos. In 2018, a deepfake video of former US President Obama uploaded to BuzzFeed, an online media outlet based in the US, was covered by many international news outlets. The video was produced to raise awareness of the dangers of deepfake-generated fake news, and in it, the deepfake Obama was almost impossible to tell from the real one. Text-based fake news is already a serious social problem, and many expect the continued rise of deepfakes to make it worse. In the wrong hands, deepfake technology can be used for psychological warfare to sway major elections, to incite social unrest, or to commit crimes by manipulating or fabricating evidence.
Fortunately, tech companies like Microsoft and Intel have been working on ways to detect deepfakes. They analyze video details that deepfake AI typically struggles to emulate, looking for abnormal blinking patterns or unnatural facial muscle movements. Meta (formerly Facebook) and Amazon recently held the Deepfake Detection Challenge (DFDC) in collaboration with universities such as MIT, Oxford, and Cornell, as well as other world-renowned AI experts. Google and Twitter have also implemented measures to prohibit the sharing of manipulated content and to support research on deepfake detection technologies.
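One of the cues mentioned above, abnormal blinking, can be reduced to a simple heuristic: count how often the eye-aspect ratio (EAR) dips below a blink threshold and flag clips whose blink rate is implausibly low. The sketch below is only an illustration of that idea; the threshold and the cutoff blink rate are rough assumptions, and real detectors are far more sophisticated.

```python
def blink_rate(ear_values, fps=30.0, threshold=0.2):
    """Count blinks as downward crossings of the eye-aspect-ratio
    threshold and return blinks per minute."""
    blinks = sum(
        1 for prev, cur in zip(ear_values, ear_values[1:])
        if prev >= threshold > cur
    )
    minutes = len(ear_values) / fps / 60.0
    return blinks / minutes if minutes else 0.0

def looks_suspicious(ear_values, fps=30.0, min_bpm=5.0):
    # Humans typically blink roughly 15-20 times per minute; early
    # deepfakes often blinked far less. 5 bpm is an illustrative cutoff.
    return blink_rate(ear_values, fps) < min_bpm

# One simulated minute of video: eyes open (EAR ~0.3) with two brief blinks.
trace = [0.3] * 1800
trace[500] = trace[1200] = 0.1
print(looks_suspicious(trace))  # True: only 2 blinks per minute
```

A production system would extract the EAR from facial landmarks frame by frame and combine this signal with many others, but the anomaly logic is the same.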
The future of deepfake
As seen in the case of Carrie Fisher's Star Wars appearance, deepfakes are expected to be widely used for synthesizing and producing images in the video content and marketing businesses, but the technology is being applied in the medical sector as well. In 2019, the Institute of Medical Informatics at the University of Lübeck in Germany developed an AI for diagnosing diseases using deep learning algorithms. Training such an AI requires enormous amounts of data, but in the medical field, collecting it raises concerns about patient privacy, and producing 3D images for medical use is expensive. Deepfake technology offered a way around both problems: researchers at the institute used a deepfake program to create medical videos extremely similar to real-life images and used them to teach the AI disease diagnosis.
Deepfake technology is also used in the AR/VR sector. On one occasion, the Salvador Dali Museum in Florida, USA, collaborated with the American advertising agency Goodby Silverstein & Partners to use deepfake technology to "bring back" Salvador Dali. They cast one actor with a physique similar to Dali's and another with a similar voice, then used deepfake technology to synthesize Dali's face, letting museum visitors take pictures with Dali, talk with him at a kiosk, and hear him discuss his works.
A startup company in Korea used deepfake technology as an alternative to filming an actual person. Using a human cast inherently involves planning, casting, and scheduling, as well as post-production work such as mastering, subtitling, and rendering; done manually, all of this demands a great deal of time and resources. Furthermore, if there is a problem with the footage, the whole process often has to be repeated to make corrections. With deepfake technology, however, about an hour of video data of a person is enough for the AI to generate audio and video that simulate any tone, expression, and speaking speed. In this way, it becomes possible to produce a video of a person talking simply by entering text. Producing a 10-minute video normally requires a crew of more than four people and more than four hours of work, but with deepfake technology, one person can create a similar video in about 10 minutes without any actual filming.
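The text-to-video workflow described above can be pictured as a small pipeline: script text goes through a voice model trained on the person's recordings, and the resulting audio drives a lip-synced animation of that person. The sketch below uses placeholder stand-ins (`synthesize_speech` and `render_lipsync` are hypothetical names, not any real API) purely to show the overall structure.

```python
from dataclasses import dataclass

@dataclass
class TalkingHeadVideo:
    audio_seconds: float
    frames: int

def synthesize_speech(text, words_per_minute=150.0):
    # Placeholder for a TTS model trained on ~1 hour of the person's voice:
    # here we simply estimate audio length from the word count.
    words = len(text.split())
    return words / words_per_minute * 60.0  # seconds of audio

def render_lipsync(audio_seconds, fps=30):
    # Placeholder for the face-animation model: one rendered frame per tick.
    return round(audio_seconds * fps)

def text_to_video(script):
    """End-to-end sketch: script text -> synthetic voice -> lip-synced frames."""
    audio = synthesize_speech(script)
    return TalkingHeadVideo(audio_seconds=audio, frames=render_lipsync(audio))

video = text_to_video("Hello, this entire clip was generated from text.")
print(video.frames)
```

In a real system each placeholder would be a heavyweight learned model, but the one-person, text-in/video-out workflow the article describes follows this shape.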
Gartner, Inc., an American information technology research and advisory firm, published a report in 2021 predicting how AI technology would impact society over the next several years. The report expressed concern about the impact AI could have on privacy and truth and argued that businesses and governments must be prepared to respond quickly. Its prediction that, before long, nothing lacking an authenticated digital signature will be trusted underscores serious concerns about the potential misuse of AI deepfake technology.
Nevertheless, deepfake technology will continue to develop and spread, becoming more frequently and intimately used in industry as well as in everyday life. As with all scientific and technological advances, deepfakes can exert both positive and negative influences, not only on individuals but on society as a whole. But we cannot, and should not, stifle technological progress out of fear of its adverse effects. Setting a positive direction for new technologies is society's job, and that is the task now before us with deepfake technology.
By Cho Min-soo (Columnist, IT/Science/Business Sector)