Camera apps and picture-editing software let us edit our own images. We can make our skin look fairer, hide that ugly zit, remove blemishes and dark circles, and even add animal noses and ears. But increasingly sophisticated technology has also led people to create false videos that look real. Too real.
This technology is called deepfake. The latest Facebook Transparency Report found that 2.2 billion fake Facebook accounts were removed between January and March 2019 amid a rise in automated, scripted attacks. The threat of fake images and videos is on the rise: the average person cannot identify 40% of deepfake videos, and about 3.6 trillion YouTube views every year come from fake video sources.
Deepfakes are videos or images forged using sophisticated artificial intelligence, producing fabricated pictures and sounds so convincing that they seem real. Deepfakes are getting better and better with time, not to mention more accessible.
The deepfake technique superimposes a manipulated image over the real source image, creating a fake model that can act and speak however its creator wants. Because deepfakes are created with artificial intelligence and machine-learning methods, they are hard to identify. They are spreading fast, deceiving the public into thinking they are the real deal and distorting people's objectivity with incorrect information.
How do deepfakes work?
Deepfakes use a deep-learning system capable of producing a persuasive fake image: it studies videos and photographs of the target person from different angles, then imitates their speech patterns, gestures, and behavior.
Deepfake technology is built on the two components of a GAN (generative adversarial network): a generator and a discriminator. The generator creates a fake clip and sends it to the discriminator, which tries to judge whether the clip is real or fake. Each time the discriminator spots a flaw in the forgery, the generator learns from it and improves. The two components work in a constant battle against each other: the discriminator's job is to catch the fake, while the generator's job is to produce a video the discriminator can no longer distinguish from a real one. A deepfake is finished only after many rounds of this detection and refinement.
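The adversarial loop described above can be sketched in a few lines of code. The toy below is purely illustrative, not a real deepfake system: the "real data" is a one-dimensional Gaussian, the generator is a single learnable shift, and the discriminator is a logistic classifier — but the tug-of-war between the two parts is the same one GANs use on images.

```python
# Illustrative only: a toy 1-D GAN, not a real deepfake pipeline.
# "Real data" is a Gaussian centred at 4; the generator learns to shift
# its noise until the discriminator can no longer tell the two apart.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

b = 0.0          # generator: g(z) = z + b (one learnable shift)
w, c = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + c)

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    real = rng.normal(4.0, 1.0, batch)   # samples of "real" data
    fake = rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real from fake.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. get better at fooling
    # the discriminator (non-saturating GAN loss).
    d_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's output mean has drifted toward the
# real data's mean of 4.
print(round(b, 1))
```

The key design point is that neither network ever sees the other's internals: the generator improves only through the discriminator's verdicts, which is exactly the "detection and improvement" cycle described above.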
According to a technology report by MIT, the tools for preparing deepfakes could be an ideal weapon for fake-news spreaders who want to influence everything from elections to stock prices.
Detecting Deepfake videos
Although AI plays a key role in making deepfakes, AI can also be used to detect them. The technology has become easily accessible, so more researchers are concentrating their efforts on detecting deepfakes and finding ways to curb their spread.
Subtle visual details often look off if you take a very close look. Anything from the eyes or ears to the border of the face or unnaturally smooth skin can help you detect a deepfake. But spotting these telltale signs is becoming harder as the technology grows more advanced and realistic.
Big names in the industry, such as Facebook and Microsoft, have stepped up to detect and eliminate deepfake videos. A few weeks ago, Google released thousands of deepfake videos to help researchers build AI tools for detecting forgeries. Deepfake videos can fuel cyberbullying, corporate sabotage, and political misinformation; the hope is that this technology will catch deepfakes the same way spam filters catch email spam.
Technology alone will not solve the problem, however, because, like it or not, deepfakes are expected to improve much faster than detection methods. Both human expertise and automated intelligence will be needed to recognize deceptive videos in the coming years, so researchers are focusing on automated techniques that combine human insight and AI to identify manipulated videos. These detection tools rely largely on machine learning and massive amounts of training data.
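The machine-learning pipeline behind such detection tools can be sketched as follows. Everything here is a hypothetical stand-in: the two "frame features" (blink rate and face-boundary sharpness) and their distributions are invented for illustration, whereas a real detector would extract far richer features from actual video with a deep network trained on that massive data.

```python
# Illustrative sketch of an ML-based deepfake detector, not a real one.
# The features are synthetic stand-ins: a real tool would measure cues
# such as blink rate or face-boundary artifacts on actual video frames.
import numpy as np

rng = np.random.default_rng(1)

def make_features(n, fake):
    # Hypothetical 2-D feature: (blink rate, boundary sharpness).
    # We assume fakes blink less and have slightly blurrier boundaries.
    centre = (0.2, 0.4) if fake else (0.5, 0.8)
    return rng.normal(centre, 0.1, size=(n, 2))

# Labeled training data: 1 = fake, 0 = real.
X = np.vstack([make_features(500, True), make_features(500, False)])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Logistic-regression classifier trained by gradient descent.
wts = np.zeros(3)
Xb = np.hstack([X, np.ones((1000, 1))])   # append a bias column
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ wts))
    wts -= 0.1 * Xb.T @ (p - y) / len(y)

# Evaluate on fresh, held-out synthetic samples.
Xt = np.vstack([make_features(200, True), make_features(200, False)])
yt = np.concatenate([np.ones(200), np.zeros(200)])
pt = 1.0 / (1.0 + np.exp(-np.hstack([Xt, np.ones((400, 1))]) @ wts))
accuracy = float(np.mean((pt > 0.5) == yt))
print(accuracy)
```

The held-out evaluation at the end mirrors why these tools need massive training data: a detector is only as good as the variety of forgeries it has seen during training.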
DARPA, the research arm of the U.S. Defense Department, runs a program that funds researchers working on automated forgery-detection tools for identifying deepfakes.
Facebook and Google announced this year that they will team up with top universities in the United States to create a huge database of fake videos for research. The companies plan to release the database to AI researchers at a conference in December, which shows that the tech giants are also seeking a technical solution to the growing problem of deepfakes.
Deepfake Technology – The Dangers
According to security experts, the dangers of deepfake technology are real and could have devastating effects. Early signs point to the spread of misleading political information, and the harm could extend to businesses as well.
Digital attacks have become easy and cheap, and as the quality of fake content keeps improving, telling the real thing from a fake has become increasingly difficult. Deepfake technology poses a serious threat to education, media, show business, politics, and other fields.
The biggest threat regarding deepfakes is their use in pornographic videos. An adversary could plaster a celebrity's, or any other well-known person's, face onto the body of a porn star. That could wreak havoc in a political debate and malign a reputation as well.
Deepfakes can also show a public figure as drunk or engaged in immoral activities, making them appear incapable. One such manipulation uses a simple technique: the real video is slowed to about 75% of its natural speed, and the voice pitch is raised so it still sounds genuine. The slowed footage gives the impression that the person in the video is drunk.
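The arithmetic behind this slowdown trick is straightforward. The sketch below (using the 75% figure from the text) maps original timestamps to slowed playback times and computes the compensating pitch factor an editor would apply; an actual edit would be done in a video tool rather than by hand.

```python
# Illustrative arithmetic for the slowdown trick described above.
SPEED = 0.75  # play the clip at 75% of its natural speed

def slowed_timestamp(t):
    """Map an original timestamp (seconds) to its slowed playback time."""
    return t / SPEED

# Slowing the audio this way also lowers its pitch by the same factor,
# so the editor raises the pitch by 1/SPEED to keep the voice sounding
# natural despite the slower speech.
pitch_correction = 1.0 / SPEED

print(slowed_timestamp(30.0))        # a 30 s clip now plays for 40.0 s
print(round(pitch_correction, 3))    # pitch raised by a factor of 1.333
```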
Deepfakes involving famous people can have dramatic impacts as well. Take Mark Zuckerberg's deepfake video, in which he appears to announce his control over people's stolen data: it spread like wildfire. Fake news, fake studies, fake scientific research results, and fake survey statistics can damage people's mental health and leave scholars, researchers, politicians, and celebrities ruined, broken, and even disgraced. Deepfake videos could become the ultimate revenge weapon for some.
Deepfake Technology – Opportunities
Beyond damaging reputations and spreading lies, deepfakes could be used for educational, scientific, and social purposes. GANs can manufacture photorealistic faces of imaginary people, and the same technology can perform impressive editing, such as sharpening blurred pictures and adding color to black-and-white photographs.
Scientists could also use this technology to generate virtual molecules for chemical research, which could speed up scientific and medical discovery: they can observe the behavior of the molecules as new ones are generated.
To sum up, deepfake technology has more drawbacks than advantages. Both the U.S. and British governments have taken initiatives to monitor everything linked to the use of GANs. Education and awareness are also extremely important for curbing its negative consequences: critical thinking and digital literacy should be included in school curricula so that people learn, from a young age, to tell a real image or video from a deepfake.