‘Deepfakes’, as the term suggests, are fake videos and audio manipulated using artificial intelligence. Such videos often play on the mass psyche to master the political realm and shift the balance of power in favor of a particular leader. Political news is full of examples featuring figures such as Barack Obama, Vladimir Putin, Mark Zuckerberg, Morgan Freeman and Kim Kardashian, and of the recent circulation of such a video of Mamata Banerjee in India. This article examines how deepfakes can prove disastrous in the unmaking of political leaders and the spread of misinformation. Further, the increasing percolation of technology and the use of AI to manipulate videos threaten democracy.
Deepfakes are the product of an AI technique called a GAN, which stands for generative adversarial network. As this Livemint report argues, “GANs are high-level machine-learning systems that were initially designed for “unsupervised learning”—where the AI learns on its own”. Chesney and Citron have pointed out in their work, ‘Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics’, that deepfakes are an outcome of advances in a form of artificial intelligence called ‘deep learning’. In deep learning, algorithms called ‘neural networks’ learn to infer rules and replicate patterns by sifting through large data sets. “Deepfakes then emerge from specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks” or GAN” (Chesney and Citron 2019, 148). Explaining how a fake video is generated, they argue that one algorithm, the ‘generator’, creates content modeled on source data (for instance, making artificial pictures of lions from a large database of pictures of real lions), while the second algorithm, the ‘discriminator’, tries to spot the artificial content. Since the two algorithms are in constant competition with one another, increasingly realistic fake content is produced. The technology has the potential to proliferate widely, touching every sphere of human life.
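The adversarial setup described above can be sketched in a few lines of code. The toy example below (a minimal NumPy sketch under simplifying assumptions, not any production deepfake system) pits a tiny linear ‘generator’ against a logistic-regression ‘discriminator’ over one-dimensional data: the generator learns to mimic samples from a target distribution purely by trying to fool the discriminator, just as Chesney and Citron describe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1) stand in for genuine images.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: a linear map g(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr_d, lr_g, n = 0.1, 0.05, 64

for step in range(2000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr_d * grad_w
    c -= lr_d * grad_c

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # d/db of -log D(g(z)) is -(1 - D)*w; descend that gradient
    grad_b = np.mean(-(1.0 - d_fake) * w)
    grad_a = np.mean(-(1.0 - d_fake) * w * z)
    b -= lr_g * grad_b
    a -= lr_g * grad_a

# After training, generated samples should cluster near the real mean of 4
final_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generated mean after training: {final_mean:.2f} (real data mean: 4.0)")
```

In a real deepfake system the generator and discriminator are deep convolutional networks and the data are images or video frames, but the alternating two-player loop is the same.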
DeepTrace, a Netherlands-based technology company, points out in a report that deepfakes are the most recent technology for creating new kinds of synthetic media. Public awareness of deepfakes has grown, with searches for the term being 1,000 times higher in 2018 than in 2017. The report also predicted that deepfakes are likely to have a high-profile, catastrophic impact on events in 2019-2020.
Politically, deepfakes are disastrous because of the timing of their arrival. The contemporary moment is full of examples where one cannot separate fiction from reality. A large amount of information now flows through social media. Platforms such as Facebook, Twitter and Instagram are fast, and users tend to curate their experiences so that they mostly encounter perspectives they already agree with. The advertisements displayed on our timelines are already shaped by how we interact with social media algorithms. We are fed an ample amount of information without double-checking its authenticity. A political example in this regard is Russia influencing the 2016 US presidential election by spreading divisive and politically inflammatory messages on Facebook and Twitter. The impact of deepfakes is not bound by borders or defence forces and therefore cannot be regulated easily. Another political use of deepfakes came in May 2018, when a Flemish socialist party posted a video apparently showing Trump criticising Belgium for remaining in the Paris climate agreement. The video concludes with Trump saying, “We all know that climate change is fake, just like this video”. Tellingly, this sentence was not subtitled in Flemish Dutch.
The political dangers of deepfakes have been pointed out by Republican senator Marco Rubio, who warned that deepfakes would be used in “the next wave of attacks against America and western democracies”. Chesney and Citron have also argued that deepfakes will be highly useful to non-state actors such as insurgent groups and terrorist organisations. These groups, which have historically lacked the resources to spread their word, now have accessible social media through which to incite their audiences and engage in provocative actions.
Deepfakes have been utilised by Russia to spread disinformation against French President Emmanuel Macron and to create inflammatory messages purporting to come from the ‘Black Lives Matter’ campaign. Such incidents only prove that artificial intelligence can in many ways threaten the very fabric of democracy. The technology is already being used to create social and ideological divisions and can further what Chesney and Citron call a ‘liar’s dividend’: as the public learns that videos can be faked, lying politicians can dismiss even genuine evidence against them as fabricated.
Is It Possible to Fix Deepfakes?
The first approach, discussed in various research papers, is a legal and technological approach of detecting forgeries. In June 2018, computer scientists at the University at Albany, SUNY announced that they had created a program that detects deepfakes by scrutinising abnormal eyelid movements, since the subjects of deepfake videos blink abnormally rarely. However, the invention has been criticised on two accounts. Firstly, such an innovation typically paves the way for counter-inventions, the next wave of forgery techniques. Secondly, by the time the program flags a deepfake, the fast-moving world of social media would already have spread it far and wide.
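A simplified illustration of the blink-frequency idea (a toy sketch, not the actual SUNY program): many detectors track an ‘eye aspect ratio’ (EAR) per video frame, where a blink appears as a brief dip in the series, and flag a clip whose blink rate is implausibly low. The threshold, dip length, and minimum blink rate below are hypothetical values chosen for illustration.

```python
def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame eye-aspect-ratio series.

    A blink is a run of at least `min_frames` consecutive frames where
    the EAR dips below `threshold` (eyes closed). Real systems compute
    EAR from facial landmarks; here we analyse a ready-made series.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # clip may end mid-blink
        blinks += 1
    return blinks

def looks_synthetic(ear_series, fps=30, min_blinks_per_min=6):
    """Flag a clip whose subject blinks far less often than a real person."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_min

# A real one-minute clip at 30 fps: eyes open (EAR ~0.3) with brief
# closed-eye dips (EAR ~0.1) roughly every six seconds -> 10 blinks.
real_clip = ([0.3] * 177 + [0.1] * 3) * 10
# A crude deepfake whose subject never blinks: EAR stays flat.
fake_clip = [0.3] * 1800

print(looks_synthetic(real_clip))  # False: plausible blink rate
print(looks_synthetic(fake_clip))  # True: no blinks in a full minute
```

This also illustrates the first criticism above: once forgers learn that detectors count blinks, they can simply train on footage that includes closed eyes, defeating the check.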
The second approach is authenticating content before it spreads, also called the “digital provenance” solution. Companies like Truepic are developing mechanisms to digitally watermark audio, photo and video content, with the metadata logged immutably on a distributed ledger, also known as a blockchain. Although touted as an ‘ideal fix’, such solutions would have to be universally deployed on laptops and smartphones. Their use would require platforms such as Facebook and Twitter to impose provenance checks as a precondition for uploading content, with accounts suspended or blocked when those conditions were not met. This solution might move in the direction of excessive online censorship.
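The provenance idea can be illustrated with a toy hash-chained log (a minimal sketch of the general technique, not Truepic’s actual mechanism): each media file is fingerprinted with a cryptographic hash at capture time and the record is appended to a tamper-evident ledger; any later edit to the file changes its hash, so the doctored version no longer matches a logged record.

```python
import hashlib
import json

class ProvenanceLedger:
    """Toy stand-in for a distributed ledger of media fingerprints."""

    def __init__(self):
        self.entries = []  # each entry links to the previous via prev_hash

    def register(self, media_bytes, source="unknown"):
        """Log a media file's SHA-256 fingerprint at capture time."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "media_hash": hashlib.sha256(media_bytes).hexdigest(),
            "source": source,
            "prev_hash": prev_hash,  # chaining makes tampering evident
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["media_hash"]

    def verify(self, media_bytes):
        """A clip checks out only if its exact bytes were logged at capture."""
        h = hashlib.sha256(media_bytes).hexdigest()
        return any(e["media_hash"] == h for e in self.entries)

ledger = ProvenanceLedger()
ledger.register(b"raw video bytes from capture", source="camera-app")
print(ledger.verify(b"raw video bytes from capture"))  # True: provenance on record
print(ledger.verify(b"doctored video bytes"))          # False: no capture record
```

The universal-deployment problem is visible even in this sketch: verification only helps if every legitimate capture device registers its output, which is precisely the precondition the article notes platforms would have to enforce.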
The third approach under speculation is called ‘authenticated alibi services’. In the coming years, private companies might use technological devices to keep a record of a politician’s life. This has been called an enhanced form of ‘lifelogging’: a record of nearly every aspect of one’s life, kept to prove what a person was doing and when. This technology is not only futuristic but could lead to massive surveillance of the lives of prominent individuals, or of people in general.
A more probable and nation-state-friendly solution to deepfakes could be a robust legal and regulatory framework. Laws could make the sharing and distribution of deepfakes punishable in order to cut down their spread. In that sense, even if the evil is not nipped in the bud, it can be controlled and curtailed at a later stage. Another solution could be entering into agreements with social media companies to stop the circulation of such content. Social media websites could identify such content beforehand and alert users, so that the democratic risk is avoided.
This piece is written by Manisha Chachra. Manisha is Associate Researcher at Govern.