Deepfakes - The Next Big Threat to Democracy?

Deepfakes have been lurking on the internet for a number of years now and, while entertaining for some at first, have evolved beyond the film industry and meme culture to become a powerful and dangerous tool used to sway public opinion. In a growing “infodemic” of fake news, the challenges posed by deepfakes to democracy must be taken seriously.

Formidable progress in Artificial Intelligence can have, and has had, a positive impact on people's lives, but it is simultaneously enabling novel forms of manipulation and deception. The recent emergence of AI-generated deepfakes has led to growing concerns over the risks of disinformation and media manipulation by malicious entities and individuals.

While technology can still be used to detect synthetic content, by the time such content is identified it may already have harmed politics and society, at a time when fake news spreads faster than the truth and people's reasoning becomes tainted by bias.

 

What Are Deepfakes?

Deepfakes - a combination of the terms “deep learning” and “fake” - are videos generated with artificial intelligence or photo/audio editing software to create new footage depicting statements or events that never occurred.

In other words, a deepfake refers to the digital manipulation of an image, a video or sound to impersonate someone in such a realistic way that it becomes near impossible for a non-expert audience to detect the deception.

 

“The democratisation of technology and leaps in AI have led to legitimate concerns over deepfakes being used to create fake news involving political figures.”

 

For the time being, the vast majority of deepfake videos remain amateurish and are unlikely to fool anyone. But while the technology used to create deepfakes is relatively new, it is advancing at a rapid pace, and it has become increasingly difficult to distinguish whether video footage is real or not. It is only a matter of time before deepfakes become indistinguishable from real content.

Deepfakes emerged on Reddit in 2017, where they were initially used mostly to share realistic AI-manipulated pornographic videos featuring celebrities. Since then, the democratisation of technology and leaps in Artificial Intelligence have led to legitimate concerns over deepfakes being used to create fake news involving political figures.

This has the potential to shake entire nations and cause great harm to democracy by serving as a “weapon for purveyors of fake news who want to influence everything from stock prices to elections”, according to a report published by MIT Technology Review.

 

What Political Threats and From Whom?

The Amsterdam-based cybersecurity firm Sensity has so far detected 70,547 visual threats - which include deepfakes and other forms of maliciously manipulated images and videos - and the numbers are on the rise.

At the moment, women in the entertainment sector are most affected by manipulated pornographic videos disseminated online by what many consider to be socially awkward and frustrated individuals. Yet deepfake pornography can also be politically motivated, targeting activists, journalists and political figures.

Deepfakes could well become a threat to national security as well as to democracy, with the potential to affect elections and manipulate public opinion, to the point where even the truth may no longer be believed.

 

“Deepfakes could well become a threat to national security as well as to democracy.”

 

Deepfake technologies will be, as summarised in a Brookings Institution report, capable of “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals”.

Naturally, deepfakes are a blessing for hostile foreign intelligence agencies that meddle in domestic affairs and are always eager to develop new covert weapons of mass distraction. A prime example is the Russian “troll farm” (or Russian Web Brigades), which thrust the fake news trend into the spotlight during the 2016 US presidential election and the 2016 Brexit vote.

In addition, the rapid and more or less global democratisation of specialised tools, techniques and technology will give a wide range of actors the ability to generate persuasive and potentially damaging deepfake footage.

 

What Response to Deepfakes?

There is no silver bullet for this powerful and inexpensive threat. While the technology does not seem to have been used to disrupt political processes thus far, it nonetheless remains available.

One possible response could be to increase media literacy among the population, helping people to spot fake news shared online through critical thinking. However, self-awareness alone will not usher in an era of trust: technology never ceases to evolve, making it near impossible for human beings to detect every fraudulent manipulation.

Additionally, many will argue that this technology-related threat could be countered by corresponding technological solutions, following the logic: “if AI can be used to create deepfakes, AI can also be used to detect deepfakes”. However, as correctly observed by Brookings, detection techniques “often lag behind the most advanced creation methods” resembling a virus/anti-virus dynamic.

An excellent illustration of the above is the case of deepfake blinking. When researchers found in 2018 that unnatural blinking patterns allowed for the detection of deepfakes, this imperfection was soon corrected, rendering the detection algorithm obsolete.
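To make the blinking idea concrete, the heuristic can be sketched as follows. This is an illustrative toy only, not the researchers' actual method: the eye-openness scores, thresholds and blink-rate cutoff are all hypothetical, and a real system would derive openness per frame from facial landmarks or a trained eye-state classifier.

```python
# Toy blink-rate heuristic in the spirit of the 2018 eye-blinking research.
# All numbers here are illustrative assumptions, not published parameters.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in per-frame openness scores."""
    blinks = 0
    was_closed = False
    for score in eye_openness:
        is_closed = score < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def flag_suspicious(eye_openness, fps=30, min_blinks_per_minute=4):
    """Flag footage whose blink rate falls below a plausible human minimum.

    Humans typically blink roughly 15-20 times per minute; early deepfakes
    often showed far fewer. The cutoff of 4 is an illustrative guess.
    """
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute

# Example: 60 seconds of footage containing only one brief eye closure.
frames = [1.0] * 1800
frames[900:905] = [0.1] * 5  # a single blink at the 30-second mark
print(flag_suspicious(frames))  # one blink per minute -> flagged as suspicious
```

The virus/anti-virus dynamic the article describes is visible even in this sketch: once deepfake generators were trained to produce natural blinking, a detector built on this single cue stopped working.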

 

“Social media platforms should inevitably become our first line of defence against politically-motivated deepfakes.”

 

Instead, social media platforms should inevitably become our first line of defence against politically motivated deepfakes, and encouraging signs are already emerging with investments in deepfake-detection platforms.

Facebook has recently taken a decisive step by releasing the largest ever data set of deepfakes to teach AI how to spot them, and it is increasing its crackdown on deepfake content by removing AI-altered material aimed at misleading viewers. However, the ban does not cover parody or satire, which could become an exploitable loophole.

Similarly, Twitter announced earlier this year a ban on “deceptively shared” synthetic or manipulated media that is “likely to cause harm”, while TikTok’s US General Manager announced on August 5 a similar ban on “synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm”.

These measures are crucial to curb the genuine looming threat posed by deepfakes, but while detection technology develops, education and critical thinking remain crucial.

About the author: Jean-Patrick Clancy
