The Malicious Exploitation of Deepfake Technology: Political Manipulation, Disinformation, and Privacy Violations in Taiwan


The rapid advancement of artificial intelligence (AI) has facilitated the creation of hyper-realistic yet entirely fabricated text, audio clips, videos, and images, commonly referred to as “deepfakes.” The emergence of deepfake technology has provided malicious actors with a potent tool for spreading propaganda, disseminating misinformation, damaging reputations, manipulating public perception, and eroding trust in institutions. The technology allows these actors to create highly realistic yet false narratives, posing a significant threat to both national and global security.

In Taiwan, deepfake technology is likewise exploited for malicious purposes, including political manipulation and reputational damage, by both independent malicious actors and state-sponsored groups. Accordingly, this study examines how these actors have used deepfake technology in Taiwan, providing specific examples to illustrate its impact.

What is a Deepfake and How to Make One

Recent advancements in artificial intelligence (AI) have resulted in the development of algorithms capable of generating highly realistic synthetic images, videos, audio, and text. Among these innovations, deepfake technology is particularly noteworthy. Although definitions vary, deepfakes generally refer to AI-generated media that manipulate perceived reality, using deep-learning techniques to fabricate events, statements, or actions. [1] Deepfake creation often relies on generative adversarial networks (GANs), which comprise two neural networks: a generator that creates synthetic content, and a discriminator that evaluates it. The two are trained against each other until the synthetic content becomes nearly indistinguishable from real content. [2]
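
To make the generator-discriminator dynamic described above more concrete, the sketch below shows a minimal GAN training loop written in PyTorch. It is illustrative only: it trains on toy numeric vectors rather than real face images, and the network sizes, data distribution, and hyperparameters are assumptions chosen for brevity, not a reconstruction of any actual deepfake tool.

```python
# Minimal illustrative GAN training loop (PyTorch). Toy vectors stand in for
# real media; all sizes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM, BATCH = 64, 16, 32  # assumed toy dimensions

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(batch_size: int) -> torch.Tensor:
    # Stand-in for a batch of real media; an actual deepfake pipeline would
    # load real face images or audio frames here.
    return torch.randn(batch_size, DATA_DIM) * 0.5 + 1.0

for step in range(1000):
    real = real_batch(BATCH)
    noise = torch.randn(BATCH, NOISE_DIM)
    fake = generator(noise)

    # 1) Train the discriminator to separate real samples from generated ones.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()

    if step % 200 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

In a real deepfake pipeline, the same adversarial loop operates on image or audio data with far larger convolutional networks, which is why the resulting forgeries can become so difficult to distinguish from genuine recordings.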

Originally developed for entertainment and creative applications, deepfake technology is increasingly being misused for deceptive purposes. With the growing accessibility of deepfake tools such as FakeApp and Nudify, malicious actors can now easily generate highly convincing fake content that poses significant risks of reputational damage, disinformation, and manipulation.

How Malicious Actors Use Deepfakes

Numerous academic studies indicate that malicious actors frequently exploit deepfake technology to create non-consensual sexual videos and images. According to one report, 95,820 sexually explicit deepfake videos were circulating on the internet in 2023, and these videos had garnered approximately 134 million views. Similarly, a study by Van der Nagel asserts that 96 percent of deepfake content consists of non-consensual pornographic material.

In recent years, South Korea and the United States, along with many other countries, have experienced a significant increase in deepfake sexual content disseminated by malicious actors across the internet and social media platforms such as Telegram, often without the victims’ consent. More than one in ten students in the United States reported knowing friends or classmates who use deepfake technology to create sexually explicit images of other students. For instance, in the United States, a female high school student was victimized by deepfake pornography created by her classmates using a nudifying app called “Clothoff,” and the manipulated video was subsequently shared on Snapchat and Instagram. Similarly, a notable case involved a high school teacher in Busan, South Korea, whose face was superimposed onto explicit images and shared in a Telegram group with 1,200 members under the hashtag “Shame Your Teacher.”

Beyond its use in non-consensual pornography, deepfake technology is increasingly being leveraged for fraudulent activities. Thousands of individuals have been deceived by deepfake content featuring fabricated likenesses of well-known figures, including Elon Musk and Brad Pitt. In early 2024, the British engineering firm Arup became a victim of a deepfake-enabled fraud scheme, resulting in a financial loss of approximately USD 25 million. 

Manipulative deepfake content involving political figures represents another area where malicious actors frequently exploit this technology. For instance, politicians in Pakistan, Slovakia, and Türkiye have been targeted with deepfake content designed to manipulate public perception. In 2024, more than 30 high-profile female British politicians were targeted with deepfake technology, with their images uploaded to a sexually explicit website prior to the UK general election. Similarly, a deepfake video falsely claiming that Ukrainian President Volodymyr Zelenskyy’s wife purchased a luxury car during her visit to Paris in June 2024 was circulated on social media. Shared by pro-Russian influencers, the video garnered nearly 18 million views within 24 hours. Additionally, in March 2025, a fraudulent deepfake video of Indonesian President Prabowo Subianto was circulated through at least 22 different TikTok accounts, reportedly misleading thousands of viewers.


Image: Still images from a fabricated video posted to YouTube in the lead-up to Taiwan’s January 2024 elections, which featured statements falsely attributed to then-presidential candidate Lai Ching-te. (Image source: Taiwan FactCheck Center)

Usage of Deepfake Technology for Malicious Purposes in Taiwan

With the rise of deepfake technology, Taiwan has encountered challenges akin to those faced by many other countries. Deepfake content has resulted in significant consequences, including the dissemination of disinformation, the manipulation of political discourse, and violations of personal privacy. 

The first major issue is the use of deepfake technology for political manipulation. During and after the 2024 Taiwanese presidential election, there was a surge in deepfake-driven disinformation campaigns featuring fabricated videos and manipulated audio clips intended to discredit political figures. For instance, deepfake videos falsely accused leaders of the ruling Democratic Progressive Party (DPP, 民進黨) of corruption, and a fabricated private conversation between the party’s presidential candidate Lai Ching-te (賴清德) and then-President Tsai Ing-wen (蔡英文) was circulated to undermine Lai’s credibility. Furthermore, sexually explicit deepfake videos featuring some political figures were circulated to further damage their reputations. These examples highlight the growing impact of deepfake technology on electoral interference, in which synthetic media is used to sway public opinion, inflict reputational harm, and disrupt democratic processes in Taiwan.

False and misleading content generated with deepfake technology has also targeted the Taiwanese military. For instance, fabricated texts created with such tools have circulated on the internet and social media platforms, spreading disinformation about the Taiwanese military.

In addition to disinformation targeting Taiwanese politics, political figures, and the military, Taiwan has also experienced the malicious use of deepfake technology for personal attacks. YouTuber Chu Yu-chen (朱玉宸), who used deepfake technology to superimpose women’s faces onto pornographic videos for profit, serves as a particularly alarming example. Despite more than 100 victims coming forward, legal loopholes initially enabled him to evade severe punishment. This case prompted Taiwanese lawmakers to introduce new legislation in 2023, explicitly criminalizing deepfake-generated sexual imagery and strengthening protections for victims of AI-driven privacy violations. Moreover, the case of Robert Tsao (曹興誠), a businessman who was targeted with deepfake-generated sexual images, serves as a significant example of how deepfake technology can be weaponized to damage reputations and inflict emotional distress.

Beyond these individual cases, Taiwan has been at the forefront of efforts to combat deepfake-related disinformation and cyberattacks, particularly in the context of cross-Strait tensions. Reports indicate that foreign actors, particularly from China, have exploited AI-generated media to undermine Taiwan’s political stability, erode trust in its institutions, and weaken the military. Given Taiwan’s strategic geopolitical position and its vulnerability to disinformation campaigns, the government has worked closely with social media platforms, fact-checking organizations, and civil society groups to curb the spread of deepfake-related disinformation. Despite these efforts, the rapid evolution of deepfake technology continues to present challenges, underscoring the need for ongoing legal, technological, and policy-driven responses.

Taiwan’s experience demonstrates how deepfake technology can be manipulated for a range of harmful purposes, including election interference, reputational damage, and privacy violations. Although legislative measures and digital literacy initiatives have been introduced to counter these threats, the rising accessibility of AI-powered tools necessitates a collaborative effort among governments, technology companies, and civil society to effectively address the escalating misuse of deepfake technology. Additionally, Taiwan could leverage its existing science and technology cooperation with the United States, Australia, Canada, France, and Germany to develop collaborative solutions to the global deepfake problem.

The main point: Deepfake technology has been used maliciously for the purposes of political manipulation, disinformation, and reputational damage. In Taiwan, deepfakes have been used in election interference, attacks on political figures and the military, as well as violations of personal privacy. To counter these threats, there needs to be collaboration among government, technology companies, and civil society, in addition to international partnerships.


[1] Mary Ellen Bates, “Say What? ‘Deepfakes’ Are Deeply Concerning,” Online Searcher 42, no. 4 (2018): 64.

[2] Ahmet Yiğitalp Tulga, “A Comprehensive Analysis of Public Discourse and Content Trends in Turkish Reddit Posts Related to Deepfake,” Journal of Global and Area Studies 8, no. 2 (2024): 257-276.
