Securing US Elections from AI-Enhanced Foreign Influence: Lessons from Taiwan’s 4C Strategy


As the US election looms, nation-state adversaries are ramping up efforts to undermine election integrity. A recent Microsoft report highlights cyber operations by Iran, Russia, and China aimed at disrupting the upcoming November polls. Even more concerning, these actors are now leveraging advanced AI tools to amplify disinformation campaigns, producing more sophisticated and widespread content designed to exploit vulnerabilities in US defenses. Given these persistent threats, what can the United States learn from other democracies that have faced similar challenges this year?

One notable example is Taiwan, an island nation on the frontlines of resistance against authoritarian influence. For over a decade, Taiwan has endured a relentless deluge of foreign disinformation; according to V-Dem, it has been the country most targeted by foreign disinformation for 11 consecutive years. Yet despite these challenges, Taiwan upholds a vibrant democracy, ranking 10th globally and first in Asia, and successfully held a fair presidential election on January 13 this year, even in the face of heightened manipulation through generative AI.

So, how does Taiwan safeguard its democracy against digital authoritarianism? The answer lies in the holistic approach it has adopted: the 4C Strategy of cutting production, clarifying falsehoods, curbing dissemination, and cultivating digital media literacy. This comprehensive framework engages government, businesses, and civil society, each fulfilling its role to safeguard Taiwan's digital information infrastructure and protect it from exploitation by malicious foreign actors at every stage of a disinformation campaign.

AI-Driven Disinformation in Taiwan’s Elections

A disinformation campaign typically moves through three stages to distort public discourse and election processes: the production, dissemination, and reception of misleading content that ultimately sways the audience's beliefs and perceptions. Generative AI profoundly affects both the production and reception phases: it enhances the scale of content creation while making the output sophisticated enough to be more readily believed. As these tools mature, malicious actors can autonomously generate larger volumes of realistic text, images, or videos across various languages and genres without a significant increase in labor costs.

In the following, I will focus on two case studies from the Research Institute for Democracy, Society and Emerging Technology (DSET, 科技、民主與社會研究中心), a government tech policy think tank in Taiwan, to illustrate how foreign actors strategically employed AI for information manipulation during Taiwan’s presidential election. 

The first involved a YouTube channel named “TrueTJL,” which released a video three weeks before the election. It featured an AI-generated voice, allegedly that of retired investigator Zhao-lun Lin (林昭倫), accusing presidential candidate Lai Ching-te (賴清德) of having been an informant during Taiwan’s martial law period. The video ignited discussions across platforms, including PTT, a well-known online forum, as well as on political influencers’ social media and numerous YouTube channels, many of which were newly created or previously inactive.

Another instance occurred on January 2, when an anonymous user uploaded a 318-page PDF titled The Secret History of Tsai Ing-Wen to Zenodo. This document quickly spread across various platforms, including X, Facebook, TikTok, Wikipedia, PTT, and Taiwanese sites like Vocus (方格子) and Mirror Fiction (鏡文學). Following this dissemination, a wave of new YouTube accounts featuring AI-generated images and profiles emerged between January 4 and January 10. According to the Australian Strategic Policy Institute (ASPI), an Australian think tank, these accounts posted up to 490 videos with virtual AI anchors mimicking news reporting styles and detailing scenarios from the document.

In both cases, it is evident that foreign actors leveraged AI to enhance their manipulation efforts. The YouTube channel TrueTJL, whose website is linked to a server in China, employed AI-generated audio to falsely accuse a prominent presidential candidate. Similarly, ASPI disclosed that The Secret History of Tsai Ing-Wen was crafted using software developed and widely used in China and was distributed through videos mass-produced by AI, made with Chinese-owned software and featuring idioms typical of simplified Mandarin. These tactics underscore AI’s ability to create plausible content and to significantly scale up content production for disinformation campaigns.

Moderate Impact and Taiwan’s Countermeasures

According to DSET, however, the impact of these campaigns was considered moderate. First, the informant accusation yielded only limited dissemination. In fact, the spike in shared posts stemmed from the Investigation Bureau’s (法務部調查局) clarifying press release, not the initial accusation itself. Similarly, while the AI-driven content in the second case achieved considerable reach across platforms, the volume of discussions swiftly declined following official clarifications and had virtually disappeared before election day.

To address these manipulative attempts, Taiwan implemented a range of countermeasures. First, government agencies debunked the disinformation promptly: the Investigation Bureau discredited the fabricated voice in the informant accusation within three days of the video’s release, and a week after the e-book about President Tsai Ing-wen (蔡英文) was uploaded, national security authorities clarified in news interviews that the related videos contained extensive AI-generated falsehoods.

Furthermore, social media platforms played a crucial role in curtailing the spread of disinformation by downgrading or removing fake content and suspending suspicious accounts based on alerts from authorities or independent fact-checkers. Media outlets also refrained from covering questionable sources until obtaining verified information. Fact-checking organizations like MyGoPen and Taiwan FactCheck Center (TFC, 台灣事實查核中心) helped enhance citizens’ ability to recognize AI fabrications by publishing detailed reports. Lastly, law enforcement traced the disinformation’s origins and pursued criminal charges to deter the perpetrators.

Cutting Production

These measures reflect a broader framework for combating information manipulation in Taiwan: the 4C strategy. First, Taiwan aims to cut the production of AI-generated disinformation by deterring malicious actors and raising the costs of producing it. In 2023, Taiwan’s Legislative Yuan amended the Public Officials Election and Recall Act (公職人員選舉罷免法) and the President and Vice President Election and Recall Act (總統副總統選舉罷免法), raising the maximum sentences for those found guilty of intentionally disseminating false and harmful information about candidates through deepfakes. Taiwan’s prosecutors’ offices have also established special task forces to swiftly trace the origins of deepfake content and initiate investigation and prosecution.

However, it is important not to overestimate the effectiveness of criminal liability. To safeguard free speech and prevent the suppression of dissent and political competition, penalties for falsehoods must be meticulously crafted and restricted to easily verifiable facts. This narrow focus limits their ability to address the full range of disinformation campaigns. Moreover, the slow pace of court proceedings cannot keep up with the rapid spread of disinformation, and, more critically, these measures lose much of their force when malicious actors operate outside the court’s jurisdiction.

To disrupt disinformation production, providers of digital infrastructure must take on greater responsibility. Given that breakthroughs in generative AI enable the scaling up of disinformation output, efforts to raise production costs must also target the design and operation of these models. AI providers should prioritize accuracy and ensure that AI-generated content is easily identifiable. This involves securing reliable sources for their models; enforcing strict content policies and technical safeguards against falsehood creation; and integrating features like provenance and watermarking to trace origins and verify AI involvement through machine-readable signals.
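
To make the idea of machine-readable provenance signals concrete, below is a minimal sketch of how an AI provider might attach a signed manifest to generated media so that platforms and fact-checkers can verify its origin. This is illustrative only: the helper names are hypothetical, and the shared-key HMAC signing here stands in for the certificate-based signatures that real provenance schemes, such as C2PA content credentials, embed in the media file itself.

```python
import hashlib
import hmac
import json

# Placeholder key; a real provenance scheme would use PKI certificates
# so that anyone can verify signatures without holding a secret.
SIGNING_KEY = b"provider-secret-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Bundle generated content with a signed, machine-readable manifest
    declaring its AI origin."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature holds."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and hashlib.sha256(content).hexdigest() == claims["sha256"])

audio = b"...synthetic audio bytes..."
manifest = attach_provenance(audio, generator="example-tts-model")
print(verify_provenance(audio, manifest))         # True: intact and labeled
print(verify_provenance(audio + b"!", manifest))  # False: content tampered
```

In practice, such a manifest would travel inside the file’s metadata, and robust watermarking would complement it so that the signal survives re-encoding or cropping.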

These measures could be implemented through tech companies’ voluntary compliance; however, governments should also explore leveraging existing regulatory tools to nudge businesses toward these objectives. Earlier this year, major tech companies signed an accord aimed at combating deceptive uses of AI in elections, one key commitment of which is to develop and implement policies and technologies that help identify and certify AI-generated content and its origins. The European Commission has since issued guidelines incorporating these measures into the risk-mitigation obligations for large platforms under the Digital Services Act. These international trends are worth considering as the United States devises strategies to raise barriers to the production of AI-driven disinformation.

Clarifying Falsehoods

The second pillar of the 4C strategy, clarifying falsehoods, is founded on the belief that the best way to counter false speech is with more speech and fact-checking. To this end, Taiwan’s government has implemented the “222 principle,” which requires every government agency to debunk misinformation within two hours using two images and 200 words, crafted to be especially shareable and understandable on social media.

Additionally, prosecutorial, police, and national security agencies are committed to enhancing their capacity to swiftly identify deepfake videos and audio during the election period. The Executive Yuan (行政院) has also been collaborating with Taiwan’s major online platforms since 2019 to improve the filtering, labeling, and debunking of misinformation. This initiative includes a collaboration with LINE, the leading messaging platform, to develop user-friendly fact-checking channels and instant clarification pages within its system.

However, the task of fact-checking should not fall solely on the government. Its media reach is limited, and more importantly, government-provided clarifications often struggle to gain acceptance across the political spectrum, especially in a highly polarized society like Taiwan. Therefore, the broader success of this strategy depends crucially on the involvement of third-party fact-checking organizations. 

In Taiwan, TFC, MyGoPen, and Cofacts are the three major civic groups that have tirelessly worked to debunk disinformation. For instance, TFC published over 200 online verifications and analyses during the 2024 elections alone, and its findings were extensively shared across media partners’ sites such as Yahoo! News and the Central News Agency. Digital platforms like Meta and Google have also committed to better labeling AI-generated content and providing additional information to users who encounter false content flagged by their fact-checking partners. These collaborative efforts among government, media, platforms, and civic groups have enhanced the effectiveness of fact-checking in dispelling falsehoods.

Curbing Dissemination

But even this seemingly effective approach has its limitations. Counter-speech theory hinges on the assumption that individuals can and will discern truth from falsehood. Yet, people often accept information without scrutiny unless it challenges their preexisting beliefs. The overwhelming tide of disinformation and algorithmic filter bubbles can prevent clarifications from capturing individuals’ scarce attention. Thus, curbing the dissemination of disinformation is essential, and social media platforms, which control the flow of information in the digital public sphere, must assume greater responsibility.

In this context, Taiwan’s legislature enacted a “deepfake clause” in 2023, requiring that broadcasters and online platforms remove or limit access to officially verified deepfake content targeting election candidates, with violations subject to fines. Yet, broader legislative initiatives like the 2022 Digital Intermediary Services Act have stalled. As a result, Taiwan’s government has primarily relied on platform self-regulation and cooperation, such as TikTok’s reporting channel with authorities to flag and remove unlawful content. Similarly, Meta emphasized its commitment to reducing the reach of content flagged as false by its fact-checking partners. 

Although these voluntary measures have shown some effectiveness, robust regulatory solutions remain necessary, especially as platforms frequently prioritize budget cuts over maintaining the teams that combat online disinformation.

Cultivating Digital Media Literacy

Finally, cultivating citizens’ digital media literacy stands as the last safeguard against disinformation campaigns. According to a research report published by the Carnegie Endowment for International Peace, there is ample evidence that media literacy training helps individuals recognize false content and dubious sources. In Taiwan, civil society groups have proactively contributed to this cause. 

For instance, TFC launched the Taiwan Media Literacy Project in 2021, aiming to educate a wide range of individuals, particularly those most in need of improved skills. The initiative includes collaborations with organizations like the National Association for the Promotion of Community Universities, which focuses on the elderly and middle-aged, and FakeNewsCleaner, which teaches fact-checking skills in rural and vulnerable neighborhoods. These groups incorporate media literacy into community activities and develop online materials and training for the general public. Additionally, to address the rising threat of AI-generated disinformation, TFC has incorporated AI literacy into its annual workshops, collaborating with MyGoPen to raise awareness and share practical detection techniques in their fact-checking reports.

However, literacy education faces challenges in reaching large numbers of people, especially those most vulnerable to disinformation, within limited budgets and timeframes. This task is too extensive for civil society to handle alone. In response, Taiwan’s government issued the “White Paper on Media Literacy Education in the Digital Age” in 2023 as policy guidelines. Since 2019, efforts have included integrating media literacy into the compulsory 12-year education system, supplemented with specialized materials and teacher training. Moreover, the government provides courses and funding for public servants and community colleges, and bolsters media literacy resources through public libraries and radio stations. 

These initiatives are worth considering for governments worldwide, and can be strengthened by legislation that ensures adequate funding, develops detailed strategies, and establishes specialized task forces to coordinate interdepartmental efforts in promoting digital media literacy.

Conclusion

There is no silver bullet to combat AI-enhanced foreign influence operations. Solely relying on criminal deterrence is slow and largely ineffective against actors beyond borders. Fact-checking, though crucial, struggles with the overwhelming volume of falsehoods and limited verification resources. Reducing disinformation’s spread is impossible without robust verification and clarification. Over-reliance on individual literacy skills unfairly shifts responsibility from governments and media to citizens. 

An effective response requires a holistic strategy that includes government, civil society, and digital infrastructure owners to address disinformation at all stages—production, dissemination, and reception. Taiwan’s 4C strategy offers a proactive model that democracies, including the United States, can consider to strengthen their defenses and fortify election resilience against technology-enhanced foreign manipulation.

The main point: For democratic countries working to counter foreign disinformation, particularly ahead of upcoming elections, Taiwan’s 4C strategy offers a proactive model: a holistic defense that addresses disinformation at all stages by cutting production, clarifying falsehoods, curbing dissemination, and cultivating digital media literacy.
