/

/

/

On the Front Line of Foreign Influence: Enhancing Taiwan’s Information Resilience

On the Front Line of Foreign Influence: Enhancing Taiwan’s Information Resilience

FIMI_Masthead
Tags
On the Front Line of Foreign Influence: Enhancing Taiwan’s Information Resilience

As a target of the People’s Republic of China’s (PRC) political warfare, Taiwan’s information environment is constantly flooded with foreign information manipulation and interference (FIMI). For instance, in the lead-up to Taiwan’s 2024 presidential elections, Doublethink Lab recorded over 10,000 pieces of suspicious information that aligned with false narratives spread by the PRC. Such influence operations typically amplify political polarization and skepticism towards the United States, which undermine Taiwanese citizens’ trust in their government, democratic systems, and US-Taiwan relations. 

Taiwan’s civil society has worked to develop countermeasures, such as robust fact-checking and more accessible media literacy education. Nonetheless, an evolving global landscape and technological advancements have brought about new challenges, elevating the threat for both Taiwan and the international community. From November 2023 to November 2024 alone, the European External Action Service (EEAS) detected at least 500 FIMI incidents that targeted 90 countries and spanned 38,000 unique channels. These findings suggest that PRC FIMI campaigns are not limited to Taiwan, and may affect dozens of countries throughout the world. In order to enhance international resilience, it is clear that knowledge exchange on countermeasures to deal with FIMI must take priority. While Taiwan is on the front line of PRC FIMI, global democracies must recognize this mutual threat and hear the call to action.

Taiwan’s Civil Society

The Taiwanese government first began to recognize FIMI as a growing issue in 2017. By 2018, the Executive Yuan (行政院) initiated a special task force that established a three-element framework to evaluate FIMI through malicious intent, falsified content, and harmful results, and a four-step framework to respond: detect, debunk, contain, and discipline. At the same time, civil society organizations initiated their own responses. Some fact-checking efforts by Taiwanese civil society organizations preceded government initiatives—the fact-checking blog MyGoPen (麥擱騙) was founded in 2015 and the LINE bot Cofacts (真的假的) started in 2017. In 2018, the Taiwan FactCheck Center (TFC, 台灣事實查核中心) was founded by Taiwan Media Watch and the Association for Quality Journalism. Shortly thereafter, Doublethink Lab and the Taiwan Information Environment Research Center (IORG, 台灣資訊環境研究中心) were also established with the aim of researching the overall Mandarin-language information environment and pinpointing larger trends related to disinformation narratives.

Through the work of these civil society organizations, Taiwan was able to mount a multifaceted response to disinformation—including FIMI. PRC-initiated disinformation campaigns were debunked through fact-checking, and enhanced media literacy education helped Taiwanese citizens detect and contain FIMI narratives. According to Doublethink Lab, Taiwan’s whole-of-society response could also be replicated by other countries. This approach is structured around five main characteristics—purpose-driven, organic, whole-society, evolving, and remit-bound—that together make up the “Taiwan POWER Model” (as depicted in the image below). Still, despite Taiwan’s overall success in countering FIMI, Doublethink Lab admits that there are still vulnerabilities in the island’s model. The civil society organization argues that Taiwan’s counter-FIMI efforts need to be aligned with global efforts, since “PRC operations against Taiwan also take place in other countries.” To truly protect Taiwan’s international space, those countering FIMI operations need to take into account PRC disinformation narratives that circulate both in Taiwan and abroad. Additionally, challenges—such as limited funding and the impact of generative-AI and large language models (LLMs)—are not only shared globally, but call for transnational cooperation on countermeasures.

unnamed

Image: Doublethink Lab’s visualization of their “Taiwan POWER Model.” (Image source: Doublethink Lab.)

Growing Challenges: PRC Narratives, AI, and Funding

AI-Powered Information Operations 

The growing prevalence of AI has created unique challenges when it comes to countering disinformation. While monitoring Taiwan’s 2024 elections, Doublethink Lab found that usage of generative AI allowed malign actors to broaden their efforts by reducing the manpower needed to create posts. By creating posts and posting schedules that were not completely identical, generative AI helped to mask coordinated inauthentic behavior (CIB). This allowed FIMI actors to carry out large-scale attacks that were more difficult to counter, despite the fact that the underlying falsehoods in the posts were not much more sophisticated than in the past. As FactLink’s Summer Chen and Mary Ma pointed out during the Global Taiwan Institute’s June 2025 Enhancing Taiwan’s Information Resilience event: while AI-generated disinformation is often not difficult to debunk, the sheer volume of such posts make their impact difficult to contain.

Rapid Response Mechanism Canada (RRM Canada) also detected similar methods being used ahead of the 2025 Canadian elections. Under the strategy of Spamouflage—which refers to the dissemination of “spam-like content and propaganda hidden among more benign, human-interest-style content”—FIMI actors used generative AI to create deepfake videos of a Chinese-speaking political commentator, who had previously released content that was critical of the PRC. Within the videos, the AI-generated commentator accused the Canadian government of corruption, sexual scandals, and bribery. The EEAS also found that Russian agents used AI-generated deepfake videos to interfere with elections in Moldova—illustrating that Russia and the PRC often mimic each other’s strategies when it comes to FIMI and political warfare.

While AI generation has allowed FIMI to become more sophisticated, these same tools can be used for detecting AI-generated content. Many organizations countering FIMI already use some level of AI-assistance. For instance, IORG uses a human-AI collaboration model that pairs work by human researchers with open-source tools like CKIP Tagger and OpenAI’s Whisper to detect and process CIB. As AI-generated disinformation continues to improve, it is crucial that  counterefforts advance at a faster pace. Special Competitive Studies Project (SCSP) Executive Director Ylli Bajraktari has recommended that NATO invest in content authenticity and transparency tools—such as LLMs—that can then be used to identify AI-generated or altered content. This recommendation has been echoed by Ethan Tu, who noted that Taiwan AI Labs’ AI-driven platform Infodemic can also be used to detect disinformation. In 2024, Taiwan AI Labs signed memoranda of understanding (MOUs) with two Lithuanian companies—Turning College and Oxylabs—specifically for collaboration on AI solutions to cognitive warfare.  

Biases in Large Language Models

Advancements in AI have also led to a more widespread usage of chatbots, which raises questions regarding the political narratives that may be consumed through these platforms. In particular, many have expressed concerns regarding the PRC-based company Deepseek. In a comparison of the ChatGPT o3-mini-high and Deepseek-R1 models, Taiwan AI Labs discovered that Deepseek-R1 embedded PRC-aligned propaganda in 23.3 percent of its responses when answering Simplified Chinese queries related to geopolitics. Additionally, when answering Simplified Chinese queries related to politics, sentiments critical to the United States were embedded in 23.8 percent of responses. 

Some of the propaganda embedded in DeepSeek’s models is more overt. An investigation by Investigative Journalism Reportika (IJ-Reportika) found that DeepSeek uncritically parroted official PRC narratives when questioned about issues that the PRC government had strong positions on—such as Tibet’s status, China’s “debt-trap diplomacy,” and Taiwanese independence. As of October 2025, 125 million global users reportedly used DeepSeek tools on a monthly basis. While the majority of these users were based in China (35 percent), there were also a substantial number of users based in India (20 percent), Indonesia (8 percent), and the United States (5 percent). In spite of international pushback against DeepSeek’s models, such as restrictions on its use by government workers in Taiwan, India, South Korea, Australia, and the United States, global usage remains relatively high. In addition to the risk of inadvertently consuming PRC propaganda, DeepSeek and other Chinese AI platforms could theoretically share user data with the PRC government and FIMI actors to enhance the sophistication of information operations in foreign countries.

Training and Funding 

When it comes to countering biases in large language models and AI-powered information operations, limited training and funding remain central challenges. These obstacles are not new; many independent media and civil society organizations already face restrictions in their funding sources, particularly if they must forgo government funding in order to maintain their independence and nonpartisanship. Meanwhile, many Taiwanese news organizations often lack the resources for fact-checking or to maintain strict editorial standards. During GTI’s June 2025 Enhancing Taiwan’s Information Resilience event, Min Mitchell noted that many Taiwanese media organizations do not employ dedicated fact-checkers. To fill this gap, FactLink’s Summer Chen and Mary Ma proposed that one solution could be to form AI-verifying communities where tech experts and journalists collaborate on detecting FIMI. However, such a solution still necessitates that civil society organizations receive enough resources to maintain this collaboration.

Opportunities for Global Cooperation

Global cooperation is important in order to spread international awareness of PRC influence operations and to keep PRC propaganda narratives from gaining dominance. Additionally, global cooperation allows countries to pool resources and exchange best practices in order to promote more effective responses to FIMI. As a result, this author presents the following policy recommendations:

  • Enhance international cooperation on countering FIMI – International initiatives that counter foreign interference and protect human rights, such as G7’s Digital Transnational Repression Detection Academy, should work together with Taiwanese civil society organizations to counter FIMI on a global scale. Simultaneously, Taiwanese civil society should continue to pursue and expand regional partnerships, such as the Taiwan FactCheck Center’s existing cooperation with Factcheck Initiative Japan.
  • Strengthen domestic data privacy laws – To prevent Chinese AI models from collecting and providing users’ personal information to FIMI actors, governments should review and strengthen data privacy laws within their own countries.
  • Promote partnerships between Taiwanese civil society and private technology companies – Taiwan civil society organizations should continue to enter into cooperative agreements with private companies, akin to the MOUs between Taiwan AI Labs and the two Lithuanian technology firms. For instance, Microsoft, Google, and Meta all maintain initiatives to counter influence operations, and could partner with Taiwanese civil society. Additionally, past iterations of Taiwan’s CYBERSEC Expo have focused on the question of countering FIMI, suggesting that CYBERSEC could also be a forum for building international and cross-industry cooperation on FIMI.

 

The main point: Taiwan’s civil society has long been praised for its efforts to counter PRC foreign information manipulation and interference (FIMI), yet challenges still remain due to limited civil society funding and the growing usage of AI in FIMI operations. It is essential for Taiwanese civil society to foster strong international and cross-industry partnerships to meet these challenges head-on.


The author would like to thank 2025 Fall Intern David Dichoso for his research assistance when writing this article.

Search
CHECK OUT OUR TWITTER!