Society

Cognitive Warfare: Taiwan's Information Battlefield Enters the AI-Industrialization Phase

In December 2024, the "China United Front Documentary" broke 2 million views. In Q4 2025, 1,076 CCP state-media accounts on Douyin posted 560,000 videos, in which researchers later identified 57 Taiwanese figures via facial recognition. In October 2025, Puma Shen was placed under criminal investigation by Chongqing police for "secession." Cognitive warfare has entered a new phase of AI industrialization and "using Taiwanese to attack Taiwan" — yet the very term "cognitive warfare" is itself under strain as it gets overused inside Taiwan.


30-second overview:
The CCP's cognitive warfare against Taiwan entered a new phase in 2024-2026: "AI industrialization × using Taiwanese to attack Taiwan × named retaliation." In December 2024, Pa Chiung and Chen Po-yuan released the China United Front Documentary exposing Fujian Province's united-front bases, with part one breaking 2 million views1. A year later, in December 2025, an AI-fabricated "Doctor Chen Chih-ming" appeared, impersonating doctors at NTU Hospital and Taipei Veterans General Hospital — Taipei VGH Vice Superintendent Lee Wei-chiang personally clarified the matter2. Reuters and IORG found that in the same quarter, 1,076 CCP state-media accounts on Douyin published 560,000 videos, and the researchers used facial recognition to identify 57 Taiwanese figures appearing in them3. But this is a two-way battlefield: in October 2025, Chongqing police opened a criminal investigation into legislator Puma Shen for "secession"4. And at the height of the counter-attack, UNLV political scientist Wang Hung-en cautioned: "A cluster of accounts that always come online together for moral support might genuinely just be a group of friends."5 What this entry tries to map is a three-sided cognitive battlefield where one must stay alert to CCP operations, alert to homegrown labeling, and honest about the limits of research tools themselves.

A LINE group on a parent's phone

On 28 December 2025, Lee Wei-chiang, Vice Superintendent of Taipei Veterans General Hospital, took a media inquiry: who exactly was the white-coated, confident-sounding man on the YouTube channel "Doctor Chen Chih-ming," who claimed to be a Taipei VGH doctor?2

The answer: no such person.

Lee told reporters, "After the hospital verified, no such doctor exists." The channel had even featured two completely different "Chen Chih-ming" faces2. The operators used AI-generated virtual personas to impersonate doctors at medical centers like Taipei VGH and NTU Hospital, touting pseudoscientific therapies such as "treating diabetes with bacterial infection" to lure elders into sharing the videos. After technical analysis, the Taiwan FactCheck Center (TFC) concluded the clips were AI video "produced by a multimodal, audio-driven character animation generation model"2. By the time it was exposed, the channel had accumulated 23,100 subscribers, with its top single video at 150,000 views6.

📝 Curator's note: The most critical detail of this case is that it could be publicly clarified by name — the vice superintendent spoke in person, TFC published its check, and the Ministry of Health and Welfare announced a regulatory plan2. This entire structure of accountable response is exactly what the Chinese side of Douyin cannot produce. Verifiability itself is the texture of a democratic society.

That same year, the New Taipei Department of Social Welfare received calls asking how to apply for "elder subsidies of NT$8,000+ a month." Verification revealed it was another AI-generated video circulating on YouTube and LINE groups — unnatural facial expressions, illogical pause patterns, vocal tone lacking real emotion7. The household of entertainer Wang Jen-fu suffered a voice-deepfake scam: his daughter first received a fake market-research call and her voiceprint was harvested; days later, his wife Jenny received a "daughter calling for help" demanding a money transfer8. The Criminal Investigation Bureau later cracked Taiwan's first AI-voice-fraud case, revealing the scale of industrialization: an illegal investment-advisory ring designed more than 20 conversational scenarios, with at least 70 people deceived — one elderly woman lost NT$20 million9.

The three scenes point to the same objective: making you "no longer trust."

Scale: from individual cases to a system

The National Security Bureau's 2025 Analysis of CCP Cognitive Warfare Operations Against Taiwan revealed: that year, more than 45,000 anomalous accounts were tracked, more than 2.31 million pieces of contentious information collected, and over 3,200 reports filed to government agencies10.

A joint Reuters-IORG (Information Operations Research Group) investigation published in April 2026 showed: in Q4 2025, 1,076 accounts operated by Chinese state media on Douyin published about 560,000 videos, of which around 18,000 involved Taiwan-related topics3. By using facial recognition on 2,730 of these videos, they identified 57 Taiwanese figures appearing in them; among the top 25, 13 were KMT-affiliated. The most-exposed was then-KMT chair Cheng Li-wun — she appeared in 460 videos posted by 68 Douyin accounts, accumulating over 5 million interactions3.

Another IORG tracking study found that of 84 "America-skeptic" narratives, 70 were initiated or amplified by CCP state media11. Doublethink Lab's 2025 survey indicated that heavy TikTok users tend to lean more pro-China and more agreeable to "populist skepticism" stances12.

But sheer quantity does not equal "battle results." The observation by criminologist and Kuma Academy co-founder Puma Shen (沈伯洋) is worth placing alongside: "In China's information war on Taiwan, 80% has nothing to do with true or false — most of it is narrative attack, the manufacture of a perspective."13 This sentence shifts the focus from the first reflex of "fact-checking" to a harder second-order problem: when what the other side is delivering is a perspective and a narrative frame, fact-checking has insufficient range.

Pull back two electoral cycles and the axis becomes clearer. IORG's tracking found that the "ballot-rigging" narrative first appeared in late 2019, recurring across the 2020 election, the June 2020 Han Kuo-yu recall, and the 2021 referendum. The 2020 election also saw a new phenomenon: direct intervention by Chinese officials, with attacks even targeting the legitimacy of the Central Election Commission14. After Lai Ching-te's 2024 election, the NSB report flagged new features: AI virtual newsreaders spreading fake Tsai Ing-wen news, deepfake video, election betting markets15. In 2025, AI-altered footage even surfaced of Lai Ching-te "swearing"16.

The five coordinated tactics dissected by the NSB

The NSB breaks down CCP cognitive warfare against Taiwan into five tactics10:

| Tactic | Specific tools / companies | Representative case |
| --- | --- | --- |
| 1. Data-analytic monitoring of public sentiment | "Zhongke Tianji," "MeiyaPico," "Wo Min Gao Xin" crawlers gathering politicians' personal data and polling | Pinpointing public-opinion hotspots |
| 2. Multi-channel injection of contentious information | "Haixunshe," "Haimai," "Huya" PR firms imitating international media sites; the "Wubianjie Group" nurtured to set up Facebook content farms | Laundering origin |
| 3. Anomalous account infiltration | The Public Security Ministry's "Longqiao" online troll group across 180 platforms; "Zhongke Click" and "Beijing Starlight" controlling 10,000+ fake accounts | Volume injection |
| 4. AI-generated audio-visual content | "Mobvoi" and "iFlytek" intelligent voice systems; ads placed on Taiwanese sites lure citizens into recordings that harvest voiceprints | Fake doctors, voice fraud |
| 5. Network hacking of local accounts | During the April 2025 military exercises, 10+ PTT accounts hijacked to amplify narratives like "CCP blockading natural-gas shipping" | Pushing CCP narratives in Taiwanese voices |

The significance of this breakdown is replacing the vague phrase "external forces" with a structural map that allows accountability to be traced down to corporate-entity level — each tactic maps to specific Chinese companies and government units. But it also means the complexity of fact-checking and prosecution rises sharply: videos mass-produced by overseas AI factories, reposted by anonymous Taiwan-local accounts, then pushed back to Taiwanese readers by the algorithms — there is no single speaker in the whole chain.

How a user can "read" a suspicious channel

In February 2025, Threads user @derek_foxx posted17 a record of how he judged that the YouTube channel "Unicorn Calls Like That" was problematic:

  1. Accent cues: the speaker's voice carried a non-Taiwanese accent and deliberately mimicked Taiwanese usage, but the details slipped through
  2. Pivot signs: the channel's earlier content was financial-tips and lifestyle farm content; about a year ago it pivoted to a "MythBusters"-style format
  3. Wind-direction sync: content topics aligned with the livelihood-issue critiques of certain opposition camps
  4. Anonymity: the operator didn't disclose identity; later content fully switched to AI voiceover

This warning was reposted and discussed by TFC and other users18, who noted that the channel had over 340,000 subscribers, with topics concentrated on livelihood-sensitive issues like "Taiwan's milk is more expensive than Europe's and America's," "Taiwan tilapia is a re-naming scam of Mozambique tilapia," and "currency manipulation causes low wages" — using editing and decontextualization to amplify emotion19.

📝 Curator's note: This passage needs careful handling. Taiwan.md is not an intelligence agency — it cannot arbitrarily designate any specific channel as a CCP direct operation. As of the time of this entry, no research institution or government unit has listed this channel as a verified case of overseas operation. The reason this passage stays is to demonstrate how an ordinary reader can ask questions from observable features: accent, anonymity, algorithmic trajectory, topic pivot — these four cues, combined, can at least let someone hit pause on "wait, before I share this." Certifying who is a CCP account is the work of judicial and research institutions; training one's reading instinct is what every individual can do.
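The four cues above can be expressed as a toy scoring heuristic. To be clear: the cue names, weights, and thresholds below are invented for illustration; a score is only a prompt to pause and verify, never evidence that a channel is a foreign operation — exactly the restraint the curator's note asks for.

```python
# Illustrative only: a toy scorer for the four observable cues discussed above.
# Weights and thresholds are invented for demonstration purposes.

CUES = {
    "non_local_accent": 1,      # accent/usage details that slip through
    "abrupt_topic_pivot": 1,    # e.g. finance/lifestyle farm content -> political "myth-busting"
    "anonymous_operator": 1,    # no disclosed identity, later switch to AI voiceover
    "synced_with_campaign": 1,  # topics track a current political wind direction
}

def suspicion_score(observed: set) -> int:
    """Sum the weights of the cues actually observed."""
    return sum(w for cue, w in CUES.items() if cue in observed)

def verdict(observed: set) -> str:
    """Translate a score into a reading posture, not an accusation."""
    score = suspicion_score(observed)
    if score >= 3:
        return "pause before sharing; search a fact-check site first"
    if score >= 1:
        return "read with care"
    return "no flags from these four cues"
```

For example, `verdict({"non_local_accent", "anonymous_operator", "abrupt_topic_pivot"})` returns the "pause before sharing" posture — which is all a lay reader's heuristic should ever claim to do.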

The AI cost cliff

How low has the tooling threshold for contemporary cognitive warfare fallen? Before the 2024 New Hampshire primary, political operative Steve Kramer hired magician Paul Carpenter to synthesize then-President Biden's voice with ElevenLabs and place thousands of robocalls telling Democratic voters not to vote20. The aftermath: a US$6 million FCC fine, with Kramer indicted on 13 felony counts of voter suppression plus 13 counts of impersonating a candidate20. The production cost: US$1 in synthesis fees and 20 minutes of work, with Kramer paying Carpenter US$150.

That same January, AI-generated pornographic deepfakes of Taylor Swift appeared on X. A single image surpassed 47 million views, and X temporarily suspended search for the keyword "Taylor Swift"21. The incident pushed US Congress to pass the TAKE IT DOWN Act in 2025, mandating takedowns of non-consensual intimate images22, and in January 2026 the DEFIANCE Act passed the Senate by unanimous consent, granting deepfake-pornography victims civil-litigation rights23.

This cost curve means that future election-period cognitive battlefields don't need Chinese backing to be launched — anyone willing to spend US$1 can do it. The defensive focus shifts from "specific source" to "this category of tool has become commoditized."

Two counterattack lines: the camera and the institution

Taiwan is not only on the defensive. Since 2024, at least two clear counterattack lines have appeared — one is YouTuber-style camera reportage; the other is the building of academic and civil-defense institutions. Both have drawn precision retaliation from the CCP.

Camera reportage: Pa Chiung and Chen Po-yuan's _China United Front Documentary_

On 6 December 2024, the YouTuber Pa Chiung (Wen Tzu-yu) of "Photographer Diary Fun TV" and former Little Pink Chen Po-yuan ("Min-nan Wolf PYC") released part one of China United Front Documentary. Chen Po-yuan went undercover in Fujian Province to document how the CCP-set "Taiwanese youth entrepreneurship base" works — startup funds that don't need to be repaid, assistance with applying for a "Chinese ID card" for Taiwanese youth, oversized bank loans1. Part one's cumulative views exceeded 2 million; part two, released on 28 December and further detailing the Xiamen entrepreneurship base, broke 1.17 million views24. Pa Chiung claimed in the film that, through these programs, the CCP had absorbed Taiwanese who had taken Chinese ID cards, allegedly reaching 200,000 people (this figure is asserted in the film; the Mainland Affairs Council only said it would "strictly investigate united-front tactics," without directly confirming the specific number)25.

The cost of the narrative arrived immediately. Chen Po-yuan's Weibo was swarmed by "Little Pinks" calling him a "Taiwan-independence dog"; his Douyin account "Min-nan Wolf PYC" was deleted by Chinese authorities; the show China Bosses in which he had earlier participated had a cumulative 1 billion views in China — but he never got royalties24. The tension of this story: the anti-CCP figure was once a product of the CCP, and neither side of the strait readily welcomes a "former Little Pink."

Institutional counterattack: Puma Shen's enemy medal

The other line is the building of academic and civil-defense institutions. Kuma Academy was co-founded in 2021 by Puma Shen and Ho Cheng-hui, with UMC honorary chairman Tsao Hsing-cheng donating NT$600 million in 202226. IORG was co-led by Puma Shen and Yu Chih-hao starting in 2019, publishing the annual Taiwan Information Environment Report tracking the spread of "America-skeptic" narratives27.

The retaliation followed the same axis. On 14 October 2024, the TAO listed Puma Shen as a "stubborn Taiwanese-independence element," banning him and his family from entering China, Hong Kong, and Macao28. On 28 October 2025, Chongqing's Public Security Bureau issued a police notice formally opening a criminal investigation against Puma Shen for "the crime of secession," alleging that through "Kuma Academy" he had engaged in secessionist criminal activity, citing the PRC Criminal Code4. CCTV later aired a nearly 8-minute "exposé" feature, threatening global apprehension via Interpol; Shen replied: "One country on each side. Don't even think about reaching your hand into Taiwan."29 (Worth noting: CCTV's threat of an Interpol warrant is the language of a TV program plus statements by Renmin University Law School professor Cheng Lei, not a formal warrant from China's Ministry of Public Security or the courts.)

This is the first time in Taiwanese history that a sitting legislator has been put under criminal investigation by name by the CCP4. Columnist Huang Peng-hsiao titled it: "Congratulations to DPP legislator Puma Shen on receiving the 'Enemy Medal.'" The irony: the man teaching Taiwanese about cognitive warfare has had the value of his work certified by the CCP's state apparatus.

The contrast between these two lines reveals both the strategic difference of counter-attack (Pa Chiung's camera goes for emotional infiltration and personal advocacy; Shen's institution-building goes for academic research and civil-defense training) and the precision of CCP retaliation: for influencers, account deletion; for institution-builders, criminal filings.

Beyond the CCP: domestic shadows, platform responsibility, terminological tension

In addition to defense, an honest account must acknowledge the shadows of domestic information operations and the tension created by the over-use of the term "cognitive warfare" inside Taiwan.

Domestic cyber armies: Lin Wei-feng, "1450," the "47-account fiasco"

In 2021, pro-green writer Lin Wei-feng was caught using the PTT account bj26bj to post pro-China content as a false flag, then using his real name on Facebook to accuse PTT of being infiltrated30. The Investigation Bureau received over 70 complaints; the case dragged on for months. Lin's spouse Yang Min was at the time deputy director of the DPP's Department of Online Communities30.

The term "1450" refers to the DPP's online auxiliary arms. The official version is that in 2019, the Council of Agriculture earmarked NT$14.5 million for online marketing — hence the name; the opposition version directly accuses the DPP of running an organized troll operation31. Another case: the DPP authorities once listed 47 online accounts as Chinese "agents" — but verification later showed most of them were the DPP's own supporters32.

The Kansai Airport Incident: a triangle of truth and falsehood as historical study material

The September 2018 Kansai Airport Incident is the hardest-to-handle but most instructive case at the boundary of "cognitive warfare" to date.

On 6 September that year, Typhoon Jebi struck Kansai Airport. Afterward, the Taiwanese internet was flooded with the false claim that "China's embassy sent buses to pick up Chinese travelers," with criticism aimed at Taiwan's Osaka office for inaction. Puma Shen later traced the source, confirming that the false footage first appeared on the Chinese Weibo account "Floods, Fierce Beasts, Baby" and was turned into news by Guancha.cn a full 24 hours before it reached Taiwanese news33.

But that's only half the story. On the Taiwanese domestic side, Yang Hui-ju provided internet access to Tsai Fu-ming, who used PTT account idcc to viciously attack the Osaka office as "rotten to the bone, a remnant of the party-state"; through a LINE group called "Kaohsiung Group" she directed cyber-army operatives to push the narrative within one minute34. On 14 September, Su Chii-cherng (蘇啟誠), head of the Osaka office, took his own life at his official residence. His suicide note didn't directly mention fake-news pressure — it only said "I do not wish to suffer humiliation" and "I am unwilling to bear punishment, transfer, or demerits over groundless charges"35. In November 2021, Yang Hui-ju was sentenced in the first instance to 6 months36; she later petitioned for a constitutional review, and in March 2025 the Constitutional Court ruled the offense "insulting public office" unconstitutional, after which Yang's case was finally dismissed37.

📝 Curator's note: The value of this case is that it lays open the boundary between "cognitive warfare" and "domestic cyber army." The same incident can simultaneously have a Chinese-origin disinformation source and a Taiwanese-domestic cyber-army relay. Any simplification disrespects the dead. Su Chii-cherng's family has yet to recover, while the Constitutional Court's decision led to dismissal of charges — judicial truth, factual truth, and grief: these three lines cannot be aligned.

Wang Hung-en's caution: the three elements of cognitive warfare

University of Nevada, Las Vegas (UNLV) political-science assistant professor Wang Hung-en, in a long essay on Medium, listed the three elements of "cognitive warfare": originating overseas, coordinated attack, and a specific motive — all three required5. He also cautioned:

"A cluster of accounts that always come online together for moral support might genuinely just be a group of friends."

"People of different colors have different probabilities of having their messages categorized as fake news."

Wang's stance is not to absolve the CCP — he himself studies CCP cognitive warfare. He's pointing out a methodological double-standard risk: academic researchers can usually only access public data and cannot reach financial flows or communications; the government, simultaneously the punisher and the holder of information, has an agency problem. TPP legislator Chang Chi-lu has said something similar: "Treating everyone who criticizes the government as a CCP fellow-traveler or directed by China makes people feel this is the green camp's own cognitive warfare."38
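Wang's "group of friends" caution has a concrete methodological shape. A common public-data signal for coordination is overlap in when accounts are active, and the sketch below (with invented data and a hypothetical hour-bucket representation) shows why that signal alone cannot carry an accusation: a troll cluster and a friend group with identical posting rhythms trip exactly the same detector.

```python
# A minimal sketch, with invented data, of why "synchronized accounts" alone is
# weak evidence. Jaccard overlap of active hour-buckets flags BOTH a troll
# cluster and a group of friends; coordination is only one of Wang Hung-en's
# three required elements (overseas origin, coordination, specific motive).
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap of two activity sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def synchronized_pairs(activity: dict, threshold: float = 0.8):
    """Return account pairs whose active hour-buckets overlap above threshold."""
    return [
        (x, y)
        for x, y in combinations(sorted(activity), 2)
        if jaccard(activity[x], activity[y]) >= threshold
    ]

# Hypothetical activity: identical patterns, opposite explanations.
troll_cluster = {"acct1": {20, 21, 22}, "acct2": {20, 21, 22}}
friend_group  = {"amy":   {20, 21, 22}, "ben":   {20, 21, 22}}

# Both trip the same detector — the signal cannot tell them apart.
assert synchronized_pairs(troll_cluster) == [("acct1", "acct2")]
assert synchronized_pairs(friend_group) == [("amy", "ben")]
```

The asymmetry Wang points to follows directly: researchers with only public data can compute this overlap, but the financial flows and communications that would distinguish the two cases sit with the state.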

Why the Digital Intermediary Service Act was withdrawn

On 29 June 2022, the National Communications Commission (NCC) approved a draft of the Digital Intermediary Service Act. At the third public hearing on 18 August, Meta, Google, and PTT pushed back hard, arguing that the act would chill freedom of speech; on 20 August Premier Su Tseng-chang intervened to delay the hearing; on 7 September the NCC announced it was withdrawing the draft to its internal task force for review, with no timetable39. The opposition's core concern: a platform-responsibility regime might lead operators to over-delete content to avoid liability, becoming the "speech gatekeepers."

This withdrawal is the most concrete marker of the tension between "cognitive-warfare governance" and "freedom of speech." As of April 2026, the Digital Intermediary Act has not been resubmitted.

Meta's US$16 billion threshold

Facebook's platform responsibility deserves a look too. A November 2025 Reuters exclusive revealed that Meta internally estimated about US$16 billion in 2024 (10% of global revenue) came from scam ads40. The platform pushes 15 billion "high-risk" scam ads daily, generating US$7 billion annually; Meta only blocks an ad once its automated systems are 95% certain it's a scam, because broadly imposing advertiser verification would reduce revenue.

But there's a detail with positive significance for Taiwan: Meta only implements advertiser verification in jurisdictions like Singapore and Taiwan where it is "legally mandated"40. This means Taiwan's Anti-Fraud Special Act has actually pushed Meta to add an extra verification step — legal agency is not zero.

The governance layer: government, law, technology

In the face of this complex ecosystem, government responses are scattered across multiple agencies.

MOHW: finding account applicants under the _Physicians Act_

In response to the AI fake-doctor case, the Ministry of Health and Welfare (MOHW) is planning multi-layered measures: updating medical regulations to cover digital media, strengthening cross-agency cooperation, setting law-enforcement criteria for social posts and AI content, planning rapid takedown mechanisms and complaint and reporting channels, and adding safety reminders for medication and emergency-care information41. Following the US model, MOHW plans to require Taiwanese doctors' channels to disclose certified physician identity in their channel descriptions, and, under the Physicians Act, to trace the people who applied for the accounts so that local health departments can impose fines, while seeking cooperation from Google and Meta42.

Ministry of Digital Affairs: an "Anti-Fraud Reporting and Lookup Network"

Former Minister of Digital Affairs Huang Yen-nien announced during his term that within three months the Ministry would build an "Anti-Fraud Reporting and Lookup Network," using AI to fight AI: members of the public can paste suspicious messages into a platform, where AI analyzes and routes them to relevant agencies (health issues → MOHW, agriculture → Ministry of Agriculture), with collaborative defense mechanisms established with e-commerce and telecom operators, and a push to require real-name registration for online ads43. Notable in the governance philosophy: "The truth or falsity of content should not be judged by the government." The Ministry of Digital Affairs' planned disinformation committee initially has NGOs as committee members; if takedown is required, public power must still execute it43. Huang resigned in August 2025 at the end of his secondment, returning to Academia Sinica44.
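The routing idea behind the planned reporting network — classify a reported message, then hand it to the agency that owns the topic — can be sketched in a few lines. Everything below is an invented placeholder (the keyword lists, the agency names as routing keys, the fallback label); the real system's design has not been published at this level of detail.

```python
# A toy sketch of topic-based routing for reported messages. The keyword lists
# and agency mapping are invented placeholders, not the Ministry's actual design.

ROUTES = {
    "MOHW":                    ["doctor", "diabetes", "therapy", "medication"],
    "Ministry of Agriculture": ["tilapia", "milk", "pesticide", "produce"],
    "165 anti-fraud hotline":  ["transfer", "investment", "subsidy", "remit"],
}

def route_report(message: str) -> str:
    """Pick the agency whose keyword list matches the message most often."""
    text = message.lower()
    scores = {
        agency: sum(kw in text for kw in keywords)
        for agency, keywords in ROUTES.items()
    }
    best = max(scores, key=scores.get)
    # No keyword hit at all: a human, not the classifier, should decide.
    return best if scores[best] > 0 else "manual triage"
```

A real deployment would replace keyword counting with a language model, but the governance-relevant property is the same: the classifier only routes; it never rules on truth, which matches the stated philosophy that "the truth or falsity of content should not be judged by the government."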

TikTok: banned in the public sector, not banned for ordinary citizens

TikTok's legal status in Taiwan is often misunderstood. The Executive Yuan's 2019 "Principles for Restricted Use of Products Endangering National Cyber Security by Government Agencies" bans TikTok, Douyin, and Xiaohongshu on public-sector ICT equipment and in associated premises; ordinary citizens and the personal devices of civil servants are not within the restriction45. In 2024, the draft Anti-Fraud Crime Hazard Prevention Act went further, requiring TikTok, as an operator subject to Chinese-investment regulations, to designate a legal representative; the heaviest available sanction is network blocking46.

C2PA: making real content verifiable

On the technology front, the C2PA (Coalition for Content Provenance and Authenticity) protocol attempts to attach an unforgeable "ID" to digital content. Its steering committee includes Adobe, Microsoft, OpenAI, Google, Intel, BBC, and Sony47. OpenAI automatically embeds Content Credentials in DALL-E 3 outputs48. The Google Pixel 10 phone, released on 10 September 2025, automatically embeds C2PA Content Credentials at the native camera-app level, achieving Assurance Level 2 — currently the highest in the C2PA Conformance Program — and is the first smartphone to adopt the standard at the mobile-camera layer49.

The logic of this direction is to make real content verifiable — shifting the burden of proof from "the reader verifies" to "the content carries provenance." But C2PA is not yet widespread on short-form-video platforms like TikTok and YouTube Shorts, nor has it penetrated encrypted messaging like LINE groups.
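The core mechanism C2PA relies on, "hard binding," can be illustrated without the real format. The sketch below is a deliberately simplified stand-in: actual C2PA manifests are CBOR/JUMBF structures signed with X.509 certificate chains, and the dict here is not the real manifest schema. What the sketch does capture is the tamper-evidence idea — the manifest commits to a digest of the asset bytes, so any pixel-level edit breaks verification.

```python
# Simplified illustration of C2PA-style "hard binding": a provenance manifest
# commits to a digest of the asset bytes. Real C2PA manifests are signed
# CBOR/JUMBF structures; this stand-in shows only the tamper-evidence idea.
import hashlib

def make_manifest(asset: bytes, generator: str) -> dict:
    """Record who produced the asset and a digest of its exact bytes."""
    return {
        "claim_generator": generator,  # e.g. a camera app or an AI model
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
    }

def verify_binding(asset: bytes, manifest: dict) -> bool:
    """True iff the asset bytes still match the digest the manifest commits to."""
    return hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]

photo = b"original pixels"
manifest = make_manifest(photo, "camera app (illustrative)")
assert verify_binding(photo, manifest)                  # untouched asset verifies
assert not verify_binding(b"edited pixels", manifest)   # any edit breaks the binding
```

In the real protocol the manifest itself is cryptographically signed, so an editor cannot simply regenerate it after tampering — that signature, not the hash alone, is what shifts the burden of proof from "the reader verifies" to "the content carries provenance."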

Taiwan's anti-cognitive-warfare infrastructure

Beyond government tools, Taiwan has built a distributed verification ecosystem — the distribution itself is a form of resilience design:

| Organization | Founded | Core output |
| --- | --- | --- |
| Taiwan FactCheck Center (TFC) | 2018 | In 2025, produced 540+ fact-checks, 85 explanatory reports, 200+ Instagram cards, 46 video pieces; the "5 July Japan earthquake prophecy" check broke 1 million views50 |
| IORG (Information Operations Research Group) | 2019 | Annual Taiwan Information Environment Report, America-skeptic narrative tracking, Douyin investigations27 |
| Doublethink Lab | 2019 | Co-conducted the TikTok youth survey with NTHU Sociology and Academia Sinica Sociology12 |
| Cofacts | 2016 | A crowdsourced LINE bot developed by the g0v community51 |
| MyGoPen | 2015 | The only LINE service that simultaneously checks text, images, video, and audio52 |
| LINE Fact Check (official account) | | LINE's official portal integrating the four major fact-check platforms53 |
| Kuma Academy | 2021 | Co-founded by Puma Shen and Ho Cheng-hui; NT$600M donation from Tsao Hsing-cheng in 2022; civil-defense courses include cognitive warfare and information-warfare techniques26 |
| INDSR (Institute for National Defense and Security Research) | 2018 | Established the Cybersecurity and Decision Support Research Institute; publishes the National Defense and Security Biweekly54 |

The design significance of this distributed ecosystem is separation of powers: TFC operates by journalism standards, IORG by academic research, Cofacts by crowdsourcing, LINE official integration by platform layer — no single institution can monopolize the definition of "what is true." This matters more than any "Anti-Disinformation Act," because the latter will always run into the perpetual question, "who decides?"

Cross-generational dialogue: where to start

There is a counterintuitive observation at the conversational layer. LINE has more than 18 million users in Taiwan, with usage rates exceeding 90% in both the 40-49 and 50-65 age groups55. Adults over 65 make up 20.25% of fraud victims55. But multiple surveys suggest that, when receiving suspected disinformation, about 30% of older adults actively verify — more proactive than parts of the younger generation56.

The stereotype of "elders = victims" needs correction. The audience for information-literacy education spans the entire society; every age group has gaps in algorithmic literacy. The AI Society Research Institute at Soochow University proposes the concept of "Critical Algorithmic Literacy," defined as "Should we use it? On whose terms?" — a higher-order angle of scrutiny than "digital literacy," "information literacy," or "media literacy"57.

A concrete starting point for talking to elders can be the "Fact Check" account on LINE that they're already using — LINE's official portal integrating the four major fact-check platforms; you paste in the message and it gets checked53. Cofacts' LINE bot is similarly install-free and crowdsource-verified51. What these tools share is lowering the interface threshold, so elders can get checking results within their familiar social space.

International experience: from Ukraine to the EU

Taiwan is not alone. After the 2014 Crimea events, Ukrainian journalist Yevhen Fedchenko and faculty and students at the Kyiv-Mohyla Academy School of Journalism founded StopFake; by the 2022 full-scale invasion they had verified more than 3,000 Russian disinformation items58. After the full-scale war began, Western pro-Ukraine netizens spontaneously organized the North Atlantic Fellas Organization (NAFO), using satirical memes to rapidly counter Russian propaganda58 — the first appearance on a cognitive battlefield of purely civilian, tactical-level organized countermeasures.

On the legislative front, Article 50 of the EU AI Act, taking effect in August 2026, mandates that deepfakes must be clearly disclosed as AI-generated/manipulated; AI-generated text on matters of public interest must be labeled; providers must ensure outputs are marked in machine-readable form59. The European Commission released the first draft of a transparency code of practice on 17 December 2025; a second draft in March 2026; final version in June 202660.

US legislation is also progressing: the 2025 TAKE IT DOWN Act was signed into law by Trump, mandating platform takedowns of non-consensual intimate images22; the DEFIANCE Act passed the Senate by unanimous consent in January 2026 and is now in the House, granting deepfake-pornography victims civil-litigation rights23.

Cross-border fraud has reached new scale. On 14 October 2025, the US and the UK simultaneously sanctioned the Cambodia-based Prince Group, seizing Bitcoin assets worth US$15 billion (the largest in US-UK history); Taiwan seized 26 luxury cars (worth NT$400 million); the sanctions list included 9 Taiwanese companies and 3 Taiwanese nationals; the group's offices were located in Taipei 101 and the Heping Daadi residence61. This case ties AI deepfake tools, transnational fraud, and Taiwan's local structure into a single thread.

One structure, three distances

Wang Hung-en said: "A cluster of accounts that always come online together for moral support might genuinely just be a group of friends." This sentence has to be read alongside Puma Shen's "80% has nothing to do with true or false; it's narrative attack." The first reminds researchers and government not to treat "synchronization" as a sole accusation; the second reminds readers not to treat "fact-checking" as the sole defensive line. Together, the two lines outline the hardest two faces of cognitive warfare: alert and yet restrained.

Back to the structure. A workable answer is composed of three distances cooperating:

The state layer — the government's job is to set the floor, not to define truth and falsehood. MOHW finding account applicants under the Physicians Act, MoDA building the Anti-Fraud Reporting Network, the Anti-Fraud Special Act pushing Meta into one extra verification step — these are structural moves at the legal layer. What the government must avoid is becoming the "judge of truth" — whoever's on stage will have their own version of truth. The lesson of the Digital Intermediary Act's withdrawal still stands.

The community layer — TFC, IORG, Doublethink Lab, Cofacts, MyGoPen, Kuma Academy, INDSR, plus LINE's official Fact Check — this is separation of powers. Eight organizations each go by their own standards, supervising one another and patching one another's blind spots; no single capture is fatal.

The individual layer — every person's 15-second action. The next time a "doctor's recommendation" or "official subsidy" video lands in your LINE group, spend 15 seconds clicking into TFC or MyGoPen and searching the keywords to see whether it's been fact-checked; or paste it into LINE's official Fact Check bot and let the machine check. The cost of the action is low, but multiplied a thousand- or ten-thousand-fold, it slows the spread of disinformation.

When an elder forwards an AI fake-doctor video, it's because they care about their family's health; when a young person shares a "Unicorn Calls Like That" clip, it's because they have real grievances about Taiwanese prices. What cognitive warfare exploits with precision are exactly these irreplaceable real emotions. It then uses fake authority or exaggerated conclusions to translate emotion into a directional cognitive bias.

"Doctor Chen Chih-ming" has disappeared from YouTube. Pa Chiung and Chen Po-yuan are still filming new documentaries. Puma Shen is still attending the Legislative Yuan. TFC's fact-checkers, Cofacts volunteers, Kuma Academy lecturers, IORG researchers — all of them will be at their posts again tomorrow.

The ultimate battleground of cognitive warfare is the relationship of trust itself. And that relationship of trust isn't on Douyin, isn't in Facebook's ads back end, isn't in CCTV's eight-minute features. The relationship of trust lives in the moment when you're willing to spend 15 seconds searching keywords, then turn to your family and say, "I checked — this one's fake."

Further reading

  • Mountain-Makers: A Century's Wager (造山者:世紀的賭注) — Hsiao Chu-jen's 2025 documentary, five years interviewing 80+ semiconductor pioneers; in 2026 it travels to Purdue / Wisconsin / Michigan, the three CHIPS Act investment hubs

  • Threads in Taiwan — Taiwan's social-media migration history and the platform structure of the information battlefield

  • Miin (迷音) — The open-source ecosystem to which g0v's Cofacts belongs

  • Taiwan Online Community Migration — Understanding the role of platforms like PTT, Dcard, and Threads in cognitive warfare

  • Taiwan Media and Press Freedom (台灣媒體與新聞自由) — News ecosystem and platforms' responsibility in cognitive warfare

  • Puma Shen — One of the leading researchers on cognitive warfare; in 2025, the first elected Taiwanese politician to be placed under criminal investigation by China for "the crime of secession"

  • Poisoned Potatoes: Beyond the 200 ppm — There's 30 ppm, 14 Days, and 15 Years of Food-Safety Scars — A dissection of how the TAO's April 2026 "offering" narrative stepped precisely onto the 15-year food-safety scar accumulated since the 2011 plasticizer scandal

References

Footnotes

  1. YouTuber Pa Chiung's new documentary exposes CCP united-front against Taiwan — Central News Agency, 28 December 2024; details of Pa Chiung + Chen Po-yuan's China United Front Documentary upper episode
  2. AI fake doctor impersonates medical center; Taipei VGH clarifies no such person — CNA, 28 December 2025; main reveal report and Lee Wei-chiang Vice-Superintendent statement, plus TFC technical analysis
  3. Reuters: CCP uses Taiwanese local voices to wage information war on Taiwan — CNA citing Reuters, 17 April 2026; the 1,076 accounts, 560,000 videos, 57 Taiwanese figures data
  4. Chongqing PSB opens criminal investigation against Puma Shen — China's Ministry of Public Security website, 28 October 2025; verbatim notice on "secession" criminal investigation
  5. Free speech, cognitive warfare, research limits, moral dilemmas, and double standards — Wang Hung-en on Medium; long reflection essay by the UNLV political-science assistant professor
  6. YT-viral doctor exposed as fake; 23,000 viewers all duped — Storm Media, 28 December 2025; subscriber and view count details
  7. New Taipei Department of Social Welfare confirms elder-subsidy AI fake message — Liberty Times, 2025; specific identification cues
  8. Entertainer Wang Jen-fu's family hit by AI voice fraud — Global Views Monthly; concrete Taiwan case of voice-cloning fraud
  9. Taiwan's first AI voice-fraud case cracked — Mirror Weekly; CIB's 20+ conversational scenarios, 70 victims, single-case NT$20M loss
  10. NSB reveals 5 CCP cognitive-warfare tactics; over 2.31M contentious items in a year — Storm Media, 2025; public summary of the NSB's 2025 Analysis of CCP Cognitive Warfare Operations
  11. IORG: America-skeptic narratives and where they come from — Information Operations Research Group; 84-narrative tracking research
  12. Doublethink Lab TikTok youth survey 2025 — Doublethink Lab; abridged version of three interlinked reports
  13. Puma Shen: 80% of public-opinion warfare has nothing to do with true or false; it's narrative attack — Foundation for Excellent Journalism Award; record of Puma Shen's lecture
  14. IORG: Disinformed democracy — Taiwan disinformation research — Dissection of the 2020 election-interference techniques; direct involvement of Chinese officials as a new phenomenon
  15. NSB: AI virtual newsreaders become a CCP election-interference tool — Epoch Times, January 2026; new features post-2024 presidential election
  16. AI-altered Lai Ching-te "swearing" video — Liberty Times, 2025; DPP's Hsu Shu-hua: "the disinformation profits the CCP"
  17. Threads @derek_foxx warns about Unicorn Calls Like That — Threads, 18 February 2025; user demonstrates how to reverse-engineer suspicious channels by accent and trend
  18. Threads @derek_foxx follow-up observations — Follow-up post documenting fan-base behavior patterns of the suspicious channel
  19. Threads user @kuilam409 observations — Another user lists common controversial topics on the channel
  20. Steve Kramer admits faking Biden's voice in NH primary robocall — NBC News, 2024; judicial details of the US case
  21. Taylor Swift deepfake pornography controversy — Wikipedia; the 2024 incident and the legislative push for the DEFIANCE Act
  22. TAKE IT DOWN Act — Wikipedia; signed into law by Trump on 19 May 2025, mandating platform takedowns of non-consensual intimate images
  23. DEFIANCE Act S.1837 in Senate — US Congress; passed the Senate by unanimous consent in January 2026
  24. Taiwanese YouTuber releases sequel documentary exposing CCP's united-front tactics — VOA Chinese, December 2024; lower-episode view count + Chen Po-yuan backlash details
  25. Pa Chiung's documentary reveals CCP united-front; MAC promises tough investigation — Liberty Times; the claim of 200,000 Taiwanese holding Chinese ID cards is made in the film
  26. Puma Shen — Wikipedia — Kuma Academy scale, the Tsao Hsing-cheng NT$600M donation, education and career
  27. IORG official site — Background and research scope of the Information Operations Research Group
  28. TAO publishes "stubborn Taiwanese-independence elements" list — TAO, 14 October 2024; Puma Shen on the list
  29. CCTV ~8-min exposé of Puma Shen, threatens global apprehension — Liberty Times, October 2025; Puma Shen's "one country on each side" reply
  30. Lin Wei-feng pretend-troll case: pro-green writer posts pro-China on PTT — Liberty Times, 2021; a domestic Taiwanese information-operation case
  31. 1450 cyber army — Wikipedia; both the official and opposition versions of "1450"
  32. Scholars criticize green camp's labeling of dissent; the 47-account fiasco — The News Lens; other-party auxiliary forces and labeling controversies
  33. Puma Shen 2019 traces the Kansai Airport incident's source — Newtalk; Puma Shen's public lecture exposing the Chinese Weibo source
  34. Yang Hui-ju first-instance 6 months: PTT setting the wind direction — CNA, 12 November 2021; the LINE group "Kaohsiung Group" one-minute mobilization detail
  35. Su Chii-cherng — Wikipedia — Suicide-note content; event timeline
  36. Yang Hui-ju Kansai Airport case first-instance ruling — CNA; same as [^34]
  37. Yang Hui-ju, who ran cyber-army insulting the Osaka office, granted dismissal — ETtoday, 21 February 2025; charges dismissed after the Constitutional Court ruled the offense "insulting public office" unconstitutional
  38. TPP's Chang Chi-lu: labeling critics may itself be the green camp's cognitive warfare — UDN, 2021; TPP stance
  39. Digital Intermediary Service Act controversy — Wikipedia; full timeline of the 2022 withdrawal
  40. Meta estimated 10% of 2024 sales from scam and fraud ads — CNBC citing Reuters, 6 November 2025; US$16 billion, 95% threshold, Taiwan and Singapore legal-mandate verification
  41. MOHW: AI fake doctors fined under the Physicians Act — CNA, 28 December 2025; cross-agency cooperation plan
  42. MOHW reviewing strengthened AI-content enforcement — TechNews, 29 December 2025; following the US model of disclosing physician identity
  43. Ministry of Digital Affairs begins testing the anti-fraud reporting and lookup network — iThome, 2024; "AI vs. AI" routing architecture; the policy philosophy that NGOs serve on the disinformation committee
  44. Huang Yen-nien resigns at end of secondment, returns to Academia Sinica — Newtalk, 22 August 2025
  45. TikTok's legal restriction status in Taiwan — Ministry of Digital Affairs clarification; banned in the public sector vs. unrestricted for ordinary citizens
  46. Anti-fraud crime hazard prevention act draft TikTok provisions — The News Lens; legal-representative and network-blocking clauses
  47. C2PA official site — Coalition for Content Provenance and Authenticity; steering-committee members
  48. OpenAI joins C2PA steering committee — C2PA official; Content Credentials embedded in DALL-E 3
  49. Pixel 10 brings C2PA Content Credentials to photos — Google Security Blog, 10 September 2025; achieving Assurance Level 2
  50. TFC 2025 annual review — Taiwan FactCheck Center annual output statistics (site temporarily unstable at time of writing; URL is correct)
  51. Cofacts — g0v community crowdsourced fact-check LINE bot
  52. MyGoPen — Fact-check service across text/image/video/audio
  53. LINE Fact Check — LINE's official portal integrating the four major fact-check platforms
  54. Institute for National Defense and Security Research — Founded 1 May 2018; Cybersecurity and Decision Support Research Institute
  55. Senior LINE dependence and information fraud — Common Health; 18 million users; 90% usage rate among ages 40-65
  56. Content-farm survey: older adults actively verify — The Reporter; 30% of older adults actively report
  57. Critical Algorithmic Literacy — AI Society Research Institute, Soochow University (RCAIS)
  58. StopFake and NAFO: Ukraine's civilian self-organization — The Reporter; civilian tactical-level countermeasures from 2022 onward
  59. EU AI Act Article 50 Transparency Obligations — EU AI Act; deepfake and AI-generated-content labeling obligations
  60. EU Code of Practice for Transparency in AI Content — European Commission; transparency code-of-practice timeline
  61. Prince Group transnational criminal organization case — Wikipedia; 14 October 2025 US-UK sanctions, US$15B Bitcoin, Taiwan seizure details
About this article: This article was collaboratively written with AI assistance and community review.