The Republic of Agora

Combating Disinformation


An Agenda for U.S.-Japan Cooperation

Christopher B. Johnstone and Leah Klaas | 2024.07.08

Disinformation is a daunting challenge for the United States and its allies and partners. This report assesses Chinese and Russian disinformation efforts in the Indo-Pacific and charts an agenda for the United States and Japan in combating it.

Introduction

Disinformation and other forms of “information operations” have become tools of choice for authoritarian regimes seeking to coerce and influence adversaries in the “gray zone” below the threshold of military conflict. Russia famously employed these tools in its effort to influence the U.S. presidential election in 2016 and continues to use them as part of its war of aggression against Ukraine. China uses disinformation to influence political developments across the Indo-Pacific region, including through an effort to influence the results of Taiwan’s presidential election in January 2024. As Japan’s National Security Strategy notes, “Grey zone situations over territories, cross-border cyberattacks on critical civilian infrastructures, and information warfare through spread of disinformation, are constantly taking place, thereby further blurring the boundary between contingency and peacetime.”

This report explores the disinformation threat environment in the Indo-Pacific region, with a particular focus on China and, to a lesser extent, Russia. It examines tools and methods for combating disinformation and seeks to chart an agenda for deeper collaboration between the United States and Japan in addressing this growing challenge. This report incorporates the results of a closed-door conference held at CSIS in February 2024 that featured experts on this subject from the United States and Japan.

Disinformation and other forms of “information operations” have become tools of choice for authoritarian regimes seeking to coerce and influence adversaries in the “gray zone” below the threshold of military conflict.

Scope and Definitions

Many terms are used, often interchangeably, to describe the manipulation or selective use of information for malign purposes. These terms carry specific connotations, which “can lead to assumptions about how information spreads, who spreads it, and who receives it.” It is therefore important to establish a common lexicon for this report:

  • Information operations (also referred to as “influence operations”) encompass various strategies aimed at influencing a targeted audience’s decisionmaking, beliefs, and opinions or otherwise affecting their mindset, as defined by Kuwahara Kyoko. John Garnaut further describes these as a type of “political interference”: covert, corrupt, or coercive actions designed to manipulate public opinion. Information operations serve as the umbrella concept that encompasses the other terms identified here.

  • Disinformation, a key component of information operations and the focus of this report, is characterized by a deliberate intention to deceive and manipulate. Perpetrators deliberately and often covertly disseminate false information, making attribution challenging. Externally originated disinformation narratives often intertwine with domestic ones and commercial sensationalism, further complicating identification. It is crucial to recognize that disinformation may not be entirely false; effective disinformation typically blends truth (verifiable facts) with fabrication (unverifiable information).

  • Misinformation, on the other hand, refers to verifiably false information disseminated regardless of intent to mislead.

  • Malinformation involves sharing genuine information with the intent to cause harm, often by exposing private details to the public sphere.

  • Lastly, propaganda, which often is overt in origin, is orchestrated by operators seeking to influence public perception through information that can be true but is typically of a biased or misleading nature.

Assessing the Threat Environment

China and, to a lesser extent, Russia are the primary sources of information operations and disinformation in the Indo-Pacific. This section highlights the objectives and strategies of China and Russia and describes specific narratives they seek to propagate.

China

Chinese information operations have expanded significantly in scope and sophistication in recent years, particularly after the onset of the Covid-19 pandemic as Beijing sought to shape international views of the virus’s origin. Beijing is working to promote both positive views of China in the region and to stimulate negative views of the United States and its allies. Narratives that China seeks to spread vary regionally and closely relate to its larger strategic objectives, including degrading the willingness of partners to work closely with the United States and weakening Southeast Asian unity on regional security issues such as territorial disputes in the South China Sea. Recent research indicates that China’s primary goals — and the associated narratives used to achieve them — are as follows:

  1. Promote a positive view of China and of the Chinese Communist Party (CCP), including bolstering Xi Jinping’s image. For example, Beijing depicted itself as a benevolent aid giver during the Covid-19 pandemic. China frequently attempted to arrange handover ceremonies and statements of gratitude from countries where it sent medical supplies and vaccines. In fact, however, 99 percent of its medical supply exports and 96 percent of its vaccine exports were sales, not donations.

  2. Promote China’s style of governance as a model for developing countries to emulate, weakening the international liberal world order and the influence of democracy. Xi Jinping is known to assert, for example, that China’s system offers “a new option for other countries and nations who want to speed up their development while preserving their independence.”

  3. Encourage openness to Chinese investment and strategic engagement abroad by highlighting the benefits of the Belt and Road Initiative and spreading the narrative that cooperation with China produces “tangible economic benefits.” In one such instance, China emphasized the connection between population surveillance and economic growth to Nigerian state-level officials when pitching a comprehensive “safe city” package, a move designed to make people associate norms of surveillance and information control with prosperity.

  4. Marginalize, demonize, or suppress anti-CCP voices and commentary that presents the Chinese government and its leaders in a negative light. Examples of this include the recent sentencing of Australian scholar Yang Hengjun and the detention of Hokkaido University professor Iwatani Nobu. Yang is a pro-democracy activist who was arrested for espionage in 2019. Iwatani is a Japanese historian apparently arrested for possession of materials related to the Second Sino-Japanese War (1937−1945) — suggesting that Beijing is focused on controlling historical narratives related to the origins of the People’s Republic of China.

  5. Foster doubts about U.S. intentions and commitment to the region and undermine U.S. relationships with key allies and partners. For example, between 2018 and 2020, China used a network of 161 Facebook and Instagram accounts to promote negative public opinion in the Philippines of the U.S.-Philippines alliance. The disinformation campaign portrayed U.S. actions in the South China Sea and Taiwan Strait as aggressive, praised Chinese naval operations in the South China Sea, and promoted Filipino leaders with favorable opinions toward China’s regional policies.

These various narratives are propagated and supported by the Chinese government, but they also emerge organically both inside China and among populations abroad who have been influenced by the CCP’s strategic messaging. A prime example of this dynamic is the disinformation surrounding the release of treated water from the Fukushima nuclear power plants (discussed in more detail later in this report). While Beijing has sought to promote a false narrative that the water release presents a health risk to the Indo-Pacific, similar narratives have emerged organically, both inside Japan and in the region.

Chinese information operations have expanded significantly in scope and sophistication in recent years, particularly after the onset of the Covid-19 pandemic as Beijing sought to shape international views of the virus’s origin.

Chinese Disinformation Strategies: Main Lines of Effort

Analysis shows that China uses a variety of techniques to spread preferred narratives and influence the information environment. Its strategies and tactics reflect increasing sophistication and employ novel techniques and goals:

  1. Generate pro-China multimedia content that is accessible and compelling and that appears credible. Beijing creates approved content and makes it readily accessible to Chinese media outlets around the world, including in the Mekong region, Kenya, Brazil, and many other nations worldwide; in some cases, it offers content for free or at reduced rates. China also influences foreign opinions through content-sharing agreements with media outlets in countries ranging from Australia to Zimbabwe to Peru to Thailand.

  2. Amplify pro-China content and maximize channels for distributing it. All major state media outlets have accounts on X (formerly Twitter) and Facebook, and some make use of YouTube and Instagram. Collectively, these accounts have substantial followings, in some cases in the tens of millions. These approaches are complemented by the development and deployment of social media networks such as WeChat and TikTok, which are increasingly influential globally. WeChat also performs a surveillance function, monitoring politically sensitive conversations among users outside of China. The U.S. Department of State’s Global Engagement Center found that as of August 2023, China has roughly 333 “diplomatic and official media accounts,” with nearly 65 million followers.

  3. Influence local media to suppress criticism and promote China-friendly narratives while spreading narratives critical of the United States and other countries China sees as adversaries. China employs fear to suppress information and narratives it finds unfavorable, in some cases by threatening physical violence, taking legal action, or restricting access to the lucrative Chinese market for foreign media outlets, journalists, and academics. Punishment of academics or journalists who write content critical of the CCP’s leadership incentivizes self-censorship both within China and internationally. Opportunistic reporting is another tactic, in which China amplifies stories that undermine its adversaries and stoke historical animosities. China has sought to disseminate news stories exaggerating safety concerns in communities surrounding U.S. bases in Japan, including crimes involving U.S. military personnel in Okinawa, and issues relating to the history of Korean “comfort women” during the Japanese colonial period.

China’s disinformation campaigns are increasingly sophisticated and far-reaching. Prior to Taiwan’s presidential election in January 2024, for example, Beijing employed a comprehensive strategy that ranged from framing the choice between the Democratic Progressive Party (DPP) and the Kuomintang (KMT) as one of “war” versus “peace” to exploiting local issues and using local proxies to promote rumors of scandal involving DPP leaders. The effectiveness of China’s disinformation efforts remains open to question, however. Taiwan has developed a robust rapid-response infrastructure to combat disinformation, and the campaign against it ultimately failed; elsewhere, there are numerous examples of propaganda failing to resonate due to a lack of credibility, anti-China sentiment, and competition from foreign media outlets with well-established reputations. A recent report from the Center of Analysis for Democracy finds that pro-China propaganda on popular streaming platforms in Latin America and Spain has failed to attract Spanish-speaking audiences. Research on strategic messaging in Tibet shows how anti-China sentiment limits the efficacy of the CCP’s “patriotic education programs,” even inside of China. And the Center for Naval Analyses highlights widespread suspicion of China in the Mekong region.

The effectiveness of China’s disinformation efforts remains open to question.

It is also unclear whether China itself possesses tools for measuring the effectiveness of its information operations; the bureaucratic apparatus surrounding these campaigns appears to emphasize quantity over impact. For example, China has expanded state-owned media outlets globally over the past decade, reaching 181 bureaus in 142 countries as of August 2021. Some of these outlets produce massive numbers of articles, such as the bureau in Nairobi, whose 150 journalists and 400 staff produce 1,800 stories a month. However, the rapid expansion of state-owned outlets has not corresponded to a general increase in readership or perceived credibility.

That said, quantity can take on a quality of its own. In 2020, Google suspended tens of thousands of YouTube accounts linked to Chinese information operations. Many of them pushed disinformation that was poorly created and easy to identify as false. But by repeatedly exposing users to the same false narratives, these accounts sought to create a sense of familiarity and thereby convince the public of the accuracy of their narratives. As a report by the Institute for Strategic Research at the French Ministry for the Armed Forces explains, “Actors involved in disinformation campaigns will not always bother adopt [sic] an appearance of authenticity for the information they propagate. This is the reason why Chinese actors, in particular, seem to put quantity before quality.”

Russia

Although Russia’s information operations have been thoroughly studied, its efforts to shape the information environment in Japan and the Indo-Pacific are less well known.

Russia’s global efforts currently focus on undermining support for Ukraine by pushing the narrative that Ukraine was always “Russian” and sowing doubt about U.S. leadership, commitment, and intentions — narratives that also impact the Indo-Pacific. The U.S. Department of State’s Global Engagement Center (GEC) analyzed seven Kremlin-aligned disinformation proxy sites that amplified false narratives critical of the United States around the world. Japan and Australia were among the Indo-Pacific nations where narratives resonated most and where accounts sharing the content were concentrated. For example, content from the Russian-registered website SouthFront, which pushed “deep-state” conspiracy theories on the origins of the Covid-19 pandemic and U.S. border-wall funding, was particularly popular among Japanese media users.

Although Russia’s information operations have been thoroughly studied, its efforts to shape the information environment in Japan and the Indo-Pacific are less well known.

Regarding Japan, Russia attempts to spread the narrative that the two countries’ dispute over the Northern Territories — which the United States recognizes as Japanese — is an artificial creation and that Japan is a remilitarizing nation acting at Washington’s behest. Like China, Russia seeks to promote self-determination narratives in Okinawa to sow division domestically. It leverages anxieties in Japan about China’s military buildup by asserting that Tokyo should maintain good relations with Moscow to avoid pushing Russia toward China — a narrative that is far less effective since the full-scale invasion of Ukraine in 2022.

Russia’s disinformation and propaganda ecosystem is made up of a “collection of official, proxy, and unattributed communication channels and platforms.” The GEC breaks these channels into five pillars: official government communications, state-funded global messaging, cultivated proxy sources, weaponized social media, and cyber-enabled disinformation.

Russian disinformation tactics also encompass “offline” strategies aimed at influencing foreign journalists, academics, and policymakers by inviting them to Russia, granting them access to high-ranking officials, and promoting narratives favorable to Moscow through platforms such as the Valdai Discussion Club. Political commentator Tucker Carlson’s February 2024 interview with Vladimir Putin is an obvious example of this. As “agents of influence,” these individuals serve as (potentially) unwitting conduits for Kremlin narratives.

Collaboration between China and Russia

Experts on disinformation continue to debate the degree to which China and Russia explicitly coordinate information operations. Hosaka Sanshiro argues that cooperation in this realm has historically been a “forbidden zone,” with intelligence agencies from both nations protecting their independence and avoiding coordinated efforts, in part due to diverging geopolitical interests. The GEC’s 2023 special report argues the opposite, noting that in 2020 China and Russia agreed to “jointly combat disinformation [and] offer an accurate account of facts and truth.” The report outlines numerous examples of where China’s messaging amplified or mirrored Russia’s messaging, particularly on the war in Ukraine.

Both nations seek to capitalize on underlying tensions in Okinawa related to the U.S. military presence there, hoping to sow division and undermine support for the United States. There are anecdotal examples of Russian and Chinese platforms echoing similar talking points or relying on the same proxy voices, such as an Indian geopolitical expert who has contributed to both the Global Times and Sputnik. At the very least, there is clear evidence that both countries opportunistically leverage disinformation and narratives that align with their geopolitical agendas.

While it is unclear if these examples reflect explicit and deliberate cooperation, the increasing sophistication of China’s information operations clearly reflects learning from Russian strategies and techniques. A 2020 Freedom House report highlights the rapid evolution of Chinese disinformation methods, indicating a deliberate effort to adapt and refine strategies. Chinese attempts to meddle in social media, starting as early as 2017, appear to have drawn inspiration from Russia, though they remain less sophisticated. This gap may narrow as China continues to refine its disinformation techniques, potentially rivaling Russia’s expertise in the future.

The increasing sophistication of China’s information operations clearly reflects learning from Russian strategies and techniques.

Furthermore, China’s sponsorship of journalists who parrot official stances on Xinjiang reflects a concerted effort to control narratives and shape perceptions, mirroring Russia’s use of state-sponsored media outlets. Chinese authorities have also delved into “offline disinformation” methods taken from Russia’s playbook, exemplified by the CCP United Front Work Department’s activities in Australia and New Zealand, which aim to silence dissent and exert influence through financial largesse and coercive networks. A 2016 conference in Beijing that focused on Okinawa offers another example of how China uses in-person engagements strategically to exchange tactics and narratives.

CHINESE AND RUSSIAN INFORMATION OPERATIONS IN AND AROUND JAPAN

Given Japan’s geographical proximity to China, economic significance, and position as a U.S. ally, it is an inevitable target for Chinese information and disinformation operations. However, China’s efforts within Japan appear to be largely ineffective. Disinformation alone has been unable to offset the increasingly negative perceptions of China’s behavior among the Japanese public. Indeed, annual opinion surveys conducted by the Japanese government indicate historically high levels of confidence in the United States — and historically high levels of distrust toward China. Other factors, such as the limited presence of foreign media in Japan and the country’s distinct cultural and linguistic characteristics, contribute to the diminished impact of such influence operations.

However, Japan has become the target of Chinese disinformation campaigns elsewhere in East Asia, Southeast Asia, and the Pacific. Beginning in January 2023, China began a disinformation campaign to inflame international public opinion regarding Japan’s release of treated wastewater from the Fukushima nuclear reactor meltdown. In social media posts and news articles, China referred to “nuclear-contaminated wastewater” instead of “treated wastewater,” even though the International Atomic Energy Agency (IAEA) determined that the release would have little impact on humans or the environment. Between January and August 2023, when the first batch of wastewater was released, posts by Chinese officials, state media, and pro-China influencers mentioning “Fukushima” increased by more than 1,500 percent.

Given Japan’s geographical proximity to China, economic significance, and position as a U.S. ally, it is an inevitable target for Chinese information and disinformation operations.

Chinese disinformation related to Fukushima also targeted non-Japanese audiences. For example, the Global Times published around 126 English-language articles between January and August 2023 about the wastewater release. Similarly, Chinese state media ran at least 22 advertisements in languages such as English, German, Portuguese, and Khmer, emphasizing risks associated with the release.

Most concerningly, China amplified narratives questioning the accuracy of the IAEA report on the treated wastewater, implying that Japan had bribed the agency to support the release. On June 21, 2023, a South Korean media report surfaced that falsely claimed that Japan had made a €1 million ($1.1 million) political donation to the IAEA to resolve differences of opinion about the safety of the wastewater. Japan’s Ministry of Foreign Affairs responded the following day with a press release unequivocally denying these claims. Yet, as reported in the Global Times, China’s Ministry of Foreign Affairs continued to call for Japan to explain why these allegations surfaced.

Strategies for Combating Disinformation

The growing threat of disinformation across the globe poses a daunting challenge for the United States and its allies and partners. Liberal democracies, with their commitment to freedom of expression and open media environments, are particularly vulnerable and face an asymmetric disadvantage. While banning the use of many Western social media platforms internally, for example, China uses those same platforms to influence the information environment around the world. To address this challenge, scholars and experts have identified various strategies and techniques involving technology, education, and regulation which, when used in combination, can help detect and mitigate information operations.

Liberal democracies, with their commitment to freedom of expression and open media environments, are particularly vulnerable and face an asymmetric disadvantage.

Public Education and Media Literacy

Education and media literacy aim to address the root causes of political polarization and division that contribute to the spread of deceptive narratives. Recognizing that disinformation can both influence and be influenced by existing polarization, it is crucial to tackle underlying societal divisions to effectively mitigate its impact. While disinformation usually originates from malicious actors, its dissemination is often facilitated by unwitting individuals, particularly those dissatisfied with the existing political status quo.

Both the United States and Japan have a long way to go in improving media literacy and building resilient societies capable of critically assessing information. As of 2021, they ranked 15th and 16th, respectively, out of 44 countries worldwide in terms of media literacy education.

In the United States, there are currently no nationwide initiatives for enhancing media literacy and preparing Americans to think critically about mis- and disinformation. However, the nonprofit group Media Literacy Now has reported progress at the state level, with 19 states passing legislation providing funding for or mandating media literacy education in primary schools and in professional development programs. In addition, the Center for Information, Technology, and Public Life has developed a freely accessible syllabus for professional disinformation researchers or higher-education institutions in the United States that explicitly addresses how social inequalities and power structures shape disinformation. The Digital Inquiry Group has also partnered with Google to develop Civic Online Reasoning, a free website with curricula and research on countering information operations. However, these efforts still pale in comparison to concerted nationwide efforts in other countries, including Australia, Belgium, Canada, Finland, and Singapore.

Japan also lacks a comprehensive nationwide effort to promote media and information literacy, particularly in the context of mis- and disinformation. As in the United States, there are some isolated efforts to address the challenge. The Japan Broadcasting Corporation (NHK) has introduced an online resource for teaching media literacy to elementary and middle school children, for example. The SmartNews Media Research Institute generates similar resources. In addition, the Japan Association for Educational Media Study has begun incorporating the topic of media literacy into its annual conferences for educators.

In Europe, initiatives such as the regular defense courses offered by the International Centre for Defense and Security to opinion leaders — some of which have focused on elements of countering information operations — offer a blueprint for fostering high-level policy dialogue and curricula that can help maintain vigilance against disinformation. Nongovernmental organizations in Estonia, Latvia, and Lithuania have launched a multilateral disinformation-monitoring platform focused on the Russian threat. Similarly, NATO has established the Strategic Communications Centre of Excellence to monitor Russian information operations and share research on the latest technological developments that Russia uses to spread disinformation. However, in the Indo-Pacific region, the absence of a larger multilateral security apparatus like NATO complicates coordinated responses to disinformation.

Technology-Based Approaches

Artificial intelligence (AI) and other emerging technologies are a double-edged sword in dealing with information operations. While there is increasing attention to how AI can help combat disinformation, “deepfakes” and other forms of synthetic content can also be used to amplify false narratives, creating highly convincing yet fabricated content at significant scale. With at least 64 countries worldwide holding elections in 2024, deepfakes that impersonate political candidates in an effort to sway voters are a particular concern.

While emerging technologies can exacerbate the mis- and disinformation problem, they can also play a pivotal role in the ongoing battle against it, offering various tools and capabilities to identify, track, and combat deceptive content. Defensive AI systems are designed to detect patterns of behavior indicative of coordinated disinformation efforts, contributing to the early identification and mitigation of false narratives. AI tracking mechanisms deployed by platforms such as Overwatch and Google enable the monitoring of online activities to identify suspicious behavior and content. Companies and platforms such as Inflection AI, Google Gemini, and Blackbird AI offer narrative-identification and monitoring tools that recognize cultural nuances. These tools allow automatic response mechanisms to analyze comments, engage effectively with diverse audiences, and respond convincingly in real time, combating disinformation at its source. Additionally, AI-enabled social-monitoring platforms can categorize narratives and extract insights from articles, enabling the identification of common threats and the development of targeted responses. AI and other technologies can play an integral role in advancing the following methods of combating influence operations.
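To make the pattern-detection idea concrete, the sketch below flags pairs of accounts that post near-identical text within a short time window, one of the simplest signals of coordinated inauthentic behavior. This is an illustrative toy, not any vendor’s actual system; the account names, posts, and thresholds are invented.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Each post: (account, timestamp in minutes, text). Sample data is invented.
posts = [
    ("acct_a", 0, "The water release poses grave danger to the Pacific"),
    ("acct_b", 3, "The water release poses grave danger to the Pacific!"),
    ("acct_c", 5, "the water release poses grave danger to the pacific"),
    ("acct_d", 400, "Local weather looks great today"),
]

def coordinated_pairs(posts, sim_threshold=0.9, window_minutes=30):
    """Flag account pairs posting near-identical text close together in time."""
    flagged = set()
    for (a1, t1, x1), (a2, t2, x2) in combinations(posts, 2):
        close_in_time = abs(t1 - t2) <= window_minutes
        similar = SequenceMatcher(None, x1.lower(), x2.lower()).ratio() >= sim_threshold
        if close_in_time and similar:
            flagged.add(frozenset((a1, a2)))
    return flagged

pairs = coordinated_pairs(posts)
# acct_a, acct_b, and acct_c form a near-duplicate cluster; acct_d does not.
```

Production systems score many more signals (shared links, posting cadence, account age, network structure) and cluster at scale, but the underlying logic is the same: coordination leaves statistical fingerprints that individual posts do not.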

FACT-CHECKING AND DEBUNKING

Fact-checking and debunking methods play crucial roles in combating disinformation. While fact-checking typically adopts a broad and impartial approach, debunking is more strategic and targeted, aiming to address specific actors or topics.

Given the longstanding nature of the Russian threat, European allies have pioneered efforts to fact-check and debunk disinformation. The NATO Strategic Communications Centre of Excellence delineates six primary goals of debunking, including asserting the truth, cataloging evidence of false information, and attributing sources of disinformation. Journalists and civil society organizations are also instrumental in leading regional fact-checking initiatives, collaborating with governments and private-sector entities such as Google. Examples of such organizations in the Indo-Pacific include FactCheck Initiative Japan (FIJ), Thai Fact-Checking Network, Annie Lab at the University of Hong Kong’s Journalism and Media Studies Centre, Rappler IQ in the Philippines, and CekFakta in Indonesia.

AI-supported fact-checking — which incorporates manual, human verification of automated checks — can be critical for an effective response. Organizations such as The Factual, Check by Meedan, Logically, Full Fact, and FIJ all implement AI tools for fact-checking. These tools rely on natural language processing (NLP) and information retrieval techniques such as speech recognition, reverse image searching, video forensics, and semantics analysis to assess whether information is likely to be false or misleading. Fact-checking — whether supported by AI or not — also serves as a fundamental prerequisite for prebunking efforts (discussed in the next section).
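The retrieval step underlying such tools can be illustrated with a minimal sketch: match an incoming claim against a database of previously fact-checked claims using bag-of-words cosine similarity, and return the stored verdict if the match is strong enough. The claims, verdicts, and threshold below are invented for illustration; production systems use far richer NLP than word overlap.

```python
import math
from collections import Counter

# Tiny database of previously fact-checked claims (invented examples).
fact_checks = {
    "treated water from fukushima is dangerous to humans": "FALSE - contradicted by IAEA review",
    "japan donated 1 million euros to the iaea": "FALSE - denied by Japan's Ministry of Foreign Affairs",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(claim, threshold=0.5):
    """Return (matched claim, verdict) for the closest stored fact-check, if any."""
    query = Counter(claim.lower().split())
    best = max(fact_checks, key=lambda c: cosine(query, Counter(c.split())))
    if cosine(query, Counter(best.split())) >= threshold:
        return best, fact_checks[best]
    return None

match = retrieve("treated water from fukushima is dangerous to humans")
```

Real pipelines add claim detection (finding check-worthy sentences), semantic matching that survives paraphrase, and a human verification step before any verdict is published.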

Japan is somewhat late to the game in developing a fact-checking ecosystem. This is due in part to a lack of urgency — at least until recently — as disinformation was perceived more as a problem for the United States and Europe. The relatively delayed digitalization of mainstream media, coupled with the significant power held by Japanese broadcasters and news outlets, further slowed progress in fact-checking efforts. Nevertheless, there has been increased awareness and action, including the establishment of the Japan Fact-Check Center, which receives funding from tech giants such as Google and Yahoo. Moving forward, efforts to expand the scope of countermeasures are essential for effectively combating disinformation in Japan.

PREBUNKING

Prebunking involves issuing proactive forewarnings and preemptively refuting false narratives. A dramatic example was the United States’ use of declassified intelligence information in the run-up to the Russian invasion of Ukraine. The timely and repeated release of information on Russia’s military build-up and intentions limited Moscow’s ability to shape the media environment in advance; for example, this strategy undermined Russia’s plans to use a false-flag operation as an excuse to launch the invasion. Recent research suggests that prebunking is more effective than fact-checking, especially among populations who were previously misinformed or particularly susceptible to mis- and disinformation.

The scale of the disinformation challenge limits the practicality of prebunking as a strategy to some degree. Not every narrative can be preempted. Still, like-minded governments can focus more on anticipating the narratives that adversaries might employ, such as China’s efforts to exploit the Fukushima water release. When resources are limited, selective prebunking is a practical and efficient strategy to address the most concerning narratives. Credibility is a critical factor in successful prebunking, with the perceived trustworthiness of cited sources often mattering more than their expertise. Resources such as The Debunking Handbook 2020 and A Practical Guide to Prebunking Misinformation provide valuable insights and guidance for successfully implementing prebunking strategies, including effective narrative structures and technology-based tools for active and passive prebunking.

REMOVING FALSE NARRATIVES (CONTENT MODERATION)

Removing fake content or restricting access to it is a strategy employed by platforms such as X (formerly Twitter), which has previously acted against inauthentic accounts originating from China and removed tens of thousands of posts. By tracking Internet Protocol (IP) addresses, platforms can identify the source of such content, including accounts operating from Chinese military bases. However, despite efforts to identify and remove fake content or inauthentic accounts algorithmically, this approach is not effective on its own, as perpetrators often find ways to circumvent these measures; tactics include simply creating new accounts, operating from locations where IP addresses do not raise suspicion, or employing locals to disseminate false information. Moreover, a lack of transparency about how algorithms identify and remove content has sparked backlash among users and accusations of censorship in democratic societies. In part due to these challenges, as well as cost-cutting pressures, many companies are severely limiting or abandoning content moderation policies altogether. However, when used in tandem with other tactics, content removal can be an effective component of an overall strategy.

LABELING, TRANSPARENCY, AND RISK MONITORING

Efforts to strengthen digital content labeling have gained momentum globally. Some tech companies are developing digital stamps to indicate AI-generated content, particularly to target the threat posed by deepfakes. For instance, the Coalition for Content Provenance and Authenticity aims to authenticate media by issuing “icons of transparency,” essentially a digital watermark for verified content. Transparency initiatives extend beyond identifying artificially generated content. Google took steps as early as 2018 to label government-funded content on YouTube, alerting viewers to sources that might amplify propaganda or conspiracy theories.

A Key to Success: Go on the Offensive

The strategies and techniques listed above are defensive, aimed at resisting and debunking adversaries’ disinformation campaigns. But ultimately success will hinge on pairing these approaches with an offensive strategy of strategic messaging to promote narratives that accurately and effectively reflect U.S. objectives, priorities, and values — and doing so in close coordination with like-minded allies and partners. An offensive strategy would center on both promoting affirmative narratives about U.S. power and purpose and refuting the self-serving messages that China and Russia promote about themselves — such as the claims that China does not interfere in the internal affairs of other countries or that China offers a model for economic development that should be emulated elsewhere in the Global South.

Success will hinge on pairing these approaches with an offensive strategy of strategic messaging to promote narratives that accurately and effectively reflect U.S. objectives, priorities, and values.

Japanese Government Approaches to Countering Disinformation

The Japanese government is dedicating increased effort to addressing disinformation and information operations. Japan’s 2022 National Security Strategy, for example, acknowledges the urgent need for developing a posture for “information warfare,” which it defines as using “the spread of disinformation prior to an armed attack.” This definition is arguably too narrow, however. As this report shows, Beijing’s disinformation campaigns in peacetime are central to a larger strategy of propagating influence to enable China to “win without fighting.” Japan’s current efforts to combat disinformation also suffer from decentralization and a lack of coordination across the government — although it is hardly alone in this regard. In addition, budgets for combating disinformation and information warfare remain thin. Collectively, the Ministry of Foreign Affairs (MOFA), the Ministry of Defense, and the National Police Agency allocate just 7.72 billion yen (about $49 million) against the threat. Draft legislation to strengthen Japan’s cyber capabilities and establish authorities for active cyber defense has languished, with no clear timeline for passage. In the meantime, MOFA has begun to introduce AI-based technologies to assist with analysis of international developments that could pose national security challenges.

An Agenda for U.S.-Japan Cooperation

The United States and Japan have begun coordinating responses to the growing challenge posed by disinformation. In December 2023, the two governments signed a memorandum of cooperation on countering foreign information manipulation, laying the groundwork for shared approaches and activities such as information sharing and capacity building — and which Prime Minister Kishida Fumio and President Joe Biden endorsed at their meeting in April 2024. This is a good start, but far more needs to be done to address disinformation, a defining challenge of the modern era.

The United States and Japan now need to design an operational strategy for cooperation that also incorporates other key partners in the region. This project identified several avenues for enhancing collaboration between the United States, Japan, and other Indo-Pacific nations in combating disinformation. Unlike in Europe, where a variety of efforts to combat disinformation are underway through NATO, there is no existing multilateral structure that can be used to forge a common approach to the threat. Yet there are various areas of potential bilateral and multilateral cooperation in the Indo-Pacific:

  • Deepen cooperation on strategic messaging. Japan, the United States, and other like-minded countries in the Indo-Pacific should develop a coordinated offensive strategy for strategic messaging aimed at promoting narratives that support shared values and advance common objectives. This coordinated messaging should directly challenge the narratives, cited earlier in this report, that China and Russia seek to propagate. The raw material for this common strategy already exists — the Biden administration’s Indo-Pacific Strategy is a clear articulation of U.S. policy, as is Japan’s Free and Open Indo-Pacific concept. A coordinated messaging strategy should focus on amplifying and reinforcing those themes and countering the pro-China narratives pushed by Beijing. Such an effort is probably best undertaken by Japan’s Ministry of Foreign Affairs and the U.S. Department of State.

  • Launch a collaborative research program and share lessons learned. The United States and Japan should launch a multilateral research program in collaboration with like-minded allies and partners in the Indo-Pacific — such as Australia, South Korea, and the Philippines — to explore the evolving nature of Chinese and Russian disinformation tactics. This effort should also include universities and private research organizations. NATO’s Strategic Communications Centre of Excellence offers a blueprint for how to create a bilateral or multilateral framework for sharing research on these topics. In particular, Japan and the United States could quietly engage Taiwan to better understand both the tactics Beijing used to attempt to influence the 2024 presidential election and the strategies Taipei used to combat the threat. Building on the agreement at Camp David in 2023 between Japan, the United States, and South Korea to coordinate efforts to counter information manipulation, the allies should work to develop a multilateral framework for information sharing and lessons learned in countering disinformation. The Quadrilateral Security Dialogue — consisting of Australia, India, Japan, and the United States — is another potential venue for this work.

  • Expand cooperation on open-source intelligence collection and analysis. The explosion of open-source information represents a significant challenge to governments seeking to better understand national security threats. No single government has the capacity to collect, process, and assess all of the open-source information available on a given topic — a challenge amplified by the imperative of distinguishing between facts and mis- and disinformation. The United States, Japan, and other partners, such as Australia and South Korea, should launch an open-source intelligence partnership to share strategies, assessments, and tools, including technology-based approaches that utilize artificial intelligence.

  • Coordinate prebunking. The United States, Japan, and other partners should work together to prebunk problematic Chinese and Russian narratives. Like-minded governments should do more to both anticipate the narratives adversaries are likely to promote and undermine existing messaging that reflects biased views of China’s global role. The coordinated release of information on anticipated issues such as Chinese behavior in the South China Sea could also be an effective means of countering Chinese narratives.

  • Support education and training in third countries. Fact-checking institutions already do some work together, demonstrating ample potential for building connections and pooling resources in the digital age. Examples of such efforts include individual country programs, such as the Ukraine Crisis Media Center and StopFake, as well as multinational and regional collaborations, such as EUvsDisinfo, the NATO Strategic Communications Centre of Excellence, and balticdisinfo.eu. The United States, Japan, and other partners could work together to support education and media literacy programs across the Indo-Pacific.

  • Cooperate on content sharing. Taking a page from China’s own playbook, the two governments could subsidize trustworthy news sources to counter China-sponsored fake news sites. The United States and Japan could also consider content-sharing agreements with nations inundated with disinformation and propaganda from China and Russia. Kyodo News and the Associated Press are examples of credible wire services that Japan and the United States could quietly make more widely available to media markets in the Indo-Pacific.


Christopher B. Johnstone is senior adviser and Japan Chair at the Center for Strategic and International Studies (CSIS).

Leah Klaas is a research assistant with the Japan Chair at CSIS.
