Russia’s Digital Battlefield: A New Front in Information Warfare
In 2022, a Russian disinformation campaign began exploiting Western artificial intelligence chatbots to disseminate pro-Kremlin propaganda, raising serious concerns about the weaponization of AI. Unfolding alongside the United States’ pause on cyber operations against Moscow, the campaign highlights the growing risk of AI systems becoming tools of information warfare in the digital age and exposes vulnerabilities in even the most trusted technologies.
Russia is no stranger to propaganda. At its inception, the Soviet Union built an extensive propaganda network to promote the Communist Party: the Bolsheviks used state-sponsored campaigns to instill in the Soviet population a persistent sense of external threat, framing Russia as a nation vulnerable to hostile Western powers on its borders. During WWII, that network warned against Western ideology, and during the Cold War, Soviet messaging relied on posters, films, and tightly controlled state media. Modern Russia continues this tradition, updating Soviet tactics with digital platforms and artificial intelligence both domestically and abroad. Aggressive government propaganda remains a key instrument of the Kremlin’s authority and a powerful tool of governance. Russia’s global disinformation efforts, such as its interference in the 2016 United States elections, underscore the scope and influence of the Kremlin’s modern technological information manipulation.
Today, Russia pushes the narrative that the West threatens its sovereignty. It increasingly uses social media and new technologies to spread false narratives, weaponizing platforms that many people rely on as primary sources of information. Propaganda once spread through posters and state-run media is now broadcast through AI and algorithms, allowing the Kremlin to disseminate misinformation faster and more broadly than ever before. Artificial intelligence can generate thousands of posts in the time it once took to design a single Soviet poster, letting Russia saturate social media with pro-Russian, anti-Ukraine messaging at unprecedented pace and scale. In the past two years, viral claims, such as accusations that Ukrainian President Volodymyr Zelensky’s wife bought a $5 million car or that the U.S. funded Ukraine’s development of biological weapons, have distorted global public opinion amid the ongoing war. These tactics continue Russia’s long-standing strategy of information laundering.
In 2024, the United States Justice Department disrupted a disinformation campaign by the Pravda network, a state-run operation that spreads pro-Russian narratives. For two years, the network had flooded AI large language models (LLMs) with Kremlin propaganda, successfully inducing chatbots to repeat its falsehoods. The network planted fabricated claims in sources cited by LLMs, including Wikipedia and major news outlets, effectively rewriting the narrative of the war in Ukraine. A study by NewsGuard, a disinformation watchdog, found that the top ten AI chatbots repeated Pravda falsehoods 33 percent of the time; seven of them went as far as citing specific Pravda articles as their source. In all, the operation pushed 3.6 million propaganda articles into the material Western LLMs draw on. Misleading claims, such as casting Ukraine as an aggressor or NATO as a destabilizer, shaped not only AI outputs but also public opinion worldwide. With 2024 a global election year, the Kremlin ramped up its propaganda to undermine international support for Ukraine in the ongoing war. As political discussions between U.S. President Donald Trump and Ukrainian President Volodymyr Zelensky unfolded, a Russian bot farm used AI to create over 1,000 fake American profiles on social media to promote anti-Ukraine, pro-Kremlin narratives. One account on X (formerly Twitter), posing as a Minnesota resident, circulated the false claim that parts of Ukraine, Poland, and Lithuania were gifted to Russia after WWII. Though X suspended many of these accounts for violating its terms of service, they had already misled the hundreds of users who followed and interacted with them.
Russia’s operation demonstrated how easily artificial intelligence can be exploited to mass-produce false content and amplify misinformation across platforms and audiences. Although bot-created social media accounts promote equally harmful propaganda, the manipulation of AI is particularly dangerous because machine-generated content often carries an illusion of objectivity and credibility. While social media users are often wary of obvious fake news and clickbait headlines, many place trust in generative AI comparable to the trust they place in search engines such as Google. A Rutgers University survey found that 62 percent of Americans trust mainstream journalism and 48 percent trust AI-generated information. Manipulating these generally “trusted” sources, such as news outlets and Wikipedia, not only misleads readers but poisons the well from which information systems draw their knowledge, triggering a domino effect. AI models like ChatGPT, Gemini, or Claude do not perform independent fact-checks; they generate answers based on the large datasets and public internet sources they were trained on, which may include biased or manipulated material. This distortion of information at the source level, known as data poisoning, is then inadvertently reinforced by generative AI. Russia’s data poisoning has expanded across 150 domains in a variety of languages, laundering propaganda and advancing the Kremlin’s agenda through tools many people trust for information.
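To see why sheer volume matters, consider a minimal sketch of how duplication-based data poisoning can skew a naive, frequency-weighted system. Everything below is hypothetical and illustrative: the domains, articles, and counts are invented, and real LLM training pipelines are far more complex than this toy model.

```python
from collections import Counter

# Hypothetical toy corpus of (domain, article_text, stance) records.
# Three independent outlets refute a viral claim; one operator then
# republishes a single fabricated article, verbatim, across 50
# coordinated throwaway domains -- volume without independence.
corpus = [
    ("outlet-a.example", "Fact-check: the viral claim is unfounded.", "refutes"),
    ("outlet-b.example", "Reporters on the ground find no evidence.", "refutes"),
    ("outlet-c.example", "Officials and records contradict the story.", "refutes"),
]
fabricated = "EXCLUSIVE: leaked documents prove the claim is true."
corpus += [(f"clone-{i}.example", fabricated, "supports") for i in range(50)]

def naive_stance(corpus):
    """Volume-weighted 'consensus': every copy counts once, so flooding wins."""
    return Counter(stance for _, _, stance in corpus).most_common(1)[0]

def deduplicated_stance(corpus):
    """Collapse verbatim duplicates before counting, so 50 reposts of one
    article carry the same weight as a single source."""
    unique = {(text, stance) for _, text, stance in corpus}
    return Counter(stance for _, stance in unique).most_common(1)[0]

print(naive_stance(corpus))         # ('supports', 50) -- the poisoned outcome
print(deduplicated_stance(corpus))  # ('refutes', 3)  -- after deduplication
```

The toy defense, collapsing verbatim duplicates before counting, mirrors the intuition behind deduplication in real data-curation pipelines, though a coordinated network can evade it by lightly rewording each copy.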
Russia’s misinformation campaign coincided with the United States’ pause on cybersecurity operations against Moscow. In March 2025, U.S. Defense Secretary Pete Hegseth ordered a pause on all cyber operations against Russia, despite the risk of more pervasive cyberattacks in the absence of oversight. This decision, unfolding alongside the Trump administration’s policy of détente with Russia, raises critical questions. Did the pause reflect genuine strategic restraint, or did the administration create an opening for adversarial exploitation, allowing Russia to capitalize on reduced oversight and advance its cyber capabilities unchecked? With fewer disruptions from U.S. agencies like Cyber Command or the NSA, Putin’s influence operations have free rein to flood Western language models with the pro-Kremlin agenda. The shift in U.S. strategy creates a blind spot in cybersecurity. Lacking counter-disinformation measures and the ability to anticipate nontraditional forms of cyber warfare, the United States faces a new digital battlefield of Russia’s engineering, one shaped by AI and algorithmic influence rather than traditional hacking and malware attacks. The rise in AI-generated misinformation underscores the urgent need for ‘vaccinated’ models that can detect and resist data poisoning; one such safeguard is sketched below. Without safeguards, modern technology will continue to amplify propaganda and threaten both digital communication and the integrity of the global information network.
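As one hedged illustration of what such a safeguard might look like, the sketch below screens a hypothetical training crawl against a curated blocklist of coordinated-propaganda domains before any text reaches a model. The domain names, records, and list are all invented for illustration; real data-curation pipelines combine many signals (source provenance, duplication rates, coordination patterns) rather than relying on a single list.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains tied to a coordinated propaganda
# network. Entries are invented for illustration only.
BLOCKLISTED_DOMAINS = {
    "pravda-clone-1.example",
    "pravda-clone-2.example",
    "pravda-clone-3.example",
}

def is_clean(record: dict) -> bool:
    """Keep a crawled record only if its source domain is not blocklisted."""
    domain = urlparse(record["url"]).netloc.lower()
    return domain not in BLOCKLISTED_DOMAINS

# A toy crawl: one legitimate article, one from a blocklisted domain.
crawl = [
    {"url": "https://outlet-a.example/news/1", "text": "Independent reporting."},
    {"url": "https://pravda-clone-2.example/story", "text": "Laundered propaganda."},
]

training_ready = [r for r in crawl if is_clean(r)]
print([r["url"] for r in training_ready])  # only the outlet-a.example record survives
```

A static blocklist alone is a weak vaccine: with the poisoning already spanning 150 domains in multiple languages and new clones cheap to register, such filters would need to be paired with duplication and coordination analysis like the earlier sketch.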