
A paradigm shift in (dis)information warfare: How TikTok content and deepfakes become threatening weapons in the Russia-Ukraine war

By Selma Marthedal, PhD Fellow at DDC

Propaganda and disinformation are integral parts of warfare. During Russia’s invasion of Ukraine, however, TikTok videos and deepfakes have emerged as new channels for spreading pro-Russia disinformation. I argue that the spread of misleading content through TikTok can have serious consequences for the Russian public sphere: 1) receivers lose the ability to criticize the original sender because the platform lacks transparency, 2) an algorithmic spiral of silence is reinforced, and 3) deepfakes can manipulate populations to achieve specific political (or military) goals. Lastly, I argue that social media’s response to this paradigm should not be the so-called “Russia ban” but instead well-developed strategies for detecting misinformation and ensuring commercial transparency.

A new paradigm in information warfare

The Russian Government has developed several false narratives to justify the invasion of Ukraine:

  • Ukraine is aggressive and a threat to Russia
  • The West has pushed Ukraine toward a conflict
  • Russia is only deploying its forces near Ukraine, not in Ukraine
  • The only goal of Russia’s “military operation” is to defend Russian people in Ukraine
  • NATO has broken a vow and now seeks to expand east
  • Russia’s safety is at risk if Ukraine becomes a NATO member

These narratives may appear random at a glance, but on closer scrutiny they follow a pattern. Almost all of the narratives proposed by the Russian Government suggest that the Russian population is in danger because of NATO, appealing to the Russian people’s fear of losing Russian sovereignty. Fear-framed narratives can be very persuasive, and it can be expected that they could convince some Russians to support the invasion of Ukraine. As Kahneman and Tversky show, people tend to be more risk-seeking when presented with loss-framed narratives.

Using these types of narratives to mobilize people for war is nothing new for governments. However, the channels used to spread them have changed drastically in just a few years. Traditionally, politicians could only reach crowds via one-way communication platforms such as television, newspapers, posters, or leaflets. In recent years, two-way communication platforms such as Facebook, Twitter and Instagram have become essential for politicians promoting political messages.

However, the communication tools used by the pro-Russia side differ from traditional political social media approaches. An investigation by the media outlet VICE shows that a secret Telegram channel has paid Russian influencers on TikTok to spread the abovementioned pro-Russia narratives. TikTok is a video-based social media platform with 36 million Russian users and 700 million users worldwide. The influencers received directions on what to post, where to post it, and scripted instructions on what to say in the videos. The influencers reinforced the fear frames and narratives of the Russian government, including the claim that “Kyiv was slaughtering the Russian-speaking population in Donbas”. Payment for these videos ranged from 2,000 to 20,000 rubles.

Using influencers with millions of followers to spread disinformation is a game changer in information warfare, for several reasons. First, in contrast to other mass-targeting tools on social media, the receiver of the video is not conscious of being targeted by political campaigning. Another criticized form of political targeting is microtargeting on social media; however, when a person receives a microtargeted ad, they are usually aware that they are being targeted by a political ad, even if they do not know what personal data the targeting is based on. Here, instead, the content of the video appears to viewers to be a genuine product of the influencer. At least two consequences of this strategy can be identified for the Russian political system: 1) as the receivers cannot identify the sender of the message, they cannot critically assess the intentions behind it; and 2) the receivers cannot hold the actual sender of the message accountable for their statements.

Second, this type of information warfare on TikTok can reinforce an algorithmic “spiral of silence”. Imagine you are scrolling through TikTok and, out of curiosity, you watch one of the pro-Russia videos from an influencer you follow. There is a great chance that TikTok’s highly personalized algorithm will register this and serve you more videos with similar content. This can lead to an overrepresentation of the same pro-Russia content, making misconceptions widespread in Russian public opinion. This, in turn, can lead people who disagree with or question the pro-Russia narratives to censor their own opinions.
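TikTok’s actual recommendation system is proprietary, but the feedback loop described above can be illustrated with a minimal, purely hypothetical Python sketch: every watch is counted as engagement and nudges the feed toward similar content, so over time the feed drifts toward one topic. The two topics, the boost size, and all other numbers here are assumptions for illustration only.

```python
import random

# Minimal sketch of an engagement-driven feed, illustrating the
# feedback loop described above. The topics and numbers are
# hypothetical; TikTok's real recommender is proprietary.

TOPICS = ["pro_russia", "other"]

def recommend(weights: dict[str, float]) -> str:
    """Pick the next video's topic in proportion to learned weights."""
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

def simulate(n_videos: int = 1000, boost: float = 0.05) -> dict[str, float]:
    weights = {t: 1.0 for t in TOPICS}  # start with an even feed
    shown = {t: 0 for t in TOPICS}
    for _ in range(n_videos):
        topic = recommend(weights)
        shown[topic] += 1
        # Each watch counts as engagement and nudges that topic's
        # weight upward, making similar content more likely next time.
        weights[topic] += boost
    total = sum(shown.values())
    return {t: shown[t] / total for t in shown}

if __name__ == "__main__":
    random.seed(1)
    print(simulate())  # one topic tends to dominate the feed over time
```

Even with a small per-watch boost, the self-reinforcing dynamic means the simulated feed ends up heavily skewed toward whichever topic happened to get early engagement, which is the overrepresentation effect described above.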

Another tool used in the spreading of the pro-Russia narratives is deepfakes. A deepfake is a video in which a person’s face and body are digitally constructed and paired with an audio track. In March 2022, hackers released a deepfake of Zelensky on the news site Ukraine 24 in which “he” demands that the Ukrainian army surrender to Russia. No official in Russia has claimed to be the sender of this video, but according to the Atlantic Council the deepfake was shared by a “pro-Russian Telegram channel”. The deepfake is amateurish, and it is possible to spot differences between the real President Zelensky and the artificial one. However, its release emphasizes a shift in the spreading of disinformation and shows how serious the consequences of deepfakes can be for democracies and warfare. A well-developed deepfake could potentially affect the outcome of a modern war. The use of deepfakes in spreading disinformation enables serious manipulation of the public through false statements attributed to prominent figures.

Image: Deepfake of Ukraine's president, Volodymyr Zelenskyy, released by pro-Russian hackers on 16 March 2022.
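Technically, many face-swap deepfakes rest on a simple idea: one shared encoder learns a generic face representation, while a separate decoder is trained for each identity; encoding person A and decoding with person B’s decoder produces the swap. The following is a minimal, untrained PyTorch sketch of that general architecture; the layer sizes are illustrative assumptions and are not taken from any real deepfake tool, including the one used against Zelensky.

```python
import torch
import torch.nn as nn

# Sketch of the classic face-swap idea: a shared encoder plus one
# decoder per identity. All layer sizes are illustrative assumptions.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# After training, a frame of person A...
frame_a = torch.rand(1, 3, 64, 64)
# ...is encoded with the shared encoder and decoded as person B:
fake_b = decoder_b(encoder(frame_a))
```

Because the shared encoder forces both identities into the same latent space, each decoder learns to render “its” face for any pose or expression in that space, which is what makes the swap possible.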

How TikTok’s Russia ban reinforces media zombification

In response to Putin’s new Fake News Law, TikTok recently banned all Russian users from uploading content to the app. However, some pro-Russia TikTok’ers have found loopholes in the ban and continue to upload content. Overall, the ban may work in Putin’s favor. Not only are Russian TikTok users unable to upload content; they are also unable to watch international content, including uploads from Western news stations such as CNN and BBC. News stations such as Russia Today and Sputnik remain accessible to Russian TikTok’ers, but 95% of the content previously available to Russian users is now unavailable. This adds to the media isolation the Russian population already faces, and it can reinforce the persuasive effects of the pro-Russia narratives by reducing the Russian population’s access to frames that counter them. As a result, 144 million people now face serious media zombification: they are continually exposed to disinformation about their own country’s warfare while being cut off from counter-information from independent news stations.

Another consequence of the Russia ban is that Russian videos uploaded to TikTok prior to the ban remain available to international users. This means pro-Russia videos are still available on TikTok for all users. Due to personalized algorithms, the pro-Russia narratives can reach European youth on a frequent basis and convince them of these false narratives about the war. This can have serious consequences for trust in political institutions and undermine deliberative democracy in Europe.

It is crucial that European policymakers acknowledge that our current media system is a hybrid of mainstream media and social media. It is therefore not sufficient to impose sanctions on Russian state media outlets that spread disinformation, such as Russia Today and Sputnik. Policymakers should also acknowledge that half of young Europeans use social media for news consumption on a daily basis. Misinformation on platforms such as TikTok can be as manipulative as misinformation from state media. European policymakers should therefore approach TikTok and demand that misinformation be regulated.

I argue that social media companies should regulate the spread of misinformation in three ways. First, social media platforms should allocate resources to developing advanced tools that can identify misinformation, instead of enforcing bans on nations, which can lead to media isolation. Second, platforms like TikTok should enforce a transparency policy on paid content so that users can differentiate between tailored advertising and genuine political statements. This, I argue, is crucial for receivers to retain their right and ability to criticize the senders of political messages. Third, all social media platforms should enforce a policy enabling users to identify deepfakes. My suggestion is a fact-checking label that marks deepfakes as false content on social media platforms; a minimal sketch of such a label follows below. This should be considered a minimum obligation for social media companies if serious deliberative backsliding in Europe is to be avoided.
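What might such a fact-checking label look like in practice? The Python sketch below is purely hypothetical: none of the field names come from any real platform API. It only illustrates the minimum a machine-readable, auditable label could carry, namely a verdict, the reviewing organisation, and a link to public evidence.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the fact-checking label proposed above.
# These field names are illustrative assumptions, not any real
# platform's schema.

@dataclass
class FactCheckLabel:
    video_id: str
    verdict: str       # e.g. "deepfake", "paid_content", "unverified"
    reviewed_by: str   # fact-checking organisation applying the label
    evidence_url: str  # public write-up supporting the verdict
    labeled_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def banner_text(self) -> str:
        """Text a platform could overlay on the flagged video."""
        return (f"Flagged as {self.verdict} by {self.reviewed_by}. "
                f"Details: {self.evidence_url}")

label = FactCheckLabel(
    video_id="example-123",
    verdict="deepfake",
    reviewed_by="Example Fact-Check Org",
    evidence_url="https://example.org/report/123",
)
print(label.banner_text())
```

Tying every label to a named reviewer and a public evidence link, rather than a bare “false” flag, is what would let users criticize the labeling itself, in line with the transparency argument made above.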

Editing was completed: 14.04.2022