In recent weeks, for example, clips from video games and scenes from old wars presented as views from the front lines of Ukraine have gone viral along with legitimate images. Heartbreaking videos of separated families have been posted thousands of times and then debunked. And official government reports from Ukraine and Russia have made unsubstantiated or misleading claims that are rapidly circulating on the Internet.
In a way, this is the latest in a long list of recent crises — from the pandemic to the Capitol riot — that have triggered the spread of potentially harmful misinformation. But disinformation experts say there are key differences between the war in Ukraine and other disinformation-related events that make false claims about the conflict particularly insidious and hard to debunk.
Perhaps most notably, Ukraine-related disinformation has been highly visible and has spread quickly across borders, disinformation experts told CNN Business. The direct involvement of Russia, which is notorious for spreading disinformation online aimed at sowing discord and confusion, adds a further layer of complexity. And the emotional, visceral nature of the content drives social media users to quickly hit the share button despite the complex misinformation landscape.
“People feel helpless, they feel like they want to do something, so they scroll online and share things they think are true because they are trying to be helpful,” said Claire Wardle, Brown University professor and director of the American nonprofit anti-disinformation organization First Draft News. “[But] in these moments of upheaval and crisis, this is the time when we are at our worst in discerning what is true and what is false.”
A flood of images and videos
Unlike the ongoing Covid-19 pandemic, when many viral false claims were text-based, much of the disinformation about the war in Ukraine has been presented in the form of images and videos. And these visual formats are more complex and take more time for both automated systems and fact-checkers to evaluate and debunk, not to mention regular social media users.
To verify an image or video, fact-checkers usually start by searching the internet to see whether it has been posted before, which would indicate that it is not related to the current crisis. If it does appear to be recent, they can use tools to do things like analyze shadows or compare the area shown with satellite imagery to confirm whether it was actually filmed in the location it purports to show.
“It will obviously take much longer,” said Carlos Hernández-Echevarría, public policy and institutional development coordinator for the Spanish fact-checking organization Maldita.es. By comparison, he says, “blatantly false stories about vaccinations, like ‘they cause autism’ … are all pretty easy to disprove.”
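As a rough illustration of that first step (checking whether an image has circulated before), here is a minimal sketch in Python. It assumes the open-source Pillow and imagehash packages; the file names and the small "archive" of known images are hypothetical placeholders, and real fact-checking workflows rely on far larger databases and dedicated tooling.

```python
# Minimal sketch: flag an image that may be a re-post of older footage by
# comparing perceptual hashes. Requires `pip install Pillow imagehash`.
# All file paths and the archive below are hypothetical placeholders.
from PIL import Image
import imagehash

# Hashes of images already known from past events (e.g., a 2018 archive).
known_archive = {
    "2018_marines_homecoming.jpg": imagehash.phash(Image.open("2018_marines_homecoming.jpg")),
}

def looks_recycled(candidate_path: str, max_distance: int = 8) -> bool:
    """Return True if the candidate is perceptually close to an archived image.

    Perceptual hashes change little under re-encoding, resizing, or light
    cropping, so a small Hamming distance suggests the "new" image is old.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return any(candidate_hash - archived <= max_distance
               for archived in known_archive.values())

if looks_recycled("viral_clip_frame.jpg"):
    print("Possible recycled image: verify original date and location.")
```

This is only an automated first pass: a match suggests the image predates the current crisis, while a non-match proves nothing, which is why the shadow and satellite analysis described above still falls to human fact-checkers.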
And while anyone can run a photo through a reverse image search engine like Google Images or TinEye to see where it may have appeared on the web in the past, it can be a lot harder for people to verify videos, noted Renée DiResta, technical research manager at the Stanford Internet Observatory. You might be able to track down the thumbnail that appears with a video, she said, but it is much harder to find the full video via reverse image search.
This difficulty is compounded as videos spread through apps like TikTok, which surface not only the disinformation in its original form but also videos perpetuating it as users post their own reactions.
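One way around the thumbnail limitation DiResta describes is to sample individual frames from a suspect video so that each frame can be run through a reverse image search or hashed against an archive. The sketch below only illustrates the idea, using the open-source OpenCV library and a placeholder file name.

```python
# Minimal sketch: sample frames from a video so each one can be fed to a
# reverse image search or hashed against an archive, since search engines
# generally only see a single thumbnail per video.
# Requires `pip install opencv-python`; the file name is a placeholder.
import cv2

def sample_frames(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Return one frame (as a BGR array) every `every_n_seconds` of video."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames

for i, frame in enumerate(sample_frames("suspect_clip.mp4")):
    cv2.imwrite(f"frame_{i:03d}.jpg", frame)  # save for reverse image search
```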
“I have opened TikTok several times, and the first video that pops up is not an accurate representation of what it claims to be,” DiResta said. “Facebook and Twitter have quite a lot of experience moderating content during crises; I think TikTok has to get up to speed very quickly.”
The speed with which false claims and narratives spread from one country to another has also increased, from a matter of weeks during the pandemic and other recent crises to just days or, in some cases, even hours, Hernández-Echevarría said. This may be partly because much of the content is visual and thus less dependent on a common language. Images and videos also often have more emotional appeal than text posts, which experts say makes users more likely to share them.
“Now there is a flood of images and videos being shared,” said Brandie Nonnecke, director of the CITRIS Policy Lab at the Center for Information Technology Research in the Interest of Society at UC Berkeley. “The more an image moves you, the faster it will spread across social media.”
In one recent example, a video purportedly showing Ukrainian soldiers emotionally saying goodbye to their families was viewed thousands of times on Instagram and shared on various Facebook pages. However, AFP Fact Check found that the video was filmed in 2018 and showed US Marines returning home to their families. Instagram and some Facebook pages have since labeled the video, warning users that it is “partially false,” but it remains available on at least one other unlabeled Facebook page. (Facebook parent Meta did not immediately respond to a request for comment.)

Russia’s coordinated efforts to spread false narratives have also become more open and visible since the start of the war. Russia’s false claim that the United States is developing bioweapons in Ukraine and that Russian President Vladimir Putin stepped in to save the day has recently resurfaced and gained momentum, first among QAnon supporters and more recently on mainstream platforms and even among some legislators. There is also a disturbing new trend of videos that purport to debunk pro-Ukrainian images and videos but are themselves fake and designed to sow confusion and doubt about Russia’s actions, ProPublica reported last week.

Some on the Ukrainian side have also spread misleading information. Earlier this month, as Russian troops fired on Ukraine’s Zaporizhzhia nuclear power plant, Europe’s largest, Ukrainian Foreign Minister Dmytro Kuleba tweeted that “if (the plant) explodes, it will be 10 times the size of Chernobyl,” implying it would be the largest nuclear disaster in history. But while experts expressed serious concerns, they also said that the more modern plant was built differently from, and more safely than, Chernobyl, and was unlikely to be in danger of exploding.
In many cases, false or misleading narratives are propagated through videos or images that are each only subtly misleading. Each individual piece of content may not be harmful enough to break platform rules, Wardle says, but when users watch hundreds of videos a day, they can come away with a distorted view of what is happening on the ground.
“Here, it’s less about the individual TikTok video and more about the broader narratives that shape the war and shape people’s perceptions of Europe, NATO and Russia, pushing people toward a particular understanding,” she said.
Platforms fight disinformation
Major social media platforms have taken steps to provide users with more context about the Ukraine-related content they see. For example, Twitter and the Meta-owned platforms Instagram and Facebook have begun removing, labeling, or demoting content posted by or linking to Russian state media outlets, including Russia Today. (Facebook said in 2020 that it would begin labeling state-controlled media.) TikTok said earlier this month that it would make a similar effort to label “certain state-controlled media accounts.” TikTok also says it bans “harmful misinformation,” though it is not clear how it defines that phrase.

The three platforms also work with independent fact-checking organizations to identify false content and surface reliable information, and say they are monitoring for coordinated inauthentic behavior, in which bad actors use networks of fake accounts to spread lies online, including potential Ukraine-related activity. Meta recently detailed a pro-Russian disinformation network it had removed, which included fake user profiles with AI-generated profile photos and websites posing as independent news outlets to spread anti-Ukrainian propaganda.

Some of these efforts have put tech companies in a difficult position with Russia, resulting in their platforms being restricted or banned in the country and showing the tightrope they must walk in managing the use of their platforms during times of crisis. And the continued rapid spread of disinformation online shows that none of these measures has stopped the flow of lies.
Even if some content is flagged on one platform, the content is often repurposed on others that may not have equally robust fact-checking methods. When misinformation is posted on social media, the platform’s algorithms can quickly expand its reach to be seen by thousands or millions of users.
There are some efforts currently being made to use social media platforms to spread accurate information and educate users on how to avoid amplifying misinformation.
Last week, the White House held a briefing with top TikTok influencers to answer questions about the war in Ukraine and the role of the United States in the conflict, according to the Washington Post. And Hernández-Echevarría’s Maldita.es worked with more than 60 other fact-checking organizations from around the world to create a database of debunked war-related disinformation that social media platforms and users can consult.
To curb the spread of disinformation online, and in light of the ever-changing rules on social media platforms, Nonnecke would like an outside group to set standards or best practices that these platforms would have to follow during times of war. “They shouldn’t decide on a whim what they want to do,” she said.
Major social media platforms also need to expand their content moderation capabilities in languages other than English — in this case, especially Eastern European languages such as Polish, Romanian and Slovenian, Wardle said.
“My Romanian friend says, ‘The whole story about Putin coming to save Ukrainians from the Nazis, everyone in the West is laughing at it,’” she said, referring to the Russian president’s unsubstantiated claims that the Ukrainian government is a “gang of drug addicts and neo-Nazis.” “But she’s like, ‘Here, it’s everywhere.’”