This is usually the case on all social networks during serious crises such as wars. But this time it was a baptism of fire for the platform led by Elon Musk. And for experts, the early results aren’t exactly brilliant.
These include horrifying images of children in cages, documents purporting to prove that the United States has approved $8 billion in aid to Israel, and videos suggesting that the atrocities committed against civilians since the start of Hamas’s attack on Israel were orchestrated by the Jewish state.
So much “information” relayed ad nauseam.
Thierry Breton vs. Elon Musk
And for the European Union, enough is enough. In an open letter, Thierry Breton, the European digital commissioner, gave Elon Musk, the head of X, 24 hours to clean up his platform. Otherwise, the European Commission reserves the right to impose a penalty of up to 6% of the social network’s turnover for breach of the European Digital Services Act (DSA).
Following the Hamas terrorist attacks against 🇮🇱, we have evidence that X/Twitter is being used to spread illegal content and disinformation in the EU.
Urgent letter to @elonmusk on #DSA obligations ⤵️ pic.twitter.com/avMm1LHq54
— Thierry Breton (@ThierryBreton) October 10, 2023
In fact, Hamas’s attack on Israel represented “a real stress test for digital platforms and their content moderation policies,” says Hamza Mudassir, co-founder of the British startup consultancy Platypodes and professor of entrepreneurial strategy at the University of Cambridge.
The violence of the last few days has triggered a torrent of reactions on social networks. More than 50 million messages about the conflict have been posted on Twitter alone since Saturday. This flood of content has been accompanied by a surge in “fake news” and propaganda messages, according to fact-checking sites around the world.
A burden shared by all social networks, from TikTok to Instagram, but one that particularly affects Twitter. It’s no coincidence that Thierry Breton chose to target Elon Musk. “He is clearly part of the problem,” says Sander van der Linden, a professor of social psychology at the University of Cambridge and a specialist in disinformation on social networks.
Elon Musk “is part of the problem”
“This is the real baptism of fire for the takeover [which took place in April 2022, Ed.],” explains Jon Roozenbeek, a disinformation specialist at the University of Cambridge.
Also read: Israel: Two viral videos of kidnapped children taken out of context
First impressions aren’t exactly flattering
Firstly, because Elon Musk “is clearly part of the problem,” believes Sander van der Linden. “He personally doesn’t seem to have much interest in better content moderation,” emphasizes Jon Roozenbeek. His response to Thierry Breton says a lot about his state of mind: “Could you please list the violations [of European rules, Ed.] you refer to, for everyone to see?” he asked on X.
This is either a sign that he is unaware that something is rotten in the kingdom of X, or “a way to drag out discussions,” explains Jon Roozenbeek.
Elon Musk was even caught red-handed promoting Twitter accounts known to share false information. On October 7th, he called on people to follow @WarMonitors and @sentdefender to “follow the war in real time.” Problem: these two accounts were already “among the main spreaders of false information about an explosion near the White House [which never took place],” notes the Washington Post. War Monitors has also been accused of publishing messages of an anti-Semitic nature.
Incentives to spread rumors
But experts are particularly concerned about the dismantling of the safeguards that existed before the Musk era. “The main problem arises from the new certification policy, which allows anyone to get the little blue badge of a verified account as long as they pay a monthly subscription. This has made it much more difficult to know who to trust on Twitter,” summarizes Jon Roozenbeek.
“Elon Musk’s decision to reinstate the super-spreaders of false information [such as Donald Trump, Ed.] also contributes to the virality of certain problematic content, because we know that it only takes a few very influential accounts to make a difference,” adds Sander van der Linden.
After the #Hamas attack against #Israel and the reprisals that followed, social networks were heavily used to document and comment on this deadly conflict, while simultaneously being flooded with a wave of disinformation⤵️ #AFP https://t.co/uh4Ff3HDoc
— AFP Factual 🔎 (@AfpFacteur) October 11, 2023
Not to mention the new monetization policy for content creators. “They get paid based on how many times one of their messages is seen. They will therefore be tempted to repost the most viral news – often the most inflammatory content – without necessarily checking whether it is disinformation or not,” laments Sander van der Linden.
The impact of drastic cuts to content moderation teams is also being felt in this conflict. Images from the war simulation game Arma 3 were used in fake videos of clashes. “Perhaps this would not have happened in the days of the old Twitter, because the moderation teams had tools to learn from their mistakes, and the same kind of fake videos using footage from this game had already circulated during the conflict in Ukraine,” explains Sander van der Linden.
Instead of the old moderation rules, Elon Musk relies primarily on the vigilance of the community. “He introduced community notes that allow users to add context or flag misleading content. It is a system that works quite well,” says Hamza Mudassir. The disinformation experts interviewed recognize the usefulness of such a mechanism, but it “means that the responsibility for verifying information is transferred 100% to the community,” specifies Jon Roozenbeek.
Real world consequences
The explosion of violence in the Middle East will likely “expose the gaps in Twitter’s anti-disinformation armor and their consequences,” admits Hamza Mudassir. And that is not good news for the choices Elon Musk has made. The spread of false information on social networks “can have very real consequences,” emphasizes Sander van der Linden.
For example, “rumors spread on social networks have led to acts of violence in India,” this expert adds. As early as 2018, the BBC listed a whole series of tragic consequences worldwide that resulted from the spread of “fake news”.
The problem is that “it’s difficult to ask the head of a private company, who needs to save money, to spend more in an area that doesn’t necessarily improve Twitter’s profitability,” explains Hamza Mudassir.
Unless X has to pay the price for the flood of misinformation. “We must not forget that Elon Musk promoted his Twitter as the network to consult to be informed before anyone else,” emphasizes Hamza Mudassir. But what’s the point if that information is wrong? Enough to definitively deter those who, even before the resurgence of violence in the Middle East, felt that Elon Musk’s promises about fighting the rise of disinformation on X sounded hollow.
But it is above all advertisers who risk taking a dim view of the way disinformation circulates on the platform, concludes Hamza Mudassir.