Warfare has undergone several changes since the end of World War II. In the 21st century, battle lines are becoming more complex, with technology and civilians alike drawn in as combatants in various forms of cyber warfare. A recent report by the United States (US)-based social media analytics firm Graphika revealed a pro-Chinese political spam operation promoting a new and distinctive form of video content on popular social media platforms such as Facebook, Twitter, and YouTube. The content uses artificial intelligence (AI)-generated videos of fictional people to create misleading political material, commonly on themes such as US inaction on gun violence and US-China cooperation. The report describes these operations as state-aligned and political.
This development is significant not only from a technological point of view, but also because of the future possibilities it opens for the rampant dissemination of disinformation and influence operations on social media. The question that arises in this context is what the potential impact of politically divisive disinformation might be. In the 21st century, social media connects almost everyone; civil society, which votes for or against political candidates in elections in democratic societies, forms a vast audience. Swaying audiences against election candidates whose worldviews are hostile to the countries churning out the disinformation is a tool finding increasing appeal in the current world order. China, which has made great strides up the technology ladder, including in AI, often resorts to tools such as disinformation, spam, bots, the digital stack, and so on to create a narrative favorable to its worldview.
The use of AI-generated content started with deepfake apps that use deep learning algorithms for face swapping. In September 2019, an app called Zao went viral in China and quickly became the most downloaded free app on China’s Apple App Store. It allowed users to superimpose their faces onto celebrity video clips in seconds, using just one photo. While the app appeared harmless, there was a catch that caused an uproar among users: its terms stated that Zao had completely free, irrevocable, perpetual, transferable, and relicensable rights to user content. This meant the company could use the data generated by users to perfect its deep learning models.
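To make the face-swapping idea concrete, the toy sketch below pastes one detected face onto another frame using classical OpenCV tools (Haar-cascade detection plus Poisson blending). This is only an illustration of the "superimpose a face" concept; apps like Zao use learned autoencoder/GAN models, not this pipeline, and the file names here are placeholders.

```python
# Toy face-swap sketch: detect a face in each image, then blend the
# source face into the destination frame. Illustrative only; real
# deepfake apps rely on deep learning models rather than this approach.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    """Return the (x, y, w, h) box of the first detected face."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

src = cv2.imread("user_photo.jpg")       # placeholder: face to paste in
dst = cv2.imread("celebrity_frame.jpg")  # placeholder: frame to modify

sx, sy, sw, sh = first_face(src)
dx, dy, dw, dh = first_face(dst)

# Crop the source face and resize it to fit the destination face box.
face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))

# Poisson-blend the face into the frame so edges and lighting roughly match.
mask = 255 * np.ones(face.shape[:2], dtype=np.uint8)
center = (dx + dw // 2, dy + dh // 2)
out = cv2.seamlessClone(face, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped_frame.jpg", out)
```

Even this crude pipeline hints at why one photo suffices for a convincing clip once deep learning handles alignment, expression, and lighting automatically, frame by frame.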
In November 2022, China released its “Position Paper” on Strengthening Ethical Governance of AI, emphasizing that AI, as the most representative disruptive technology, has brought uncertainty and, despite enormous potential benefits, has led to numerous global challenges and even fundamental ethical concerns. More recently, on February 16, 2023, China was among some 60 countries to sign a non-binding “call to action” endorsing the responsible use of AI in the military. However, as is often the case in other areas, China’s “calls” and its “actions” differ greatly.
In August last year, New Kite Data Labs – a US-based think tank – revealed that Speech Ocean, a Beijing-based AI and data collection company, had collected voice samples from militarily sensitive regions of India, including Jammu & Kashmir and Punjab. Speech Ocean is said to have worked with a New Delhi-based subcontractor who recruited individuals to record their voices in their own languages and accents in return for small payments. The report emphasized that Speech Ocean is known to sell to the Chinese military and that the data collected in India was sold to authorities in China for use and analysis.
However, even as Chinese entities collect datasets such as voice samples from Indian regions, Beijing rolled out new rules in January this year to regulate the use of deepfakes within China, further increasing government influence over the country’s tech sector. This demonstrates a dual strategy: using technology for geopolitical purposes abroad while suppressing similar activities within one’s own territory that could cause political unrest.
As AI-related technologies advance, the dangerous possibilities of using deceptive content for political and geopolitical gain will only increase. The fake anchors unearthed by Graphika can speak in multiple languages because they are built on mature language-sample datasets. Deepfakes can sow doubt, erode trust, reinforce existing prejudices, and manipulate opinions and decisions; AI-powered deepfake tools can wreak havoc by speeding up the process. Regarding India, there is a great deal of disinformation on Chinese social media claiming that India is moving away from its democratic credentials or trying to turn the entire South Asian region into a Hindu region. Such disinformation is then amplified by bots on popular social media channels. At present, information and awareness in India about how deepfakes work are scant. India also has a large, impressionable audience that relies on social media for information, which in turn shapes its belief systems. Greater education on how social media can be divisive, along with closer collaboration between democracies to study the impact of state-controlled spam, therefore becomes relevant.
This article was written by Sriparna Pathak, Associate Professor and Director, Center for Northeast Asian Studies, OP Jindal Global University, and Divyanshu Jindal, Research Associate, NatStrat, New Delhi.