Disinformation and AI on the agenda of the Economic Forum in Davos

This year, the theme of the 54th Annual Meeting of the World Economic Forum is "Rebuilding Trust." The meeting will be a crucial opportunity to refocus on the core principles that motivated the organization's creation: trust, transparency, coherence and accountability.

The four themes on the agenda this year are: achieving security and cooperation in a fragmented world; creating growth and jobs for a new era; AI as a driver of the economy and society; and a long-term strategy for climate, nature and energy.

Global risks for the coming years

Ahead of this meeting, the Economic Forum presented its "Global Risks Report 2024," which gathers the views of nearly 1,500 global experts and analyzes global risks over two-year and 10-year horizons.

The results of the Global Risks Perception Survey (GRPS) 2023-2024 show a predominantly negative outlook for the world in the coming years: 84% of respondents are concerned about the two-year outlook, and the picture worsens over the next decade, with 92% expressing pessimism.

Topics of greatest concern in the coming years include misinformation, climate change, population growth, technological acceleration and geostrategic shifts, among others.

Image: Global risks 2024

Disinformation and the further development of AI are the biggest risks

Disinformation and the continued development of artificial intelligence are considered the biggest risks for the next two years (through 2026). They are joined by extreme weather events, social polarization and armed conflicts.

The report notes that the most severe global risk anticipated for the next two years (and the fifth highest over the 10-year horizon) stems from foreign and domestic actors seeking to exploit misinformation and disinformation in the social and political spheres.

With nearly three billion people set to vote in 2024, the widespread use of misinformation and disinformation, and of the tools used to spread them, could undermine the real and perceived legitimacy of newly elected governments.

Recent technological advances such as generative AI have increased the volume, reach and effectiveness of false information. Added to this is a longer-term erosion of democratic processes, exacerbated by unrest ranging from violent protests and hate crimes to civil confrontation and terrorism.

Image: Global risks over 2 and 10 years

The need to regulate artificial intelligence

Given the challenges posed by advances in artificial intelligence models, the report suggests that governments must address the growing risks by introducing new and evolving regulations aimed at both the hosts and the creators of online disinformation and illegal content.

It also warns that emerging regulation of generative AI will likely complement these efforts. For example, China's requirement to watermark AI-generated content can help detect false information, including unintentional misinformation spread through AI-generated content.

However, the report notes a risk in this regulatory process: some governments may move too slowly, forced to weigh the prevention of disinformation against the protection of freedom of expression, while repressive governments could use expanded regulatory control to undermine human rights.

Technological progress, a long-term risk

Environmental and technological risks are expected to worsen over the next decade (through 2034). The report states that the unchecked spread of increasingly powerful and widespread AI technologies will radically transform the economy and society in the coming decade.

In addition to benefits for productivity and advances in areas as diverse as healthcare, education and climate change mitigation, advanced AI also poses significant risks to society.

Risks are also seen in parallel advances in other technologies, from quantum computing to synthetic biology, which could reinforce the negative consequences of these developments.

Encourage the meaningful use of technology

The report raises the concern that national security incentives could limit the safeguards placed on AI development. At the same time, AI and machine learning specialists are expected to be the fastest-growing profession, increasing by 40% (around one million jobs) by 2027.

However, the risks lie in a deeper integration of AI into conflict decision-making, which could lead to unintended escalation, while open access to AI applications could asymmetrically empower malicious actors.

Negative consequences of advanced AI could create new divisions between those who can access or produce technological resources and intellectual property and those who cannot, experts say.

Long-term risks also persist: extreme weather events continue to worsen and become the top risk for the next decade. As in last year's edition, biodiversity loss and ecosystem collapse is among the risks whose perceived severity deteriorates most over time, jumping from twentieth place in the two-year ranking to third place over the 10-year horizon.

Photo: World Economic Forum