Interesting news about artificial intelligence, a topic that now dominates public opinion and the mainstream media. AI needs to be regulated better than it is today, and even the President of the United States, Joe Biden, is moving in that direction.
When we talk about regulating artificial intelligence, we are referring to the need to put in place rules and policies that define how AI must be developed, deployed, and controlled, so that its impact is positive and it does not harm consumers, the economy, or society at large.
At present, AI cannot yet be described as fully regulated, as there are still many open questions about its implementation and its impact on society. For example, there are concerns about the transparency of decisions made by AI systems, the protection of consumer data, and the civil and legal liability of companies that develop and deploy AI.
For this reason, there are several proposals for international regulation of artificial intelligence, such as the European Union's General Data Protection Regulation (GDPR) or the guidance on AI ethics from the Organisation for Economic Co-operation and Development (OECD), but there is still no comprehensive and consistent framework for global regulation of AI.
AI regulation needs to strike a balance: it must not hamper technological innovation, but at the same time it must mitigate the risks associated with AI's implementation and spread through society. In this way, regulation can help ensure that AI is developed and used responsibly and for the benefit of society.
For these reasons, the highest state offices and institutions are also trying to better understand the current state of AI.
UK Antitrust screens AIs
The Competition and Markets Authority (CMA), the UK's antitrust agency, regulates the market to prevent phenomena that are dangerous for consumers, such as monopolies that harm fair competition. It made headlines in recent weeks when it blocked Microsoft's acquisition of Activision Blizzard, seeing it as a threat to consumer choice.
Well, the CMA has now made a strong decision regarding artificial intelligence as well. It announced a review of "foundational artificial intelligence models," such as the large language models (LLMs) that power OpenAI's ChatGPT and Microsoft's new Bing. Generative AI models powering art platforms, like OpenAI's DALL-E or Midjourney, also likely fall within its scope.
The CMA's announced review represents an important development in AI regulation, particularly in relation to foundational AI models.
The CMA said its review will focus on competition and consumer protection in the development and use of these models, with the aim of understanding how they are evolving and producing an assessment of the conditions and principles that should best guide their development and future use.
What is interesting about this review is that it also covers generative AI models for artistic use, such as OpenAI's DALL-E and Midjourney. This means that the CMA is considering not only the impact of AI on consumers, but also its impact on other sectors such as art and copyright.
The impact of this review could be significant, especially for the companies developing these foundational AI models. The CMA could introduce stricter and more restrictive rules for their development and use, which could limit companies' innovation and competitiveness.
The President of the United States also wants to see things clearly
The President of the United States, Joe Biden, called a meeting with leaders of some of the largest technology companies in the United States, including Google, Microsoft, NVIDIA, and OpenAI, to discuss issues related to artificial intelligence and its regulation.
What worries the US government is the unethical use of AI. For example, ahead of the meeting between the Biden administration and industry leaders, the White House announced an increase in funding for the development of responsible artificial intelligence.
Measures include a $140 million investment from the National Science Foundation to create seven new National AI Research (NAIR) Institutes, increasing the total number of dedicated AI research institutes to 25 across the country. Google, Microsoft, NVIDIA, OpenAI, and other companies have also agreed to have their language models publicly evaluated. The Office of Management and Budget (OMB) also said it would release interim rules this summer outlining how the federal government should use AI technology.
The press release from the administration states:
These moves build on the administration's strong commitment to ensuring that technology improves the lives of the American people, and they open new avenues for the federal government's continued efforts to make coherent and comprehensive progress in addressing technology-related risks and opportunities.
In April, the Federal Trade Commission, the Consumer Financial Protection Bureau, the Department of Justice, and the Equal Employment Opportunity Commission issued a joint statement asserting that they already have the authority to prosecute companies whose AI products harm users.
In short, a veritable task force is mobilizing to get a clear picture of the current state of artificial intelligence.