“We shouldn’t regulate AI until we see significant harm,” says Microsoft’s chief economist, who notes that driver’s licenses were required only after deadly accidents

Calls to regulate the use of artificial intelligence are growing louder around the world. Voices are being raised in both Europe and the US calling for legislation to govern the use of this technology, and with its recent introduction of a generative-AI bill, China seems to be ahead of the game. Microsoft’s chief economist, however, suggests these initiatives are premature. “We shouldn’t regulate artificial intelligence until we see significant damage,” he said during a recent World Economic Forum panel.

“The first time we started requiring driver’s licenses was after dozens of people died in car crashes, and it was the right thing to do. If we had required driver’s licenses when the first two cars were on the road, that would have been a big mistake; we would have completely ruined that regulation. There has to be at least a little harm so that we can see what the real problem is,” he emphasizes.

Michael Schwarz thus favors regulating artificial intelligence on the basis of concrete facts rather than imaginary scenarios, an approach he says would produce rules that do not jeopardize the technology’s potential benefits.

Nevertheless, Microsoft’s economist warns of the damage AI could cause in the hands of malicious actors

The same Michael Schwarz believes artificial intelligence will be dangerous in the wrong hands: “It can do a lot of damage in the hands of spammers with elections and whatnot,” he says.

Generative artificial intelligence (AI) is a technology that makes it possible to create text, images, sound or video, either from existing data or from scratch. One example is ChatGPT, a tool developed by OpenAI that can produce coherent, natural-sounding responses to a wide variety of requests. ChatGPT is built on OpenAI’s GPT family of deep learning models, most recently GPT-4 (whose parameter count OpenAI has not disclosed), trained on a large amount of text from the web.
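To make this concrete, here is a minimal sketch of what querying such a model looks like in practice, using the OpenAI Python client as it existed in early 2023 (the pre-1.0 `openai` package); the API key and prompt are placeholders.

```python
# pip install "openai<1.0"  (the client interface in use when this was written)
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# A single-turn, ChatGPT-style request: the model returns a natural-language answer.
response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo"
    messages=[
        {"role": "user",
         "content": "Summarize the debate over regulating generative AI in two sentences."}
    ],
    temperature=0.7,  # sampling randomness; lower values are more deterministic
)

print(response["choices"][0]["message"]["content"])
```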

While this technology offers fascinating possibilities for creativity, education, entertainment and communication, it also poses significant risks to society and humanity. Here are some of those dangers:

  • Misinformation and manipulation: generative AI can be used to create false or misleading content such as fake news, fake testimonials, fake reviews or fake profiles. Such content can influence public opinion, democracy, elections or consumer behavior, and distinguishing true from false and verifying the sources of information becomes increasingly difficult.
  • Loss of authenticity and trust: generative AI can be used to imitate the voice, face or style of a real or fictional person without their consent or knowledge. These imitations can harm the privacy, reputation or identity of the people concerned, and can create confusion, suspicion or disappointment among those who receive the content. Knowing who is speaking, and to what end, becomes complex.
  • Ethics and responsibility: generative AI can be used to create objectionable, illegal or immoral content such as hate speech, incitement to violence, or pornographic and child sexual abuse material. Such content can harm the dignity, safety or well-being of the people targeted or exposed to it, and controlling, regulating or sanctioning these malicious uses is increasingly difficult.
  • Impact on human relationships: generative AI can be used to replace or simulate human interactions such as conversation, emotion, sentiment or advice. These interactions can have positive or negative effects on the mental health, personal development or social bonds of the people involved, and it is becoming ever harder to distinguish the human from the non-human and to preserve the authenticity and sincerity of relationships.

One illustration is the image that went viral on Twitter showing an elderly man with a bloodied face, apparently beaten by police. The context was the protests against the pension reform in France, and the man was supposedly a demonstrator arrested by the police.

Journalists and experts in detecting online fakes have examined the image. According to Guillaume Brossard, creator of the HoaxBuster site, and Agence France-Presse, there is little doubt: the image contains telltale flaws of software generation, probably by Midjourney. But the fact that it could plausibly have been real, and above all the difficulty of settling the question quickly, is cause for concern.
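One of the first, admittedly weak, checks such fact-checkers automate is inspecting a file’s metadata. The sketch below is a toy heuristic rather than any expert’s actual method; it uses the Pillow library, the file name is hypothetical, and the absence of camera EXIF data proves nothing on its own, since metadata is easily stripped or forged.

```python
# pip install Pillow
# Toy heuristic: genuine photos usually carry camera EXIF data (model, software),
# while images exported from AI generators often carry none. Weak evidence at best:
# metadata is trivially stripped or forged.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return the image's EXIF tags as a {name: value} dict (empty if none)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = exif_summary("viral_photo.jpg")  # hypothetical file name
if not tags:
    print("No EXIF metadata: consistent with, but no proof of, a generated image.")
else:
    print("Camera:", tags.get("Model"), "| Software:", tags.get("Software"))
```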

For this reason, experts are asking all AI labs to pause the training of AI systems more powerful than GPT-4

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its Generative Pre-trained Transformer (GPT) program, which wowed users with its wide range of applications, from engaging in human-like conversation to composing songs and summarizing lengthy documents.

The letter, published by the nonprofit Future of Life Institute and signed by more than 1,000 people including Elon Musk, calls for a pause in the development of advanced AI until shared safety protocols for such systems have been developed, implemented and audited by independent experts.

“Powerful AI systems should only be developed when we are sure that their impact is positive and their risks are manageable,” the letter says.

The letter details the potential risks to society and civilization posed by human-competitive AI systems, in the form of economic and political disruption, and urges developers to work with policymakers on governance and regulation.

Among the co-signers are Emad Mostaque, CEO of Stability AI, researchers at Alphabet-owned DeepMind, Yoshua Bengio, often described as one of the “godfathers of AI”, and Stuart Russell, a pioneer of research in the field.

According to the European Union Transparency Register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as the London-based Founders Pledge group and the Silicon Valley Community Foundation.

These concerns come as Europol, the European Union’s police agency, joined the chorus of ethical and legal concerns over advanced AI like ChatGPT on Monday, March 27, 2023, warning of the potential misuse of such systems in phishing attempts, disinformation and cybercrime.

At the same time, the UK government has put forward proposals for an adaptable regulatory framework around AI. The approach, outlined in a recent policy paper, would split responsibility for governing AI among its existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as extensive research shows and as leading AI labs acknowledge. As the widely endorsed Asilomar AI Principles state, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, that level of planning and management does not exist, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict or control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might one day outnumber us, outsmart us, make us obsolete and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. That confidence must be well justified and grow with the magnitude of a system’s potential effects. OpenAI’s recent statement on artificial general intelligence notes: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

We therefore call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever larger, unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing capacity; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
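To make the watermarking idea in that list concrete, here is a toy sketch in the spirit of the “green list” statistical watermark proposed by Kirchenbauer et al. (2023). Everything in it (the 12-word vocabulary, the hashing scheme, the detection statistic) is an illustrative assumption, not a scheme any lab actually ships: the generator restricts itself to a pseudo-random “green” half of the vocabulary keyed on the previous token, and a detector flags text whose green fraction is improbably high.

```python
# Toy "green list" text watermark (after Kirchenbauer et al., 2023).
# Illustrative only: a real scheme biases a language model's next-token logits
# using a secret key; here a tiny fixed vocabulary stands in for the model.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat",
         "fast", "slow", "big", "small"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Deterministically derive the "green" half of the vocabulary from a hash
    # of the previous token (standing in for a secret key plus token id).
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(n_tokens: int, seed: int = 0) -> list:
    # A watermarking generator samples only from the green list at each step.
    rng = random.Random(seed)
    out = ["the"]
    for _ in range(n_tokens):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out

def green_fraction(tokens: list) -> float:
    # Detector: what fraction of tokens fall in their predecessor's green list?
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

marked = generate_watermarked(200)
unmarked = [random.choice(VOCAB) for _ in range(200)]
print(f"watermarked text: {green_fraction(marked):.2f}")  # ~1.0, flagged synthetic
print(f"unmarked text:    {green_fraction(unmarked):.2f}")  # ~0.5, chance level
```

In a real deployment the green list would be keyed on a secret held by the lab and its auditors, and detection would use a proper statistical test rather than a raw fraction.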

Thanks to AI, humanity can enjoy a flourishing future. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and we can do the same here. Let’s enjoy a long AI summer, and not rush unprepared into a fall.

Source: WEF video

And you?

What is your opinion on the topic?

Do you agree with Michael Schwarz?

See also:

Are unfriendly AIs the greatest risk to humanity? Yes, according to the creator of Ethereum, who anticipates a future catastrophe caused by artificial general intelligence

“The development of AI without regulation poses an existential threat to humanity,” said Elon Musk, whose company is developing brain chips to counter AI

Google DeepMind researchers co-author an article arguing AI could wipe out humanity, reigniting debates about the possibility of a machine-dominated future

Will the creation of AI bring about the apocalypse? Entrepreneur Elon Musk considers AI the greatest threat to humanity