
How do we know we have achieved a form of artificial general intelligence? For some AI experts, this will be the case if the machine has human-level cognition – Developpez.com

Achieving artificial general intelligence (AGI) is the ultimate goal of AI companies. But while AI experts try to predict when this will happen, almost none of them can clearly answer the question: how will AGI manifest itself? The answer seems as though it should be simple, but not everyone agrees. According to some technologists and AI experts, people are unlikely to notice that they are interacting with an AGI. Another group believes we will be in the presence of AGI when a machine has human-level cognition. The very idea of AGI remains controversial within the community.

To put it simply, artificial general intelligence is a hypothetical form of artificial intelligence in which a machine can learn and think like a human. For this to be possible, an AGI would first need some form of consciousness and self-awareness, allowing it to solve problems, adapt to its environment, and perform a far wider range of tasks. If AGI (also called "strong AI") sounds like science fiction, that is because, for now, it still is: existing forms of AI have not yet reached the level of AGI, but AI researchers and companies are still working to make it a reality.

So-called weak AI, by contrast, is trained on data to perform a specific task or a set of tasks limited to a single context. Many forms of AI rely on algorithms or pre-programmed rules to guide their actions and learn to operate in one particular environment. An AGI, on the other hand, should be able to understand and adapt to new environments and to different types of data. Instead of depending on predefined rules, an AGI would take a general problem-solving and learning approach. According to experts, an AGI should have the same thinking abilities as humans.

Weak, or narrow, AI is the type of AI that powers autonomous vehicles, image generators, and chatbots; in other words, AI that performs a limited range of tasks. Two subgroups fall into the weak AI category: reactive machines and limited-memory machines. Reactive machines can respond to immediate stimuli but cannot store or learn from memories of past actions. Limited-memory machines can store previous information to improve their performance over time, and they represent the majority of the AI tools available today.
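To make this distinction concrete, here is a minimal, hypothetical Python sketch. The class names, rule table, and reward mechanism are invented for illustration and do not come from any real AI system: a reactive agent maps each stimulus directly to an action and keeps no state, while a limited-memory agent records recent outcomes and uses them to adjust its behavior.

```python
from collections import Counter, deque


class ReactiveAgent:
    """Responds only to the immediate stimulus; stores nothing."""

    def __init__(self, rules):
        self.rules = rules  # pre-programmed stimulus -> action table

    def act(self, stimulus):
        return self.rules.get(stimulus, "idle")


class LimitedMemoryAgent(ReactiveAgent):
    """Keeps a bounded window of past outcomes to bias future actions."""

    def __init__(self, rules, memory_size=50):
        super().__init__(rules)
        self.memory = deque(maxlen=memory_size)  # (stimulus, action, reward)

    def record(self, stimulus, action, reward):
        self.memory.append((stimulus, action, reward))

    def act(self, stimulus):
        # Prefer the action with the best average reward for this stimulus
        # in recent memory; otherwise fall back to the fixed rule table.
        totals, counts = Counter(), Counter()
        for s, a, r in self.memory:
            if s == stimulus:
                totals[a] += r
                counts[a] += 1
        if counts:
            return max(counts, key=lambda a: totals[a] / counts[a])
        return super().act(stimulus)


# The reactive agent answers "brake" forever; the limited-memory agent
# drifts toward whichever action its recent experience rewarded most.
rules = {"obstacle": "brake"}
reactive = ReactiveAgent(rules)
learner = LimitedMemoryAgent(rules)
learner.record("obstacle", "swerve", reward=1.0)
learner.record("obstacle", "brake", reward=0.2)
print(reactive.act("obstacle"))  # -> "brake"
print(learner.act("obstacle"))   # -> "swerve"
```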

AGI, however, would blur the line between human intelligence and machine intelligence, and there are three main opinions on the subject. Some AI experts believe we will see AGI when machines exhibit human-level cognition. That claim is controversial among other experts, who believe the manifestation of AGI will be implicit and that it will be difficult to prove that any given form of AI is an AGI. A third group rejects both arguments, saying AGI is simply unachievable. In short, experts are divided even on the concept of artificial general intelligence itself.

One commenter put it this way: "AGI is not a defined line with existence on one side and non-existence on the other. It is a subjective state of AI that I believe will evolve gradually, along a spectrum. Some people will think it exists, others won't, and opinion will shift gradually until those who believe it exists outnumber those who don't." Another wrote: "We will have reached AGI when the capabilities of machines are far superior to those of humans in many areas. AGI will never 'exist' because by the time it is here, it will already be superior to humans."

"This is not a binary thing. Transformers are already doing low-level things when it comes to AGI. AGI will likely progress gradually, with occasional breakthroughs producing above-average jumps. There is no evidence that AGI will go from 1 to 1,000 overnight. I don't even think transformers are capable of human-level AGI, and I don't know of any architecture that would allow it. So I won't be betting on human-level AGI any time soon," another commenter wrote. Even leading AI experts seem to be divided on the issue.

After the release of GPT-4, a team of Microsoft scientists claimed in a research paper that OpenAI's technology exhibited "sparks" of AGI: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an AGI." However, both this claim and the method the team used to reach its conclusion have been the subject of much controversy. But how will AGI manifest itself when it becomes a reality? Experts predict its arrival, but rarely say how we will know it is here:

Sam Altman: co-founder and CEO of OpenAI

In an interview with AI researcher and podcast host Lex Fridman last March, Altman said that while rapid progress is being made in AI, the timeline for AGI remains uncertain. He emphasized the importance of discussing and addressing the possibility that AGI poses an existential threat to humanity, advocating the discovery of new techniques to mitigate potential threats and the careful working-through of hard problems in order to learn early and limit high-risk scenarios. Altman then asked his interlocutor whether he believed GPT-4 was an AGI, to which Fridman replied:

"I think it's like with the UFO videos: we wouldn't know right away. It's hard to know, when I think about it. I've been playing with GPT-4 and wondering how I would tell whether it is an AGI or not. In other words, how much of this comes down to the interface I have with the thing, and how much wisdom is actually inside it? Part of me thinks we might have a model with 'superintelligence' in it that we just haven't fully unlocked yet. I saw this with ChatGPT." Altman then spoke about the potential dangers of AGI, as well as its benefits.

Altman wrote on the Reddit forum r/singularity this week that his company had developed human-level AI, but he immediately walked the claim back, clarifying that the product developed by OpenAI only "mimics" human intelligence. "Of course it's just a meme; don't worry, when AGI is achieved, it won't be announced by a comment on Reddit," he said.

Geoffrey Hinton: Turing Award winner and former Googler

Geoffrey Hinton is a Canadian researcher specializing in AI, and in particular in artificial neural networks. A former member of the Google Brain team, he left his position at Alphabet in order to speak out about the risks of AI. After his departure, he gave his estimate of when AI might surpass human intelligence: "I now predict five to 20 years, but without much confidence. We live in very uncertain times. It's possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, and that's why we should be worried now," he said in May.

Ray Kurzweil: Author, researcher and futurologist

Ray Kurzweil, the famous American futurist and researcher, has made many predictions over the years, and some have proven remarkably accurate. At SXSW 2017 in Austin, Texas, Kurzweil predicted that computers would reach human-level intelligence by 2029: "That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that's not just a future scenario. It's here, in part, and it's going to accelerate."

Ben Goertzel: CEO of SingularityNET and chief scientist of Hanson Robotics

Ben Goertzel, a controversial figure in technology circles, helped popularize the term AGI, and he tends to make bold statements about the future of technology. He added a few more at a conference in 2018: "I don't think we need fundamentally new algorithms. I think we need to connect our algorithms together differently than we do today. If I'm right, then we already have the basic algorithms we need. I believe we are less than ten years away from creating human-level AI."

But Goertzel added a line suggesting the prediction was at least partly a joke: "It will happen on December 8, 2026, my 60th birthday. I'm going to hold the event off until then so I can throw a big birthday party," he added.

John Carmack: Computer engineer and developer of Doom

John Carmack believes AGI could be achieved by 2030 and has entered into a partnership with a research institute in Alberta to accelerate its development. Carmack shared his views at an event announcing the hiring of Richard Sutton, chief scientific adviser of the Alberta Machine Intelligence Institute, at Keen Technologies, Carmack's AGI startup. Sutton believes it is not impossible to program an AGI with current techniques and sees 2030 as a possible target date for an AI prototype that shows signs of consciousness.

Yoshua Bengio: Professor of Computer Science at the University of Montreal

Yoshua Bengio is also a Turing Award winner. Like his friend and colleague Yann LeCun, a fellow Turing Award recipient, Bengio prefers the term "human-level intelligence" to "artificial general intelligence." In any case, he is skeptical of predictions about its arrival: "I don't think it's plausible that we would know when, in how many years or decades, human-level AI will be achieved," Bengio said.

Demis Hassabis: CEO of Google DeepMind

Demis Hassabis built Google DeepMind (formerly DeepMind), headquartered in London, into one of the world's leading AI labs, with the development of AGI as its central mission. He defines AGI as "human-level cognition" and said earlier this year: "The progress made in the last few years has been pretty incredible. I see no reason why that progress is going to slow down. I think it may even accelerate. I think we could be just a few years, maybe within a decade, away." He also shares the view that AGI poses an existential threat to humanity.

And you?

What is your opinion on this topic?

What do you think of the term AGI and of the controversies surrounding it?

What do you think of the predictions about when the first form of AGI will arrive? Are they realistic?

How will we know we have achieved some form of AGI? Does one already exist?

Do current AI systems point to the imminent arrival of some form of AGI? Why?

Do you share the view that some form of AGI will never be achieved? Why?

See also

Doom developer John Carmack believes artificial general intelligence (AGI) is feasible by 2030 and is partnering with a research institute in Alberta to accelerate its development

Microsoft claims that GPT-4 exhibits sparks of artificial general intelligence: "We believe that GPT-4's intelligence signals a real paradigm shift"

According to Geoffrey Hinton, a pioneer in artificial intelligence research, the threat that AI poses to the world may be "more urgent" than climate change