AI “Godfather” Geoffrey Hinton is resigning from Google… and his reasons are alarming

The “Godfather of Artificial Intelligence” has sensationally quit Google, warning that the technology could turn life as we know it upside down.

Geoffrey Hinton, 75, is credited with developing the technology that has become the foundation of AI systems like ChatGPT and Google Bard.

But the Turing Award winner now says part of him regrets helping to build the systems, and worries they could fuel the spread of misinformation and replace humans in the workforce.

He said he had to come up with excuses like “if I didn’t build it, someone else would have” to keep from feeling overwhelmed with guilt.

He drew comparisons to the “father of the atomic bomb,” Robert Oppenheimer, who was reportedly disturbed by his invention and devoted the rest of his life to stopping its proliferation.

Geoffrey Hinton, 75, who is considered the “Godfather of Artificial Intelligence,” said a part of him now regrets having a hand in making the systems. He is pictured above speaking at a Thomson Reuters summit in Toronto, Canada in 2017

There’s a big AI divide in Silicon Valley. Smart minds are split over the advancement of the systems – some say it will improve humanity while others fear the technology will destroy it

Speaking to The New York Times about his resignation, he warned that AI would soon be flooding the internet with fake photos, videos and text.

This content would reach a point, he added, where the average person could no longer tell what is true.

The technology also poses a serious risk to jobs, he said, and could upend the careers of people working as paralegals, personal assistants and translators.

Some workers already say they use it to juggle multiple jobs at once, handling tasks like creating marketing collateral and transcribing Zoom meetings so they don’t have to sit through them.

“Maybe what’s going on in these systems is actually a lot better than what’s going on in the [human] brain,” he said, explaining his fears.

“The idea that this stuff might actually get smarter than humans — a few people believed that.

“But most people thought it was far away. And I thought it was far away. I thought it was 30 to 50 years or more away.

“Obviously I don’t think that anymore.”

When asked why he helped create a potentially dangerous technology, he said, “I console myself with the usual excuse: if I hadn’t done it, someone else would have.”

Hinton added that when confronted with this question in the past, he would paraphrase Oppenheimer: “When you see something that is technically sweet, you go ahead and do it.”

Hinton decided to leave Google last month after a decade at the tech giant amid the spread of AI technologies.

He had a lengthy conversation with Sundar Pichai, CEO of Google’s parent company Alphabet, before departing – although it’s not clear what was said.

In a broadside at his former employer, he accused Google of no longer being a “proper steward” of AI technologies.

In the past, the company kept potentially dangerous technology under wraps, he said. But it has now thrown caution to the wind as it competes with Microsoft, which added a chatbot to its Bing search engine last month.

Google’s chief scientist Jeff Dean said in a statement: “We remain committed to responsible use of AI. We are continually learning to understand emerging risks while boldly innovating.”

His warning comes as Silicon Valley plunges into a civil war over the advancement of artificial intelligence – with the world’s greatest minds divided over whether it will uplift or destroy humanity.

Fears over AI come as experts predict it will reach the singularity by 2045 – the point at which the technology surpasses human intelligence and we can no longer control it

Elon Musk, Apple co-founder Steve Wozniak and the late Stephen Hawking are among the most prominent critics of AI, believing it poses a “profound risk to society and humanity” and could have “catastrophic effects.”

Last month, Musk and Wozniak were among those calling for a pause in the “dangerous race” to introduce advanced AI, saying more risk assessments are needed.

But Bill Gates, Sundar Pichai and futurist Ray Kurzweil are on the other side of the debate, hailing the technology as the “most important” innovation of our time.

They argue it could cure cancer, solve climate change, and increase productivity.

Hinton has not weighed in on that debate, saying he did not want to comment publicly until he had officially left Google.

He rose to fame in 2012 when, at the University of Toronto, Canada, he and two students designed a neural network that could analyze thousands of photos and teach itself to identify common objects like flowers, dogs, and cars.

Google later spent $44 million to acquire the company Hinton founded based on this technology.

The release of AI bots like ChatGPT (stock image) has prompted calls in many quarters for the technology to be reviewed because of the risk it poses to humanity

Among the advanced AI systems already available is ChatGPT, which has drawn more than a billion visits since its release in November. Data shows it also has up to 100 million monthly active users.

Launched by San Francisco-based OpenAI, the platform has become an instant hit around the world.

The chatbot is a large language model trained on massive text data, enabling it to generate eerily human-like text in response to a given prompt.

The public uses ChatGPT to write research papers, books, news articles, emails, and other text-based works. While many see it as little more than a virtual assistant, some of the world’s brightest minds see it as the end of humanity.

AI is considered to have reached the singularity when it surpasses human intelligence and can think independently – the point at which humans lose control of it.

The AI would no longer need or listen to humans, allowing it to steal nuclear codes, create pandemics, and start world wars.

DeepAI founder Kevin Baragona, who signed the letter, said: “It’s almost like a war between chimpanzees and humans.

“Humans obviously win as we are much smarter and can use more advanced technology to defeat them.

“If we are like the chimpanzees, the AI will destroy us or we will be enslaved to it.”