
Ex-Google executive who ‘fears AI excesses’ accused of misplaced opportunism

Did Geoffrey Hinton express his regrets a little too quickly? Several ex-Google employees accuse the former engineer of opportunism, saying he stayed silent when criticism of AI was raised internally at Google.

There is sometimes more than one side to the truth. On May 1, Geoffrey Hinton, one of the pioneers of AI and a neural network specialist, said in an interview with The New York Times that he had left Google to speak more freely about the dangers of artificial intelligence. He added that he regretted some of his work and consoled himself with “the usual excuse: if I hadn’t done it, someone else would have”.

Geoffrey Hinton then stated that he feared abuses such as disinformation. Worse, the engineer feared a catastrophic scenario worthy of a sci-fi movie, in which machines would surpass human intelligence (the arrival of an AGI), with serious repercussions for society: a trope that has been widely criticized since.

A career with blinkers

According to his former acquaintances at Google, Geoffrey Hinton only recently became aware of the ethical problems associated with AI. When Timnit Gebru was fired from Google in 2020 after publishing a scientific study on the bias of artificial intelligence systems, Hinton reportedly remained silent.

Timnit Gebru, fired by Google in 2020. // Source: Kimberly White

The study in question discussed the environmental and financial costs, as well as the “stereotyping, denigration, increase in extremist ideology, and wrongful arrests” that large language models (LLMs) can induce.

A late awakening criticized on Twitter by Margaret Mitchell, once co-lead of Google’s AI ethics team, who was fired shortly after the affair: “This would have been the time for Dr. Hinton to denormalize the firing of Timnit Gebru (not to mention those who have recently followed). He did not do so.”

She added: “This is how systemic discrimination works. People in positions of power normalize it. They practice discrimination, they watch their peers practice it, they say nothing and move on.”

Making AI’s current harms invisible

Worse, interviewed on CNN on May 5, Geoffrey Hinton continued to downplay the facts uncovered by Timnit Gebru: her ideas, he said, are not “as existentially serious” as his own theory of machines dominating humans.

Minorities are sometimes made invisible by LLMs. // Source: Gerd Altmann / Pixabay

“It’s amazing that someone can say that the harms [from AI] happening now, which are felt most strongly by historically marginalized people such as Black people, women, people with disabilities and casual workers, are non-existential,” said Meredith Whittaker, president of the Signal Foundation and an AI researcher, as quoted by Fast Company.

Meredith Whittaker was herself ousted from Google in 2019, reportedly after she opposed a contract with the US military to provide AI technology for drones. Whittaker criticizes Hinton for continuing to discredit criticisms other than his own.

Misplaced opportunism or erasure of his colleagues’ work? More than ever, Geoffrey Hinton’s recent statements need to be put into perspective.
