Is AI really ready for our society? A former Google engineer answers – La Nouvelle Tribune

Last summer, Blake Lemoine, a former Google engineer and AI ethicist, caused a stir by claiming that Google's large language model LaMDA was conscious. Lemoine's statement sparked debate about the sentience of cutting-edge language models and shed light on the ethical challenges facing the AI industry. In the wake of the controversy, Lemoine shared his thoughts on the state of the AI industry and the challenges ahead in terms of ethics, transparency, and interpretability in an interview with the website Futurism.

According to Lemoine, the debate over AI sentience can be a distraction. He believes we could use the tools of scientific inquiry developed by psychology to better understand the cognition of AI systems and to build more controllable and interpretable models. Lemoine also emphasizes the importance of focusing on the transparency and interpretability of models rather than bickering over the terms used to describe how they work.

Technology companies such as Google and OpenAI have made great strides in developing sophisticated language models. However, according to Lemoine, these companies have also put considerable time and effort into keeping their models safe and making sure they don't generate problematic or biased content. For example, two years ago Google could have released a more advanced version of its LaMDA model, but chose to invest additional time in its safety.

Despite these advances, Lemoine warns of the potential dangers of AI, especially on an internet awash in synthetic content. Crooks and scammers could exploit language models to generate deceptive content and carry out more sophisticated scams. Lemoine therefore calls for stricter regulation and a better understanding of AI systems in order to minimize these risks.