To protect scientific research from the clutches of AI, experts suggest a “simple” solution

The quality of future scientific research could deteriorate as generative AI becomes more widespread. That, at least, is what some researchers fear, pointing to the risks these technologies carry, in particular the errors they still produce far too often. Researchers at the University of Oxford, however, propose a solution: using LLMs (large language models) as “zero-shot translators.” In their view, this approach could enable the safe and effective use of AI in scientific research.

In an article published in the journal Nature Human Behaviour, researchers at the University of Oxford raise concerns about the use of large language models (LLMs) in scientific research.

These models can generate erroneous answers, which can undermine the reliability of studies and even spread false information by producing incorrect study data. Furthermore, science has always been described as an intrinsically human activity, built on curiosity, critical thinking, the creation of new ideas and hypotheses, and the creative combination of knowledge. The prospect of “delegating” all of these human aspects to machines is a cause for concern in scientific communities.

The Eliza Effect and Overreliance on AI

The Oxford scientists cite two main reasons why users come to over-rely on language models in scientific research. The first is the tendency of users to attribute human qualities to generative AI. This recurring phenomenon, known as the “Eliza effect,” leads users to subconsciously treat these systems as understanding, empathetic, even wise.

The second reason is that users may place blind trust in the information these models provide. Yet despite recent advances, AIs can still produce incorrect data and offer no guarantee that their answers are accurate.

Additionally, the study’s researchers note, LLMs often provide answers that sound convincing whether they are true, false, or inaccurate. For certain questions, for example, the AI would rather give an incorrect answer than reply “I don’t know,” because it has been trained to satisfy users and, above all, to predict a plausible sequence of words in response to a query.

All of this obviously calls into question the very usefulness of generative AI in research, where the accuracy and reliability of information are crucial. “Our tendency to anthropomorphize machines and trust models as if they were human-like soothsayers, thereby consuming and disseminating the bad information they produce, is particularly worrying for the future of science,” the researchers write in their paper.

“Zero-shot” translation as a solution to the problem?

The researchers nonetheless suggest a safer way to incorporate AI into scientific research: “zero-shot translation.” With this technique, the AI operates on input data that is already considered reliable.

In this case, instead of generating new or creative answers, the AI focuses on analyzing and reorganizing that information. Its role is therefore limited to manipulating the data without introducing new information.

With this approach, the system is no longer used as a vast repository of knowledge, but as a tool for manipulating and reorganizing a specific, reliable data set in order to learn from it. Unlike the usual use of LLMs, however, this technique requires a deeper understanding of AI tools and their capabilities, and, depending on the application, of programming languages such as Python.
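
To make the distinction concrete, here is a minimal Python sketch of the pattern, assuming a generic `call_llm` helper. The helper, the sample records, and the CSV-conversion task are all hypothetical, chosen only to illustrate the idea of constraining the model to reorganize trusted input rather than draw on its own training data; they are not taken from the Oxford paper.

```python
# Minimal sketch of the "zero-shot translation" pattern: the model is asked
# only to re-express trusted input, never to add knowledge of its own.
# NOTE: `call_llm`, the sample records, and the CSV task are hypothetical,
# invented for illustration.

TRUSTED_RECORDS = """\
sample_id=A1; treatment=control;    yield_g=12.4
sample_id=A2; treatment=fertilized; yield_g=15.1
sample_id=A3; treatment=fertilized; yield_g=14.8
"""


def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to whatever LLM client you use."""
    raise NotImplementedError("plug in an LLM client here")


def records_to_csv(records: str) -> str:
    """Ask the model to act as a format converter over the given records."""
    prompt = (
        "You are a format converter. Using ONLY the records below, output "
        "them as CSV with the header sample_id,treatment,yield_g. "
        "Do not add, infer, or correct any value.\n\n"
        f"Records:\n{records}"
    )
    return call_llm(prompt)


# Knowledge-base style use the article warns against (for contrast):
#   call_llm("Which fertilizer dose maximizes crop yield?")
# Zero-shot-translation style use: every line of the output can be checked
# against the trusted input, so the model cannot introduce new "facts".
# print(records_to_csv(TRUSTED_RECORDS))
```

The design choice is simply where the information comes from: in the second style of use, the model only reshapes data the researcher already trusts, so its output can be verified line by line against the input.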

For a better understanding, we asked one of the researchers directly to explain the principle in more detail. According to him, using LLMs to convert precise information from one form to another, without special training for the task, offers first of all the following two advantages:

Source: Nature Human Behaviour