AI can also produce misleading data

Italian researchers used ChatGPT to create a fake data set for a medical study, to see whether it could generate something convincing. The result, unsurprisingly: yes, as long as the output is never put in front of a real expert.

We already knew that "generative artificial intelligence" could produce convincing research summaries. It is therefore no surprise that it can produce a data series in under a minute. The robot's great weakness: it was unable to apply any… critical thinking.

An expert in the field could indeed see that the data lacked authenticity, as the authors of this experiment, which compared two supposed eye surgery procedures, wrote in the journal JAMA Ophthalmology.

But an ordinary citizen who knows nothing about clinical trials, or about statistics, and who above all wanted to see in these data "proof" of their favorite beliefs, would have been none the wiser.

The journal Nature asked Jack Wilkinson, a British biostatistician at the University of Manchester who specializes in spotting dubious data, to review the document. It contains numerous errors revealing that the robot does not really understand what it is doing: several participants were assigned a sex that did not seem to match their first names; there was no correlation between the vision measurements taken before and after the supposed operation; and an unusually high number of patients had ages ending in 7 or 8. In short, "obvious signs" that the data were "made up".
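Two of those red flags can be screened for automatically. Below is a minimal Python sketch, assuming the fabricated data set were loaded as a table; the column names (age, pre_op_acuity, post_op_acuity) and the synthetic data are hypothetical illustrations, not taken from the actual study. Real patient ages should have roughly uniform terminal digits, and measurements taken on the same patients before and after surgery should normally be correlated.

```python
# Sketch of two forensic checks of the kind Wilkinson describes.
# All column names and data here are synthetic, for demonstration only.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for a fabricated data set:
# ages that pile up on 7 and 8, and post-op values drawn
# independently of the pre-op values.
df = pd.DataFrame({
    "age": rng.choice([27, 38, 47, 58, 67], size=200),
    "pre_op_acuity": rng.normal(0.4, 0.1, size=200),
    "post_op_acuity": rng.normal(0.8, 0.1, size=200),
})

# Check 1: terminal digits of age should be roughly uniform in real data.
last_digits = df["age"] % 10
observed = last_digits.value_counts().reindex(range(10), fill_value=0)
chi2, p_digits = stats.chisquare(observed)
print(f"terminal-digit uniformity: chi2={chi2:.1f}, p={p_digits:.2g}")

# Check 2: pre- and post-operative measurements on the same patients
# are normally correlated; a near-zero correlation is a red flag.
r, p_corr = stats.pearsonr(df["pre_op_acuity"], df["post_op_acuity"])
print(f"pre/post correlation: r={r:.2f}, p={p_corr:.2g}")
```

On this synthetic example, the digit test rejects uniformity outright and the pre/post correlation is close to zero, exactly the kind of pattern that tipped off the reviewer.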

But not everyone is an expert, and even among experts, not everyone will take the time to look closely at research data.

The manipulation of data by unscrupulous researchers has always been a problem in research, but the issue may soon take on a whole new scale, notes Italian ophthalmologist Giuseppe Giannaccare of the University of Cagliari, lead author of the "experiment".
