
According to a study reported by La Tribune, ChatGPT diagnoses emergencies “as well” as a doctor

However, the authors of the study published on Wednesday stressed that the days of emergency doctors are not yet numbered, as the chatbot, while capable of speeding up diagnosis, cannot replace a human’s judgment and experience.

Thirty cases treated in an emergency department in the Netherlands in 2022 were reviewed: the researchers fed ChatGPT the patients’ histories, laboratory results and doctors’ observations, and asked the chatbot to suggest five possible diagnoses.

In 87% of cases, the correct diagnosis appeared in the doctors’ shortlist, compared with 97% for version 3.5 of ChatGPT.

The chatbot “was able to carry out medical diagnoses, similar to what a human doctor would have done,” summarized Hidde ten Berg from the emergency room at the Jeroen Bosch Hospital in the south of the Netherlands.

Maintaining confidentiality

The study’s co-author, Steef Kurstjens, emphasized that it does not conclude that computers could one day run emergency rooms, but rather that AI could play a crucial role in helping doctors under pressure.

The chatbot “can help make a diagnosis and may be able to suggest ideas that the doctor hasn’t thought of,” he told AFP.

He noted, however, that such tools are not designed to be medical devices, and also raised concerns about the confidentiality of sensitive medical data being fed into a chatbot.

And, as in other fields, ChatGPT has run into some limitations.

Its reasoning is “at times medically implausible or inconsistent, which can lead to misinformation or incorrect diagnoses with significant consequences,” the study says.

The scientists also acknowledge some shortcomings in their research, such as the small sample size.

Furthermore, only relatively simple cases were studied, with patients presenting with only one chief complaint. The effectiveness of chatbots in complex cases is unclear.

“Slip”

Sometimes ChatGPT failed to include the correct diagnosis among its five suggestions, Kurstjens said, notably in a case of abdominal aortic aneurysm, a potentially life-threatening swelling of the aorta.

Some consolation for ChatGPT: in that case, the doctor got it wrong too.

The report also points out medical “errors” made by the chatbot, such as diagnosing anemia (low blood hemoglobin levels) in a patient with normal hemoglobin levels.

The results of the study, published in the journal Annals of Emergency Medicine, will be presented at the European Congress of Emergency Medicine (EUSEM) 2023 in Barcelona.