ChatGPT confirms diagnosis of a 4-year-old child after 17 consultations

A four-year-old American child named Alex was found to have a rare disease after his family spent three years searching for the right diagnosis. But the one responsible for the discovery was not a doctor: it was ChatGPT, the artificial intelligence chatbot launched by OpenAI in November 2022.

The story became public after the child’s mother, Courtney, told it to the American news website Today last week. The family’s last name was omitted to protect the privacy of those involved, the report said.

Courtney says she has taken Alex to 17 different doctor’s appointments since 2021, when the boy began suffering from mobility issues, such as difficulty sitting cross-legged. For her, this was a “big trigger” that “there was something structurally wrong” with the boy.

In search of answers, the mother turned to ChatGPT, entering the results of all the examinations the boy had undergone into the chatbot. “I went line by line through everything that was in Alex’s (MRI) notes and entered them into ChatGPT,” Courtney told Today.

OpenAI’s chatbot then concluded that Alex suffered from tethered spinal cord syndrome, a rare disease that affects a child’s spinal cord and impairs development, according to Stanford University Children’s Hospital. Symptoms include back pain, weakness when walking, foot deformities and even constipation.

After receiving the alleged diagnosis, Courtney found a Facebook group of families whose children experienced symptoms similar to Alex’s. A doctor later confirmed the hypothesis put forward by ChatGPT.

Alex is now recovering after undergoing surgery to treat the syndrome, Today reports.

ChatGPT should not be relied on for diagnosis

Despite this success story, ChatGPT is not a reliable source for medical diagnoses and should be used with caution.

ChatGPT, like other artificial intelligence chatbots, is not an “oracle”: the technology generates answers based on the material on which it was trained, in this case billions of web pages, including Wikipedia, social networks and scientific articles.

ChatGPT can also produce what experts call “hallucinations”: answers that are incorrect or deviate from common sense even though they are written convincingly. Worse, there is no way to know when ChatGPT is hallucinating and generating false answers, which makes it an unreliable source for verifying information, particularly medical diagnoses.