According to Professor Jean-Emmanuel Bibault, oncologist and researcher specializing in these technologies, AI is gradually becoming an indispensable tool for doctors.
The entry of artificial intelligence (AI) into the world of medicine is as fascinating as it is frightening, as the field of possibilities seems infinite and still difficult to define. As Professor Jean-Emmanuel Bibault admitted during an event organized in Geneva (Switzerland): “We do not yet know a tenth of the possibilities that AI offers in healthcare.”
In his small office at the Georges-Pompidou European Hospital in Paris hangs a poster of this oncologist’s favorite film, 2001: A Space Odyssey, which inspired the title of his book, 2041: A Medical Odyssey (Equateurs, January 2023). In Stanley Kubrick’s film, HAL 9000, the artificial intelligence piloting the spacecraft, kills the humans aboard. In his book, intended for the general public, the researcher specializing in AI examines all the possibilities that this new technology could offer in the near future. He also raises the ethical questions that must accompany all this progress, questions more pressing than you might think, because AI is already being used in many hospital departments.
Franceinfo: How are doctors using artificial intelligence today? What does it bring to medicine?
Jean-Emmanuel Bibault: Currently, artificial intelligence can take on many tasks, especially in radiation therapy. For example, before administering radiation treatments [to treat a tumor or a cancer], you must program the machine to aim at the right spot. To do this, we carry out a CT scan of the patient and draw a three-dimensional image of the tumor to be destroyed and of the surrounding organs, in order to protect them as much as possible.
This step, which we call contouring, can take two to three hours, even half a day in complex cases. We now have software based on deep learning, that is, learning with deep neural networks, that can do this contouring in two or three minutes. For now, these results are still verified by a human, as some are not yet perfect. But in the years to come, these imperfections will diminish and we will need to make fewer and fewer corrections.
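The human-verification step he describes, comparing an AI-generated contour against a clinician’s correction, is commonly quantified with an overlap metric such as the Dice coefficient. A minimal sketch (the voxel sets and the 0.95 review threshold are illustrative assumptions, not details from the interview):

```python
# Compare an AI-generated contour mask against a clinician-corrected one
# using the Dice similarity coefficient (1.0 = perfect overlap).
def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as sets of voxels."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel sets: the AI contour and the human-corrected contour.
ai_contour = {(x, y) for x in range(10) for y in range(10)}   # 100 voxels
corrected  = {(x, y) for x in range(1, 10) for y in range(10)}  # 90 voxels

score = dice_coefficient(ai_contour, corrected)
print(f"Dice score: {score:.3f}")  # high overlap, but not perfect

# A center might flag contours below some threshold for manual correction.
needs_review = score < 0.95
```

Scores near 1.0 would correspond to the “fewer and fewer corrections” he anticipates.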
How much do these technologies cost?
It varies greatly from solution to solution. Unlike traditional machines, the price of AI currently depends on the number of patients treated. As a rule, it is several tens of thousands of euros. That may seem like a lot, but it really isn’t, compared to the cost of running a radiation therapy department. For example, a scanner is worth millions of euros, not counting the annual maintenance contract, since all these machines are checked very regularly. This is also one of the reasons why the centers work with artificial intelligence.
We understand the interest that AI has for doctors and hospitals. And for the patient?
For the patient, AI brings consistent quality wherever it is deployed. To take the example of contouring again: thanks to AI, no matter where the patient is treated, the result will always be the same. In addition, AI will provide better dosimetry [the measurement of the dose of ionizing radiation that an object or a person receives], because it is more accurate than humans. This will potentially mean fewer side effects and better efficacy against the cancer.
This is in line with what you emphasize in your book, which is that these results, which are increasingly precise, will force doctors to hone their skills to verify that the AI is not making a mistake…
That is what one should hope for and advocate. However, we must remember that if young doctors rely heavily on AI to create these contours, they will lose the habit over time. This skill could therefore be lost from generation to generation if we are not careful. Relying on AI alone must not mean that no one is left who can check that the job is being done well. It’s like learning basic arithmetic in elementary school and forgetting how to add or subtract once you get a calculator.
Listening to you, we get the impression that the AI is faster than us and that it already knows how to do a lot of tasks…
Yes, but we have to keep in mind that AI remains a human-made technology. At this stage, an AI cannot ask itself questions or come up with a diagnosis on its own.
“Some algorithms can do things that humans don’t know how to do and never will, like predicting the risk of developing a disease in 10 or 15 years. Or, for someone who is ill, predicting the chances of recovery at five, ten or fifteen years. Even the best experts don’t know how to do that.”
Jean Emmanuel Bibault
at franceinfo
Are these predictions already being used or are they still in clinical trials?
Currently, these algorithms exist at the stage of translational research [which brings together doctors and researchers to develop medical applications or, in the other direction, can guide scientists starting from a clinical observation]. But it is difficult to get these models out of the computer and evaluate them. If I develop an algorithm that predicts the risk of diabetes in 10 years, for example, and I want to see whether it works, I have to run a whole protocol and then wait for that time to elapse. In the best-case scenario, we wouldn’t have an answer for ten years.
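To make the idea of such a risk algorithm concrete: many of these models reduce to scoring a patient’s features and mapping the score to a probability, for instance with a logistic function. A toy sketch, in which the features, coefficients, and intercept are entirely invented for illustration (a real model would be learned from large cohorts and, as he notes, validated prospectively):

```python
import math

# Toy logistic model: probability of developing a disease within 10 years.
# COEFFS and INTERCEPT are made up for illustration only.
COEFFS = {"age": 0.04, "bmi": 0.08, "fasting_glucose": 0.03}
INTERCEPT = -8.0

def ten_year_risk(patient):
    # Linear score over the features, then a logistic link to get a probability.
    z = INTERCEPT + sum(COEFFS[k] * patient[k] for k in COEFFS)
    return 1 / (1 + math.exp(-z))

patient = {"age": 55, "bmi": 31, "fasting_glucose": 110}
risk = ten_year_risk(patient)
print(f"Estimated 10-year risk: {risk:.1%}")
```

The validation problem he raises is exactly that checking such a probability against reality requires following real patients for the full ten years.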
Beyond these validation difficulties, imagine that such an AI works and can make these types of predictions with near-100% certainty. Is the prediction usable from a medical point of view? If I know I have a very high risk of developing colorectal cancer in 10 years, can I use this information to adjust my behavior and reduce that risk? Or will that risk remain whatever I do? It is not obvious that having this information is a good thing, because it can have serious psychological consequences for quality of life.
There is a second, more troubling question. To illustrate it, I’ll draw a parallel with the film Minority Report, in which a police department arrests people before they even commit a crime. In medicine, the logic is the same. If one day we can predict a disease ten years in advance and it doesn’t happen, you will have lived for ten years with that sword of Damocles hanging over your head. Was the algorithm wrong? Or did you adjust your behavior and reduce the risk? Nobody will ever be able to answer these questions.
In your book you also question the training of AIs. You cite the case of an application developed in dermatology to detect anomalies from photographs taken by the patients themselves. The study shows that it is more effective than dermatologists overall, but not on black skin…
This example illustrates the fact that we need to be very vigilant about the biases we ourselves inject into the AIs through the data or algorithmic methods we use. Sometimes we recognize these biases, as is the case in this experiment. In other cases, however, there is also a risk that we are not even aware of it and use tools that lead to poor results.
There is also the issue of cybersecurity. Let’s imagine that tomorrow a high-profile personality is to be operated on by a fully automated AI. How can we make sure it won’t be hacked for malicious purposes?
When I raise this question, many people think it’s science fiction. It is not. A study was published in the journal Science on what are called adversarial attacks [or contradictory attacks], which consist in creating completely artificial images that have no special properties to the naked eye, but that produce an incorrect result when analyzed by an artificial intelligence. I often give the example of an image of a panda that the AI correctly recognizes as such. I can create an image that seems featureless to the human eye; when I add it to the panda image, the AI no longer recognizes a panda but a gibbon.
This technique could be reproduced in radiology, on CT scans or MRIs. Imagine that one day we rely only on systems that automatically interpret images at very high speed, analyzing 50 patients per hour with no one monitoring what the AI is doing. In that case, you could theoretically distort all the results with the same kind of overlay as in the panda example, in a cyberattack. In the United States, for example, where the financial stakes of healthcare are higher than in France and Europe, a group of hackers could, by producing false results, get reimbursed for hundreds of treatments costing millions of dollars that never actually took place. This may sound completely crazy, but it is not.
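The panda-to-gibbon effect he describes follows a well-known recipe, the fast gradient sign method: add a tiny perturbation, imperceptible to a human, aligned against the model’s decision. A toy sketch on a two-feature linear classifier (the weights, inputs, and class labels are invented for illustration; real attacks target deep networks on full images):

```python
# FGSM-style adversarial perturbation on a toy linear classifier.
# score > 0 -> class "panda", score <= 0 -> class "gibbon".
WEIGHTS = [1.5, -2.0]

def score(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def classify(x):
    return "panda" if score(x) > 0 else "gibbon"

def fgsm_perturb(x, epsilon):
    """Nudge each feature by ±epsilon to push the score down.
    For a linear model, the gradient of the score w.r.t. x is just the
    weight vector, so we step opposite to sign(w) on each feature."""
    return [xi - epsilon * (1 if w > 0 else -1) for xi, w in zip(x, WEIGHTS)]

image = [0.4, 0.2]                  # classified as "panda" (score = 0.2)
adversarial = fgsm_perturb(image, epsilon=0.1)
print(classify(image), "->", classify(adversarial))
```

Each feature moves by at most 0.1, a change a human inspector would shrug off, yet the predicted class flips, which is exactly the risk for unmonitored automated image interpretation.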
Will artificial intelligence replace doctors?
In fact, there is a danger that we will be told we need fewer doctors because we can do more things in less time. But it is a big mistake to think that way, because I am sure that despite the use of AI, we will still have to recruit and train doctors. These technologies should allow practitioners to enhance their skills, not replace them. AI should free up medical time: instead of allotting fifteen minutes to a consultation, we could give it 45 minutes or even more, and see patients more often.
Do you think AI is just a fad?
I don’t think so. In my opinion, the only thing that could stop AI would be a global economic crisis, a very severe recession, or a war that brought technical or technological progress to a standstill because we decided to redirect those resources to other, more essential things.