I recently attended the 2023 annual meeting of the American Roentgen Ray Society (ARRS), one of the major professional societies for radiologists and medical imaging specialists. As expected, one of the “hot topics” was artificial intelligence (AI) and its expected impact on radiologists in particular, as well as on medical practitioners in general.

Although I could not attend all of the numerous lectures, panel discussions, and research presentations on AI, I did learn of many exciting developments as well as areas of both opportunity and concern. In this column, I’d like to share some thoughts on how AI will affect patients and physicians alike in the short-to-medium term future.

(Note: This discussion will be confined to so-called “narrow AI” to accomplish particular medical tasks, rather than “artificial general intelligence” or AGI that can simulate general human cognition. I’ll leave the debate over whether a sufficiently advanced AI will exterminate humanity to others.)


1) AI will play an increasingly important role in medical care, in ways both obvious and non-obvious to patients.

In my own field of radiology, AI will be used to assist (but not yet replace) human radiologists in making diagnoses from medical images. There are already FDA-approved AI algorithms to detect subtle internal bleeding within the brain or potentially fatal blood clots (“pulmonary embolism”) within the arteries of the lung.

When properly used, these algorithms could alert the human radiologist that a patient’s scan shows one of these life-threatening abnormalities and “bump” the case to the top of the priority queue. This could significantly shorten the time between the scan and the appropriate treatment, and thus save lives. (See this paper by Dr. Kiran Batra and colleagues from the University of Texas Southwestern Medical Center for one example of the time savings achieved by AI.)

AI can also be used to enhance medical care in ways not directly related to rendering diagnoses. For instance, developers are working on physician “co-pilot” software that can sift through a patient’s medical records and extract the information most relevant to the patient’s upcoming visit to the radiology department (or internal medicine clinic, etc.). This could save practitioners valuable time during each patient visit.


2) AIs are still not perfect, and human physicians will still need to have the final say in diagnoses and treatments.

For example, AIs are pretty good at detecting early breast cancer on mammograms, but they still make errors. (Often they make errors humans don’t, and vice versa.) This makes AI great as an “assistant” to the human radiologist, but not (yet) a viable replacement.

Thus, we will see an interesting period in which human physician plus AI performs better than either the human alone or the AI alone. At some point, I predict that AI-assisted medicine will become the “standard of care,” and physicians who do not incorporate AI into their daily practices could open themselves to lawsuits for practicing “substandard” care.


3) As AIs get better, humans may start to over-rely on them.

This phenomenon is known as “de-skilling.” As an analogy (made by Dr. Charles Kahn of the University of Pennsylvania in one of the ARRS panel discussions), suppose we developed self-driving automobiles that could handle most traffic conditions but still required a human driver to take the wheel in emergencies. As the AI got better and the need for human intervention became less frequent, we human drivers could easily become complacent and lose good driving-related cognitive habits and reflexes.

If a partially automated car going 70 mph on the highway suddenly alerted a human driver who hadn’t truly driven in the past year to take over because of icy conditions ahead, things could go badly.

Similarly, if a human radiologist lets their cancer-detection skills get rusty, they could run into trouble when a medical image includes complex visual features beyond the AI’s ability to accurately parse.


My own personal approach will be to think of the AI as a tireless-but-quirky medical student constantly asking questions like, “Could that squiggle be a cancer? How about this dark line — is it a fracture? Could this dot be a small blood clot?” An inquisitive human medical student can keep experienced doctors on their toes in a good way, and the same could be true for an AI.

4) AI could take over some interactions with patients that currently require human medical personnel.

We’re probably not too far from the point where an LLM (large language model) like ChatGPT could take a radiology report written in medical jargon and “translate” it into terms understandable to non-physicians, and possibly even answer follow-up questions about the significance of the findings.

A recent article by Ayers and colleagues in JAMA Internal Medicine compared how AI chatbots and human physicians responded to patient medical questions posted on social media. According to the judges (who were blinded to the authorship of the answers), the chatbot answers were rated better than the human physicians’ answers in terms of both information quality and empathy!


The use of artificial intelligence in medicine is a rapidly evolving field, and I’ve only scratched the surface of the exciting work being done. Given the rapid pace of developments, I don’t know what things will look like in 5 months, let alone in 5 years. But I’m glad to be alive during this time of potentially massive innovation (and admittedly potentially uncomfortable upheaval). For now, I remain optimistic that AI could be an enormous boon for patients and physicians alike.


