Imaging providers continue to embrace AI technology, taking advantage of its ability to improve workflows and prioritize urgent cases. However, according to a new analysis published in the American Journal of Roentgenology, researchers still do not fully understand how these algorithms work, and that remains a significant issue that must be addressed.
“Utilization of AI, especially deep learning research, is increasing in radiology, pathology and medicine in general,” wrote authors Adarsh Ghosh and Devasenathipathy Kandasamy, both from the department of radiodiagnosis at the All India Institute of Medical Sciences. “However, because such algorithms affect patient outcomes, the black box–like structure of deep learning algorithms remains a pet peeve. We do not know exactly how the algorithms work, and therefore we cannot anticipate when the algorithms will fail.”
When a human specialist makes a mistake, the cause can typically be investigated and documented, allowing others to learn from what happened. When an AI model errs, however, the reasoning behind the error is much harder, and often impossible, to reconstruct after the fact. This difference "hinders clinical implementation," according to the authors.
Considering that "the ultimate aim of science" is to "bring forth the unknown using hypothesis and rebuttals," Ghosh and Kandasamy also said that academics exploring AI technology must go beyond simply documenting a given model's accuracy or sensitivity.
“Although machine learning is a very convenient method of exploring big medical data, researchers and peer reviewers should not limit themselves to accuracy-driven metrics and should attempt to explore the concrete biologic explanations underlying the opaque models being built,” the authors wrote. “In the long run, this will enable medical discovery.”
The authors closed their analysis by looking ahead, noting that key changes are necessary for AI to reach its potential as a true game-changer.
“Scientific discovery should remain the main driving force behind research published in medical and radiology journals, and AI research should not be limited to reporting accuracy and sensitivity compared with those of the radiologist, pathologist, or clinician,” the authors concluded. “More importantly, reports of AI research should try to explain the underlying reasons for the predictions, in an attempt to enrich biologic understanding and knowledge.”