As machine learning progresses from research settings to clinical practice, how are clinicians to know they can trust the machine’s conclusions to guide care for actual patients?
They may never know for sure. And that’s exactly as it should be, suggests Ravi Parikh, MD, MPP, assistant professor of medical ethics and health policy and medicine at the University of Pennsylvania.
“U.S. clinicians have a tremendous amount of intuition that the algorithm is never going to see,” he points out. “I tend to treat algorithms as a distinct data point in addition to a variety of other data points that you’re seeing in practice, whether they be someone’s laboratory test results [or] how the patient looks in front of you.
“It’s important to view these types of outputs as a series of data points, rather than the be-all and end-all [telling you] how you’re going to make your clinical decisions.”
Parikh made the comments at a “Machine Learning 101” session convened by the American Medical Association for medical students in June.
The AMA posted a summary and half-hour video of the virtual session July 12.
Preceding Parikh was Herbert Chase, MD, MA, professor of clinical medicine and biomedical informatics at Columbia University.
Chase laid out the basic principles of machine learning and described the main use cases for clinical AI—disease diagnosis, management and population-health discovery.
Parikh focused on the technology’s nagging problem with bias and offered ways to address it.
Give and take
Some of the liveliest material, including the question and answer above, emerged during a brief Q&A period following the prepared presentations.
To Parikh’s point on machine learning’s limitations in clinical settings, Chase added an illustrative anecdote.
“A couple of medical students told me last week that when the sepsis alert goes off in the electronic health record, basically everybody ignores it because they don’t believe it,” Chase said. “It’s [a] black box and was delivered within the electronic health record. Nobody’s tested it [for] sensitivity and specificity.
“So [you should] hit the pause button and then decide whether or not [an algorithm’s] data point applies to your patient.”
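Chase’s caution about untested alerts can be made concrete. As a purely illustrative sketch (the counts and the function below are hypothetical, not drawn from the session), sensitivity, specificity and positive predictive value can be estimated from retrospective tallies of alert firings against confirmed cases:

```python
# Hypothetical sketch: evaluating a clinical alert (e.g., a sepsis alert)
# from retrospective counts. All numbers are illustrative.

def alert_performance(true_pos, false_neg, false_pos, true_neg):
    """Return (sensitivity, specificity, ppv) for an alert.

    sensitivity = TP / (TP + FN): how often the alert fires when the
        condition is truly present.
    specificity = TN / (TN + FP): how often the alert stays quiet when
        the condition is absent.
    ppv = TP / (TP + FP): how often a firing alert is actually right.
    """
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    ppv = true_pos / (true_pos + false_pos)
    return sensitivity, specificity, ppv

# Illustrative counts: 80 true alerts, 20 missed cases,
# 400 false alarms, 9,500 correctly silent encounters.
sens, spec, ppv = alert_performance(80, 20, 400, 9500)
print(f"sensitivity: {sens:.2f}, specificity: {spec:.2f}, PPV: {ppv:.2f}")
# With a rare condition, even decent sensitivity and specificity can
# leave PPV low -- most firings are false alarms.
```

In this made-up example the alert catches 80% of true cases and stays quiet for 96% of healthy encounters, yet only about one in six firings is correct, which is one plausible reason the students Chase described had learned to tune the alert out.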
Demise of the doctors?
Perhaps inevitably, the subject of AI’s potential for replacing physicians came up during the Q&A.
There’s no question that imaging-based specialties such as radiology, pathology and dermatology have been notably successful with machine learning, Chase responded. But the goal should be better care, an outcome in which both AI and human image interpreters come out ahead.
“I think the next generation of radiologists will be operating at a higher level,” Chase explained. “They’ll be overseeing the cases that are being referred to them by the machine and making sure that you don’t over-biopsy a patient because of a false positive.
“And I think that actually will make the profession to some extent more interesting. You’re not going to be looking at film after film that ends up being negative.”
Machine, meet patient
Parikh urged attendees to imagine a clinical decision-making scenario from the patient’s point of view.
“If you were hearing that a machine rather than a human was going to be diagnosing your lung cancer … would you be interested in that? I would imagine that a lot of patients wouldn’t be,” Parikh said. “There’s still a huge demand for a human element to how we practice medicine. That element is never going to be replaced by machines.
“Too often, we’ve been thinking about these things as adversarial—human versus machine—when the real purpose of a machine is to collaborate with a human.”
The AMA’s session summary and video are available on its website, and a standalone video is posted to YouTube.