AI will not earn a place in the daily practice of medicine until its developers definitively answer some pressing questions about its fitness and appropriateness for clinical use.
The unresolved issues include data quality and ownership, transparency in governance, trust-building around “black box” medicine and legal responsibility for medical errors in which AI is implicated.
That’s according to the authors of an opinion piece posted this week in the Medical Journal of Australia.
After summarizing AI’s proven prowess in several areas of healthcare, including diagnostics, image interpretation and prediction/prognostication, senior author Ben Freedman, MBBS, PhD, of the University of Sydney and colleagues flesh out the nagging issues:
1. Health disparities, excluded populations and data biases. Existing inequities in healthcare delivery stand to be exacerbated by AI if developers fail to include population-representative data when training algorithms, the authors point out.
“This is not a new problem, and we must do better science and be awake to the limits of data quality and evidence-based medicine,” they comment.
2. Data sovereignty and stewardship. When Google-owned DeepMind came out with an AI-based app for patients with kidney disease, consumer watchdogs cried foul over the developers’ nontransparent use of patient data.
“Issues of data sovereignty … threaten the existence of effective AI,” Freedman and colleagues write. “Patient data should not be provided to technology giants without a good governance structure to protect data sovereignty.”
3. Changing standards of care. Healthcare providers will have no choice but to change care protocols as AI makes inroads into daily practice. This is partly because it may become poor practice not to use the technology once it’s available and deemed a preferred approach by clinical guidelines.
“We will see a time when all medicine and allied health work as a team with AI,” the authors write. “Those who refuse to partner with AI might be replaced by it.”
4. Legal responsibility for AI-caused injury. Physicians using AI should “own” their care decisions when they’re aided by the technology, the authors argue. However, as AI teaches itself to function ever more independently, doctors sued for malpractice might, at least in theory, not be wrong to blame the technology itself.
Further, it “seems unfair for doctors to be held responsible for an AI decision when they are unable to deduce how and why that decision was made,” the authors write, alluding to AI’s “black box” problem. “Such matters are outside the scope of clinicians’ expertise and best dealt with legally as a product liability claim.”
“AI has already arrived in healthcare,” Freedman et al. write, “but are we ready for the kind of changes that it will introduce?”
“Much effort is needed,” they conclude, “to translate algorithms into problem-solving tools in clinical settings and demonstrate improvement in clinical outcomes with saving of resources.”
The journal has posted the paper in full for free.