Healthcare AI is advancing too quickly for its users to fully comprehend the implications of its design, development and application, according to bioethics specialists whose review of the literature has been published in BMC Medical Ethics.
Jennifer Gibson, PhD, and colleagues at the University of Toronto searched eight databases to conduct a scoping review of peer-reviewed articles involving health, ethics, AI and related terms.
The team retrieved data on around 12,700 papers and narrowed these to a representative 103 for analysis.
They report finding the literature highly attentive to the ethics of healthcare AI—especially regarding diagnostics, precision medicine and point-of-care robotics—but “largely silent” on ramifications for public and population health.
“AI is being developed and implemented worldwide, and without considering what it means for populations at large, and particularly those who are hardest to reach, we risk leaving behind those who are already the most underserved,” Gibson and co-authors comment in their discussion.
The relative dearth of literature on the ethics of AI within low- and middle-income countries, they suggest, “points to a critical need for further research into the ethical implications of AI within both global and public health, to ensure that its development and implementation is ethical for everyone, everywhere.”
The full findings and discussion are available in BMC Medical Ethics.