For AI technology to be safely incorporated into diagnostic and clinical decision support (CDS) software, stakeholders must first address questions about its effectiveness and provide additional evidence of its usefulness in clinical settings, according to Duke researchers.
AI-enabled diagnostic and CDS systems have the potential to transform health systems and revolutionize the way care is delivered by augmenting clinicians’ intelligence, enhancing decision-making processes and reducing unnecessary testing and treatments. However, stakeholders need to address several challenges around the technology’s development, regulation and safety before it can be widely adopted.
The Duke Margolis Center for Health Policy partnered with AI and healthcare experts to identify the top three issues slowing the development, adoption and use of AI-enabled CDS software:
- Insufficient evidence: Developers and researchers should provide more evidence on how AI-enabled CDS systems affect patient outcomes, care quality, costs of care and clinician workflow. A stronger evidence base would help establish the technology's effectiveness and trustworthiness.
- Patient risk assessments: Developers should disclose more information about how the technology was built and trained, which would allow regulators and clinicians to assess its risk to patients.
- Bias: Stakeholders should ensure the software was developed with data-driven AI methods that do not perpetuate existing clinical biases. The researchers also suggested assessing the technology's scalability and its ability to protect patient privacy.
“For this opportunity to be realized, the real challenges holding back safe and effective innovation in this space need to be addressed, and consensus standards need to be developed,” the report concluded.
The paper was published through the Duke Margolis Center for Health Policy and funded by a grant from the Gordon and Betty Moore Foundation.