Explainable AI wouldn’t be of much use in diagnosing victims of poisoning when the medical toxicology is complex, as in an overdose of multiple drugs at once.
However, it’s almost as sharp as human experts when the cause is simple and straightforward, as with ingestion of a single common cleaning product.
This means the technology could be called upon during frenetic periods in emergency rooms or poison centers.
So suggest researchers who developed a probabilistic logic AI network for the task, then tested its performance against that of two medical toxicologists and a decision-tree model.
Michael Chary, MD, PhD, of Weill Cornell Medicine and colleagues at Harvard used a library of 300 synthetic cases to build an AI system capable of mimicking experienced clinicians making decisions based on inputs from physical exams.
They gave each case five findings that would be expected in patients sickened by one or two substances.
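The paper does not include code, but the general flavor of this kind of reasoning can be illustrated with a toy sketch: weighted rules link candidate substances to the findings they tend to produce, and a simple Bayes-style score ranks substances given the findings observed on exam. The substances, findings, and probabilities below are invented for illustration; this is not the authors’ Tak system or their case library, and a naive Bayes scoring rule stands in here for the study’s probabilistic logic network.

```python
# Toy illustration of probabilistic reasoning over exam findings.
# All substance names, finding names, and probabilities are invented,
# not taken from Tak or the study's synthetic cases.

FINDING_PROBS = {
    "acetaminophen": {"nausea": 0.8, "ruq_tenderness": 0.6, "lethargy": 0.4,
                      "miosis": 0.05, "bradycardia": 0.1},
    "opioid":        {"nausea": 0.3, "ruq_tenderness": 0.05, "lethargy": 0.9,
                      "miosis": 0.9, "bradycardia": 0.5},
    "beta_blocker":  {"nausea": 0.2, "ruq_tenderness": 0.05, "lethargy": 0.6,
                      "miosis": 0.1, "bradycardia": 0.9},
}

PRIOR = {"acetaminophen": 0.4, "opioid": 0.35, "beta_blocker": 0.25}


def rank_substances(observed_findings):
    """Score each candidate substance as prior * product of finding
    likelihoods (naive Bayes-style), then normalize the scores to sum to 1."""
    scores = {}
    for substance, probs in FINDING_PROBS.items():
        score = PRIOR[substance]
        for finding, p in probs.items():
            # Multiply by p if the finding is present, (1 - p) if it is absent.
            score *= p if finding in observed_findings else (1.0 - p)
        scores[substance] = score
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}


if __name__ == "__main__":
    # A hypothetical exam in which three of the five modeled findings are present.
    exam = {"lethargy", "miosis", "bradycardia"}
    for substance, p in sorted(rank_substances(exam).items(),
                               key=lambda kv: -kv[1]):
        print(f"{substance}: {p:.2f}")
```

The appeal of this style of system, and the transparency the authors emphasize, is that every rule and weight can be inspected and traced, so a clinician can see exactly which findings drove a ranking.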
The AI system, which they dubbed Tak, agreed with the human experts most of the time on straightforward cases and some of the time on moderately complex cases, but it fell behind on the complicated cases.
Still, it handily beat the decision-tree classifier across the board.
Chary et al. comment that probabilistic logic networks “can model toxicologic knowledge in a way that transparently mimics physician thought.”
Underscoring that the cohort’s synthetic design makes the study a proof of concept, they call for further research into how their approach might translate to clinical practice for medical toxicologists.
Publishing their work in the July issue of Computers in Biology and Medicine, the authors conclude:
Physicians must trust an AI-based system to include it in their evaluation and treatment of patients. An algorithm can earn that trust through proficiency on complex cases and transparency. Tak demonstrates transparent clinical reasoning. This transparency, if preserved in more accurate models, may remove barriers to the use of AI approaches in clinical decision making. Even if a more detailed analysis of the limits of probabilistic logic networks suggests an unimprovably poor performance on complex cases, a transparent AI system may be useful by automating aspects [of] routine cases and in doing so freeing up expert time for more complicated cases.
Chary’s co-authors were Ed Boyer, MD, PhD, of Brigham and Women’s Hospital, and Michele Burns, MD, MPH, of Boston Children’s Hospital.