On Oct. 29, 2018, Lion Air Flight 610 plummeted into the Java Sea. Less than five months later, Ethiopian Airlines Flight 302 nosedived into a field. Together, the two crashes killed all 346 people aboard.
What the two disasters had in common was the make and model of the aircraft: the Boeing 737 MAX, introduced in 2017. The plane uses an innovative application incorporating AI, the Maneuvering Characteristics Augmentation System (MCAS). Upon investigating, Indonesia’s National Transportation Safety Committee found that, among other problems, the system’s designers had made faulty assumptions about how flight crews would respond to MCAS malfunctions.
Two radiologists at the University of California, San Francisco, reflect on what went wrong with the AI, outlining potential parallels between aviation and their own specialty, which is probably the medical field furthest along in AI adoption. Their commentary appears in Radiology: Artificial Intelligence.
“Automated systems designed to improve safety may create dangers or cause harm when they malfunction,” wrote John Mongan, MD, PhD, and Marc Kohli, MD. “The effects of an artificially intelligent system are determined by the implementation of the system, not by the designers’ intent.”
Here are synopses of the lessons they urge radiology to draw from the disasters.
1. A malfunctioning AI system may have the opposite of its intended positive effect; a failing system can create new safety hazards. AI system failures and their downstream effects “need to be considered independent of the intended purpose and proper function of the system,” Mongan and Kohli write. “In particular, it should not be assumed that the worst-case failure of a system that includes AI is equivalent to the function of that system without AI.”
2. Proper integration into the working environment is key: The accuracy of inputs into an AI algorithm is as important as the accuracy of the AI algorithm itself. Implementation of an AI algorithm—connecting it to inputs and outputs—“requires the same level of care as development of the algorithm, and testing should cover the fully integrated system, not just the isolated algorithm,” the authors point out. “Furthermore, AI systems should use all reasonably available inputs to cross-check that input data are valid.” (A brief sketch of such an input cross-check appears after this list.)
3. People working with AI need to be made aware of the system’s existence and must be trained on its expected function and anticipated dysfunction. Mongan and Kohli emphasize that the flight crews of the two doomed 737 MAX jets were wholly unaware that MCAS existed aboard their aircraft. “At a meeting with Boeing after the first of the two crashes, an American Airlines pilot said, ‘These guys didn’t even know the damn system [MCAS] was on the airplane—nor did anybody else.’”
4. AI systems that automatically initiate actions should alert users clearly when they do so and should have a simple, fast and lasting mechanism for override. The authors suggest that MCAS’s closed-loop design—the output of the automated system directly initiates an action without any human intervention—could similarly challenge their specialty going forward. “At present, most radiology AI provides triage, prioritization or diagnostic decision support feedback to a human, but in the future closed-loop systems may be more common,” they write, adding that closed-loop systems “cannot be ignored and must be inactivated to avoid consequences.” (A brief sketch of such an override mechanism appears after this list.)
5. Regulation is necessary but may not be sufficient to protect patient safety. This may be a particular concern when the regulation is “subject to the conflicts of interest inherent in delegated regulatory review.”
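To make the input cross-checking in point 2 concrete, here is a minimal Python sketch. It is not drawn from the commentary itself; it assumes the AI pipeline can see both the imaging metadata and the order that generated the study, and every name in it (StudyInputs, inputs_are_consistent, run_chest_ct_model) is hypothetical.

```python
# Hypothetical sketch only: the names below are illustrative, not part of any
# real radiology AI product or of the commentary's recommendations.
from dataclasses import dataclass


@dataclass
class StudyInputs:
    modality: str            # e.g. "CT", read from the DICOM header
    body_part: str           # e.g. "CHEST", read from the DICOM header
    ordered_exam: str        # e.g. "CT CHEST", read from the RIS/EHR order
    pixel_spacing_mm: float  # image geometry, from the DICOM header


def run_chest_ct_model(study: StudyInputs) -> str:
    """Placeholder for the actual AI inference call."""
    return "model output"


def inputs_are_consistent(s: StudyInputs) -> bool:
    """Cross-check independent descriptions of the same study.

    A well-validated model fed the wrong or corrupted study still produces
    an untrustworthy answer, so disagreement between sources blocks inference.
    """
    return all([
        s.modality in s.ordered_exam,      # DICOM header agrees with the clinical order
        s.body_part in s.ordered_exam,     # anatomic region agrees across systems
        0.1 <= s.pixel_spacing_mm <= 5.0,  # geometry is physically plausible
    ])


def analyze(study: StudyInputs) -> str:
    if not inputs_are_consistent(study):
        # Fail loudly and route to a human instead of silently running the model.
        raise ValueError("Input cross-check failed; study flagged for manual review")
    return run_chest_ct_model(study)
```

The point the sketch illustrates is simply that disagreement between independent inputs halts inference and routes the study to a person, rather than letting an accurate model run on the wrong data.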
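Likewise, for point 4, the following sketch shows one way a “simple, fast and lasting” override around a closed-loop action could look, assuming every automatic action can be funneled through a single gate. The ClosedLoopGate class and the worklist example are illustrative inventions, not anything described by Mongan and Kohli.

```python
# Hypothetical sketch only: ClosedLoopGate is an illustrative wrapper, not an
# interface described in the commentary or in any real product.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("closed_loop")


class ClosedLoopGate:
    """Gates automatic actions: announces each one and honors a lasting override."""

    def __init__(self) -> None:
        self._overridden = False  # persists until a human explicitly re-enables

    def override(self, reason: str) -> None:
        """One simple, fast call stops all further automatic actions."""
        self._overridden = True
        log.warning("Automatic actions overridden by user: %s", reason)

    def re_enable(self) -> None:
        """Resuming automation requires a deliberate, separate step."""
        self._overridden = False
        log.info("Automatic actions re-enabled by user")

    def act(self, action_name: str, action_fn):
        if self._overridden:
            log.info("Skipped automatic action '%s' (override active)", action_name)
            return None
        # Alert clearly that the system is acting on its own, not just what it decided.
        log.warning("AI system automatically initiating action: %s", action_name)
        return action_fn()


# Example: the gate wraps whatever the closed-loop system would do on its own,
# such as reprioritizing a radiology worklist.
gate = ClosedLoopGate()
gate.act("move suspected-hemorrhage study to top of worklist", lambda: "reprioritized")
gate.override("radiologist disagrees with automated triage")
gate.act("move next flagged study to top of worklist", lambda: "reprioritized")  # skipped
```

The override is deliberately sticky: once engaged it stays engaged until a person explicitly re-enables automation, so the system cannot quietly resume acting on its own.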
“We have the opportunity to learn from these failures now, before there is widespread clinical implementation of AI in radiology,” Mongan and Kohli conclude. “If we miss this chance, our future patients will be needlessly at risk for harm from the same mistakes that brought down these planes.”
The authors flesh out each of these points in some detail, and Radiology: Artificial Intelligence has posted the commentary in full for free.