Affronts to privacy and equality, whether real, perceived or ginned up, may fuel lawsuits by patients whose COVID care incorporated AI.
So warn Stanford scholars in a paper published March 16 in The BMJ.
Senior author Daniel Ho, JD, PhD, and colleagues predict that such legal challenges may surface as challenges to regulatory decisions, tort actions or suits citing health privacy laws.
“In evaluating the legality of public health use of algorithms, courts will likely … probe how the output of these tools is used to shape policies and programs,” the authors write. “But showing that a model performs well and does not exceedingly burden privacy and other interests are essential preconditions for lawful deployment.”
Noting the proliferation of AI models for predicting patients’ COVID risks at the individual level, Ho and co-authors break out three key messages for government bodies and healthcare providers:
- The use of personally identifiable information, including race, raises legal concerns over privacy and antidiscrimination, which the authors illustrate in the context of U.S. law.
- The underlying legal principles turn essentially on an assessment of the effectiveness and burdens of AI and machine learning tools.
- More robust evaluation will be necessary to support the adoption and legality of rapidly proliferating AI and machine learning tools.
“The deployment of AI in the fight against COVID-19 is an important moment for algorithmic governance,” the authors comment. “There is an abundance of models and a shortage of coordinated and consistent standards and evaluation. … Governments implementing risk scoring tools must show that their tools produce valid, reliable predictions and burden individuals’ civil liberties no more than necessary.”
The full analysis is available free of charge.