A no-frills prediction model developed at Harvard in 2020 to risk-stratify COVID-19 inpatients has proven robust enough that, a year later, its accuracy has waned only slightly.
The year-old AI tool’s discriminative power has also remained consistent across demographic subgroups, albeit with one exception—younger patients, for whom the model struggled to predict outcomes.
The model’s positive predictive value also fell off markedly.
The latter falloff likely reflected “substantial diminution in mortality and mechanical ventilation between the original and the subsequent study periods,” comment the authors, who were led in the project by psychiatrist Roy Perlis, MD, MSc, director of the Center for Quantitative Health at Massachusetts General Hospital.
Still, the model’s predictions of adverse outcomes held up well enough to demonstrate that existing COVID algorithms can be reused after recalibration rather than rebuilt from scratch, the authors suggest.
Perlis’s co-authors are Victor Castro, MS, and Thomas McCoy, MD, both of whom have appointments at the quantitative health center.
The new study, published July 27 as a research letter in JAMA Network Open, builds off research conducted by the same team during the first wave of the pandemic.
In October 2020, JAMA Network Open published their work showing that inpatients with COVID-19 could be effectively risk-stratified based on just three sets of variables—admission lab results, sociodemographic factors and prior diagnoses of pulmonary diseases.
The team initially trained and tested the model on data from more than 2,500 patients treated at six academic and community hospitals in eastern Massachusetts.
For the follow-up research, Castro and colleagues applied the algorithm to a little under 2,900 patients treated at the same six hospitals.
Mean age of the new cohort was 63.0 years, and the group included 1,460 (50.5%) women, 673 (23.3%) Hispanic individuals and 344 (11.9%) Black individuals.
Mean length of stay was 6.2 (SD, 5.3) days. Of the studied patients, 126 (4.4%) required an ICU stay and 68 (2.4%) mechanical ventilation. Mortality prior to discharge was the final outcome for 167 (5.8%) of the follow-up cohort.
In its second outing, the prediction model stacked up against its first as follows:
- Overall accuracy for mortality had an AUC of 0.83 in 2021 vs. 0.85 in 2020;
- Positive predictive value was 0.22 in 2021 vs. 0.46 in 2020;
- Negative predictive value was 0.98 in 2021 vs. 0.97 in 2020.
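For readers unfamiliar with the last two metrics, positive and negative predictive value follow directly from a model’s confusion matrix at a chosen risk threshold. A minimal sketch in Python—the counts below are hypothetical, chosen only so the results mirror the 2021 figures above; the actual counts are not reported in the letter:

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion-matrix counts."""
    ppv = tp / (tp + fp)  # of patients flagged high-risk, fraction with the outcome
    npv = tn / (tn + fn)  # of patients flagged low-risk, fraction without it
    return ppv, npv

# Illustrative counts only (not the study's data)
ppv, npv = ppv_npv(tp=22, fp=78, tn=880, fn=20)
print(round(ppv, 2), round(npv, 2))  # 0.22 0.98
```

Note that PPV rises and falls with the outcome’s prevalence, which is why the drop in mortality and ventilation between the two study periods would depress PPV even if the model’s discrimination were unchanged.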
“Our results indicate that the population of individuals hospitalized for COVID-19 has shifted and the prevalence of the studied outcomes changed,” Castro et al. comment. “However, they suggest that prediction models derived earlier in the pandemic may maintain discrimination after recalibration.”
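The letter does not spell out the recalibration procedure, but one simple form—sometimes called recalibration-in-the-large—shifts every prediction’s log-odds by the change in the outcome’s base rate, leaving the model’s ranking of patients (and hence its AUC) untouched. A sketch under assumed prevalences (the 5.8% mortality figure comes from the follow-up cohort above; the 10% original-period figure is illustrative):

```python
import math

def recalibrate_in_the_large(p_old, prev_old, prev_new):
    """Shift a predicted probability to reflect a new outcome prevalence
    by moving its log-odds by the change in the base rate's log-odds.
    Discrimination (patient ranking) is unchanged."""
    logit = math.log(p_old / (1 - p_old))
    shift = math.log(prev_new / (1 - prev_new)) - math.log(prev_old / (1 - prev_old))
    return 1 / (1 + math.exp(-(logit + shift)))

# A prediction of 0.40 made when mortality was ~10%,
# recalibrated to a period when mortality is ~5.8%
print(round(recalibrate_in_the_large(0.40, 0.10, 0.058), 3))  # 0.27
```

This is only one of several standard recalibration techniques (others also refit the slope); it is shown here to illustrate why recalibrating an old model is far cheaper than training a new one.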
The team notes the six hospitals represented two health systems in the same region, acknowledging this as a limitation.
The authors add that the new results also illustrate “the importance of investigating risk stratification models across patient subgroups as a step toward ensuring that particular groups are not adversely affected by the application of such tools, particularly in settings of potential resource constraints.”