As AI inches into the workflows of residency program directors, many may need to guard against delegating too much of the laborious work of candidate selection to an algorithm.
Two reasons why:
One, applicants may catch on quickly to what the AI tool is screening for. Some will pad their applications with the right keywords.
Two, AI is no less biased than the data on which it was trained. It may screen out top-notch candidates whom it doesn’t recognize as such.
Both points were raised at the AMA’s recent Graduate Medical Education Summit, according to Nov. 5 event coverage from the association’s news department.
To counter AI’s inherent bias, residency administrators could invite input from a diverse set of non-MD individuals, suggests Charlene Green, director of the office of admissions and the office of student and resident diversity at UC Davis School of Medicine.
Green recommends gathering insights from nurses, social workers and even campus police chiefs, the idea being to go where the diversity is.
“Everyone has bias, right?” Green said at the summit. “When you have non-diverse people selecting—trying to select—diverse people, you miss some of the things that maybe you don’t understand about identity and background.”
Green cited statistics showing that, in 2019, only 5.3% of active residents identified as Hispanic and just 4.4% identified as African American.
Hispanics comprise 18.3% of the U.S. population; African Americans, 13%.
AI can, after all, quickly “look at hundreds of characteristics to determine what makes a successful resident,” as AMA news writer Timothy Smith points out. Even so, residency administrators tempted to let the technology do the heavy lifting will have to remain vigilant.
“There is a huge pile of residency applicants for every available position,” Jesse Burk-Rafel, MD, an internal medicine hospitalist at New York University Langone Health, said at the summit.
Sifting through that pile, Burk-Rafel added, can leave “a whole team of beleaguered program directors all exhausted and frustrated.”