Deep learning models can detect signs of Barrett's esophagus (BE) and esophageal cancer in high-resolution microscopy images, according to new research published in JAMA Network Open.
The study’s authors developed a model that could classify images using a convolutional attention-based mechanism, evaluating each image in a way similar to how human pathologists look at slides under a microscope. The dataset included 180 whole-slide images from patients who underwent endoscopic esophagus and gastroesophageal junction mucosal biopsy from January 2016 to December 2018 at a single academic medical center. Of those, 116 images were used to train the deep learning model and 64 were reserved for testing. The whole-slide images were then subdivided into 379 high-resolution images.
Two experienced pathologists then annotated each image, providing the researchers with a reliable reference standard, the authors explained. The deep learning model worked by extracting features from each image and then classifying the slide based on those features.
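The feature-extract-then-classify pipeline described above can be sketched in miniature. The NumPy snippet below is illustrative only: the dimensions, random weights, and class labels are invented stand-ins for the learned convolutional features and trained parameters, not the authors' implementation. It shows the general idea of attention-based aggregation, where per-tile importance weights pool tile features into a single slide-level representation before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- chosen for illustration, not taken from the paper.
N_TILES, FEAT_DIM, N_CLASSES = 16, 64, 4

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-in for convolutional features extracted from each high-resolution tile.
tile_features = rng.normal(size=(N_TILES, FEAT_DIM))

# Attention scoring: a (here random) weight vector assigns an importance
# score to each tile; softmax normalizes the scores to sum to 1.
w_attn = rng.normal(size=FEAT_DIM)
attn_weights = softmax(tile_features @ w_attn)   # shape (N_TILES,)

# Aggregate tile features into one slide-level feature via the weights.
slide_feature = attn_weights @ tile_features     # shape (FEAT_DIM,)

# A linear classifier over the aggregated feature yields slide-level
# class probabilities.
W_cls = rng.normal(size=(FEAT_DIM, N_CLASSES))
class_probs = softmax(slide_feature @ W_cls)
predicted_class = int(np.argmax(class_probs))
```

In a trained model, `w_attn` and `W_cls` would be learned end to end from slide-level labels, which is what lets tiles that matter most to the diagnosis dominate the aggregated representation.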
Overall, the team’s proposed model demonstrated a mean accuracy of 0.83 when classifying the test whole-slide images. In addition, its F1 scores were “at least 8% higher for each class compared with the sliding window approach.”
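For readers unfamiliar with the metric, the F1 score reported above is the harmonic mean of precision and recall. A minimal sketch, using invented counts rather than the study's data:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 40 true positives, 10 false positives,
# 10 false negatives gives precision = recall = 0.8, so F1 = 0.8.
example_f1 = f1_score(40, 10, 10)
```

Because the harmonic mean punishes imbalance, a model can only score a high F1 on a class by doing well on both precision and recall for that class.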
“Previous methods for analyzing microscopy images were limited by bounding box annotations and unscalable heuristics,” wrote lead author Naofumi Tomita, MS, of Dartmouth College in Hanover, New Hampshire, and colleagues. “The model presented here was trained end to end with labels only at the tissue level, thus removing the need for high-cost data annotation and creating new opportunities for applying deep learning in digital pathology.”
Tomita et al. did note that their study had certain limitations. The slides all originated from the same academic medical center, for example, and were all scanned using the same equipment. The dataset was also “relatively small” compared with those commonly used by researchers evaluating deep learning models.