Clinical nutritionists won’t be left out of the medical AI revolution, as researchers are exploring use cases for augmented diet optimization, food image recognition, risk prediction and diet pattern analysis.
The state of the science is described in a paper published this month in Current Surgery Reports.
Applications for AI and other digital technologies are “still young, [but] there is much promise for growth and disruption in the future,” write multidisciplinary specialists at UCLA Health, San José State University and the Mayo Clinic.
The authors represent expertise in gastroenterology, molecular and biochemical nutrition, anesthesiology and general internal medicine.
Surveying recent research and literature reviews, lead author Berkeley Limketkai, MD, PhD, of UCLA Health and colleagues home in on the four aforementioned use cases. Here are snapshots of their reports on each.
1. Diet optimization. A machine learning model that predicts blood sugar levels after a meal proved significantly better at the task than conventional carbohydrate counting, the authors report. The algorithm’s creators used the tool to compose “good” (low-glycemic) and “bad” (high-glycemic) diets for 26 participants.
“For the prediction arm, 83% of participants had significantly higher post-prandial glycemic response when consuming the ‘bad’ diet than the ‘good’ diet,” Limketkai and colleagues note, adding that the technology has since been commercialized in the DayTwo mobile application.
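The paper doesn’t include code, but the general approach can be conveyed with a toy model. The sketch below is an illustration under stated assumptions, not the study’s actual system: it invents per-meal features and simulated data (the real work drew on far richer inputs, including gut-microbiome profiles) and trains a gradient-boosted regressor to predict postprandial glycemic response, by which candidate meals could then be ranked.

```python
# Toy sketch of personalized postprandial glycemic response (PPGR)
# prediction. All features and data below are simulated placeholders;
# the actual study used far richer inputs, including microbiome data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_meals = 1000

# Hypothetical per-meal features: carbs (g), fiber (g), hour of day,
# previous night's sleep (h), and a stand-in microbiome summary score.
X = np.column_stack([
    rng.uniform(10, 120, n_meals),  # carbohydrates
    rng.uniform(0, 15, n_meals),    # fiber
    rng.uniform(6, 22, n_meals),    # meal time
    rng.uniform(4, 9, n_meals),     # sleep
    rng.normal(0, 1, n_meals),      # microbiome score (placeholder)
])
# Simulated PPGR: carb-driven, dampened by fiber, person-specific variation.
y = 0.8 * X[:, 0] - 2.0 * X[:, 1] + 5.0 * X[:, 4] + rng.normal(0, 8, n_meals)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, model.predict(X_test)), 2))

# A "good" (low-glycemic) diet would favor meals with low predicted PPGR;
# a "bad" (high-glycemic) diet, the opposite.
```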
2. Food image recognition. A primary challenge in estimating nutritional values and risks from photos snapped on smartphones is the sheer limitlessness of possible foods, the authors point out. An early neural-network model developed at UCLA by Limketkai and colleagues achieved impressive performance when trained and validated on 131 predefined food categories drawn from more than 222,000 curated food images.
“However, in a prospective analysis of real-world food items consumed in the general population, the accuracy plummeted to 0.26 and 0.49, respectively,” write the authors of the present paper. “Future refinement of AI for food image recognition would, therefore, benefit from training models with a significantly broader diversity of food items that may have to be adapted to specific cultures.”
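For a concrete picture, classifiers of this kind are typically built by fine-tuning a pretrained convolutional network. The sketch below is an assumption-laden stand-in, not the authors’ actual model: it reuses an ImageNet-pretrained ResNet-50 from torchvision and swaps in a 131-way classification head, that category count being the one detail taken from the study.

```python
# Minimal sketch of a food-image classifier via transfer learning.
# The 131-category figure comes from the study described above; the
# architecture and training details are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_FOOD_CLASSES = 131  # predefined food categories, per the study

# Start from an ImageNet-pretrained backbone and swap the classifier head.
net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, NUM_FOOD_CLASSES)

# Freeze the backbone; fine-tune only the new head at first.
for name, param in net.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in net.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_FOOD_CLASSES, (8,))
loss = criterion(net(images), labels)
loss.backward()
optimizer.step()
print("dummy-batch loss:", loss.item())
```

The real-world accuracy collapse the authors describe is the classic distribution-shift problem: a head trained on a fixed set of curated categories has no way to represent foods outside them, which is why broader and more culturally diverse training data is the suggested remedy.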
3. Risk prediction. Machine learning algorithms beat out conventional techniques at predicting 10-year mortality related to cardiovascular disease in a densely layered analysis of the National Health and Nutrition Examination Survey (NHANES) and the National Death Index.
A conventional model based on proportional hazards, which included age, sex, Black race, Hispanic ethnicity, total cholesterol, high-density lipoprotein cholesterol, systolic blood pressure, antihypertensive medication, diabetes, and tobacco use, “appeared to significantly overestimate risk,” Limketkai and co-authors comment. “The addition of dietary indices did not change model performance, while the addition of 24-hour diet recall worsened performance. By contrast, the machine learning algorithms had superior performance to all [conventional] models.”
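The contrast the authors draw can be illustrated with a toy comparison. The simulation below is not the NHANES analysis: it generates a fake cohort with a nonlinear risk interaction, fits a logistic model on a handful of conventional covariates as a stand-in for the proportional-hazards score (a simplification, since the real analysis modeled survival time), and fits a gradient-boosted classifier on the full feature set.

```python
# Toy sketch contrasting a conventional fixed-covariate risk model with
# a machine learning alternative. All data are simulated; the real study
# used proportional-hazards models, which this binary framing simplifies.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Conventional covariates: age, systolic BP, total & HDL cholesterol,
# diabetes, smoking. Extra columns mimic high-dimensional dietary data.
base = np.column_stack([
    rng.uniform(40, 79, n),      # age
    rng.normal(130, 18, n),      # systolic BP
    rng.normal(200, 35, n),      # total cholesterol
    rng.normal(50, 12, n),       # HDL cholesterol
    rng.binomial(1, 0.12, n),    # diabetes
    rng.binomial(1, 0.20, n),    # current smoker
])
diet = rng.normal(0, 1, (n, 30))  # placeholder diet-recall features

# Simulated 10-year CVD mortality with nonlinear interactions
# (age x smoking, diet x diet) that a linear model will miss.
logit = (-9 + 0.08 * base[:, 0] + 0.015 * base[:, 1]
         + 0.9 * base[:, 4] + 0.04 * base[:, 0] * base[:, 5]
         + 0.3 * diet[:, 0] * diet[:, 1])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_all = np.hstack([base, diet])
Xb_tr, Xb_te, Xa_tr, Xa_te, y_tr, y_te = train_test_split(
    base, X_all, y, random_state=1
)

conventional = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
ml = GradientBoostingClassifier(random_state=1).fit(Xa_tr, y_tr)

conv_auc = roc_auc_score(y_te, conventional.predict_proba(Xb_te)[:, 1])
ml_auc = roc_auc_score(y_te, ml.predict_proba(Xa_te)[:, 1])
print(f"conventional model AUC: {conv_auc:.3f}")
print(f"machine learning AUC:   {ml_auc:.3f}")
```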
4. Diet pattern analysis. Here Limketkai et al. look at a prospective study of more than 7,500 pregnant women who self-reported dietary intake approximately three months prior to giving birth. Comparing logistic regression with machine learning for predicting adverse pregnancy outcomes, the researchers found that logistic regression detected no association between undesirable outcomes and suboptimal consumption of fruits and vegetables.
Meanwhile, the machine learning model “found that the highest fruit or vegetable consumers had lower risk of preterm birth, small-for-gestational-age birth and pre-eclampsia,” a pregnancy complication marked by elevated blood pressure and, in some cases, organ damage.
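Why would logistic regression miss an association that machine learning picks up? One common reason is a threshold or interaction effect that a main-effects linear model averages away. The simulation below is purely illustrative, with none of the cohort’s actual variables, but it reproduces the pattern: both models see the same data, yet only the flexible one exploits the diet signal.

```python
# Illustrative simulation: a diet benefit that appears only above a
# consumption threshold and within a higher-risk subgroup. Not the
# study's data or models; main-effects logistic regression dilutes
# such an effect, while a flexible learner can exploit it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 7500  # roughly the cohort size reported above

fruit_veg = rng.uniform(0, 10, n)      # daily servings (hypothetical scale)
covariates = rng.normal(0, 1, (n, 5))  # placeholder clinical covariates
high_risk = (covariates[:, 0] > 0).astype(float)

# Adverse-outcome probability: elevated at baseline for the high-risk
# subgroup, reduced there only when fruit/vegetable intake is high.
p = 0.05 + 0.15 * high_risk - 0.12 * high_risk * (fruit_veg > 7)
y = rng.binomial(1, p)

X = np.column_stack([fruit_veg, covariates])

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=2)),
]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name} AUC: {auc:.3f}")
```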
Wrapping up their discussion, Limketkai and co-authors reiterate that the widening acceptance and use of digital devices, as well as AI, have:
“opened exciting avenues for personalized nutrition and optimization of nutrition care. Mobile applications and wearable technologies have since facilitated longitudinal, real-time and multi-type data collection, while advances in computing power and refinements in machine learning algorithms have permitted high-dimensional analyses of large datasets to generate meaningful observations.
… As the application of cutting-edge digital technologies lags in nutrition relative to the medical or other consumer-oriented industries, disruptive technologies in nutrition are still forthcoming but near. As such, continued research and development in these areas will indubitably produce technological innovations for nutrition that would have once been relegated to science fiction just a few years ago.”
The paper is available in full for free.