AI-enabled facial analysis is helping researchers assess mental health more objectively, using algorithms that detect behavioral biomarkers rather than relying on subjective exams alone, the Wall Street Journal reported April 1.
Louis-Philippe Morency, an associate professor of computer science at Carnegie Mellon University in Pittsburgh, is one of the scientists working in this area. He and colleagues at Carnegie Mellon and the University of Southern California have identified more than a dozen behavioral biomarkers for conditions like depression, post-traumatic stress disorder, schizophrenia and suicide risk.
“We built a dictionary of these behavior markers for different mental health disorders,” Morency told the Wall Street Journal. “The technology gives you a summary of these behavior markers and we can use it as part of a treatment to see how that person is behaving today compared to a month ago.”
Other researchers at Columbia and the University of Pittsburgh are in the midst of a clinical trial that’s testing whether smartphones can detect some of those biomarkers in adolescents. Between 200 and 300 teens are expected to take part in the six-month trial.
When taught to recognize certain markers—depressed people don't enunciate their vowels as much, for example, and suicidal patients who speak in a breathy voice are more likely to re-attempt suicide than those who speak in a tense voice—the smartphones might be able to provide objective data that contributes to patients' ultimate diagnoses. The tech is meant to supplement, not replace, a traditional doctor's assessment.
“This doesn’t mean if you suddenly have a breathy voice, you are suicidal,” Morency said. “This is one sign that the doctor should be able to use as they are doing their assessment.”