• Recognizing Thought Markers

    Predicting Who Will Move from Thoughts of Suicide to Action 

    Imagine a typical day: it might be any Monday or Thursday. As always, the Emergency Department at Cincinnati Children’s is busy. In one exam room, there might be a 12-year-old who fell off his bike and broke his arm.

    In another, a toddler suffering an asthma attack. In a third, a cancer patient who spiked a fever.

    Countless health issues bring children and teens to the Emergency Department (ED) at our Burnet and Liberty campuses.

    A surprising number of these children are there for a mental health evaluation: more than 5,000 last year alone. As many as 2,000 children a year are brought to the ED because they’re thinking about suicide.

    Every day, the clinicians who evaluate these patients must make a decision: What is the likelihood this child will attempt suicide? Should the child be admitted to the hospital, or is it safe to send the child home with medicine or a referral for counseling?

    No blood tests or MRI scans can help them make this decision.

    They make the judgment − as they have for generations − based on the child’s history, behavior and current living situation; the child’s responses to questions that help them assess the child’s state of mind; and their own instinct from years of experience.

    While there have been enormous strides in developing more advanced, sensitive diagnostic tools for medical illness, there have been no comparable advances in diagnosis of mental illness. “We need better tools to help us screen patients more accurately,” says Michael Sorter, MD, director of the Division of Child and Adolescent Psychiatry at Cincinnati Children’s. “Enhancing our ability to detect kids at risk of suicide would save lives.”

    Now a research team led by John Pestian, PhD, director of the Computational Medicine Center at Cincinnati Children’s, is taking a new approach that may provide a groundbreaking advance.

    Pestian and his team are creating innovative computer software that listens to patients and hears things the clinicians may not. The software is designed to help clinicians predict a patient’s risk of suicide with greater accuracy than ever before. 

    Letters Left Behind 

    Pestian’s specialty is machine learning: teaching computers to think. He’s teaching them to think about the likelihood that a patient will die by suicide.

    He and his team have collected more than 1,300 notes from people who died by suicide. He mined these suicide letters for cues computers can be taught to recognize and interpret. First, he had the notes scanned and transcribed. Then each note was painstakingly annotated by at least three volunteer readers. The 160 volunteers were surviving family members of individuals who had taken their own lives. “Their courage was admirable, even when it led to churning such deep emotional waters,” Pestian says.

    The readers were asked to identify emotions expressed in the letters: abuse, anger, blame, fear, guilt, hopelessness, sorrow, forgiveness, happiness, peacefulness, hopefulness, love, pride and thankfulness, as well as instructions and information.

    Pestian and his team then created algorithms to teach the computer how to find predictive thought markers in this large set of data. The computer doesn’t interpret the words, as a human listener does, but finds meaningful patterns in sentence structure and clusters of words.
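    The team’s actual algorithms are far richer than anything shown here, but the core idea of scoring a text against word-frequency patterns learned from labeled examples can be sketched with a tiny naive Bayes-style classifier. Everything below (labels, training sentences) is hypothetical toy data for illustration, not material from the study:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and keep only alphabetic words; the real system also
    # models sentence structure, which this sketch omits.
    return "".join(c if c.isalpha() else " " for c in text.lower()).split()

def train(labeled_texts):
    # Count word frequencies per label -- a naive Bayes-style model.
    counts = {}
    for text, label in labeled_texts:
        counts.setdefault(label, Counter()).update(tokenize(text))
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def classify(text, counts, vocab):
    # Choose the label whose word distribution best explains the text,
    # with add-one smoothing so unseen words don't zero out a score.
    def score(label):
        c = counts[label]
        n = sum(c.values())
        return sum(math.log((c[w] + 1) / (n + len(vocab)))
                   for w in tokenize(text))
    return max(counts, key=score)

# Toy, made-up training examples for illustration only.
examples = [
    ("I feel hopeless and alone", "at-risk"),
    ("everything is a burden to everyone", "at-risk"),
    ("we had fun at the park today", "control"),
    ("the weather was lovely this morning", "control"),
]
counts, vocab = train(examples)
print(classify("I feel so alone", counts, vocab))  # prints "at-risk"
```

    A model like this never "understands" a sentence; it only learns which word patterns co-occur with each label, which is why large, carefully annotated collections such as the 1,300 notes matter so much.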

    To test whether his computer model could accurately recognize thought markers for suicide, Pestian conducted a series of experiments. For the first, in 2005, he used 33 real suicide notes and 33 simulated notes.

    He asked 43 mental health workers, including seasoned professionals and psychiatry trainees, to read the notes and identify which were real.

    On average, they were right about 55 percent of the time. His computer model was right nearly 80 percent of the time. 

    Moving from Structure to Sentiment 

    Encouraged by this promising result, the research team took the next step: sentiment analysis. With funding from the National Institutes of Health, Pestian sponsored an international competition for scientists who specialize in natural language processing to create computer algorithms to classify emotions in suicide notes.

    Anyone who uses Google sees natural language processing at work, Pestian explains. You start typing a word, and the rest of it pops up.

    The software predicts the word you intend to write. Or perhaps you search Amazon for a songwriter’s recent release. Next thing you know, Google Music has a playlist.
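    Autocomplete-style prediction of this kind can be illustrated with a minimal bigram model, a simplified sketch rather than how any real search engine works: count which word most often follows each word in a corpus, then suggest the top continuation. The corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    # For each word, count which words follow it in the corpus.
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(word, follows):
    # Suggest the most frequent continuation, as autocomplete does;
    # return None when the word was never seen.
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

# A toy corpus; real systems learn from vast logs of text.
corpus = [
    "natural language processing is hard",
    "natural language processing works",
    "natural language models improve",
]
follows = train_bigrams(corpus)
print(predict_next("language", follows))  # prints "processing"
```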

    Predicting a word from patterns of past usage is one thing. Identifying and predicting an emotion is another, far more challenging problem.

    For the community of linguists and computer scientists interested in sentiment analysis, a database of 1,300 suicide notes was an extraordinary resource. Twenty-four teams around the world competed to develop the most accurate algorithms for classifying emotions found in text. The winning entry was developed by Microsoft Asia’s research lab. Work continues to refine and improve the algorithms.

    In addition to linguistic structure and sentiment, Pestian is incorporating data from sound waves and silences, from facial expressions and genetics, giving the computer more ways to learn how people behave when they are contemplating suicide. 

    Validating Through Clinical Trials 

    The accuracy of Pestian’s approach is being tested and validated through clinical trials involving real patients in four different emergency room settings.

    In the first small trial at Cincinnati Children’s, suicidal and control group patients were asked several open-ended questions. Their responses were recorded and transcribed. The computer model was then put to the test.

    It was able to accurately assign the responses to the right group, suicidal or non-suicidal, at least 93 percent of the time.

    Pestian has now begun a larger trial that will involve 500 adults and children at hospitals in Cincinnati, Appalachia and Canada.

    He looks forward to the day when staff in emergency rooms and psychiatric hospitals will have a reliable new diagnostic tool at their side to help them evaluate patients at risk for suicide, and above all, to save lives. 

    Clinical counselor Nicole Piersma, LPCC, evaluates a young patient in the Emergency Department.

  • Virtual Human

    Can a virtual human become an auxiliary resource to the staff in an emergency room? Would patients be comfortable, open and honest talking to an avatar?

    The mere idea of talking to a computer may sound like science fiction, but it’s real.

    John Pestian’s team and research collaborators at sites across the country are taking artificial intelligence to the next level. They’re creating avatars (virtual humans) that can move realistically, listen attentively, make conversation and analyze input faster than any human.

    Among the benefits Pestian foresees: computer avatars could expand staff resources in communities where mental health expertise isn’t available. The appearance of the avatars could be adjusted to look like whomever the patient prefers to talk to: male or female, a friend the patient’s own age or race, a grandmother figure. “We can make it anything that will help the patient tell us more,” Pestian says.

    We’ve seen what virtual humans can do in movies. Now Pestian and other scientists envision a new, innovative application for clinical care. And believe it or not, it’s not in a time and galaxy far away. It’s just over the horizon.

    John Pestian, PhD, director, Computational Medicine Center. His specialty is machine learning: teaching computers to think.

    Numbers at a Glance

    •  Every 14 minutes, someone in the United States dies by suicide. Cincinnati Children’s researchers are developing a more accurate tool for predicting suicide and saving lives. 

    •  Suicide is the third leading cause of death among 15- to 25-year-olds in the United States. 

    •  The research team's computer model accurately recognized responses as suicidal or non-suicidal at least 93 percent of the time.