Suicide is the second leading cause of death among 25- to 34-year-olds, the third leading cause among 15- to 25-year-olds, and among the 10 to 15 leading causes of death overall in the United States. In the Emergency Department, a particular problem is deciding how best to manage patients who have presented after attempting, but not completing, suicide. We know of no evidence-based risk assessment tool for predicting repeated suicide attempts, so emergency medicine clinicians are often left to assess suicidal patients with clinical judgment alone. The purpose of this research is to discover new ways to predict suicide.
The Pestian Lab group has conducted two prospective studies to understand the acoustic, linguistic, facial expression, and genetic features of suicidal patients and patients with mental illness. The goal of the first study, the Suicidal Adolescent Clinical Trial (ACT), conducted in 2011, was to build a machine learning classifier to differentiate suicidal and control adolescent patients using their linguistic and vocal characteristics. The ongoing Suicide Thought Marker (STM) study is an expanded version of the ACT study; its main goal is to build a machine learning classifier to differentiate suicidal patients, patients with mental illness, and control patients using their linguistic, vocal, genetic, and/or facial expression characteristics.
In both studies, machine learning techniques have been used to determine suicidal risk using linguistic and acoustic features. Ongoing analyses of the STM will determine the power of facial expression and genetic features in complementing the assessment of suicidal risk and mental illness.
Machine learning classifiers were built using either linguistic features or acoustic features from the ACT data set. These classifiers were able to differentiate between suicidal and control patients with at least 90% accuracy. Publications related to the STM data are in progress, and comparable accuracies are expected: although the STM cohort is more diverse in age range, demographics, and variety of mental disorders, it is also larger.
The development of the Suicide Notes database marked the initial phase of suicide research for the Pestian Lab. It was our desire to build a "corpus," or collection of notes, and annotate the words, phrases, and sentences in them according to the emotions they convey. The database contains over 1,300 notes written by people before they died by suicide, collected between 1950 and 2011 by Dr. Edwin Shneidman and Cincinnati Children's Hospital Medical Center.
After the notes were compiled, transcribed and anonymized, annotators were recruited to identify the emotions in these notes. "Vested volunteers" -- specifically those with an emotional connection to the subject of suicide -- were enlisted from online suicide support communities to identify emotions such as anger, blame, fear, guilt, hopelessness, etc. Up to three annotators were assigned to a single note, which allowed for interannotator agreement to be evaluated.
A total of 1,278 notes were annotated at least once and 1,008 were evaluated three times. Interannotator agreement of the emotions assigned to sentences indicated good agreement.
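Agreement of this kind can be illustrated with a simple pairwise computation: for every pair of annotators, count how often they assigned the same emotion to the same sentence. This is a minimal sketch only; the study's actual agreement statistic may differ, and the annotator labels below are invented for illustration.

```python
from itertools import combinations

def pairwise_agreement(annotations):
    """annotations: list of per-annotator label lists, aligned by sentence.
    Returns the fraction of annotator-pair comparisons that agree."""
    agree = total = 0
    for a, b in combinations(annotations, 2):
        for x, y in zip(a, b):
            agree += (x == y)
            total += 1
    return agree / total

# Three hypothetical annotators labeling the same four sentences.
ann1 = ["guilt", "anger", "hopelessness", "love"]
ann2 = ["guilt", "blame", "hopelessness", "love"]
ann3 = ["guilt", "anger", "hopelessness", "fear"]
print(round(pairwise_agreement([ann1, ann2, ann3]), 2))  # → 0.67
```

Published agreement measures typically also correct for chance agreement (e.g., Krippendorff's alpha), which raw pairwise agreement does not.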
The International Challenge in Emotion Prediction from Suicide Notes study evaluated the performance of natural language processing systems in classifying emotions in the sentences of suicide notes.
The organizers provided the classification scheme and a manually annotated training dataset of 1,000 suicide notes in which each sentence was associated with zero or more emotions. Participating teams were then asked to submit predictions of the emotions in another set of notes, whose true assignments were known only to the organizers. Each team’s performance was evaluated using the “micro-average F1 score”, which can be roughly interpreted as a measure of accuracy.
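The micro-average F1 score pools true positives, false positives, and false negatives across all emotion labels and sentences before computing precision and recall. A minimal illustration, with invented sentence labels (this is the standard definition of the metric, not the challenge's evaluation code):

```python
def micro_f1(true_labels, predicted_labels):
    """Each argument is a list of sets of emotion labels, one set per
    sentence (a set may be empty). Counts are pooled over all labels."""
    tp = fp = fn = 0
    for truth, pred in zip(true_labels, predicted_labels):
        tp += len(truth & pred)   # labels both assigned
        fp += len(pred - truth)   # labels predicted but not true
        fn += len(truth - pred)   # labels true but not predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Three hypothetical sentences, each with zero or more emotions.
truth = [{"guilt"}, {"hopelessness", "love"}, set()]
pred  = [{"guilt"}, {"hopelessness"}, {"anger"}]
print(round(micro_f1(truth, pred), 2))  # → 0.67
```

Because counts are pooled before averaging, frequent emotions influence the score more than rare ones.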
Twenty-four teams submitted results. F1 scores ranged from 0.30 to 0.61, reflecting the diversity of approaches used to make the emotion assignments. Although some might view the top score as low, it is important to keep in mind that interannotator agreement among humans assigning emotions to the sentences within these same notes was measured to be a = 0.55.
ULTRE was an initiative started at Cincinnati Children's Hospital Medical Center to help chaplains screen more efficiently for patients undergoing religious struggle. The goal was to build a decision support tool that hospital chaplains could use to prioritize their meetings with families who may be in spiritual crisis.
The prayers in hospital chapel prayer books were electronically transcribed. Each prayer was then annotated by chaplains from the United States and the United Kingdom according to a particular ontology: each chaplain annotator identified whether certain themes, such as “Questioning where God is” and “Expression of Guilt”, occurred in the prayer. Natural language processing techniques (e.g., machine learning) were then used to build a mathematical model that could automatically identify those prayers containing religious struggle.
Multiple approaches were applied to analyze the prayers. For instance, one analysis compared the sentiments (e.g., positive, negative) expressed in prayers written in English and Spanish. Another analysis directly built machine learning (automated) classifiers that used the frequencies of words in the prayers to identify religious struggle.
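A word-frequency classifier of the kind described can be sketched as a small multinomial Naive Bayes model. This is not the lab's actual pipeline, and the toy "prayers" and labels below are invented solely to show the technique:

```python
import math
from collections import Counter

def train(docs, labels):
    """Fit class priors and Laplace-smoothed word counts per class."""
    classes = set(labels)
    priors, counts, totals, vocab = {}, {}, {}, set()
    for c in classes:
        cdocs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = len(cdocs) / len(docs)
        counts[c] = Counter(w for d in cdocs for w in d.lower().split())
        vocab.update(counts[c])
    for c in classes:
        totals[c] = sum(counts[c].values())
    return priors, counts, totals, vocab

def predict(model, doc):
    """Return the class with the highest posterior log-probability."""
    priors, counts, totals, vocab = model
    scores = {}
    for c in priors:
        s = math.log(priors[c])  # log-prior
        for w in doc.lower().split():
            # Laplace-smoothed word likelihood (unseen words get count 0)
            s += math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
        scores[c] = s
    return max(scores, key=scores.get)

docs = ["why has god abandoned us", "thank you god for this blessing",
        "we feel so guilty and alone", "grateful for healing and peace"]
labels = ["struggle", "no_struggle", "struggle", "no_struggle"]
model = train(docs, labels)
print(predict(model, "god why do we feel abandoned"))  # → struggle
```

A real system would of course be trained on the annotated prayer corpus and validated against chaplain judgments rather than toy examples.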