Using natural language processing to detect mental health crises


Mental health demands are on the rise. Over the past 20 years, suicide rates have increased by more than 30%, and approximately one in five Americans currently lives with a mental illness. According to the National Alliance on Mental Illness (NAMI), which provides free support to anyone going through a crisis, the number of people seeking treatment increased by 60% between 2019 and 2021.

To address this, organizations and health care providers are turning to digital tools. Crisis hotlines and chat lines are being used to support patients in crisis, but dropped-call rates remain high and these services are often siloed from clinicians.

The National Suicide Prevention Lifeline reported a 30% chat response rate and a 56% text response rate in 2020. Because incoming messages are handled with standard first-come, first-served queuing, many patients in crisis are left without timely support.

The research team created a machine learning (ML) system named Crisis Message Detector 1 (CMD-1), which uses natural language processing to recognize alarming messages and automatically prioritize them, reducing patient wait times from roughly 10 hours to under 10 minutes.
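The article does not describe CMD-1's model architecture, so the following is only a minimal sketch of how a natural-language crisis classifier can work, assuming a simple TF-IDF plus logistic-regression baseline in Python; the messages and labels are hypothetical placeholders.

    # Minimal sketch of a crisis-message classifier (illustrative only;
    # the actual CMD-1 model is not specified in the article).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # Hypothetical labeled examples: 1 = crisis, 0 = non-crisis.
    messages = [
        "I need to reschedule my appointment next week",
        "Please refill my prescription",
        "I don't want to be here anymore, I'm thinking about ending it",
        "I hurt myself last night and I'm scared",
    ]
    labels = [0, 0, 1, 1]

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(class_weight="balanced")),
    ])
    model.fit(messages, labels)

    # Score an incoming message; a high probability would route it
    # to a crisis specialist instead of the regular queue.
    incoming = "I can't keep going like this"
    crisis_probability = model.predict_proba([incoming])[0][1]
    print(f"Crisis probability: {crisis_probability:.2f}")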

Swaminathan says:

“For clients who are suicidal, the wait time was simply too long. The implication of our research is that data science and ML can be successfully integrated into clinician workflows, leading to dramatic improvements when it comes to identification of patients at risk, and automating away these really manual tasks.”

Crisis Specialist Empowerment

The team sourced the data for CMD-1 from Cerebral, whose chat system receives hundreds of patient messages every day. These messages cover a wide range of issues, from appointment scheduling to medication refills, as well as messages from patients in crisis.

The team analyzed roughly 200,000 patient messages, labeling each as “crisis” or “non-crisis” based on key crisis words and patient IDs, in order to surface signs of suicidal ideation, domestic violence, or self-harm.
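As a rough illustration of that keyword-based labeling step, here is a minimal sketch; the actual keyword list and labeling rules used by the team are not public, so the terms below are hypothetical.

    # Illustrative keyword-based labeling (hypothetical keyword list).
    CRISIS_KEYWORDS = {
        "suicide", "suicidal", "kill myself", "end my life",
        "self-harm", "hurting myself", "domestic violence",
    }

    def label_message(text: str) -> str:
        """Label a message 'crisis' if it contains any crisis keyword."""
        lowered = text.lower()
        if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
            return "crisis"
        return "non-crisis"

    print(label_message("Can I get a refill on my medication?"))  # non-crisis
    print(label_message("I have been having suicidal thoughts"))  # crisis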

The team labeled messages conservatively, weighing false negatives against false positives: missing a crisis message (a false negative) was treated as 20 times more undesirable than escalating a non-crisis message (a false positive).
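One common way to act on such an asymmetric cost, not necessarily the team's exact method, is to convert the 20:1 cost ratio into a decision threshold on the classifier's predicted crisis probability, as sketched below.

    # Turning the 20:1 cost ratio into a decision threshold (assumed
    # illustration; only the 20x ratio comes from the article).
    COST_FALSE_NEGATIVE = 20.0  # cost of missing a crisis message
    COST_FALSE_POSITIVE = 1.0   # cost of escalating a non-crisis message

    # Escalate when the expected cost of ignoring exceeds the expected
    # cost of escalating: p * C_fn > (1 - p) * C_fp
    # which gives the threshold p > C_fp / (C_fp + C_fn).
    threshold = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)
    print(f"Decision threshold: {threshold:.3f}")  # about 0.048

    def should_escalate(crisis_probability: float) -> bool:
        return crisis_probability > threshold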

CMD-1 accurately detected high-risk messages, reducing response time from over 10 hours to about 10 minutes, a difference that can be critical for reaching high-risk patients before a suicide attempt.
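The reduction in wait time comes from moving flagged messages ahead of routine ones rather than handling everything first-in, first-out. Here is a minimal sketch of such a prioritization step, with queue structure and field names assumed for illustration.

    # Illustrative priority queue: crisis messages jump ahead of
    # routine messages instead of waiting in arrival order.
    import heapq
    import itertools

    queue = []
    counter = itertools.count()  # tie-breaker keeps arrival order stable

    def enqueue(message: str, is_crisis: bool) -> None:
        priority = 0 if is_crisis else 1  # lower value is served first
        heapq.heappush(queue, (priority, next(counter), message))

    enqueue("Can you reschedule my appointment?", is_crisis=False)
    enqueue("I'm thinking about hurting myself", is_crisis=True)

    priority, _, next_message = heapq.heappop(queue)
    print(next_message)  # the crisis message is handled first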
