A recent study conducted by Vanderbilt University Medical Center (VUMC) reveals the promising potential of artificial intelligence (AI) in improving suicide prevention efforts in medical settings. Led by Colin Walsh, MD, MA, an associate professor of Biomedical Informatics, Medicine, and Psychiatry, the team developed a novel AI system called the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL). This AI-driven approach uses routine patient information to help physicians identify individuals at high risk of suicide. The study’s results suggest that integrating AI alerts into the clinical workflow could be an effective tool for improving early intervention in suicide prevention.
The study, which was published in JAMA Network Open, examined the effectiveness of two types of AI-powered alerts—interruptive and passive—in three neurology clinics at VUMC. Interruptive alerts were designed to interrupt a doctor’s workflow, directly prompting them to screen patients for suicide risk. On the other hand, passive alerts simply displayed information in the patient’s electronic health record, without pausing the doctor’s current tasks. The aim was to compare how the two alert systems influenced the frequency of suicide risk screenings during regular patient visits.
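The distinction between the two study arms is essentially a choice of delivery mechanism. As a purely illustrative sketch (the study does not describe its software implementation, and all names here are hypothetical), the two modes might be modeled like this:

```python
from enum import Enum


class AlertMode(Enum):
    # An interruptive alert pauses the clinician's workflow with a prompt
    # that must be acknowledged before work can continue.
    INTERRUPTIVE = "interruptive"
    # A passive alert only writes information into the patient's chart,
    # leaving the clinician's current task undisturbed.
    PASSIVE = "passive"


def deliver_alert(mode: AlertMode, patient_id: str) -> str:
    """Describe how a risk alert surfaces, mirroring the two study arms."""
    if mode is AlertMode.INTERRUPTIVE:
        return f"Pop-up for {patient_id}: acknowledge suicide-risk screening prompt"
    return f"Chart note for {patient_id}: risk flag recorded in the EHR"
```

The design trade-off the study measured is visible here: the interruptive path demands an acknowledgment, which drives higher response rates but risks alert fatigue, while the passive path is unobtrusive and easy to overlook.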
The findings were clear: interruptive alerts significantly outperformed passive ones. Doctors responded to 42% of the interruptive alerts with suicide risk assessments, compared to only 4% of the passive alerts. This result reinforces the idea that a direct and immediate prompt in the doctor’s workflow increases the likelihood of action. Dr. Walsh emphasized that suicide prevention can be challenging, especially in settings where universal screening isn’t feasible for every patient. “Most people who die by suicide have seen a healthcare provider in the year before their death, often for reasons unrelated to mental health,” he noted. By proactively identifying high-risk patients, AI systems like VSAIL can help doctors engage in more focused discussions around suicide risk during routine appointments.
Suicide rates in the U.S. have risen steadily in recent decades, with more than 14 deaths per 100,000 Americans each year, making suicide the 11th leading cause of death in the country. Research has shown that a significant proportion of people who die by suicide have seen a healthcare provider within the year prior to their death. The challenge lies in the fact that many of these patients do not present with obvious mental health concerns during their visits. Identifying those at increased risk for suicide has therefore become a top priority for researchers and healthcare providers alike.
The VSAIL model addresses this issue by analyzing electronic health records (EHR) data and calculating the 30-day suicide risk for patients. By examining a combination of factors such as demographic information, medical history, and behavioral data, VSAIL generates individualized risk scores that signal which patients may require closer monitoring. Earlier studies have shown that VSAIL is effective at identifying high-risk patients. One prior test indicated that 1 in 23 individuals flagged by the model went on to report suicidal thoughts. This kind of predictive capability is instrumental in improving suicide prevention, as it helps doctors focus their attention on those who need it most.
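Conceptually, the model turns EHR-derived features into a score and flags only the visits above a threshold, which keeps alert volume low. The toy sketch below is not VSAIL (whose features and weights are not published); every feature and number here is an invented placeholder, shown only to make the score-then-threshold pattern concrete:

```python
from dataclasses import dataclass


@dataclass
class PatientRecord:
    # Hypothetical EHR-derived features; the real VSAIL feature set,
    # drawn from demographics, medical history, and behavioral data,
    # is far richer than this.
    age: int
    prior_mental_health_dx: bool
    recent_ed_visits: int


def risk_score(rec: PatientRecord) -> float:
    """Toy additive score standing in for a 30-day risk estimate."""
    score = 0.05  # invented baseline
    if rec.prior_mental_health_dx:
        score += 0.10
    score += 0.03 * min(rec.recent_ed_visits, 5)
    return min(score, 1.0)


def should_alert(rec: PatientRecord, threshold: float = 0.15) -> bool:
    # Only visits scoring above the threshold trigger an alert; tuning the
    # threshold controls what fraction of visits get flagged (about 8% in
    # the study), which is how alert fatigue is kept in check.
    return risk_score(rec) >= threshold
```

The key design point is the threshold: rather than screening everyone, the system concentrates clinician attention on the small, highest-scoring slice of visits.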
In the current study, the team applied VSAIL to 7,732 patient visits over a six-month period across three neurology clinics. Neurology clinics were chosen because certain neurological conditions are associated with increased suicide risk. The system generated 596 alerts in total, prompting healthcare providers to perform suicide risk screenings. VSAIL flagged about 8% of all patient visits for screening, and no patients in either the interruptive or passive alert group experienced episodes of suicidal ideation or attempts during the 30-day follow-up period.
The approach demonstrated several advantages: VSAIL provided a focused, targeted way to flag high-risk patients while keeping the volume of alerts manageable. Automated alerts that fire too frequently or too disruptively can cause “alert fatigue,” reducing their effectiveness and dulling clinician responsiveness. The researchers caution that while interruptive alerts appear more effective, overuse could still contribute to clinician burnout.
Dr. Walsh pointed out that healthcare systems must carefully balance the benefits of such interruptive alerts with their potential downsides, such as alert fatigue. Nevertheless, the results from this study strongly suggest that AI-powered alerts can play a vital role in improving suicide prevention efforts by focusing on those patients most at risk. Combining AI risk detection with well-designed alert systems might ultimately help healthcare providers catch critical early warning signs and take prompt action. Dr. Walsh and his colleagues believe that these kinds of automated alerts could eventually be expanded and applied across a range of medical specialties beyond neurology, from primary care settings to emergency departments, offering a scalable model for suicide prevention that integrates seamlessly into a busy clinical environment.
Despite the positive outcomes of the study, researchers recognize the importance of ongoing evaluation. Future studies will need to assess how long-term use of the alerts impacts clinical practice and whether patients are consistently screened in line with identified risks. There is also a need to explore other potential challenges that may arise from implementing AI-driven systems at larger scales.
For now, the findings highlight a clear advantage of AI in identifying patients at risk for suicide who might otherwise be missed. By refining these systems, researchers hope to create tools that not only alert providers to the risk of suicide but also prompt vital conversations between doctors and patients—conversations that could ultimately save lives. As healthcare systems continue to explore innovative ways to integrate AI into their workflows, suicide prevention remains an area where technology, when used thoughtfully and effectively, has the potential to create real, life-saving impacts in the clinical setting.
Reference: Risk Model–Guided Clinical Decision Support for Suicide Screening, JAMA Network Open (2025). DOI: 10.1001/jamanetworkopen.2024.52371