Behavioral Healthcare AI, Part 2: Detecting Suicide Risk for Earlier Intervention

By Physician's Weekly Editors - Last Updated: April 30, 2025

In This Episode

PeerPOV: The Pulse on Medicine is a weekly podcast series that features expert commentary on the latest healthcare news, landmark research, and more.

Tom Zaubler, MD, continues sharing his insights on AI’s applications in the field of behavioral healthcare, emphasizing the technology’s ability to identify which patients are at high risk for suicide.

Let us know what you thought of this week’s episode on Twitter: @physicianswkly

Want to share your medical expertise, research, or unique experience in medicine on the PW podcast? Email us at editorial@physweekly.com!

Thanks for listening!

TRANSCRIPT:

Welcome back to PeerPOV: The Pulse on Medicine, a podcast series by Physician’s Weekly showcasing the latest insights from your peers across the medical community.

In this week’s episode, Dr. Tom Zaubler continues discussing how AI can expand access to behavioral healthcare and improve patient outcomes.

Let’s say you’re a behavioral health provider. You’re seeing a patient in a snapshot of time, which can be limiting. AI allows for the collection of data that is not limited to that snapshot. One reason that is so important is safety. Just to give you an example, we know that suicidal thoughts can emerge and become acute very suddenly and discontinuously. We used to believe there was a gradual increase in distress level, and people would crescendo to a point where they became acutely suicidal. In fact, that’s often not how it works. Often, suicidality emerges very suddenly.

There may be an underlying predisposition, like depression or anxiety. That’s not always the case, though. Not everyone who becomes suicidal is suffering from depression or anxiety. A significant percentage of people are not. We know environmental triggers may make someone feel unsafe, and it’s important to capture those moments when people are at risk as best as possible. When you’re meeting with a patient at two o’clock on a Wednesday afternoon, they may not be at risk. It may be that they will be at risk at three in the morning.

At NeuroFlow, we have journaling. We have natural language processing that will pick up, in a patient’s journal, when they are having thoughts of hurting themselves. If a patient indicates, either through a scale or their journal, that they are at risk, natural language processing will pick that up, and the patient will immediately get what we refer to as a “caring contact,” with both national and local resources they can call to get help. At NeuroFlow, we will then follow up through a response services team, with a human reaching out to conduct a suicide risk assessment. Based on how that patient is doing, they may wind up being directed to the emergency room or intensive outpatient treatment, or maybe the decision is that there’s no immediate risk and they can see their therapist in three or four days.
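To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of screening pipeline described above: scan free-text journal entries for self-harm language and flag the ones that should trigger a caring contact and human follow-up. This is not NeuroFlow’s actual system; a real platform would use a validated, trained NLP model rather than the illustrative keyword patterns below.

```python
import re

# Illustrative phrase patterns only; a production system would use a
# clinically validated model, not a hand-written keyword list.
RISK_PATTERNS = [
    r"\bhurt(ing)? myself\b",
    r"\bsuicid\w*\b",
    r"\bend (my|it) (life|all)\b",
    r"\bdon'?t want to (live|be here)\b",
]

def flag_entry(text: str) -> bool:
    """Return True if a journal entry contains possible self-harm language."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in RISK_PATTERNS)

def triage(entries: list[str]) -> list[int]:
    """Return indices of journal entries that should trigger a caring
    contact and a human suicide risk assessment."""
    return [i for i, entry in enumerate(entries) if flag_entry(entry)]
```

For example, `triage(["Slept well today", "I keep thinking about hurting myself"])` returns `[1]`, flagging the second entry for follow-up. The design point the transcript makes is that this check runs whenever the patient journals, including at three in the morning, not only during scheduled appointments.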

But those assessments often don’t happen, and the onset of suicidal ideation to the point of acting on it can occur within minutes. It’s important to intervene. Something as simple as a caring contact—an email or a phone call—can be lifesaving.

AI is a way of extending the work that goes on in practice, and we need to teach behavioral health providers that technology is not only a workforce multiplier, but it’s also a way of delivering care at critical moments when behavioral health providers are unavailable. We must help behavioral providers understand that AI is a way of improving and democratizing access, managing stigma, and standardizing care, making sure that it’s measurement-based, and making sure patients can access services not just at two o’clock on a Wednesday, but around the clock.

It’s interesting to look at how AI will continue to be transformative in terms of its impact on behavioral healthcare delivery or in a range of medical settings. We publish a lot on our work, whether it’s on suicide prevention or utilizing AI for other purposes. We compare usual care with usual care plus AI, and adding AI enhances outcomes robustly and significantly. There’s growing evidence that supports that, and we continue to contribute to that.

Along with that, there’s an interesting finding. A recent study showed that technology, particularly telehealth, has increased the number of people getting therapy in this country. It used to be that about 3% to 4% of individuals would pursue mental health services, psychotherapy in particular. With telehealth, it’s gone up to about 8% or 9% post-pandemic.

The problem is that telehealth has not addressed disparities in care. Telehealth has largely benefited the socioeconomically advantaged, and it’s largely been those with mild to moderate psychiatric illnesses who have gained from the improved access. So, we need to think about how we can improve access for those with more severe conditions, and for people in zip codes where socioeconomics is so incredibly determinative of whether someone has broadband access or will get care. There are huge disparities. Even though access to care has improved among socioeconomically advantaged individuals, it has not improved for those who are not socioeconomically advantaged, and even in certain communities that are advantaged, care has not improved. Among African American youth, there has been a substantial decrease in the number of individuals receiving care post-pandemic.

What can AI do to address this? One way is identifying problems seamlessly. There are challenges, obviously, with broadband; AI can’t solve all of these problems. However, once you start to improve access to broadband and AI, you can engage individuals with AI in a way that allows them to open up. There can be avatars, for example, that are culturally sensitive and allow individuals in various communities to feel they can relate. The work being done is remarkable in that regard.

We find that AI and large language models appear to be empathic. Now, that’s not to suggest they’re going to replace human beings, but it does suggest there will increasingly be opportunities to use AI for suicide assessments in the moment to determine whether someone needs to go to the emergency room.

We’re going to see that, for those who don’t have the ability to access care because they live in a behavioral health desert or don’t have the means or insurance, AI can deliver that care in a very efficient way. There will be more focus on delivering care through large language models—again, supplementing human care, but it may be chatbots that can deliver care seamlessly. It may be one-off interventions delivered through AI, moving beyond 14 weeks of digital cognitive behavioral therapy to specific interventions that can be done in the moment to help people build resilience and manage acute distress. There are many ways that can be done. AI offers a lot of promise to deliver care in ways very different from what we’ve contemplated before. AI can help with workforce shortages, improve access, and democratize access in a way that telehealth alone has not done.

I’ll give you an example of how AI has already improved how we manage those in the most extreme distress, where there’s concern about safety and suicidality. We recently published a study looking at individuals who journaled, and natural language processing on our platform picked up a concern about safety. Those same individuals had completed scales to assess suicidality and indicated no concern at all, yet natural language processing applied to their journaling flagged a safety concern. Those individuals then received appropriate interventions from humans, whose further assessments determined there was a need for care. Looking at novel ways of detecting risk, stratifying populations according to risk, and ensuring timely delivery of care, especially in moments of great need, can save lives. Those are just a few examples.

Lastly, we talked about avatars being culturally sensitive. We can have AI deliver care in just about any language. We can have AI deliver care with avatars that look just like the person who is asking for care. There are so many things that we can do. Again, this is not intended to replace human beings. It’s intended to complement the care that human beings deliver and, as I’ve said many times, to deal with the workforce shortage, standardize care, and augment the level of care provided.

With the advent of electronic health records, a lot of behavioral health professionals—a lot of healthcare professionals in general—look at technology as something burdensome that disrupts workflows and takes up a lot of time. We must continue to work hard to educate healthcare providers that we’ve come a long way from the traditional health record. Technology now can disrupt workflows in a positive way, creating newer workflows that are more user-friendly for patients and providers, and creating economies of scale to deliver healthcare more efficiently, economically, and effectively. When you look at the quintuple aim of healthcare—improving the care experience, outcomes, cost of care, equity, and provider satisfaction—technology ticks all of those boxes. Once providers get beyond the bias that technology will create more work rather than improving their workflows and making them more efficient (and that’s an important step for all of us to take), it can improve the quality of care and physicians’ and patients’ lives.

We must educate people on what it means to utilize technology. Things are changing so rapidly, and sometimes, that can be intimidating. It’s important that we address any concerns and biases, continue to educate providers and individuals seeking care about what technology is and what it is not, and reinforce all the safeguards in terms of data privacy and security.

Thanks for listening. Stay tuned for next week’s episode. To hear more, follow PeerPOV: The Pulse on Medicine on Apple Podcasts, Spotify, or Amazon Music.

This transcript has been edited for readability.
