Humans+Robots

A suicide hotline powered by big data

A crying-face emoji. A 4 a.m. text. At Crisis Text Line, AI helps find the warning signs of danger.

By Glenn McDonald

It happens more than 3,000 times a day: Someone on a smartphone types a message, sad or worried or desperate, to the nonprofit crisis-intervention service Crisis Text Line.

But before a human gets on the other end, artificial intelligence is already at work. Carefully calibrated algorithms weigh the message’s language against the time of day and other contextual clues, scanning for trigger words surfaced by data analysis. In a split second, the system makes an educated, machine-learned guess about whether the crisis requires immediate emergency intervention.

It’s essentially the same technology that scans your email to serve up ads, but applied instead to matters of life and death.

For many people — young people, especially — texting has become a default mode of communication. Research shows that people text in moments of despair, too. Crisis Text Line was early to the trend: founded in 2013, it’s now the United States’ busiest round-the-clock crisis intervention hotline to use texting as its exclusive mode of communication. Over the past six years, the service has processed more than 130 million anonymous messages, gathering mounds of data in the process.

That data have made Crisis Text Line an increasingly valuable tool in the field of crisis intervention. It’s not just the AI engine, which analyzes initial texts to make the service more effective. Researchers around the world can also gain access, for free, to a scrubbed, fully anonymized version of the dataset — the largest body of crisis data in the U.S., according to a Crisis Text Line spokesperson — to learn how people communicate in their most desperate moments.

Analysis of the data has identified spikes in activity that appear consistent over time: Texts about anxiety peak at 11 p.m., and texts about self-harm are most likely at 4 a.m. The data have also revealed surprising ways that people communicate when they’re most desperate. Words like “ibuprofen,” “Tylenol,” and “bridge,” it turns out, are four times more likely to indicate a high-risk situation than “suicide” or “shooting.”

A study of the conversations that led a crisis counselor to start an “active rescue” — a call to emergency services — revealed another unexpected pattern. In those conversations, a crying-face emoji was four times more likely to appear than the word “suicide.” Crisis Text Line’s artificial intelligence system scans incoming texts for those kinds of words and phrases, then routes high-risk messages to the front of the queue — similar to a hospital’s triage system.
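To picture how that kind of triage might work under the hood, here is a minimal sketch — a hypothetical illustration, not Crisis Text Line’s actual system. It assigns weights to a few of the risk signals mentioned in this story (words like “ibuprofen” and “bridge,” a crying-face emoji, an overnight timestamp) and pushes each incoming text into a priority queue so the highest-risk conversation surfaces first. The RISK_SIGNALS list, the weights, the overnight bump, and the QueuedText and enqueue helpers are all invented for illustration.

```python
# Hypothetical sketch of keyword-weighted triage -- not Crisis Text Line's real model.
import heapq
import itertools
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative signals and weights only; a real system would learn them from labeled conversations.
RISK_SIGNALS = {
    "ibuprofen": 4.0,
    "tylenol": 4.0,
    "bridge": 4.0,
    "\U0001F622": 4.0,   # crying-face emoji
    "suicide": 1.0,
}

_arrival = itertools.count()  # tie-breaker so equal scores keep arrival order


@dataclass(order=True)
class QueuedText:
    neg_score: float             # negated so heapq's min-heap pops the riskiest text first
    order: int
    message: str = field(compare=False)


def risk_score(message: str, received_at: datetime) -> float:
    """Sum the weights of any risk signals present, plus a small overnight bump."""
    text = message.lower()
    score = sum(weight for term, weight in RISK_SIGNALS.items() if term in text)
    if 2 <= received_at.hour <= 5:   # self-harm texts reportedly cluster around 4 a.m.
        score += 1.0
    return score


def enqueue(queue: list, message: str, received_at: datetime) -> None:
    heapq.heappush(queue, QueuedText(-risk_score(message, received_at), next(_arrival), message))


queue: list = []
enqueue(queue, "i took a bunch of tylenol \U0001F622", datetime(2020, 1, 1, 4, 2))
enqueue(queue, "just feeling kind of down today", datetime(2020, 1, 1, 4, 5))
print(heapq.heappop(queue).message)   # the Tylenol message jumps to the front
```

The real service presumably weighs far richer signals than a keyword list, but the queue-jumping idea is the same: the riskiest texters reach a counselor first.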

“Algorithms allow us to create a queue designed to help the texters who are in imminent risk of suicide first,” says spokesperson Laurel Schwartz.

All texters get an immediate, automated welcome reply. The nonprofit aims to send high-risk texters a personalized reply within 30 seconds and all texters within five minutes.

That’s where the human work comes in. Jonah Sharkey, 46, is one of the service’s 4,500 volunteer counselors. Based in North Carolina, he volunteers in two-hour shifts, plugging into the system from anywhere with a computer and an internet connection. In two years of volunteering, Sharkey has learned how to pick up on critical cues within incoming texts.

“I find that often people who are closest to ending their life are those with the fewest words to share,” Sharkey says. “Those are the ones I take extra care with, to gently bring out what they want to talk about and understand what they’re struggling with.”


Artificial intelligence and data analysis were built into Crisis Text Line’s architecture from the get-go, Schwartz says. In fact, employees think of the service as a tech company as much as a mental health nonprofit. On its blog and YouTube channel, the service maintains a casual style — as informal and direct as modern text communications. The New York offices have conference rooms named after Hogwarts houses from the “Harry Potter” books.

Crisis Text Line is also now the world’s second-largest distributed network of active volunteers, behind only Wikipedia, according to the nonprofit recruiting service VolunteerMatch. Real-time tools give Crisis Text Line counselors access to research and professional guidelines. Volunteer supervisors can digitally shadow counselors online and communicate directly with them as text exchanges develop. Counselors also have a channel for communicating with each other.

The technology helps crisis counselors do the critical work, even when things get hectic. Sharkey says texts often spike depending on the news and what’s trending on social media. “Some nights we might have 20 volunteers online,” Sharkey says, “and each one has four or five conversations going.”


Over time, and with the supervisors’ support, Sharkey says he’s gotten better at identifying dangerous situations, navigating the hot moment, and talking people toward a cooler place. He can tell that texters are panicked, he says, “if they’re repeating themselves or putting out a lot of energy. But then throughout the conversation, they’ll get clearer and clearer. They’ll say, ‘I’m calming down,’ or even, ‘I feel like I can handle this now — let me go back to my day.’”

Researchers have used Crisis Text Line data to analyze the work of the counselors themselves — determining, for instance, that the longer counselors work for the service, the more they increase the diversity of words they use to communicate with texters.

Other researchers have studied how people use the service overall. One recent study published in the journal Advances in Social Work examined dozens of text-counseling transcripts and concluded that young people, in particular, appreciate the app-based approach.

“Texters valued the privacy and flexibility of texting that permitted them to receive help immediately rather than delaying,” according to the study, authored by Ande Nesmith, associate professor at the University of St. Thomas in Minnesota. “Texters and counselors suggested that the texting option might lead young people to seek help that they might otherwise avoid.”


It’s not just young people who are texting in. The service is open to anyone who texts 741741 — the left-hand side of a phone’s keypad. And Crisis Text Line’s data provide insights about texters of all ages. All conversations are confidential, but people can provide information about themselves in an optional survey at the end of a text exchange.

A recent blog post from the service outlines some statistics drawn from that survey data. Conversations about suicide peak on Sunday nights and are the lowest on Friday and Saturday. People who text during lunchtime are more likely to be between 25 and 54. Texters who mention social media are more likely to be over 35. Texters 13 and under are more likely to be of color and identify as LGBTQ, and are three times as likely to talk about bullying. Some of the nonprofit’s lowest volumes — contrary to popular belief about holidays and mental health — come the weeks of Thanksgiving and Christmas.

Unlike many businesses in the Big Data arena, Crisis Text Line isn’t monetizing its data by selling it in secret batches to merchants. (And there have been offers.) Instead, the nonprofit is openly sharing information with researchers, scientists, and the public at large on the website CrisisTrends.org.

And some researchers are using the Crisis Text Line data to sprint off in entirely new directions. Jennifer Runkle, of the North Carolina Institute for Climate Studies, examines the mental health impact of natural catastrophes. Along with colleague Maggie Sugg, an assistant professor of geography and planning at Appalachian State University in North Carolina, Runkle analyzed the use of Crisis Text Line after Hurricane Florence in 2018. The research concluded that text-based crisis intervention should be better publicized in vulnerable rural areas.

“Typically, these areas have the highest suicide rates, but lowest CTL usage,” Runkle says.

Text-based crisis intervention is gaining traction in the world of mental health and social services. Peer-to-peer text networks are popping up on college campuses. Some established helpline networks have added texting options. And a coalition of crisis groups is promoting new federal legislation that would require colleges and universities to include numbers for Crisis Text Line and the National Suicide Prevention Lifeline on the back of every student ID.

For those who still remember the dream of the 1990s — before multinationals and trolls and misinformation brokers — this is how the internet was supposed to work. In an age of TikTok, YouTube, and blinding multimedia offerings, Crisis Text Line underscores the power of old-school internet protocols and simple text-based, one-to-one communication.

“Crisis Text Line is like a 911 of sorts for young people,” Runkle says. “It’s fast, free, and anonymous and allows them instantaneous access to crisis services when they are in the heat of a crisis.”

And as odd as it may seem, texting can be more intimate and revealing than a phone call or a face-to-face meeting. “We do find that people share the hard stuff with us via text message,” Schwartz says. “More than half of our texters open up about something they’ve never shared with anyone before.”

As a front-line crisis counselor, Sharkey can confirm that.

“I think when people are in crisis, it’s hard for them to reach out to the people in their life,” Sharkey says. “They know that the people that love them would want to help them, but when they’re in crisis they may not want to take that step. They don’t want to worry their family or friends. But they’ll text someone.”


Glenn McDonald is a writer based in Chapel Hill, North Carolina.

 

Illustration by Lorenzo Gritti
