On May 30, 2017, the Rochester Police Department received a call about a juvenile in the city who had posted something concerning on Facebook.

The caller thought the boy might be contemplating suicide, so officers quickly contacted his mother. She told police her son hadn’t harmed himself and she had already taken him to a local hospital for evaluation.

It was a good end to what, in many ways, was a routine call.

Since 2016, Rochester police have received 95 reports about potentially suicidal residents based on Facebook posts that friends or family members spotted and deemed concerning enough to warrant calling 911.

But the department has also received two reports, including the one from May 30, 2017, that came from a different source: Facebook itself.

Three months before a Facebook employee called Rochester police about the juvenile’s concerning post, the social media company announced the launch of a new artificial intelligence program that scans text and pictures posted to the website for evidence that the user may be suicidal.

“I’m sure there are going to be people out there who say it’s an invasion of privacy,” Rochester police Capt. Jason Thomas said. “But to me, if they’re putting it out there on Facebook, which is public, it’s a cry for help.”

The program is currently running worldwide, looking for patterns in every Facebook user’s posts. It is perhaps the largest and most active example of a burgeoning new use of artificial intelligence, but several more advanced tools are in development.

From companies using Twitter posts and Fitbit data to recognize suicidal inclination to a group of Dartmouth College researchers who developed a program to scan Instagram posts and identify users at a high risk of alcoholism, social media-trawling AIs are quickly becoming tools for detecting elusive behavioral health problems.

“This technology is possible. It’s very clear that there are signals relevant to our mental health that are present in our social media data,” said Glen Coppersmith, CEO of Qntfy, a Virginia company that is using AI to identify people at risk of suicide based on data from Facebook, Twitter, Instagram and wearable devices.

“We can think of ways that we could apply this, but the biggest question is, ‘How ought we apply this?’ ” he said. “There are a lot of really big, obvious questions there. How should it be used? Who should have access to that information?”

The health problems these programs are designed to address are among the nation’s most daunting.

In its announcement last month that life expectancy in the U.S. decreased last year, the Centers for Disease Control and Prevention identified two main causes: increases in suicide and overdose deaths.

Combined, they accounted for 725 deaths in New Hampshire in 2016.

Research at Dartmouth

Saeed Hassanpour and a team of researchers at Dartmouth set out last year to prove that AI could be used to first identify people at high risk of drug and alcohol addiction, and then target them with the proper treatment options.

The team created a neural network, a type of artificial intelligence program, and fed it two sets of data: more than 1.3 million Instagram comments and pictures from 2,287 users, and those users’ responses to a clinical survey that health care professionals use to measure alcohol and drug use.

By analyzing the two data sets together, the network learned to recognize patterns in the Instagram posts that corresponded with survey responses indicating a heightened risk of addiction.
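The article does not describe the network’s design in detail, but the basic workflow it outlines, pairing features extracted from users’ posts with risk labels derived from a clinical screening survey and letting a model find the correlations on its own, can be sketched briefly. The following is a hypothetical illustration using scikit-learn and synthetic stand-in data; the feature dimensions, model size and labels are assumptions, not the Dartmouth team’s actual pipeline.

```python
# Hypothetical sketch (not the Dartmouth team's code): train a classifier that
# maps per-user features derived from social media posts to a risk label
# derived from a clinical screening survey.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in data: one feature vector per user (e.g., pooled image/text
# embeddings of their posts) and a binary label from a survey cutoff score.
# The user count mirrors the study; everything else is synthetic.
n_users, n_features = 2287, 128
X = rng.normal(size=(n_users, n_features))
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A small feed-forward neural network learns whatever patterns in the post
# features co-occur with high-risk survey responses; no hand-picked cues.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out users.
probs = model.predict_proba(X_test)[:, 1]
print("AUC on held-out users:", round(roc_auc_score(y_test, probs), 3))
```

In the real study, the feature vectors would come from users’ actual images and comments and the labels from their survey scores; the point is simply that the model receives raw examples and labels rather than hand-crafted rules, which is also why its reasoning is hard to inspect afterward.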

In an article published in October in the journal Neuropsychopharmacology, the researchers announced that their neural network successfully detected users at a high risk of alcoholism. Hassanpour said they are now working to gather more data and believe they will be able to teach the network to identify indicators of drug addiction as well.

They’re also working on another aspect of the project: figuring out how, exactly, their network achieved its task. Because the program did not receive any human input — such as instructions to look for pictures of people holding alcoholic beverages, for example — the researchers don’t know what patterns it identified.

“We trained the model on the raw data, so we didn’t really give any kind of manual direction to the model on what features maybe contribute to (risk of addiction),” Hassanpour said. “That’s kind of the strength … but at the same time, it makes the models kind of a black box, so we are right now working to try and develop the features that contributed and provide some insight.”

Qntfy’s algorithm picked up on unexpected indicators of suicide risk, Coppersmith said. People contemplating suicide were more likely to use the word “I” than “we” and less likely to use emojis when discussing emotions. Other studies have shown that people contemplating suicide prefer Instagram’s Inkwell filter, which turns pictures black and white.
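Signals like these are simple to compute once post text is available. The snippet below is a hypothetical illustration of counting first-person pronoun and emoji use in a single post; the features and the example output are assumptions for illustration, not Qntfy’s actual feature set or scoring.

```python
import re

# Rough emoji character ranges; illustrative only, not exhaustive.
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\U00002600-\U000027BF]")

def post_signals(text: str) -> dict:
    """Count a few illustrative linguistic signals in a single post."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "i_count": sum(w == "i" for w in words),
        "we_count": sum(w == "we" for w in words),
        "emoji_count": len(EMOJI_PATTERN.findall(text)),
        "word_count": len(words),
    }

print(post_signals("I don't know how we got here 😞"))
# {'i_count': 1, 'we_count': 1, 'emoji_count': 1, 'word_count': 7}
```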

New science always takes time to gain the trust of clinicians and make its way into practical settings, said Ken Norton, executive director of the National Alliance on Mental Illness’ New Hampshire chapter. But he sees real potential for AI in the mental health field if the tools can be used with participants’ consent.

“We really need to look at different ways of preventing suicide than we’ve been using up until now,” he said.

“If I’m willing to have all my data analyzed for purposes of shopping and consumer goods, why shouldn’t I likewise be willing to have that information looked at for potential lifesaving measures?” he added.

Privacy risks

The research into these tools suggests they hold real potential, both for helping solve intractable health problems and for creating a wide array of new privacy concerns.

In order to work, they depend on social media companies with spotty track records of protecting users’ data.

Facebook, which owns Instagram, announced in September that a data breach had exposed the personal information of about 50 million of its users.

And the companies’ business models are built around collecting huge amounts of user data that can be plugged into algorithms to help advertisers target potential customers. The AI tools developed by Qntfy, the Dartmouth researchers and others work on many of the same principles.

“I think the optimistic view of the future version of this is that a really intelligent app could just observe your behavior as you go about your life, on your phone, watching you move, and an algorithm would be crunching all that information, comparing it to millions of others’ and then tell your doctor” if you show signs of addictive or suicidal behavior, said Chris Danforth, a University of Vermont professor who helped develop one of the earliest AI tools to detect suicidal inclinations.

He doesn’t subscribe to the optimistic view and worries that unethical marketers could use AI tools to target alcohol and drug ads.

“There will be ways for advertisers of any industry to ask Facebook to show ads to people with demographic traits that correlate to people with mental health problems,” he said. “I’m definitely pessimistic that we’re not ready culturally. As a society, we don’t have the structures in place — legal structures or support structures.”

In a statement, Facebook wrote that it works closely with mental health experts and carefully handles information flagged by its AI program.

“Our Community Operations team includes thousands of people around the world who review reports about content on Facebook, including those flagged by our AI,” the company said. “The team includes a dedicated group of specialists who have specific training in suicide and self harm. Where we have signals of potential imminent risk or harm, a specialized team conducts an additional review to determine if we should help refer the individual for a wellness check.”

Silicon Valley’s “move fast and break things” ethos (it was once a Facebook motto) has been criticized in recent years as it became evident that some of the world’s largest tech companies failed to anticipate the damage their platforms could cause.

Some of the developers of AI-powered behavioral health screening tools are eager not to make that mistake.

“I think it’s a little too early to see who’s going to be the main user of this. I don’t want to get too ahead of ourselves,” Hassanpour said. “What we tried to do is show the value of social media data in identifying individuals at risk.”

“We hope that this can one day be operational while protecting people’s privacy, with the correct safeguards,” he added.

Opt-in, opt-out options

Social media users are also becoming more conscious of the value, and the risk, that come with turning over their data.

Danforth said that when he began recruiting people to participate in a study of his suicide risk-detection tool, he had no trouble finding volunteers willing to disclose serious mental health information to researchers. But when they were asked for their social media data, many of them dropped out of the study.

One of the biggest ethical tests for the AI programs will be whether they’re developed as opt-in or opt-out models. Will social media users choose to have their posts monitored, or will platforms run them in the background, as Facebook is doing?

Coppersmith praised Facebook’s suicide prevention efforts and pointed out that users must opt in to using Facebook in the first place, although the platform has become ubiquitous and can be hard to leave.

“I do applaud them that they’re trying to do this, doing some of the experimentations, even as they risk offending some people, because they want to see progress in this space,” he said, adding that the statistics — 47,173 Americans died by suicide last year and 70,237 fatally overdosed, according to the CDC — warrant bold action.

“Someone could build these things and use them unethically,” Coppersmith said. But that has to be balanced against “the fact that we’ve not meaningfully moved the needle in 50 years in suicide prevention.”

tfeathers@unionleader.com