THR Web Features / April 8, 2022

How Smart Tech Tried to Solve the Mental Health Crisis and Only Made It Worse

Considering the case of Crisis Text Line.

Emma Bedor Hiland

(Su San Lee via Unsplash.com)


Crisis Text Line was supposed to be the exception. Skyrocketing rates of depression, anxiety, and mental distress over the last decade demanded new, innovative solutions. The nonprofit organization was founded in 2013 with the mission of providing free mental health text messaging services and crisis intervention tools. It seemed like the right moment to use technology to make the world a better place. The accolades the platform received over the following years reflected its success. But its sterling reputation was tarnished overnight at the beginning of 2022, when Politico published an investigation into how Crisis Text Line had handled and shared user data. The problem with the organization, however, goes well beyond its alleged mishandling of user information.

Despite Crisis Text Line’s assurance that its platform was anonymous, Politico’s January report showed that the company’s private messaging sessions were not. Data about users, including what they shared with Crisis Text Line’s volunteers, had been provided and sold to an entirely different company, Loris.ai, a tech startup that specializes in artificial intelligence software for human resources and customer service. The report brought to light a troubling relationship between the two organizations. Both had previously been headed by the same CEO, Nancy Lublin. In 2019, however, Lublin had stepped down from Loris, and in 2020 Crisis Text Line’s board ousted her following allegations that she had engaged in workplace racism.

But the troubles that enveloped Crisis Text Line can’t be blamed on one bad apple. Crisis Text Line’s board of directors had approved the relationship between the two entities. In the technology and big data sectors, commodification of user data is fundamental to a platform or toolset’s economic survival, and sharing data with Loris.ai helped Crisis Text Line sustain the services it provided. The harsh reality revealed by the Politico report was that even mental healthcare is not immune from commodification, despite the risks of aggregating and sharing information about experiences and topics that continue to be stigmatized.

In the case of the Crisis Text Line-Loris.ai partnership, Loris used the nonprofit’s data to improve its own for-profit development of machine learning algorithms sold to corporations and governments. Although Crisis Text Line maintains that all of the data shared with Loris was anonymized, the relationship between the two was still fundamentally an economic one. As the Loris.ai website states, “Crisis Text Line is a Loris shareholder. Our success offers material benefit to CTL, helping this non-profit organization continue its important work. We believe this model is a blueprint for ways for-profit companies can infuse social good into their culture and operations, and for nonprofits to prosper.”

The outrage generated by the Politico piece was not unwarranted. Who would want their private and sensitive information, particularly information related to mental health and generated during moments of crisis, shared or sold without their explicit permission? Brendan Carr, a commissioner of the Federal Communications Commission, wrote an open letter to both Crisis Text Line and Loris.ai, citing not only the questionable nature of their partnership but also its broader implications for individuals experiencing crises. He emphasized that the damage done extended to a loss of public trust in other entities—such as the National Suicide Prevention Lifeline—that claimed to anonymously provide emotional and mental support for people in distress.

Crisis Text Line tried to manage the public relations disaster with assurances that it was no longer sharing data with Loris, but the damage caused by the partnership between the two organizations could not be undone. The publication of the Politico article unleashed a barrage of criticism directed toward an organization once considered above reproach.

My book, Therapy Tech: The Digital Transformation of Mental Healthcare, examines new technologies that claim to improve the mental health of their users. These include chatbots powered by advances in artificial intelligence; smartphone applications proffering guided meditations, games, and self-directed exercises; social media and other platforms that facilitate the surveillance and monitoring of users’ mental states (including Crisis Text Line); and screen-based mental health services, described as telemental healthcare. Advocates for these tools suggest that their efficacy and accessibility make them viable solutions to the global mental healthcare crisis. Yet the research I did for Therapy Tech demonstrates that technological interventions cannot provide a “quick fix” to an overburdened and broken mental healthcare system. The tragedy is that these tools, designed with good intentions, end up perpetuating pre-existing disparities in who does and does not have access to mental healthcare and further entrenching inequities of race, sex, and class.

The problem is not just that these technologies fail to achieve their stated aims. It’s also that some organizations exploit well-intentioned volunteers. Crisis Text Line, for example, relies almost entirely on unpaid labor for its public-facing workforce. Its volunteers are tasked with messaging users in need of support, yet the company describes their work in terms of self-improvement: “an opportunity to hone your skills in communication, counseling, and intervention…which can in turn sharpen your crisis management skills!”

Crisis Text Line’s Recruitment and Admissions Manager told me that the company had considered paying volunteers but ultimately decided against it. Lublin, while CEO, stated that even though “rewards” are certainly necessary to retain volunteers, they need not be financial. Taking to Crisis Text Line’s blog in 2018, she wrote that “The best way to keep people involved...is to reward them…. In order to do this, we created Levels. They’re a huge source of pride for our Crisis Counselors, with each level corresponding to a certain number of conversations taken. Levels are a form of thanking our Crisis Counselors…an acknowledgment of amazing work completed.” The rewards given for upping one’s level, I was told, included water bottles, t-shirts, and coupons.

This is a chilling failure of social responsibility. Crisis Text Line profited from the user data it sold yet refuses to provide financial compensation to its volunteers. What’s more, as I noted in Therapy Tech, Crisis Text Line management knew that volunteers were experiencing psychological distress as a direct result of volunteering. Would monetary payment undo those harms? It’s doubtful. Nevertheless, it would demonstrate, at the very least, an attempt to value the work of volunteers beyond merely improving their “crisis management skills.”

Take an example from big tech. Facebook—which at least pays its contract workers—recently settled a lawsuit filed by content moderators who asserted that they had experienced psychological distress and trauma while working for the platform. Facebook is no stranger to complaints about its handling of user data and privacy, nor should it be considered a model in the realm of techno-facilitated health interventions. Yet Crisis Text Line’s history of partnering with the social media giant suggests that the nonprofit’s concern for user privacy has always been limited at best.

So it is telling that a February 1 update to Crisis Text Line’s Terms of Service notes that “If you reach out to us through a third-party service (such as Facebook Messenger or WhatsApp), their policies control what they do with your data.” Since 2017, Facebook has used a proprietary algorithm that assesses whether posts suggest a likelihood of suicidal or harm-related intent. This is a form of techno-reliant medicalization: The algorithm manufactures medical data and information about users based on the content of their posts. We do not know what Facebook does with this medicalized data, how it is stored, or even whether any steps are taken to protect it, as is required of other healthcare entities that store patient data electronically. That Crisis Text Line allows users to access its messaging platform through their Facebook accounts, then cedes control over whatever data is generated to Facebook, is yet another example of how the nonprofit prioritizes commodification over privacy.

The backlash against Crisis Text Line is a good sign. It suggests that the public remains stubbornly skeptical of business practices in the health technologies sector that treat consent to terms of service as permission to analyze, share, or sell users’ most sensitive data to the highest bidder. Even if user agreements disclose those possibilities, we know in our bones that such legalese does not erase an organization’s ethical obligations to its users.

When it comes to mental health, consent is not the only consideration that matters. Consider also that many people who use health technologies are particularly vulnerable. They may use inferior—albeit free or inexpensive—substitutes for tried-and-true healthcare providers because they lack the resources to pursue other options. To expect users to read pages of jargon-filled user agreements and then decide not to use a platform based on what they read is the technological equivalent of speaking out of both sides of one’s mouth. We cannot assert that a technology increases access to care while simultaneously claiming that if users are not amenable to a technology’s terms of use, they can choose to get help elsewhere. Such a claim fundamentally undermines the argument that mental health technologies increase the accessibility of health interventions and resources. We must operate on the presumption that individuals using these technologies, for whatever reason, do not have other options available to them.

Some professionals who work in the health technologies sector seem unsettled by this ethical confusion. I interviewed one licensed psychologist who also worked as a consultant for a well-known mental health application, and she identified the double standard. “I’m in here doing science on the data and nobody signed a consent form,” she told me. “They acknowledge the terms of service, which say, ‘Hey, we’re probably going to analyze your data.’ But once it gets into [that] I’m looking at whether this works, it starts to feel like a clinical trial with no consent process.”

Once it is recorded and shared, personal data can end up in places that users may never have imagined and to which they would never consent. What makes Crisis Text Line’s actions especially egregious is that mental distress, disorders, and illnesses continue to be stigmatized. It is unconscionable that patients already in distress could be harmed further upon learning that the personal information they supplied in order to be treated was not kept anonymous. Organizations that provide mental health-related interventions must take concrete, proactive steps to prioritize user privacy. In particular, they should refrain from sharing data unless expressly authorized by the patient in a manner that goes beyond what is outlined in an inscrutable terms-of-service agreement.

But a more realistic relationship with these tools has to start with the acceptance of one basic principle: Big data can’t save us from mental distress and disorder. Mental health (and, conversely, mental illness and disorder) goes beyond what even the most sophisticated deep learning algorithms can assess and predict. Healthcare providers who see patients face-to-face know that each case is culturally contingent and individually specific. Psychology, psychiatry, and mental health remain domains in which “smart,” predictive data and algorithms do not, and likely never will, exist. Investing time, money, and energy in the pursuit of technosolutionism fails those who are most in need of mental healthcare services in the here and now.