
Artificial Intelligence, Data Privacy, and How to Keep Patients Safe Online

Why It Matters


"As we assess the risks and benefits of AI in health care, we must build solid definitions and expectations of accountability and a thoughtful framework of how AI can be used to safely and effectively meet the needs of all users."

 

When we think about the patient experience, we typically think of in-person clinical care. However, it is important to remember that the patient journey often starts at home, guided by sources of information found online. As a digital marketer and patient advocate for mental health, I am watching this new world evolve in real time. Mental health services are very much in demand, and artificial intelligence (AI) is being used to fill the gap in care.

Like many businesses, virtual mental health services collect as much information as possible to advertise to their target audience. In March 2023, Cerebral, a virtual therapy service, disclosed that it had shared protected health information for more than 3 million clients with third parties such as Facebook, TikTok, Google, and other online platforms. This data included contact information, birth dates, Social Security numbers, and results from mental health assessments. Another company, BetterHelp, settled with the US Federal Trade Commission for a similar violation.

What are some of the consequences of this? For me, this meant disturbingly specific advertising being served across my social media accounts. Because I used one virtual therapy service that disclosed protected health information to third parties without permission, advertisers knew my age and diagnosis, and AI-generated ads promised me instant cures for a condition I have been living with most of my life.

Urgency-driven advertising based on protected health information takes advantage of vulnerable people, pressures them to “buy now, before the price increases,” and promises quick cures. Marketing materials make vague references to “research-backed” treatments with no citations of actual studies. Self-serve eye movement desensitization and reprocessing (EMDR) apps promise to be as effective at treating PTSD as the help of a professional. I am fortunate to have strong health literacy and digital literacy skills, but in the online forums I am active in, I see plenty of people who fall for these ads.

The Ever-Growing Demand for Mental Health Support

The State of Mental Health in America 2023 report sheds light on an ever-growing need for mental health services:

  • 21 percent of adults are experiencing a mental illness. This is equivalent to more than 50 million Americans.
  • 55 percent of adults (28 million) with a mental illness receive no treatment.
  • 11 percent (5.5 million) who live with a mental illness have no health insurance.
  • For 23 percent of adults with a mental illness, costs prevent them from seeking care.
  • 11 percent of adults who identified themselves with two or more races reported serious thoughts of suicide.
  • There are 350 individuals for every one mental health care provider in the US.

Many in the online forums frequented by the mental health community in the US express considerable frustration with their lack of access to therapy or dissatisfaction with the help they are able to get. Some have turned to AI as a therapist, pouring their hearts out to Google Bard and ChatGPT, undeterred by the fact that what they type goes into a massive data lake to be used in unknown ways down the road.

Some mental health services use AI to chat with patients seeking help. In late 2022, one service launched a pilot program without making clear that patients were interacting with bots. The company’s informed consent process was not clearly outlined, and people who eventually learned they were not receiving care from a human lost confidence in the sessions. After the company, called Koko, went public with this experiment, heated debates on social media ensued. The company argued that their services should not require the scrutiny of an institutional review board (IRB) because, among other claims, they did not intend to publish their findings. Others argued that an IRB should have been involved because Koko’s research involved human subjects. Detractors also conveyed concern about AI’s lack of empathy and nuance and potentially significant risks to patients in crisis.

The Positive Aspects of AI in Mental Health

The same company that used bots to talk to patients has partnered with social media companies to monitor content. Their system understands coded language — like “su1c1d3” instead of “suicide” — and reaches out to people publishing posts with sensitive content, checks to see if they are in distress, and offers to connect them with resources to get help.

After sharing a post about some of what I have learned about my diagnosis, one of this company’s bots contacted me and asked if I was experiencing a crisis. It checked in with me every few days to see if I needed to talk to someone. While I did not need help at the time, I appreciated seeing the message, especially since I was on a platform popular with marginalized communities that frequently experience bullying and discrimination.

As we assess the risks and benefits of AI in health care, we must build solid definitions and expectations of accountability and a thoughtful framework of how AI can be used to safely and effectively meet the needs of all users. This process must welcome all communities in developing and managing these services. The following factors will help ensure a safe patient experience:

  1. Determine definitions and roles for accountability in compliance, reporting, oversight, and enforcement.
  2. Involve a broad set of perspectives and stakeholders in assessing and managing risks.
  3. Adhere to a framework that is built on a goal of trustworthiness.
  4. Protect users from unsafe or ineffective systems.
  5. Design algorithms in an equitable way to prevent discrimination.
  6. Ensure data privacy and build in protections from potential abuse.
  7. Make the informed consent process transparent enough that patients understand when they are interacting with AI and what the risks are.
  8. Provide opt-out options and access to human assistance to those who want them.

With millions of people seeking help for mental health, there is an opportunity to make AI an efficient and creative asset, but we should proceed with caution. As private equity and tech giants pour money into new applications that may make lofty promises, guardrails and policies about safety cannot be an afterthought. The active participation of patients living with mental illness will be essential to shaping the services meant to help them.

Lee Frost is an IHI Marketing Operations Manager and patient advocate.

You may also be interested in:

Artificial Intelligence in Health Care: Peter Lee on Empathy, Empowerment, and Equity

The Current Generation of AI Tools: Three Considerations for Quality Leaders
