5 Takeaways from a Discussion on How Generative AI Can Support QI

Why It Matters

"Eventually, AI will likely become widely used in health care. There are many implications to consider. Preparation can begin now."

Recently, over 300 attendees tuned in to an hourlong Institute for Healthcare Improvement (IHI) webinar on using generative AI for quality improvement. While we did answer questions, we mostly took an “all teach, all learn” approach to our discussion of how generative AI tools can be applied in a QI context and learned what excites and worries participants about the current slate of tools available.

Our conversation surfaced needs for ongoing education, policy development, and thoughtful consideration of ethical, privacy, and environmental impacts as AI becomes more integrated into health care practice.

Here, we summarize our takeaways from the session. 

  • Generative AI is not yet being widely used, but uptake is occurring. We polled attendees and found that most (65 percent) are not using AI in quality work. Those who are using AI are using it to brainstorm changes (25 percent), analyze problems (16 percent), and build solutions such as standard work (16 percent). Many attendees were active in the chat and shared the following types of use cases:

    Data management — The cases described involved using AI for data manipulation and summarizing to simplify complex or time-consuming tasks.

    Clinical charting — A pilot project using generative AI for charting in primary care showcased its potential for automating documentation.

    Medical writing and research — Participants noted that the widely documented accuracy issues necessitate caution. Anyone using AI for research support should not use or cite AI-generated information without verifying sources.

    Translation — Participants shared mixed experiences with using AI for translating information between languages, noting its usefulness but also the need for careful review by a human with expertise in the languages of interest, especially when technical terms are involved.

    Content creation — Participants had used generative AI to help prepare presentations, panel discussions, and proposals, illustrating how AI can serve a creative function and be an efficient brainstorming partner.

    Prognostication and predictive analytics — This use case is relevant to AI more broadly defined (rather than generative AI specifically), and at least one participant was already using AI for a sepsis prediction project.
  • Limitations and dangers of existing AI tools constrain the willingness to use them. Participants highlighted weighty challenges, such as ensuring the accuracy of AI-generated content (e.g., identifying hallucinations and checking references given AI’s propensity for generating realistic but fictitious references), promoting transparency regarding AI limitations, and addressing bias. 

    Participants raised the issue of outdated information. For example, the free version of ChatGPT (GPT-3.5) only includes data available through January 2022. One possible safeguard against outdated information is to request verifiable references so the outputs can be evaluated against current sources.

    Transparency and trust are vital for the ongoing use of AI tools. If health care professionals do not trust the outputs, they will not use the tools. A better understanding of the algorithms and how they generate their responses might help address users’ concerns. A participant asked about AI companies acknowledging the likelihood of inaccurate outputs. Notably, OpenAI has added a disclaimer to the ChatGPT interface noting the possibility of inaccuracy.

    Participants also expressed concern about bias in AI algorithms that results in clinical decision-making misaligned with evidence-based best practices. Such decisions can be harmful and inequitable. Some companies build precautions like Reinforcement Learning from Human Feedback (RLHF) into their tools, but users should be aware that these safeguards are not foolproof.
  • Rising adoption of generative AI has led to a range of concerns. One participant expressed apprehension about users becoming overly dependent on AI platforms. Three contributors discussed the implications for privacy, including the handling of sensitive data and compliance with regulations like HIPAA in the United States. (Notably, at this writing, commercial AI tools are not HIPAA compliant, so no protected health information should be entered when using them.) One participant likened AI’s environmental footprint to the impact of Bitcoin mining. The issue of access also came up as we discussed whether beginner-friendly tools would democratize quality improvement by making tools and resources more widely available or exacerbate the gaps between high- and low-resource settings.
  • There is room for optimism alongside deep-seated concerns. Other use cases and policies covered the topic of AI integration in health care systems (e.g., AI triage systems). The potential for AI-enabled co-production with underserved communities came up and raised the possibility of promoting quality improvement best practices and equitable co-design as a counterpoint to the discussion of algorithmic bias.
  • Eventually, AI will likely become widely used in health care. There are many implications to consider. Preparation can begin now. Given AI’s rapid uptake across the QI community, contributors raised the need for training students and professionals in using AI effectively and safely. We discussed a range of issues to consider now and in the near future to responsibly engage with AI.

    Large organizations with significant volumes of intellectual property that want to employ generative AI should start now (if they have not already) to create policies on responsible organizational use and on the risk of exposing proprietary information by submitting it to public tools.

    Will it someday be considered negligent not to utilize AI for health care? Considering its growing impact and potential to produce highly accurate predictions and diagnoses, this is another possibility worth studying.

Resources to Learn More

Given the rapid pace of change and the wealth of new information coming online almost daily, it is useful to know how to stay current as AI develops. The discussion surfaced three podcasts to consider: Everyday AI, The AI Breakdown, and Perspectives. In addition, facilitators advised participants to look into free massive open online courses (MOOCs) that teach skills like prompt engineering and provide basic information for non-computer scientists about how these new tools work.

Gareth S. Kantor, MD, is a clinical consultant, Insight Actuaries & Consultants. Marina Renton, MPhil, is an IHI Research Associate, Innovation and Design. Jeffrey Rakover, MPP, is an IHI Director, Innovation and Design. Pierre M. Barker, MD, MBChB, is Chief Scientific Officer, Institute for Healthcare Improvement.

You may also be interested in:

You can still register to listen to the recording of the AI for Quality Improvement webinar.

IHI Forum 2023 Keynote Address — Peter Lee, PhD, Corporate Vice President of Research and Incubations at Microsoft

The Current Generation of AI Tools: Three Considerations for Quality Leaders

Artificial Intelligence in Health Care: Peter Lee on Empathy, Empowerment, and Equity

Artificial Intelligence, Data Privacy, and How to Keep Patients Safe Online