
Learning Together: Questions and Insights on Artificial Intelligence and Quality Improvement

Why It Matters

AI has a potentially transformative role to play in health care. 


The Institute for Healthcare Improvement (IHI) is dedicated to sharing what we are learning as we focus on the intersection of generative artificial intelligence (generative AI), large language models (LLMs), and quality improvement (QI). IHI faculty Gareth S. Kantor, MD, and Senior Research Associate Marina Renton, MPhil, recently facilitated a webinar that sparked a rich discussion with participants. We have captured key takeaways from these exchanges below.

Note: We used a generative AI tool to help draft this post. We started by summarizing the key questions raised during the discussion and our answers. We submitted this information and instructed (or prompted) the tool to produce a blog post with a “conversational” and “engaging” style. Finally, we edited the output for clarity, tone, accuracy, and improved diction.

Learning More About AI

  • Prepare the workforce for AI. Health care professionals and students need to prepare to use AI tools in the workplace. The key? Dive into the basics of prompt engineering and leverage free resources like Coursera to understand AI’s practical applications. As AI reshapes traditional tasks like data analysis, data visualization, process analysis, and change idea generation, nurturing complementary skills like strategic thinking, active listening, and design thinking becomes even more vital. Generative AI, with its ability to suggest structured discussions and analyze information, also emerges as an ally in building these competencies.
  • Stay informed amid the AI deluge. The AI landscape is dynamic, with new developments occurring daily. We do not endorse specific resources, but general interest podcasts (like Everyday AI or The AI Breakdown), special focus journals such as NEJM AI, and mainstream media outlets (like The New York Times and BBC News) offer up-to-date insights.

Data and Privacy

  • Learn about AI deployment in smaller organizations. Meta’s Llama and a host of other open-source tools may facilitate use of AI tools by smaller entities at potentially lower cost. Internal hosting, whether with these or other alternatives, can help ensure that data remains safely in-house. Some programming skill is needed to implement these open-source options.
  • Do not assume HIPAA compliance when using generative AI. Technologies like ChatGPT are not inherently HIPAA-compliant, but use of proper business associate agreements can achieve compliance, as highlighted in a recent HIPAA Journal article.

Language and International Impact

  • Test AI’s linguistic versatility. ChatGPT and similar platforms offer multilingual interactions, although proficiency varies based on the underlying training data. You can start by prompting the tool in your primary language. No separate language setting change is necessary for most generative AI tools. 

Broader Health Care Use Cases

  • Consider AI a potential catalyst for joy in work. Organizations are harnessing AI to streamline note-taking and medical record-keeping, enhancing workplace satisfaction. Many tools (such as Abridge, Nabla, and DAX) exemplify this trend, offering to accelerate the performance of mundane but essential tasks and perhaps reduce burnout.
  • Learn about patient navigation and triage. Institutions like Johns Hopkins are exploring AI's patient navigation potential, promising a future where it is common to use AI for critical health care processes such as aiding with emergency department triage and connecting patients to the most appropriate outpatient care setting.

Coproduction and Equity

  • Learn to use AI as a tool against bias. AI can be tailored to identify and address bias in health care. By using specific prompts (e.g., telling a generative AI tool it is an expert in health care equity and can apply concepts like implicit bias), it is possible to highlight and mitigate bias when using AI in clinical encounters and research tasks. But the risks of bias are ever-present and not always visible.
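As an illustration, a role-setting prompt of the kind described above might look like the following. The wording is hypothetical, offered only as a starting point, and should be adapted and tested for your own context:

```python
# Hypothetical example of a role-setting ("expert framing") prompt
# for bias review, following the approach described above.
# The wording is illustrative, not a validated or endorsed prompt.
system_prompt = (
    "You are an expert in health care equity, familiar with concepts "
    "such as implicit bias. Review the following clinical note and "
    "flag any language that may reflect bias, briefly explaining each "
    "concern and suggesting a more neutral alternative."
)
print(system_prompt)
```

This text would typically be supplied as the system or instruction message to a generative AI tool before pasting in the material to be reviewed.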

Other Questions and Concerns

  • Look for transparency in AI responses. LLMs differ in how transparent they are: some promote trust through built-in accuracy checks and by identifying the sources of their information. Identifying errors helps prevent the spread of false or misleading information, and attribution information (e.g., research papers or news articles) lets users assess the reliability of a response and spot potential sources of bias.
  • Be aware of knowledge cutoffs. LLMs also vary in their knowledge cutoff dates. Much as a book’s content is fixed at the time of its edition, a knowledge cutoff is the date when the information used to train a model was last updated. For example, GPT-4’s knowledge cutoff is April 2023. Using LLM plug-ins that browse the internet (e.g., GPT-4’s web browsing feature) can decrease the likelihood of hallucinations, the plausible-sounding but false, fabricated responses that have received some media attention.
  • Learn more about AI's role in data analysis. Some tools, especially GPT-4, can analyze and manipulate data across multiple spreadsheets, showcasing their versatility in handling complex tasks. Google’s Gemini is catching up, allowing users to input data directly to produce run charts, control charts, and other analyses.
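For readers curious what is behind the control charts these tools can produce, the limits of a basic individuals (XmR) chart follow a standard calculation: the center line is the mean, and the limits sit 2.66 average moving ranges above and below it. A minimal sketch, using made-up weekly wait-time data (the numbers are illustrative, not from any real measure):

```python
# Minimal sketch: compute individuals (XmR) control chart limits.
# The data below are invented for illustration only.

def xmr_limits(values):
    """Return (center, lcl, ucl) for an individuals (XmR) chart."""
    center = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2, where d2 = 1.128 for subgroups of size 2
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar
    return center, lcl, ucl

# Hypothetical weekly ED wait times (minutes)
waits = [32, 28, 35, 30, 41, 29, 33, 36, 27, 31]
center, lcl, ucl = xmr_limits(waits)
print(f"CL={center:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")
```

Whether computed by hand, in a spreadsheet, or by a generative AI tool, the interpretation is the same: points outside the limits (or non-random patterns within them) signal special-cause variation worth investigating.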

AI has a potentially transformative role to play in health care. IHI will continue to host opportunities for dialogue around AI’s implications, challenges, and trajectory. For the latest information from IHI, scroll to the bottom of this page to subscribe to our newsletter. If you are interested in being part of a larger conversation on AI and QI, please contact IHI Director, Innovation and Design, Jeffrey Rakover, MPP, at jrakover@IHI.org.

Pierre M. Barker, MD, MBChB, is Chief Scientific Officer, Institute for Healthcare Improvement. Gareth S. Kantor, MD, is a clinical consultant, Insight Actuaries & Consultants. Marina Renton, MPhil, is an IHI Research Associate, Innovation and Design. Jeffrey Rakover, MPP, is an IHI Director, Innovation and Design.

You may also be interested in:

5 Takeaways from a Discussion on How Generative AI Can Support QI

IHI Forum 2023 Keynote Address — Peter Lee, PhD, Corporate Vice President of Research and Incubations at Microsoft

Artificial Intelligence in Health Care: Peter Lee on Empathy, Empowerment, and Equity

Artificial Intelligence, Data Privacy, and How to Keep Patients Safe Online
