Patient Safety and Artificial Intelligence

The IHI Lucian Leape Institute convened an interprofessional expert panel in January 2024 to explore the promise of generative artificial intelligence (genAI) in health care and the potential risks to patient safety. The panel focused on three clinical use cases: documentation support, clinical decision support, and patient-facing chatbots. The resulting report discusses the expert panel's findings and recommendations.
IHI Lucian Leape Institute Expert Panel Report

"As a support tool, AI can save clinicians time and help them assess patient cases, improve diagnostic accuracy, and reduce costs by guiding clinicians to the most cost-effective workups."

"While genAI may improve care quality, lower costs, and enhance both patient and clinician experiences, these tools also have flaws that may compromise patient safety."

"Chatbots offer the opportunity to expand access to care and democratize access to credible health care information."

"The need for human oversight of genAI clinical output will raise a major challenge to patient safety."

Patient Safety and Artificial Intelligence: Opportunities and Challenges for Care Delivery

The IHI Lucian Leape Institute (LLI) expert panel reviewed and discussed three use cases that highlight areas where generative artificial intelligence (genAI) could significantly impact patient safety: documentation support, clinical decision support, and patient-facing chatbots. The panel also discussed the broader implications of genAI for the field of patient safety and the work of safety professionals.

The goal of the expert panel was to identify areas where genAI could enhance safety, name potential threats, and suggest ways to maximize benefits and minimize harm.

Based on the expert panel’s review and discussion of AI implications for patient safety, the LLI published a report, Patient Safety and Artificial Intelligence: Opportunities and Challenges for Care Delivery, which summarizes:

  • The potential benefits, risks, and challenges of genAI implementation in clinical care for each of the three use cases;
  • A detailed review of mitigation and monitoring strategies and expert panel recommendations; and
  • An appraisal of the implications of genAI for the patient safety field. 

Along with these recommendations, further guardrails must be implemented based on the following concerns highlighted by the panel:

  • Relying on clinicians alone to double-check the accuracy of AI results and recommendations is an unreliable safety strategy.
  • The risk of deskilling is high and will require proactive mitigation strategies.
  • AI-driven efficiencies will simply result in more duties assigned to clinicians, with no relief from their current workload and cognitive burden.

Key Findings

  • GenAI is here to stay and will grow and evolve quickly.
  • There are benefits and potential risks that must be addressed.
  • Many safety concerns relate to trustworthiness, stemming partly from accuracy and partly from the difficulty of determining which information can be trusted.
  • Other important safety-adjacent issues need to be considered, including transparency, the importance of human connection, equity, and more.
  • Some challenges will be addressed by regulatory or accreditation bodies at the federal level; much will be determined locally, particularly by governance and safety programs within health systems.

Expert Panel Recommendations

Panel members are enthusiastic about the potential for genAI tools to reduce clinician burnout and cognitive load, facilitate the provision of evidence-based practices, improve diagnostic accuracy, and potentially reduce cost. They also raised concerns about numerous potential risks to patient safety that must be identified, mitigated, and monitored. 

In pursuing the ongoing development of genAI tools and their integration into clinical care delivery, the expert panel made these recommendations:

  • Serve and safeguard the patient.
  • Learn with, engage, and listen to clinicians.
  • Evaluate and ensure AI efficacy and freedom from bias.
  • Establish strict AI governance, oversight, and guidance, both within individual health delivery systems and at the federal level.
  • Be intentional with the design, implementation, and ongoing evaluation of AI tools.
  • Engage in collaborative learning across health care systems.

Considerations for Key Groups

The report also includes specific recommendations and mitigation strategies for key groups:

  • Patients and patient advocates
  • Clinicians
  • Quality and patient safety professionals
  • Health care systems
  • GenAI developers
  • Researchers
  • Regulators and policymakers

IHI is grateful to the Gordon and Betty Moore Foundation for its generous funding, which supported the convening of the expert panel and the creation and dissemination of the report.

Patient Safety and AI Report

The full report includes the panel's examination of three clinical use cases (benefits, risks, and challenges), recommendations and mitigation strategies, an appraisal of the impact of genAI on the patient safety field, and considerations for key groups.

View Report
IHI Lucian Leape Institute Report: Patient Safety and Artificial Intelligence

Expert Panel

The IHI Lucian Leape Institute (LLI) gratefully acknowledges the experts who contributed to this work. [Asterisk (*) denotes LLI members.]

  • Robert Wachter, MD,* Professor and Chair, Department of Medicine, University of California, San Francisco    
  • Kaveh Shojania, MD, Professor and Vice Chair (Quality and Innovation), Department of Medicine, University of Toronto

Expert Participants

  • Nasim Afsar, MD, MBA, MHM, Chief Health Officer, Oracle
  • David Bates, MD, MS, Chief of General Internal Medicine, Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School
  • Leah Binder, MA, MGA, President and CEO, The Leapfrog Group
  • David Classen, MD, MS, Professor of Medicine, University of Utah School of Medicine; Chief Medical Information Officer, Pascal Metrics
  • Pamela Cipriano, PhD, RN, NEA-BC, FAAN, President, International Council of Nurses; Professor, University of Virginia School of Nursing
  • Patricia Folcarelli, PhD, MA, RN, Senior Vice President, Patient Care Services and Chief Nursing Officer, Beth Israel Deaconess Medical Center
  • Tejal Gandhi, MD, MPH, CPPS, Chief Safety and Transformation Officer, Press Ganey
  • Eric Horvitz, MD, PhD, Chief Scientific Officer, Microsoft
  • Gary Kaplan, MD, FACMPE,* Former CEO, Virginia Mason Franciscan Health; Chair, Lucian Leape Institute
  • Della Lin, MS, MD, FASA, Consultant; Executive Officer, Anesthesia Patient Safety Foundation; Physician Lead, Hawaii Safer Care
  • Kedar Mate, MD,* President and CEO, Institute for Healthcare Improvement
  • Muhammad Mamdani, MPH, MA, PharmD, Vice President, Data Science and Advanced Analytics, Unity Health Toronto
  • Patricia McGaffigan, MS, CPPS, Vice President, Patient Safety, Institute for Healthcare Improvement
  • Genevieve Melton-Meaux, MD, PhD, Chief Analytics and Care Innovation Officer, M Health Fairview
  • Eric Poon, MD, MPH, Chief Health Information Officer, Duke Health
  • Vardit Ravitsky, PhD, President and CEO, The Hastings Center
  • Tina Shah, MD, MPH, Chief Clinical Officer, Abridge
  • Rod Tarrago, MD, Lead of Clinical Informatics, Amazon Web Services, Academic Medicine
  • Beth Daley Ullem, MBA, Co-Founder, Patients for Patient Safety; Co-Chair, SPS AI Steering Committee; Board of Directors, Institute for Healthcare Improvement

External Reviewers: Patient Safety and AI Experts and LLI Members*

  • Julia Adler-Milstein, PhD, Professor and Chief of the Division of Clinical Informatics and Digital Transformation, University of California, San Francisco
  • Brian Anderson, MD, President and Chief Executive Officer, Coalition for Health AI (CHAI)
  • Joanne Disch, PhD, RN, FAAN,* Professor ad Honorem, University of Minnesota School of Nursing
  • Michael Howell, MD, MPH, Chief Clinical Officer and Deputy Chief Health Officer, Google
  • Bob Kocher, MD, Adjunct Professor, Stanford University Department of Health Policy; Non-Resident Senior Fellow, USC Schaffer Center; Partner, Venrock
  • Julianne Morath, BSN, MS, CPPS,* Leadership Coach and Consultant; Affiliate Faculty, Department of Medicine, University of Washington, Seattle
  • Charles Vincent, PhD, MPhil,* Professor of Psychology, University of Oxford; Emeritus Professor of Clinical Safety Research, Imperial College, London
  • Daniel Yang, MD, Vice President, AI and Emerging Technologies, Kaiser Permanente

Additional Resources

Peter Lee speaks about AI in his keynote at the 2023 IHI Forum.

About IHI Lucian Leape Institute

Composed of international thought leaders with a common interest in patient safety, the IHI Lucian Leape Institute (LLI) functions as a think tank to identify new approaches to improving patient safety.

Learn More about LLI