Summary
- AI has the potential to streamline safety event review and reporting. Here’s how quality leaders can strengthen processes, reduce risks, and ensure AI truly supports patient safety.
AI is spreading quickly across health care, but the evidence showing how it improves quality and safety is still evolving. A report from IHI's Lucian Leape Institute offered guidance on how AI could support patient safety through early identification of signals associated with deterioration and aggregation of serious safety events to learn from harm. Building on this foundation, IHI’s Chief Quality Officer (CQO) Network, which includes leaders from multiple countries, is actively exploring how AI can meaningfully benefit routine quality and safety work.
Recent discussions within the CQO Network have focused specifically on potential applications for AI in safety event review and reporting. Differing views have emerged. Several leaders caution that using AI risks oversimplifying complex events and missing context, especially when underlying investigations and resources are insufficient. AI may widen equity gaps if algorithms are biased. Others see potential for AI to identify patterns earlier, support rapid learning, and strengthen thematic analysis across large volumes of events. Many agree that the value of AI depends on clear use cases, systematic design, and reliable, unbiased data.
This exchange builds on work by Sorlie et al., De Micco et al., IHI’s Lucian Leape Institute, and others, who have noted the potential for generative AI tools to support reporting, identification, analysis, and mitigation of health care adverse events. Little real-world evidence demonstrates the impact (or risks) of such uses; nevertheless, some health care organizations have already adopted AI tools to support event identification and related workflows.
Amid these contrasting perspectives and ongoing exploration of AI’s capabilities, it’s important for quality leaders to assess both the benefits and risks of using AI.
Opportunities and Risks to Consider
Potential benefits of AI for safety event reporting and review include:
- Faster event reporting through technology-enabled efficiency, ease of use, and alignment with reporting and event review workflows
- Higher quality reporting through inclusion of relevant data that might not otherwise be captured in the initial report
- Prompts to consider data that might be overlooked
- AI tools that prompt reporting of early, upstream signals indicative of harm, rather than looking downstream at confirmed events
- Identification of trends across aggregated reports for faster pattern recognition
- Sourcing of evidence-based, stronger actions and solutions to mitigate risk and prevent harm
Possible risks of deploying AI for safety event review prior to sufficient evidence of its effectiveness include:
- If reports do not have sufficient detail or sociotechnical context, AI synthesis may result in generalizations that do not address root causes and contributing factors.
- Substituting AI analysis for a more deliberative process among multiple stakeholders may produce less robust results, since people still understand the system and its conditions better than AI tools can at this stage.
- Suboptimal safety recommendations and solutions may result due to AI inaccuracies, biases, or training data limitations.
It is important to establish an AI governance structure, include quality and safety personnel in AI oversight, and ensure AI algorithms do not propagate bias and inequities.
Getting Started with AI for Safety Event Reviews
AI technologies offer a set of tools, but safety event reporting is a process. Across health care settings, there is a critical need to improve the quality of safety event reporting and review. A key recommendation for all quality leaders is to assess their current event review processes to identify weaknesses and improve existing approaches. It is essential to use processes like RCA2 and common cause analysis and to ensure that reports address the sociotechnical considerations relevant to each event, including patient voice and family perspectives.
Using the science of quality improvement, health care organizations can then design and test AI solutions for safety event review. Key questions to consider are:
- What would an idealized workflow look like for adverse safety event identification, reporting, analysis, and solution development, given the needs and limitations of current systems (e.g., time, IT barriers, resource constraints)?
- What role can technology, including AI, play to realize that event review workflow?
The goal for health care AI is to enhance human judgment, not replace it. By strengthening existing safety reporting processes, applying improvement science, and ensuring strong governance and oversight, quality leaders can shape AI tools that advance learning rather than complicate it. The most promising path forward is one where technology amplifies, not substitutes, the expertise of the people closest to patient care.
Nikki Tennermann, LICSW, MBA, is a Senior Project Director and Jeff Rakover, MPP, is a Director at the Institute for Healthcare Improvement.
We are grateful to the quality leaders who contributed their perspectives to inform this piece: Lori Pelletier, PhD, MBA, Chief Quality and Patient Safety Officer, Connecticut Children’s; James Hoffman, PharmD, MS, Senior Vice President, Quality & Safety, St. Jude Children’s; Navneet Marwaha, MD, Chief Quality & Patient Safety Officer, Northern Light Health; Sean Martin, MHS, RRT, Vice President, Clinical Services & Health Equity, Chief Quality Officer, Peterborough Regional Health Centre; and Amar Shah, MBBS, MBA, National Clinical Director for Improvement, England.
You may also be interested in:
- Learn more about the CQO Network
- IHI Lucian Leape Institute report on patient safety and AI
- Leveraging AI to Support Improvement
- Root Cause Analyses and Actions (RCA2)
- BMJ Quality and Safety Editorial: Thinking and organising in systems: reframing the long problem of learning from incidents
- BMJ Quality and Safety Viewpoint: Advancing AI in healthcare: three strategic roles for quality and safety leaders