AI Governance: Maximizing Benefit and Minimizing Harm for Patients, Providers, and Health Systems
Summary
- Governance is necessary for the safe, impactful, and trustworthy adoption of AI. Use case and vendor selection, validation, education, clinical implementation, and post-deployment monitoring all require transparent, integrated, and expert governance.
Artificial intelligence is rapidly transforming health care, yet practical guidance for its governance remains limited – particularly for organizations without dedicated AI teams or infrastructure. To address this gap, the IHI Leadership Alliance convened an AI Accelerator*, bringing together leaders from 10 diverse health care organizations to move beyond high-level theory and identify actionable strategies for responsible and effective AI implementation.
As artificial intelligence (AI) becomes increasingly embedded in the health care ecosystem, governance structures must evolve to ensure that its use is safe, effective, and responsible. From pre-deployment oversight to post-marketing monitoring, health systems face the challenge of managing not just the technology itself but also its organizational, ethical, and clinical implications.
Conversations about AI governance across a range of health care organizations reveal a critical need for guidance that is practical, scalable, and centered on patient outcomes. Following are four key takeaways from the Leadership Alliance AI Accelerator that health systems can apply when developing or refining their AI governance structures.
1. Take a Broad, Integrated Governance Approach
Whether AI governance takes the form of a stand-alone group or is embedded in existing structures, it must be comprehensive, combining domain expertise with AI competency. Below are some of the important features of AI governance:
- Multidisciplinary: Governance structures should bring together relevant stakeholders, such as medical informatics, clinical leadership, legal, compliance, safety and quality, data science, bioethics, and patient advocates.
- Integrated: AI governance should be a routine part of the health system’s broader ecosystem rather than an ad hoc group, and it should own an end-to-end process: pre-implementation validation, explicit go/no-go thresholds, and continuous monitoring once a tool is live (a sketch of one possible go/no-go rule follows this list).
- Highly Functional: At minimum, the group should set enterprise AI policy, prioritize use cases, assess AI risk and impact, and explicitly define what counts as “governable AI.”
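To make the idea of explicit go/no-go thresholds concrete, here is a minimal Python sketch. The metrics, field names, and cut-off values are illustrative assumptions rather than recommended standards; each governance group would define its own criteria.

```python
# Illustrative only: hypothetical go/no-go criteria a governance group
# might adopt for a clinical prediction model before deployment.
from dataclasses import dataclass

@dataclass
class ValidationResult:
    auroc: float                 # discrimination on a local validation cohort
    sensitivity: float           # at the chosen operating threshold
    subgroup_auroc_gap: float    # worst-case gap across demographic subgroups

# Hypothetical thresholds; each organization would set its own.
GO_CRITERIA = {
    "min_auroc": 0.80,
    "min_sensitivity": 0.85,
    "max_subgroup_auroc_gap": 0.05,
}

def go_no_go(result: ValidationResult) -> bool:
    """Return True only if every pre-agreed criterion is met."""
    return (
        result.auroc >= GO_CRITERIA["min_auroc"]
        and result.sensitivity >= GO_CRITERIA["min_sensitivity"]
        and result.subgroup_auroc_gap <= GO_CRITERIA["max_subgroup_auroc_gap"]
    )

print(go_no_go(ValidationResult(auroc=0.83, sensitivity=0.88, subgroup_auroc_gap=0.03)))  # True
```

Writing the criteria down in a structured form, whether as code or as a comparable policy document, forces the governance group to agree on them before validation results arrive.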
2. Build Governance with Scalable Capabilities and Clear Accountability
A practical AI governance model must scale across diverse functions and institutions, from large academic centers to smaller safety-net hospitals. Consider the following structural principles:
- Central coordination with local accountability: Governance should clarify who has control, how harm is reported, and how escalation works when AI performance issues arise.
- Inventory and lifecycle management: Maintain an active inventory of all AI tools in use and implement structured evaluation and monitoring processes across each tool’s lifecycle (an illustrative inventory record follows this list).
- Balanced evaluation: Avoid over- or under-evaluating AI tools. Governance should facilitate reasonable, context-aware assessments of model performance.
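One way to operationalize such an inventory is a structured record per tool. The following sketch is a minimal illustration; the fields, the tool name, and the contact address are hypothetical assumptions about what a governance group might track.

```python
# Illustrative sketch of an AI inventory record; field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    intended_use: str
    clinical_owner: str          # accountable local owner
    risk_tier: str               # e.g., "high", "moderate", "low"
    last_validated: date
    monitoring_cadence_days: int
    escalation_contact: str

registry: list[AIToolRecord] = [
    AIToolRecord(
        name="SepsisRiskModel",          # hypothetical tool
        vendor="ExampleVendor",
        intended_use="Early warning for inpatient sepsis",
        clinical_owner="Director of Hospital Medicine",
        risk_tier="high",
        last_validated=date(2024, 6, 1),
        monitoring_cadence_days=90,
        escalation_contact="ai-safety@example.org",
    ),
]
```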
3. Prioritize Patient Outcomes Over Model Performance
AI governance should focus less on the technical accuracy of a model in isolation and more on the actual impact on patient outcomes. Assurance labs, while useful, can fall short if models are tested in environments that don’t reflect real-world diversity or care settings. Health systems should consider:
- Does this AI tool improve outcomes for our patient population?
- What is the minimum acceptable level of benefit, and how much risk of harm are we willing to accept?
- How do we monitor AI performance post-deployment, and how do we signal when it begins to underperform? (See the monitoring sketch below.)
This mindset helps ensure the governance framework is tuned to clinical relevance and accountability rather than abstract model metrics.
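As a hedged illustration of post-deployment monitoring, the sketch below averages a model’s recent performance scores and flags the tool for governance review when the rolling mean falls below a pre-agreed floor. The window size, floor value, and metric are assumptions, not standards.

```python
# Illustrative drift check: flag a model for governance review when its
# rolling performance drops below a pre-agreed floor. Values are assumptions.
from collections import deque

PERFORMANCE_FLOOR = 0.75   # hypothetical minimum acceptable weekly AUROC
WINDOW = 4                 # number of recent evaluation periods to average

def needs_review(weekly_scores: list[float]) -> bool:
    recent = deque(weekly_scores, maxlen=WINDOW)   # keep the most recent periods
    rolling_mean = sum(recent) / len(recent)
    return rolling_mean < PERFORMANCE_FLOOR

scores = [0.82, 0.80, 0.74, 0.72, 0.70]  # simulated post-deployment metrics
if needs_review(scores):
    print("Rolling performance below floor; escalate to AI governance group.")
```

In practice the trigger might also watch calibration, alert burden, or subgroup performance, but the principle is the same: a signal, agreed on before deployment, that routes underperformance back to governance.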
4. Prepare for Regulatory Gaps and Build Internal Oversight
There’s a notable absence of consistent federal regulation around AI in health care. That leaves end users — clinicians, IT teams, and system leaders — primarily responsible for determining what is safe, effective, and appropriate. In response, health systems should:
- Establish internal harm reporting mechanisms and define escalation pathways clearly (a sketch of one possible reporting record follows this list).
- Set minimum standards for data sharing and vendor relationships to control risk and protect patient information.
- Advocate for and design post-marketing surveillance processes, including routine model reassessment and performance audits.
- Include non-clinical AI applications (e.g., documentation automation, claims processing) in governance, ensuring they also demonstrate organizational value.
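As one possible shape for an internal harm report and its escalation pathway, consider the hypothetical sketch below. The fields and the routing rule are assumptions, meant only to show that reports can carry enough structure to drive escalation consistently.

```python
# Illustrative harm-report record and escalation rule; names are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class HarmReport:
    tool_name: str
    reported_at: datetime
    reporter_role: str        # e.g., "nurse", "physician", "analyst"
    description: str
    patient_harm_occurred: bool

def escalation_path(report: HarmReport) -> str:
    """Route per a hypothetical policy: actual harm goes straight to safety."""
    if report.patient_harm_occurred:
        return "Immediate: patient safety officer + suspend-tool review"
    return "Routine: next AI governance committee meeting"

report = HarmReport("SepsisRiskModel", datetime.now(), "nurse",
                    "Alert fired on wrong patient", patient_harm_occurred=False)
print(escalation_path(report))
```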
This flipped responsibility, in which the health system rather than the vendor or regulator is accountable for AI validation, requires a deliberate, self-governed approach that is cautious but not paralyzing.
Summary
Whether you’re a large academic medical center or a small community hospital, the goal of AI governance is the same: to maximize benefit and minimize harm for patients, providers, and the system. By embracing a broad, integrated governance structure, prioritizing patient-centered outcomes, and preparing for regulatory uncertainty, health care organizations can responsibly harness the power of AI.
Governance isn’t just a safeguard — it’s the frame that holds everything together. As AI becomes a permanent fixture in health care, thoughtful governance will be the difference between tools that merely function and those that truly transform care.
* An Accelerator is a Leadership Alliance initiative that brings health care leaders together to advance improvements in the field and drive transformational change in health care. A global group of collaborators incubates untested and emerging system design theories, develops and expands subject matter expertise, and tests models of implementation. To learn more about the IHI Leadership Alliance and opportunities to participate in future AI Accelerators, please visit our website.
Charles E. Binkley, MD, FACS, HEC-C, is the Director of AI Ethics and Quality at Hackensack Meridian Health and Associate Professor of Surgery at Hackensack Meridian School of Medicine.
Amy Weckman, MSN, APRN-CNP, CPHQ, CPPS, is an IHI Director.
Natalie Martinez, MPH, is an IHI Project Manager.
You may also be interested in:
- IHI Lucian Leape Institute report: Patient Safety and Artificial Intelligence: Opportunities and Challenges for Care Delivery
- Turn on the Lights Podcast: Health Care AI at Speed
- Healthcare Innovation article: Why It Is Crucial to Involve Nurses in AI Development Processes
- Read more about the IHI Leadership Alliance and how to join this group of innovators
- Enroll in Making AI Work for You: Online Course