Summary
- Transparent use of AI builds trust in health care by ensuring that patients know when AI is involved in their care and that trained staff use these tools to improve care safely. The IHI Leadership Alliance explored practical approaches to make AI safer, clearer, and more effective for all.
As artificial intelligence (AI) technologies become more common in health care – from clinical decision support to ambient listening documentation assistants – health systems face two critical, connected challenges: being transparent about AI use with both patients and staff, and educating users on how to apply these tools properly. Patients appreciate knowing when AI plays a role in their care (a majority prefer to be informed), and clinicians need practical training to use AI effectively and ethically. Successfully addressing both areas can build patient trust, encourage appropriate adoption, and mitigate risks in the deployment of health care AI.
Recognizing this, the IHI Leadership Alliance – a network of health care organizations that collaborate to deliver on the full promise of the Triple Aim – brought together 10 diverse health care organizations in an Accelerator space to identify practical guidance for AI adoption across a variety of settings and systems. Through months of collaboration, these leaders focused on creating pragmatic, actionable insights to bridge the gap between innovation and real-world practice. AI governance was featured in a previous article, and here we share key insights about AI transparency and education that health care organizations can act on today.
The Importance of AI Transparency for Patients and Clinicians
There is growing awareness among patients and the public regarding the role of AI in daily life, including its application in health care. Surveys indicate that the majority of patients prefer to be informed when AI is used during their care. While detailed technical explanations may not be necessary, individuals generally seek assurance that AI applications are managed responsibly and safely. Organizational transparency – ensuring that clinicians and staff are aware of when and how to use AI tools – is also regarded as important for maintaining safety and accountability.
Clear communication about the use of AI can support trust between patients and health care providers. If patients become aware of AI involvement without prior notification, it could negatively affect their perception of care. Conversely, informing patients about AI use, when appropriate, can contribute to patient engagement and trust in health services. For clinicians, understanding which AI tools are authorized and appropriate for use enables them to provide accurate information to patients, and incorporate these tools into their practice as needed.
Informing Patients about AI: Balancing Transparency and Consent
Organizations can effectively inform patients about AI by using a tiered transparency approach:
- General Disclosure: For routine AI uses (like aiding radiologists or drafting visit notes), provide general notice through policy updates or broad communications. This "community consent" keeps patients informed without requiring individual consent each time.
- Point-of-Care Transparency: If AI directly interacts with patients (e.g., ambient scribe technology recording conversations), use clear flyers, handouts, or verbal notifications at the point of care. This reassures patients and seeks their assent without extra paperwork.
- High-Risk/Autonomous AI or Human Out of the Loop (HOOTL): For AI that operates independently or poses significant risk, seek explicit informed consent or have detailed discussions before use, similar to consent for invasive tests.
- Ongoing Communication: Regularly update patients on new AI tools via emails, patient portals, or care summaries (e.g., labeling sections as “AI-assisted”).
- Clarity and Accessibility: Avoid technical details; focus on why AI is used and its benefits for patient care.
This approach provides transparency without overwhelming patients, fostering trust while maintaining efficient care.
Case in Point: Ambient AI Scribe Transparency
A primary care clinic introduced an ambient AI scribe that records doctor-patient conversations (with consent) to create clinical notes. The tiered transparency approach is used as follows:
- Patient Notification: Notices in exam rooms, paperwork, and the patient portal inform patients that their visit may be recorded by a secure AI for documentation. Patients are told the audio is confidential, used only for notes, and that they can opt out or ask questions.
- Verbal Assent: Doctors remind patients about the AI transcription at the start of the visit, explaining its benefit and assuring them the recording is deleted after note-taking. Patients can decline at any time.
- Outcome: Trials showed high patient acceptance when clear explanations were provided. Doctors reported being able to focus more on patients, and surveys indicated clinicians gave patients their undivided attention 90 percent of the time, up from 49 percent. This transparency-and-permission approach encourages trust and respects patient autonomy.
Educating and Training AI Users: Ensuring Safe and Effective Use
Health care professionals must be thoroughly trained to understand both the capabilities and limitations of AI tools. Misunderstanding these can lead to misuse or lack of adoption, so comprehensive training and clear guidelines are essential. Following are key aspects of AI education and training that we recommend for the health care workforce:
- Role-Specific Training: Tailor training modules to different staff roles, focusing on practical tasks and relevant scenarios for each group.
- Keep it Practical: Avoid overwhelming technical details. Emphasize tool usage, limitations, and interpretation, highlighting real-life applications and responsible use.
- Engaging Formats: Use interactive methods like videos, workshops, tip sheets, and simulations for brief, hands-on learning. Employ “AI champions” to support peers.
- Policies and Guidelines: Ensure users know organizational policies, approved tools, data privacy rules, and proper response to AI errors or malfunctions.
- Access to More Information: Provide easy access to detailed tool information, such as validation summaries or FAQs, to build user trust and understanding.
- Feedback Mechanisms: Set up straightforward feedback channels so frontline users can report issues, ask questions, and contribute to improvements.
- Ongoing and Refresher Training: Offer continuous education through refreshers and updates, shifting from basic use to optimization as staff gain confidence.
Formalizing Transparency and Accountability
Many organizations are creating formal policies to promote transparency around AI use:
- AI Transparency Policy: Develop an official statement outlining when AI is used, patient notification, consent, privacy, and human oversight. Board endorsement can help allocate resources for training and supervision.
- Privacy Notices: Add clear language in Notices of Privacy Practices (NPP) to inform patients about the use of AI in their care, emphasizing clinician supervision and safeguards.
- Governance and Oversight: Set up or assign an oversight committee to monitor AI implementation, ensure compliance, track feedback, and keep training current.
- Risk-Based Consent Framework: Classify AI tools by risk level – low-risk tools may require only general disclosure, while high-risk tools need explicit consent. Define these categories clearly so staff understand which consent protocols apply.
Ultimately, fostering trust through openness and education helps assure patients that AI is used responsibly, preempting concerns and strengthening public confidence in health care.
Conclusion: Fostering a Culture of Transparency and Learning
The effective integration of artificial intelligence into health care relies fundamentally on both meaningful transparency and comprehensive education. It is essential that patients are consistently informed when AI is involved in their care, regardless of their familiarity with technical details, ensuring they remain engaged and aware participants in their treatment process. Similarly, clinicians and health care staff must approach the implementation of AI tools with confidence, underpinned by robust training programs and clear operational guidelines.
Health care is inherently collaborative, and as AI systems become integral members of this team, it is important to introduce and train these technologies with the same diligence applied to new personnel. By adopting an open and deliberate strategy for incorporating AI, organizations can optimize the advantages of these technologies – such as increased efficiency, enhanced insights, and improved support – while steadfastly upholding patient trust and safety.
To learn more about the IHI Leadership Alliance and opportunities to participate in future AI Accelerators, please visit our website.
Brett Moran, MD, is SVP and Chief Health Officer at Parkland Health.
Amy Weckman, MSN, APRN-CNP, CPHQ, CPPS, is an IHI Director.
Natalie Martinez, MPH, is an IHI Project Manager.
Photo by rawpixel.com
You may also be interested in:
- AI Governance: Maximizing Benefit and Minimizing Harm for Patients, Providers, and Health Systems
- Turn on the Lights Podcast: Health Care AI at Speed
Read more about the IHI Leadership Alliance and how to join this group of innovators.