Doug Bonacum, Vice President of Patient Safety and Quality at Kaiser Permanente in Oakland, California, is faculty for IHI’s Patient Safety Executive Development Program. In this interview with IHI Content Development Manager Jo Ann Endo, Bonacum talks about the fundamental changes necessary for the future of patient safety while also reinforcing the continual need for patient safety basics.
Q: Why is improving reliability so important for patient safety?
I think reliability is the cornerstone of patient safety. The principles around reliability come from outside health care, particularly the emphasis that high-reliability organizations place on standardization. While recognizing that what we do in health care is extremely complex and we can’t oversimplify things, we can make sure we’re approaching care delivery in a consistent fashion. Then, based on patient preferences or patient condition, we may vary from those standards, but at least we know what we’re varying from and so does the rest of the care team.
There’s too much variation in health care practice, often resulting from differences in individuals’ education and training or the experiences they have during their careers. Variation in practice adds unnecessary complexity to an already incredibly complex health care system. Without standardization, it’s hard to improve, it’s hard to know what’s best, and it’s hard to train and orient new people. I think of reliability as grounded in standardization and simplification of care pathways and supporting workflows.
Q: Are standardization, reliability, and checklists in health care still viewed as “cookbook medicine”?
I think people’s understanding of standardization and simplification has matured over the years. Early complaints about “cookbook medicine” came from not understanding the intent. Don Berwick used to talk about standardized practice as guidelines that we can depart from based on two things: patient preference or patient condition. There’s nothing “cookbook” about that.
To me, saying that we can vary standard practice based on patient condition was a strong signal to the medical community about what we might call “the art of medicine.” Adjusting our approach based on patient preference might be more about the “art of service.” Modifying our approach based on patient condition or patient preference gives us freedom, but it also ensures we’re all working from the same playbook. Health care is a team sport, and great teams wouldn’t take the field without a shared playbook.
Q: How does improving reliability benefit the care provider?
We’ve been able to demonstrate that we get better outcomes by standardizing and simplifying workflows, and every health care worker wants better outcomes for their patients. From an operational perspective – when we can agree on the way to do our work, and who is responsible for executing different parts of the workflow – frustration, interruptions, and inefficiencies decrease. This allows clinicians to focus on the patient. It makes clinicians’ work simpler and easier when we implement reliable design principles.
Q: Why is it important to understand human factors to improve patient safety?
No matter how smart and well-intentioned we are as clinicians, we cannot change the human condition, but we can change the conditions under which humans operate.
Human factors looks at the conditions under which humans operate and tries to change the interface – say, between a person and technology, a person and other people, or a person and the environment – in a way that enables optimized human performance. Understanding human factors means understanding the things that can adversely affect our performance, and changing the way that work is designed and teams perform together to optimize outcomes.
Q: How do you think about human factors in terms of health technology?
Technology can help address human factors, but it can also make some matters worse. One example of where technology has helped involves counting sponges after intra-abdominal surgery. We achieved a certain level of performance by relying on humans, but now there’s technology to help us improve that performance. The sponges are radio tagged so we can wand the patient after a human performs the sponge count to verify whether the count is correct. In this case, technology is acting as our redundancy system to the human operator, providing information that helps us identify and mitigate a human error before it potentially causes harm.
The flip side might be when health technology provides information in a way that’s intended to be helpful to the care team, but ends up being ignored. Consider, for example, a medication alert system that warns a doctor about an order that may interact with a medication the patient’s already on. When there are too many of those reminders and alerts, the doctor might learn to ignore them, and when a critical alert appears, they might go right past it because they’ve learned over time to tune out the multitude of warnings. So, I think technology is really a mixed blessing for health care and for patient safety, and it requires us to understand human factors and the causes of failures so we can better use it to mitigate hazards and improve outcomes.
Q: You’ve been teaching about patient safety for a number of years. What do you see as the future of patient safety?
The foundation of patient safety continues to be reliable design and teamwork and communication. We’ve focused a great deal on these areas, and as a result our systems are safer and getting better.
For the future, one fundamental element of patient safety that we could improve is technical and cognitive competence. We often assume that because people went to certain medical schools, have certain degrees, and have practiced for a certain number of years, they’re technically and cognitively competent. High-reliability organizations outside health care have their operators continually prove their competence over time — we need that in health care as well, especially since medicine is continually evolving.
The example I think of is Captain “Sully” Sullenberger, who, after landing that plane on the Hudson River and saving all those passengers, had the privilege of going back into a flight simulator some months later to prove that he could fly under stressful conditions. That’s just the way the airline industry and other high-reliability organizations work, and health care will need to implement that level of competency evaluation for physicians and nurses in a way that we’re not doing now.
Another issue that impacts safety is patient- and family-centered care. We need to figure out how to better engage the patient and the family as part of the care team and part of the safe, reliable design and delivery of care. This will become more and more important as care moves out of the hospital environment to the ambulatory care arena. We need to engage the patient and the family in fundamentally different ways than we do now, recognizing that the care we’re giving, which is now so hospital-centric, is eventually going to be provided in the ambulatory environment, often in the patient’s home or workplace. The patient is the true primary caregiver, and we might think of the family or community as the new floor nurse. Until we start thinking about health care delivery in this way, ambulatory patient safety will remain an elusive aspiration.