IHI safety experts Frank Federico, Carol Haraden, and others comment on the ongoing struggle to make health care safer and detail their own efforts to further that all-important goal.

Measuring Safety

Last modified by Frank Federico on Thursday, Nov 01, 2012


Measuring safety is crucial to improving health care, yet we still struggle to do it well.


Much of the challenge lies in safety's very nature: it is, inherently, a non-event—the absence of harm. So by necessity we focus instead on measuring what might be called "non-safety," usually in the form of harmful events. Fewer of these harmful events, we assume, means a safer health care system.


But which events do we count? How do you tally non-safety?


Our answer to that simple question is hugely consequential. Some efforts focus on infections, for instance. These are crucial harm events, but they are not the whole picture. Their absence does not, in and of itself, tell us that care delivery is safe.


At IHI, we've developed a freely available method for identifying harm events that we call the Global Trigger Tool. Its advantage is comprehensiveness; its drawback is time-intensiveness. We believe it's a trade-off worth making. Here's how the method works, as described in a recent Health Affairs paper:


“Closed patient charts are reviewed by two or three employees—usually nurses and pharmacists, who are trained to review the charts in a systematic manner by looking at discharge codes, discharge summaries, medications, lab results, operation records, nursing notes, physician progress notes, and other notes or comments to determine whether there is a ‘trigger’ in the chart. A trigger could be a notation indicating, for example, a burn, a fall, or a reaction to a medication. Any notation of a trigger leads to further investigation into whether an adverse event occurred and how severe the event was. A physician ultimately has to examine and sign off on this chart review.”


We tested this system against other widely used methods to detect adverse events, and the results were stunning. The trigger tool detected 354 adverse events, while tools based on automated chart review fared far worse.  The Agency for Healthcare Research and Quality’s (AHRQ) Patient Safety Indicators detected only 35 adverse events. The hospitals’ voluntary reporting systems? Just four.


You’ve all heard the old iceberg trope – 10 percent above water, 90 percent below. Well, here it is in sobering statistics: AHRQ’s Patient Safety Indicators caught roughly a tenth of what the trigger tool found, and voluntary reporting barely one percent. And the errors that sit below the water line of measurement aren’t actually invisible – not to patients, not to their families, and not to providers or to the functioning of our health care system.


The trigger tool is less time-intensive than some other chart review methods, but as I mentioned earlier, it is not as cheap as automated methods such as AHRQ’s Patient Safety Indicators. Manual review simply takes more time. There is no way around that.


But nor is there a way around the danger of missing so many adverse events. Think about the other audits you conduct at your hospital that may be just as time-intensive. Surely safety deserves the same level of attention and commitment.


So we want to hear from you:


How do you measure harm in your organization? 


Do you believe that you are capturing the entire scope of harm?


The good news, from our perspective, is that hospitals and regulators are increasingly using the trigger tool to identify the broader universe of adverse events. We will keep working to continue this trend—because only by facing the reality of adverse events can we truly address them.
