
The Future of Measuring Patient Safety

By IHI Multimedia Team | Monday, December 11, 2017


There were at least 57 sessions at the 2017 IHI National Forum in Orlando that included the words “measure” or “measurement.” Clearly there is strong interest in quantifying the results of our efforts to improve the quality of health care.

But are we measuring what matters most to keep our patients safe from harm? Or is too much of our attention determined by the hope of incentives and fear of penalties?

Fifteen years after the Institute of Medicine (IOM) brought public attention to the issue of medical errors and adverse events, the National Patient Safety Foundation (which has since merged with IHI) convened an expert panel to assess the state of the patient safety field and set the stage for the next 15 years of work. The result was a report titled Free from Harm: Accelerating Patient Safety Improvement Fifteen Years after To Err Is Human. One of its key recommendations is to “create a common set of safety metrics that reflect meaningful outcomes.”

The Lucian Leape Institute (now part of IHI) plans to convene a new expert panel to focus on this recommendation. The following paragraphs, excerpted from Free from Harm, will guide the panel’s efforts.

Measurement is foundational to advancing improvement. It helps clarify goals, establish a shared sense of purpose, and confirm that organizations are heading in the right direction over time. However, measurement also carries the potential for unintended negative effects. Inaccurate measurement obscures the true state of affairs, leading either to ill-advised complacency or to efforts disproportionately targeted at minor problems. The quantity of measures now required by different regulatory bodies can distract attention from important goals, and the task of collecting and analyzing the data is overwhelming. Another problem is the unintended use of metrics (e.g., Agency for Healthcare Research and Quality [AHRQ] patient safety indicators, designed as a screening tool, being used to impose payment penalties) and the unintended consequences of metrics currently in use (e.g., financial penalties for low-resourced hospitals as a consequence of readmissions measurement).

Some progress has been made in measurement over the past 15 years. Measurement is now considered routine in many areas of health care in a way it was not previously. Organizations nationwide now regularly measure health care–associated infections (HAIs) using reliable, validated definitions that have gained national consensus, and many states now require reporting of HAIs. A growing number of measures now assess what matters to patients: the patient experience. The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey and related tools provide standardized measurements of the patient experience of care, including some items directly related to patient safety, such as discharge communication. These tools are widely used and are now tied to Medicare reimbursement, increasing their visibility and importance. The past 15 years have also witnessed increased transparency in measurement. Mortality and complication rates for many hospitals are now posted publicly, an activity that was far less common prior to the IOM report.

Numerous measurement challenges, however, are specific to patient safety. First, unlike other aspects of quality, safety lacks widely used measures. Administrative data do not work well for safety measurement. The current methodology, which often relies on retrospective surveillance via claims data or chart review, fails to detect all instances of errors, harms, and “never events.” Cross-cutting safety measures are not available from routine data; it is difficult, for example, to use large databases to find adverse drug events (ADEs) or diagnostic errors.

The metric “total adverse events” is too heterogeneous to provide meaningful data for improvement, yet it is often used as a primary metric for assessing patient safety. Measuring adverse events provides a general lay of the land, indicating what types of safety problems commonly arise and giving a rough sense of their relative frequency. However, for any given type of adverse event (HAIs, ADEs, surgical complications, diagnostic errors), we often do not measure reliably enough to show improvement over time. Another problem is that the classification of adverse events may change over time; as new harms emerge or reviewers’ judgments about preventability shift, the preventable adverse event rate may appear unchanged even when hospitals successfully reduce many known preventable adverse events. In addition, any category of adverse event has so many heterogeneous causes that a given intervention may not reduce the rate enough for a change to be detected. Even outcome measures that seem relatively straightforward, such as venous thromboembolism (VTE) rates, can be inaccurate (e.g., hospitals that screen and test more may look worse even though they provide better care).

Chart review is an alternative to administrative data, but it is extremely labor intensive. Instruments such as the IHI Global Trigger Tool simplify the process, but they may still be too blunt to detect improvement: they can identify only the specific adverse events included in the tool, and only events that are actually documented.

Significant effort has been spent on organizational reporting systems, a core recommendation of the original IOM report, but these efforts have often provided little value to organizations in terms of actual improvements. A recent study identified five barriers hindering the effectiveness of incident reporting: poor processing of incident reports, inadequate physician engagement, insufficient visible subsequent action, inadequate funding and institutional support for incident reporting systems, and inadequate use of emerging health information technology (health IT). According to another report, “we collect too much and do too little,” and we should refocus efforts to ensure that reports lead to actual improvement. Voluntary reporting to central organizations such as the Institute for Safe Medication Practices (ISMP) and the FDA has been more effective. For example, ISMP runs a centralized voluntary error-reporting program to which any care professional or organization can report medication errors. ISMP then uses its expertise to share feedback, best practices, and lessons learned from these errors broadly through alerts and newsletters. More work needs to be done to optimize organizational reporting and to determine how to expand effective centralized programs.

Finally, all of these methods (claims, chart review, reporting) are retrospective and reactive. In thinking about prevention, we need more and better ways to identify and measure risks and hazards in real time, or proactively, so that we can intervene before an adverse event occurs. For example, identifying patients at risk of an ADE based on the number of medications they take and other factors would allow a pharmacist to intervene before the event occurs.
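To make that last example concrete, here is a minimal sketch of what such a proactive screen could look like in Python. It is an illustration only, not part of Free from Harm or any IHI tool; the risk factors, thresholds, and field names are assumptions chosen for demonstration and would need clinical validation before real use.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical ADE-risk screen. Risk factors, thresholds, and field
# names are illustrative assumptions, not a validated clinical rule.

HIGH_RISK_CLASSES = {"anticoagulant", "opioid", "insulin", "sedative"}

@dataclass
class Patient:
    patient_id: str
    active_medications: List[str]        # drug class of each active order
    creatinine_clearance_ml_min: float   # estimated renal function
    age_years: int

def needs_pharmacist_review(p: Patient, polypharmacy_threshold: int = 10) -> bool:
    """Flag patients whose medication burden and other risk factors
    suggest an elevated chance of an adverse drug event (ADE)."""
    risk_factors = [
        len(p.active_medications) >= polypharmacy_threshold,        # polypharmacy
        any(m in HIGH_RISK_CLASSES for m in p.active_medications),  # high-risk drugs
        p.creatinine_clearance_ml_min < 30,                         # reduced renal function
        p.age_years >= 75,                                          # advanced age
    ]
    # Two or more factors together trigger a pharmacist review
    # before an adverse event occurs.
    return sum(risk_factors) >= 2

# Example: a 78-year-old on 12 medications, including an anticoagulant
patient = Patient("12345", ["anticoagulant"] + ["other"] * 11, 55.0, 78)
if needs_pharmacist_review(patient):
    print(f"Patient {patient.patient_id}: route to pharmacist for review")
```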

To turn the tide, the safety field needs to establish standard metrics that span the entire care continuum. Processes and tools also need to be developed to identify risks and manage hazards proactively (e.g., to identify early signs of clinical deterioration). Safety reporting systems should be improved to ensure that appropriate system improvements are implemented as a result of these reports. Better strategies are needed to measure the outcomes that matter most to patients, for example through patient-reported outcomes and patient-reported safety concerns. Finally, once the standard metrics are in place throughout the care continuum, incentives should be devised for innovation and further improvement.

There are many organizations, researchers, and patient safety professionals working to improve patient safety measurement. Do you have an idea or opinion to share? Contact IHI Vice President Frank Federico at ffederico@ihi.org to let him know what you think.

