Video Transcript: What Are the Dangers of Alert Fatigue?

Bob Wachter, MD; Professor and Associate Chairman, Department of Medicine, University of California, San Francisco


I think alert fatigue may be the most “clear and present danger” of computerization that we’ve seen, in terms of its impact on patient safety. In [my] book, I tell the story of a kid at my place to whom we gave a 39-fold overdose of a common antibiotic. And the initial error was just a simple glitch; the doctor didn’t realize the screen was on milligrams per kilogram rather than milligrams. It happens.

But what was really remarkable was that an alert fired to the doctor and said that this was an overdose. The doctor clicked out of it. An alert fired to the pharmacist. The pharmacist clicked out of it. And you might, as an outsider, say, “Oh, how could they be so careless to do that?” Until you realize that in a month at UCSF, at my institution, the doctors get 30,000 alerts. The pharmacists get 160,000 alerts. And those are just the computerized pop-up boxes.

In a month, in our 70 intensive care units, the computers that record your heart rate and your blood pressure and your oxygen level throw off 2.5 million alerts, to the point that there is an audible alert every six or seven minutes. One of the nurses interviewed for the book was asked, “How would you know to be scared about a patient? The alerts are going off every two seconds.” And the nurse said, “Silence. If there were no alarms going off, then I’d be really worried.” I mean, think about how crazy that is.

But we have not yet thought maturely about this issue of alerts. We have this idea that, sure, if these two drugs might interact with each other, we’ll fire off an alert and then leave it up to the doctor or the nurse to deal with it. That shows no understanding of human factors or user-centered design.

What do we have to do? We have to go through each of the alerts — this is pretty painstaking — and say, “This particular alert, every time it pops up, people click out of it.” If that’s the case, we need to get rid of it. Because I think we’ve approached alerts [with the feeling that] if they’re clicked out of, then, okay, no harm, no foul. That is absolutely untrue. An unnecessary alert is dangerous, because it makes it that much less likely you will pay attention to the next one that is necessary.
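The pruning rule described here — retire any alert that is essentially always clicked out of — can be sketched as a simple audit over an alert log. Everything below is a hypothetical illustration: the field names, the 98% dismissal threshold, and the minimum-firings cutoff are assumptions, not part of any real EHR system.

```python
# Hypothetical sketch of the alert audit described above: flag any alert
# type that clinicians dismiss essentially every time it fires.
# The threshold values are illustrative assumptions.
from collections import Counter

def alerts_to_retire(alert_log, dismissal_threshold=0.98, min_firings=100):
    """Return alert types that are almost always clicked out of.

    alert_log: iterable of (alert_type, was_dismissed) pairs.
    """
    fired = Counter()
    dismissed = Counter()
    for alert_type, was_dismissed in alert_log:
        fired[alert_type] += 1
        if was_dismissed:
            dismissed[alert_type] += 1
    return [
        a for a in fired
        if fired[a] >= min_firings
        and dismissed[a] / fired[a] >= dismissal_threshold
    ]

# Example: one alert fires 200 times and is dismissed 199 times;
# another fires 200 times and is heeded half the time.
log = [("dose-range", True)] * 199 + [("dose-range", False)] \
    + [("drug-interaction", True)] * 100 + [("drug-interaction", False)] * 100
print(alerts_to_retire(log))  # ['dose-range']
```

The minimum-firings floor keeps the audit from retiring a rare alert on the strength of a handful of dismissals.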

Some of the solution here will be big data. So, when I spent the day at IBM and talked to the Watson people, they were beginning to think about a system where all of the machines talk to each other. Where they can look and say, “This particular alert in this circumstance with this kind of patient was always clicked out of, and so we’re not going to fire it.”

They can also begin to say — maybe it’s not IBM, but some other company will weave the machines together and say — “It turns out every time the kid in the intensive care unit brushes his or her teeth, the alarm goes off because the computer thinks that the heart rate is going 200 times a minute. But the blood pressure didn’t change. That is physiologically impossible.”

Right now, those two machines don’t talk to each other. So, if it looks like the heart is going really fast, an alert goes off, even though the blood pressure hasn’t budged. So, you could envision a world where those two machines talk to each other, and the alert doesn’t go off because they know that’s impossible.
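The cross-check described here — a heart-rate spike with no corresponding blood-pressure change is artifact, not crisis — can be sketched as a plausibility rule over two monitor readings. This is purely a hypothetical illustration; the function name and all thresholds are assumptions, not an actual monitor protocol.

```python
# Hypothetical sketch of the cross-signal check described above:
# a sudden heart-rate spike with no change in blood pressure is
# treated as sensor artifact (e.g., tooth-brushing) rather than alarmed.
# All thresholds here are illustrative assumptions.

def should_alarm(hr_bpm, prev_hr_bpm, map_mmhg, prev_map_mmhg,
                 hr_limit=180, bp_change_mmhg=5):
    """Decide whether a tachycardia alarm is physiologically plausible."""
    hr_spike = hr_bpm >= hr_limit and prev_hr_bpm < hr_limit
    bp_moved = abs(map_mmhg - prev_map_mmhg) >= bp_change_mmhg
    if not hr_spike:
        return False  # nothing to alarm about
    # A real jump to ~200 beats per minute should perturb blood pressure;
    # if pressure hasn't budged, treat the reading as artifact.
    return bp_moved

# Heart rate "jumps" to 200 while mean arterial pressure is unchanged:
print(should_alarm(200, 85, 72, 72))   # False, suppressed as artifact
# Heart rate jumps to 200 and pressure drops sharply:
print(should_alarm(200, 85, 55, 72))   # True, plausible, so it alarms
```

The point of the sketch is only that the decision needs both feeds at once; a single-signal monitor cannot tell the two cases apart.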

Right now, we are so early in this, and so immature in this, and I think that has been one of the biggest surprises. We touted alerts and alarms as one of the main advantages of computerization. And I think right now, it’s a disaster.