
Improving Quality in the Intensive Care Unit

This is Andy Shorr from the Washington Hospital Center in Washington, DC with a Pulmonary and Critical Care literature update. An article was published recently in JAMA that all of us who practice in the intensive care unit (ICU) need to be aware of. It was published in the January 26 edition, and it dealt with quality initiatives in the ICU. The authors were Scales and colleagues from Ontario.

Quality and safety in the ICU remain, appropriately, a major focus. However, it has not always been clear that the science published in the name of quality is actually quality science. We've all read articles that talk about some bundle or intervention for this or that, which, when it was put into place, miraculously eradicated the adverse event we were worried about. Many of these studies are single center. They're observational. They don't always enumerate what the intervention was. They don't always have good diagnostic criteria. There may not be a control. There is no blinding in terms of the assessment. These studies have numerous limitations, and often they don't tell us what happened after the intervention was no longer actively being studied. What happened after interest in the intervention decayed?

This article by Scales and colleagues tries to address all these issues and also to focus on quality initiatives outside of academic centers, because most of the work has come from academic centers and the findings may therefore not be generalizable, or have external validity, relative to nonacademic centers. These investigators in Canada performed a very pragmatic cluster randomized trial. They looked at 15 hospitals, enrolled approximately 9000 patients during the overall study period, and focused not only on specific areas of interest (daily breathing trials, prevention of catheter-associated bloodstream infections, prevention of ventilator-associated pneumonia (VAP), and deep vein thrombosis (DVT) prophylaxis) but also on a specific process for educating and improving clinician, nursing, and respiratory therapist awareness about these issues.

It was a multifaceted educational initiative with didactic pieces up front, reinforcement pieces, and audit and feedback. They applied this process systematically to the intervention hospitals and compared the intervention experience both with a pre-period, when there was no intervention, and with the control hospitals. They randomly assigned hospitals to the intervention or not, in an effort to get higher rates of compliance with these safety initiatives. They focused on head-of-bed elevation and other interventions used for preventing VAP, on use of daily breathing trials, on DVT prophylaxis, and on preventing catheter-associated bloodstream infections.

They found that this approach, studied rigorously, was actually effective in some scenarios. For example, rates of head of bed elevation (semirecumbency, an important aspect of VAP prevention) went up substantially with their quality initiative, whether they compared this with an earlier period or with a control hospital, and it didn't decay over time when they looked at a follow-up period. The same was true for a number of interventions to help prevent catheter-associated bloodstream infections. This was proof of concept that you can have a systematic approach to quality improvement and, more importantly, you can rigorously study a systematic approach to quality improvement and look at changing process and behaviors, which is often very challenging.

For several interventions they did not see a difference. There was no difference between intervention and control hospitals in the use of daily breathing trials or in the use of DVT prophylaxis. That was likely, as the investigators comment, because rates were already so high at baseline. Rates of DVT prophylaxis were nearly 90%. They pushed it a little, but when you're already at 90%, getting it higher becomes challenging. It also becomes challenging to show a statistically significant difference even if you do get it higher. It's kind of a ceiling effect.
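The ceiling effect is easy to quantify. As a rough sketch (my own illustration, not an analysis from the article), a standard two-proportion sample-size approximation shows how many more patients per group it takes to demonstrate a small gain from a 90% baseline than a larger gain from a low-compliance baseline; the target rates here (93%, and a hypothetical 60% to 70% comparison) are assumed purely for illustration:

```python
# Back-of-the-envelope illustration of the ceiling effect described above.
# The ~90% DVT-prophylaxis baseline comes from the commentary; the target
# rates are illustrative assumptions, and the formula is the standard
# two-proportion sample-size approximation, not anything from the trial.
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size to detect p1 -> p2, two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Nudging compliance from an already-high 90% to 93% requires far more
# patients per group than moving a low-compliance process from 60% to 70%.
print(n_per_group(0.90, 0.93))
print(n_per_group(0.60, 0.70))
```

Under these assumed rates, detecting the small push near the ceiling needs several times the enrollment of detecting a larger gain from a low baseline, which is exactly why "they pushed it a little" is so hard to demonstrate statistically.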

The message is: if you have limited resources, which we all do, and you're trying to figure out which quality initiatives you really need to home in on at your institution, pick something with room for improvement. Don't just say, "we're doing DVT prophylaxis this month because we should do it." Do it because you decided you have a DVT prophylaxis problem. Don't just say, "we're doing all these things for catheter-associated bloodstream infection." Decide what your catheter-associated bloodstream infection problem is, have some good background data, and then figure out how to intervene.

One important strength of this study is that the investigators really did try to deal with the Hawthorne effect. Randomization addresses some of this, but there's always the possibility that people know they're being watched and so their behavior changes. They addressed that by looking at what happened when the study wasn't formally going on, and by looking at behaviors later on for decay, and they didn't see much change. It is actually possible to change process, to change behavior, and to do it in a sustained fashion. We shouldn't just throw up our hands and say, "we can't do it. We're upset with all these people talking about quality. I can't get there." That's not acceptable, but you have to be strategic in how you apply your resources. This is doable in a community setting.

That's a very important message because the vast majority of critically ill patients in the United States today (or anywhere, for that matter) are not in academic teaching hospitals. They're in more local institutions. The major weakness of this analysis, though an understandable one given the sample size, is that the investigators didn't specifically look at mortality as an endpoint and didn't specifically look at actual rates of VAP, catheter-associated bloodstream infection, or DVT between intervention and control hospitals. That would be the gold standard of what we want to see. Yes, you've proved that you've changed behaviors, but does changing those behaviors actually mean something?

Most of us would say that there are good data that preventing catheter-associated bloodstream infections helps to prevent morbidity and may prevent mortality, and the same is likely true for VAP prevention, so we are making that leap of faith here. It would have been nice if these data could have shown that, but they can't, because it wasn't the purpose of the study, and I don't think the investigators should be faulted for that; they had to do a study that was executable. I urge you to read the article. It's in the January 26 edition of JAMA, and I think it's important for all of us who practice in the ICU to understand how we can actually do quality science when we're studying the science of quality. This is Andy Shorr from Washington, DC.
