Newsletter Article


Issue of the Month: What Evidence Do We Need for Quality Improvement?

By Vida Foubister

The use of rapid response teams to identify and treat patients prior to a crisis is one of the Institute for Healthcare Improvement (IHI) 100,000 Lives Campaign's six strategies to prevent avoidable deaths.

But the clinical evidence supporting rapid response teams remains "complicated," leaving considerable tension between health care professionals who believe their use is an effective quality improvement intervention that should be implemented—now—and those who believe that further data are necessary to prove their value.

"Both are right," says Frank Davidoff, M.D., executive editor at the IHI and editor emeritus of the Annals of Internal Medicine. Observational data from individual institutions that have deployed rapid response teams (also called medical emergency teams) suggest that many of them can dramatically improve patient outcomes. However, a cluster-randomized controlled trial published in The Lancet last year was inconclusive, due to extremely wide variation in the effectiveness of these teams among the hospitals studied.

"How do you proceed in a field where two opposite sides are right? That's probably going to be true for a long time," Davidoff says.

Harvesting Local Knowledge
The attention to health care quality and safety that followed the Institute of Medicine's report, To Err Is Human, has led organizations nationwide to implement various interventions in an effort to improve care processes and outcomes. "What I would love is a way to learn from all this effort systematically," says Donald M. Berwick, M.D., M.P.P., IHI president and CEO.

But much of the evidence surrounding quality improvement remains imperfect. This is in part because those implementing the changes, typically frontline health care workers without extensive training in health sciences research, are primarily focused on improving the care provided at their institutions.

"Usually we are trying to improve in practice what has already been scientifically established to be efficacious treatment," says Paul B. Batalden, M.D., professor of pediatrics and of community and family medicine at Dartmouth Medical School. "The problem is that it is not regularly or dependably done. Or we might be trying to prevent errors by establishing methods of certain transmission of information."

Unlike the clinical sciences, in which the discovery and dissemination of new knowledge are paramount, sharing successful innovations is secondary for many involved in quality improvement efforts. Thus, the interventions that are implemented are rarely well-designed experiments that easily lend themselves to publication or replication by other institutions.

Further, health care organizations are complex systems. There is no guarantee that a quality improvement intervention that was successful in one hospital will be successful when applied to the same process in another hospital. "If the goal is the spread of an innovation, it is not sufficient to describe only the innovation," explains Paul E. Plsek, director of Paul E. Plsek & Associates Inc., in Complexity and the Adoption of Innovation in Health Care, a report that came out of the National Institute for Health Care Management Foundation conference. "We must also develop a language for describing the nature of the context in which it was successful."

What's more, while well-designed clinical trials remove context as a variable to ensure that the findings will be generalizable, "to improve care involves engaging the particular context—its processes, its habits, its traditions, its identity—so that the generalizable evidence might be made part of that context," Batalden says.

This means the particular context of the organization carrying out an intervention, and the various elements that made it successful in that environment, must be documented. Only then can other organizations tease out pieces of the change process that are most likely to be effective in their individual context.

"Quality improvement is not quality improvement is not quality improvement," says David H. Gustafson, Ph.D., a research professor in the department of industrial and systems engineering at the University of Wisconsin-Madison. "We need to not only understand whether quality improvement works, but we need to understand what aspects of it work and what aspects of it work for what kind of problem areas."

Implementing Lessons from Practice
In some respects, it's not surprising that there is conflicting evidence about the value of many quality improvement interventions. As Jeremy Grimshaw, MBChB, Ph.D., director of the Clinical Epidemiology Programme at the Ottawa Health Research Institute, points out, this is a relatively new area of research, there has been limited interdisciplinary research—involving social and organizational scientists—in the field, and funding for quality improvement research has been inconsistent.

The Agency for Healthcare Research and Quality, the main federal agency focused on improving the delivery of health care, receives a fraction of the funding Congress allocates to medical research ($320 million last year compared with the $29 billion received by the National Institutes of Health), and much of that funding is directed toward research on information technology. This leaves most researchers competing for grants provided by private organizations, though some health care institutions have recognized the value of local quality improvement initiatives and have begun funding health services research.

Those working in clinical research, which has long held up the randomized controlled trial as the gold standard, have only recently agreed to standards—the CONSORT (Consolidated Standards of Reporting Trials) guidelines—for reporting study findings.

Such standards also are needed for quality improvement research, says Davidoff, who recently proposed draft publication guidelines for the field. "The reporting end is not a sideline," he says. "It's an important and integral part of the scientific process and it has not been appreciated in the quality improvement arena."

These standards, however, aren't enough, says Kaveh G. Shojania, M.D., Canada research chair in patient safety and quality improvement and assistant professor of medicine at the University of Ottawa. Study designs also need to be improved. In fact, it's critical to do so because a faulty quality improvement process, like a poorly studied drug, could lead to patient injury or death. "We're kidding ourselves if we think that just because we're working on quality improvement we can't cause harm," Shojania says.

Additionally, most institutions have a relatively small budget for quality improvement activities and they cannot afford to waste it on ineffective and inefficient projects. And failures, when they do occur, can be frustrating for clinicians who have taken time out of their busy schedules to implement the change—potentially hampering future quality improvement efforts.

Despite those risks, Shojania and Grimshaw conclude in an article published in Health Affairs last year that many quality improvement efforts "proceed on the basis of intuition and anecdotal accounts of successful strategies for changing provider behavior or achieving organization change." This includes, Shojania says, interventions that try to replicate the successes of "exemplary organizations" and those that are based on simple before-and-after studies, which fail to take institutional or environmental changes into account.

Because randomized controlled trials are not always feasible, Shojania recommends that researchers look to other designs that, though less rigorous than randomized trials, are less demanding and still provide better data than simple observational studies. These include controlled before-and-after studies, which measure change at both the intervention institution and a comparable institution, and interrupted time series, which assess change at several time points.
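To make the interrupted time series idea concrete, the standard analytic approach is segmented regression: model the outcome's baseline level and trend, then estimate a level change and trend change at the intervention point. The sketch below uses entirely synthetic data and hypothetical variable names (nothing here comes from the studies discussed in the article); it assumes a monthly infection rate with a genuine drop of 2.0 at the intervention.

```python
# Segmented regression for an interrupted time series: estimate the
# level change in a monthly infection rate at the intervention point,
# after accounting for any secular trend. Data are synthetic.
import numpy as np

months = np.arange(24)                      # 12 months pre, 12 months post
post = (months >= 12).astype(float)         # indicator: after intervention
time_since = post * (months - 12)           # months elapsed post-intervention

# Synthetic rates: flat baseline near 5.0, true level drop of 2.0
rng = np.random.default_rng(0)
rate = 5.0 - 2.0 * post + rng.normal(0, 0.1, 24)

# Design matrix: intercept, secular trend, level change, trend change
X = np.column_stack([np.ones(24), months, post, time_since])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

print(f"estimated level change at intervention: {coef[2]:.2f}")
```

Because the model includes the pre-intervention trend, a decline that was already under way before the change would show up in the trend term rather than being misattributed to the intervention—the key advantage over a simple before-and-after comparison.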

Two Distinct Views Remain
"At this point in time, there have been way too many stories told about what has happened without really good quantitative evaluations," says Gustafson.

But sometimes, says Lucian Leape, M.D., adjunct professor of health policy at Harvard School of Public Health, observational data can be "pretty hard to argue with." He points to the recently published finding that 68 hospitals in Michigan were able to eliminate central line infections for six months, replicating the outcome of a similar program led by Peter Pronovost, M.D., Ph.D., medical director of the Center for Innovation in Quality Patient Care at Johns Hopkins University. In all, the Michigan project, with 127 intensive care units participating, is estimated to have saved 1,578 patient lives, avoided 81,020 ICU days, and saved more than $165.5 million in health care costs.

The program's effectiveness, in this example, was measured by comparing hospitals' experience before and after the intervention. "Now, in classic scientific research, that's not considered as good as a randomized trial," says Leape. "On the other hand, when [the number of central line infections] goes down to zero for six months … it wasn't just a good idea, it was an idea that had been worked out in detail."

Research led by Susan D. Horn, Ph.D., senior scientist at the Institute for Clinical Outcomes Research in Salt Lake City, Utah, has resulted in the development of another alternative to the randomized controlled trial, a comprehensive observational approach. Called clinical practice improvement, the approach is structured to decrease the biases generally associated with observational research by using bivariate and multivariate associations among patient characteristics, process steps, and outcomes. The data generated by these studies can convince organizations to try interventions, even when they differ from their internal recommendations. "If we didn't have the data, the conversation, frankly, would end," she says. "It definitely removes barriers."
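The distinction between bivariate and multivariate association matters because of confounding by indication: sicker patients tend to receive a process step more often, so the crude comparison can make a helpful step look harmful. The sketch below illustrates this with wholly synthetic data and hypothetical variable names (it is not drawn from Horn's studies): the step truly lowers a bad outcome by 1.0, yet the unadjusted comparison points the wrong way until patient severity is included in the model.

```python
# Illustration of confounding in observational data: a process step that
# truly helps looks harmful in the crude (bivariate) comparison because
# sicker patients receive it more often. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
severity = rng.normal(0, 1, n)                      # patient characteristic
step = (severity + rng.normal(0, 1, n) > 0) * 1.0   # given more to sicker patients
outcome = 2.0 * severity - 1.0 * step + rng.normal(0, 1, n)  # true effect: -1.0

# Bivariate: crude difference in mean outcome by process step (confounded)
crude = outcome[step == 1].mean() - outcome[step == 0].mean()

# Multivariate: regress outcome on the step and severity together
X = np.column_stack([np.ones(n), step, severity])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"crude effect: {crude:+.2f}, adjusted effect: {coef[1]:+.2f}")
```

The crude difference comes out positive (apparent harm) while the severity-adjusted coefficient recovers the true benefit, which is the kind of bias reduction the clinical practice improvement approach is designed to achieve.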

Pushing Forward on Multiple Fronts
Overall, the experts here agree that many approaches are necessary to evaluate the effectiveness of quality improvement, and that both rigorous scientific experimentation and experience-based learning have important roles to play in this process.

Instead of developing new forums to disseminate quality improvement information, David C. Leach, M.D., executive director of the Accreditation Council for Graduate Medical Education, believes that "the task for all of us is to organize information in a way that makes it more useful."

Davidoff, of the IHI, also wants the field to increase the use of social science approaches such as narrative studies to evaluate the effectiveness of experiential learning as a quality improvement tool. (The role of narrative in clinical research was discussed in a recent article published in the Journal of the American Medical Association.)

"What happens when you apply the methods of science to the study of experiential learning?" Davidoff asks. "The more you learn about how to learn by experience, the better you can do that kind of learning."
