In Focus: Making Mortality Data More Meaningful

By Sarah Klein

Publishing mortality data is a bit like grabbing the third rail on a train track. Few things agitate and mobilize hospitals more quickly than publicly comparing the rate at which patients die after receiving care in their institutions. The first time the federal government attempted it, in the early 1990s, the backlash was so intense the government scrapped the program.

That release, which examined death rates at hospitals nationwide for nine conditions and surgical procedures, was arguably premature: it was forced upon the government by a flurry of Freedom of Information Act requests before statisticians had a chance to make a full set of risk adjustments. Hospitals pounced, claiming the data unfairly prejudiced patients against their institutions, while pointing out legitimate explanations for their low rankings. Even a concession from top officials that the information was not yet ready for public consumption failed to stop the criticism.

A Second Try
The Centers for Medicare and Medicaid Services (CMS), which has recently made value-based purchasing a priority, was understandably cautious when it decided to release some data on hospital mortality rates for Medicare beneficiaries. The agency chose to make public only two measures, death rates within 30 days of admission with a diagnosis of heart failure or acute myocardial infarction, and did so under the most generous of terms. (Next year, CMS will release mortality data for Medicare beneficiaries hospitalized with pneumonia.) Instead of comparing hospitals with top performers, CMS ranked them against an average hospital. The public learned only whether a participating hospital's results were better than, in line with, or worse than what would be expected of an average institution. The actual mortality rates were not released unless hospitals chose to disclose the numbers themselves.

"The feds were trying to come up with a statistic that would be as close to unassailable as possible," says David Schulke, executive vice president of the American Health Quality Association, the group representing Quality Improvement Organizations, which are charged with helping those deemed poor performers to improve.

Partly as a result of these methodological choices, the results seem improbable to many. More than 98 percent of the nation's hospitals were deemed to perform as expected in treating heart attack and heart failure. For heart attack, only 17 hospitals performed better than expected, while seven performed worse. In treating heart failure, 38 hospitals performed better than expected, while 35 performed worse. Even the most unsophisticated observers assume "there are more good hospitals and probably more bad hospitals," says Denise Love, M.B.A., executive director of the National Association of Health Data Organizations, which represents public and private data agencies.

The method served a valuable purpose, says Harlan Krumholz, M.D., professor of medicine, epidemiology, and public health at Yale University, who developed the analytical methodology with colleagues from Harvard and Yale universities. It helped reacquaint hospitals with public reporting of health outcomes and encouraged quality improvement and self-examination rather than embarrassment. "We want to get people engaged in a culture of quality improvement and a focus on outcomes from the patient's perspective," rather than arguing about the data, Krumholz says. "We were very conservative."

But many observers say that engagement hasn't happened. Some hospitals are still disputing the validity of the data, arguing they aren't properly adjusted to take into account palliative care cases and patients with do-not-resuscitate orders. At the same time, some data experts are arguing the methodology has not gone far enough in identifying problem hospitals.

Methodology Examined
CMS adjusted the data using a method designed to ensure that hospitals with low volumes would not be unduly penalized by chance variation in their small caseloads. In those instances, the agency factored in the results of all hospitals. The statistical technique allows CMS to increase its confidence in the reliability of the data, but it lessens the likelihood that low-performing hospitals will stand out, says Edward Hannan, Ph.D., a professor of health policy, management, and behavior at the University at Albany School of Public Health, who helped create the model New York State uses to report mortality rates for coronary artery bypass graft surgery at its hospitals.

The method can be problematic when analyzing procedures for which volume affects outcomes. Because hospitals that perform fewer procedures tend to get inferior results, relying on data from other hospitals to judge them can mask that pattern. "You're disguising the result," Hannan says. The method may also flatter high-volume hospitals relative to others: statisticians have a higher degree of confidence in their results and therefore don't blend them with outcomes from other hospitals.
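
To see why, consider a deliberately simplified sketch of the kind of blending Hannan describes. This is a toy empirical-Bayes-style calculation, not CMS's actual model, which is a hierarchical regression with clinical risk adjustment; the shrunken_rate function, the prior_weight constant, and every figure below are hypothetical.

```python
# Toy illustration of the "blending" Hannan describes. This is NOT
# CMS's actual model (a hierarchical regression with clinical risk
# adjustment); the function and all numbers here are hypothetical.

def shrunken_rate(deaths, cases, overall_rate, prior_weight=100):
    """Blend a hospital's observed mortality rate with the overall rate.

    prior_weight is a made-up tuning constant: the number of
    "pseudo-cases" at the overall rate mixed into each hospital's data.
    The fewer cases a hospital has, the more its estimate is pulled
    toward the overall average.
    """
    return (deaths + prior_weight * overall_rate) / (cases + prior_weight)

OVERALL = 0.10  # assume a 10 percent average 30-day mortality rate

# Both hospitals below have an observed mortality rate of 20 percent.
# After blending, the small hospital no longer stands out...
print(shrunken_rate(deaths=4, cases=20, overall_rate=OVERALL))      # ~0.117
# ...while the large hospital, whose data statisticians trust, does.
print(shrunken_rate(deaths=400, cases=2000, overall_rate=OVERALL))  # ~0.195
```

With identical observed death rates, the small hospital ends up looking near average while the large one remains an outlier, which is exactly the trade-off between statistical confidence and the visibility of poor performers described above.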

But Krumholz argues that the adjustment is not only important, it is essential to avoid overstating problems in hospital outcomes. When declaring a hospital an outlier, "You need to be very, very confident," he says.

The hospitals haven't criticized this aspect of the analysis, perhaps because the method favors them. Instead, many institutions have voiced concerns that the data were not adjusted to account for hospice patients and patients with do-not-resuscitate orders, both of which, they claim, inflate hospitals' overall mortality figures. Another objection is that CMS relied on administrative claims data rather than clinical data from medical records.

There's a rationale for not using clinical data, experts say. "Acquiring detailed clinical information is incredibly expensive," says Lisa Iezzoni, M.D., M.Sc., associate director of the Institute for Health Policy at Massachusetts General Hospital and professor of medicine at Harvard Medical School.

Some states have begun to incorporate clinical information into mortality data, including Pennsylvania. One important benefit of doing this is the impact it has on providers. "I don't think we would have the support of the physician community [if we didn't]," says Joe Martin, communications director for the Pennsylvania Health Care Cost Containment Council. Others are watching Maine to see how its database, which combines administrative and clinical data from all payers in the state, is used.

In the meantime, there may be cost-efficient alternatives to collecting clinical data. A study by Michael Pine, M.D., M.B.A., president of the consulting firm Michael Pine & Associates Inc., demonstrated that combining administrative data with laboratory results closely approximates the medical record. "What's really wonderful about this is that more than 80 percent of the hospitals in the U.S. have lab data electronically," says Anne Elixhauser, Ph.D., a senior research scientist at the Agency for Healthcare Research and Quality (AHRQ) who worked on the study. AHRQ is currently funding projects at hospitals to develop a means of linking lab results to administrative claims for this purpose.
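
The basic linkage step such projects involve might look something like the sketch below, assuming two hypothetical tables keyed by an invented encounter_id field; real linkage work must also handle patient matching, timing windows, and inconsistent lab units.

```python
# Minimal sketch of the basic linkage step, assuming two hypothetical
# tables keyed by an encounter identifier. Real projects must also
# handle patient matching, timing windows, and inconsistent lab units,
# none of which is shown here.
import pandas as pd

claims = pd.DataFrame({
    "encounter_id": [101, 102],
    "diagnosis": ["heart failure", "acute MI"],
})
labs = pd.DataFrame({
    "encounter_id": [101, 102],
    "creatinine_mg_dl": [2.4, 1.1],  # adds a clinical severity signal
})

# Enrich each claim with lab values to sharpen risk adjustment.
enriched = claims.merge(labs, on="encounter_id", how="left")
print(enriched)
```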

Future Reports
Other observers hope CMS will make subsequent iterations of the mortality reports more meaningful to consumers by focusing on conditions for which patients shop for medical care. Patients suffering from a heart attack will be hard-pressed to research hospitals before going to one for treatment. But a woman with nine months of lead time will put a lot of research into the hospital where she will give birth, says Judith Hibbard, Ph.D., professor of health policy at the University of Oregon. Patients also prefer measures that are more intelligible, such as infection rates and drug errors. "The measures themselves don't lend themselves to thoughtful choice," Hibbard says.

One change to Medicare data that is already under way is the addition of present-on-admission indicators, which help distinguish conditions that arise during the hospital stay (such as hospital-acquired infections) from those patients have at admission. Such indicators have been used in state programs in California and Pennsylvania for more than a decade. CMS plans to adopt them in 2008 as part of an effort to stop paying for preventable complications of care.
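
A rough sketch of what such a flag makes possible, using invented field names and a simplified yes/no flag rather than the actual coding standards:

```python
# Hypothetical sketch of what a present-on-admission (POA) flag enables.
# Field names and the simple "Y"/"N" flag are invented for illustration;
# real claims use standardized diagnosis codes and a richer set of
# POA values.

diagnoses = [
    {"code": "heart failure", "poa": "Y"},  # present when admitted
    {"code": "urinary tract infection", "poa": "N"},  # arose during the stay
    {"code": "sepsis", "poa": "N"},  # arose during the stay
]

# Without the flag, all three conditions would look identical in a claim;
# with it, hospital-acquired problems can be separated out.
hospital_acquired = [d["code"] for d in diagnoses if d["poa"] == "N"]
print(hospital_acquired)  # ['urinary tract infection', 'sepsis']
```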

Using present-on-admission indicators is not as straightforward as it seems. "The examples of states have shown it is hard to make it happen in a way that is consistent across hospitals," Dr. Iezzoni says. Another risk is that hospitals will use the coding to game the system, making their care appear better than it is.

Despite the challenges to Medicare's recent release, many are happy to see the government return to outcomes reporting. Most U.S. hospitals have been reporting data on adherence to evidence-based care processes for acute myocardial infarction, congestive heart failure, and pneumonia to CMS via the Hospital Quality Alliance since 2003; outcomes represent the next frontier. The mere fact that someone is reporting death rates tends to focus providers' attention on their quality, even when they disagree with the methodology, Love says. "They may be defensive, but they do deploy," she says. Pennsylvania officials have seen the same effect. When the state's mortality reporting program began 10 years ago, "mortality rates were statistically significantly higher [than they are today]," Martin says.

It's also important that information makes providers a little nervous, Love says. "Nothing happens unless people are a little uncomfortable," she says.

Editor's Note: A previous version of this In Focus article incorrectly stated that CMS originally attempted to release mortality data in the mid-1980s. The original attempt was in the early 1990s.
