
In Focus: Paying Attention to Performance Data

You can listen to a conversation with the researchers by playing the audio file (5:57), posted below.

[Photo: Ashish K. Jha and Arnold M. Epstein]

[Audio: The Researchers' Perspective (MP3)]

By Sarah Klein

When Harvard School of Public Health faculty members Arnold M. Epstein, M.D., and Ashish K. Jha, M.D., M.P.H., published their first analysis of data supplied by hospitals to the Hospital Quality Alliance in 2005, the results weren't very reassuring.

Not only was the quality of hospital care for three conditions—acute myocardial infarction (AMI), heart failure (HF), and pneumonia—less than optimal, it varied widely by geographic region and type of hospital. It also varied across conditions within individual hospitals, disproving the popular notion that a good hospital is good at treating all problems. A subsequent study by Epstein and Jha showed why the widespread variation was problematic: poor performance on these indicators, which measure adherence to evidence-based processes of care, was associated with higher risk-adjusted mortality for each of the three conditions.

It appears that hospitals have been paying attention to their performance. An update of the analysis, conducted by the same Harvard researchers for the current issue of Quality Matters, shows hospitals nationwide made great strides in improving their performance in just 18 months. The median score, a summary of performance on all 10 quality indicators, increased five points to 89.1, according to the second analysis, which compared HQA data from July 2005 through June 2006 with data from January through December 2004. The improvement was even more dramatic in the treatment of pneumonia and heart failure, for which performance had been low in the first analysis. The data also showed that, for all three conditions, the gap between the scores of the worst-performing hospitals and the best-performing hospitals narrowed.

The second analysis also examined data supplied by hospitals on 12 additional measures, which cover the original three conditions and one new area: the prevention of surgical infections. While hospitals did not perform as well on the new measures as on the older ones, they were clearly paying attention to them: adding the new measures lowered overall performance only slightly.

Quality Matters asked Epstein and Jha to share their interpretation of the results from this second analysis.

Q: Were you surprised by the results?

Jha: The magnitude of increase in quality scores across the board was surprising. We saw dramatic improvements in scores in all conditions. The biggest surprises were in areas like pneumonia care, where national scores increased nearly 10 points. We had expected that, while individual organizations might show dramatic improvements, national scores would rise by one or two points for a condition. The magnitude of the change seems to suggest that public reporting itself can be a powerful catalyst for improvement.

Q: In what areas did you see the greatest improvement and why?

Jha: The largest improvements were in those conditions where the initial scores were lower, such as congestive heart failure and pneumonia. This is, of course, not surprising since these conditions represented the greatest opportunities for improvement.

Epstein: We have seen this pattern of improvement in several studies of pay for performance. When we draw attention to performance, those at the lower end of the spectrum seem to make the greatest improvements.

Q: The current data set tracks hospital performance on 22 measures, more than double the number in the previous year. Did adding measures lower hospitals' scores?

Jha: Expanding the data set had only modest effects. When we included measures that were recently added to the public reporting program, there was only a slight decrease in national performance on these quality scores.

Q: What do you conclude from that?

Jha: We expected to see a bigger drop because hospitals had a much shorter time period to focus on these new measures. The fact that the drop was modest suggests that hospitals likely knew these measures were going to be adopted for public reporting and began to focus on these areas.

Q: In a year-to-year comparison on the same measures, it appears the low performers improved significantly in some categories, while top performers stayed more or less where they were. What do you make of this?

Jha: Given that the top performers were already well above 90 percent, it is not surprising that their performance didn't change very much. First, they had very little room to improve. Second, improving when your performance is already very high is very costly, and we suspect that most high-performing hospitals wisely chose to invest their quality improvement efforts elsewhere.

Epstein: In two studies of pay-for-performance programs, one by Meredith Rosenthal, Ph.D., and colleagues and the other by Peter Lindenauer, M.D., M.Sc., and colleagues, we have seen the largest improvement by medical groups and hospitals that initially had the lowest performance and were not likely to attain levels of performance above the pay line. There are two potential, overlapping explanations: improvement is easier when initial performance is low; and pay for performance not only motivates providers through the financial incentive, it also has signaling value that heightens the impact of public reporting and particularly affects low-performing providers.

Q: Did you notice any significant changes in the state rankings?

Jha: While there are some differences in state rankings, those that tended to be high performers in the last report are still the high performers, and vice versa.

Q: Are we setting the bar high enough using composite scores? Right now, hospitals are getting credit for providing some of the required care. Would we see more improvement if hospitals only got credit when they followed all of the recommended guidelines? If such an all-or-nothing approach were used, how would you introduce it without provoking a backlash from the provider community?

Jha: I do think that the utility of these composite measures is diminishing as performance gets very high. All-or-nothing measures would, of course, be more challenging and therefore give providers something to strive toward. The backlash from the provider community would be manageable if the HQA program chose to go down this road.

Epstein: I think we need to wrestle more with the issue of cost-effectiveness. We need to try to improve performance for quality indicators that represent the greatest opportunity to improve health outcomes. Similarly, we need to focus attention on indicators where improving current levels of performance can be done cost effectively. Getting from 99 to 100 percent may be laudable but difficult and expensive to achieve.

Q: How else would you refine the current measures?

Jha: We clearly need more breadth. These measures are narrowly tailored toward care for three common medical conditions (plus infection prevention in surgery). Given that these three conditions represent less than 15 percent of hospital care for the elderly, developing measures for other conditions and focusing on other aspects of care, such as patient safety, is critical.

Q: At what point do such process measures become too prescriptive?

Jha: Given that each of these measures is evidence-based, I favor using them, even if they are prescriptive. The issue is that people might choose to focus on these and ignore other aspects of care. Therefore, more global measures, such as patient outcomes, are valuable.

Q: Is there a risk of unintended consequences? Could the zeal to hit targets lead to overtreatment?

Jha: You always have to worry about unintended consequences. The solution is to build measures into the data system that test for them.

Q: Should we retire measures once hospitals achieve the desired results?

Jha: There is very little value in active surveillance of measures where everyone has achieved performance in the mid to upper 90s. Therefore, removing these measures from an active surveillance or incentives program makes sense.

However, given that we don't know if "retiring" them will lead to a drop in future performance on these measures, some ongoing monitoring to ensure that these essential aspects of care are still being delivered at a high rate is important.

Q: What do you think about exempting hospitals that demonstrate consistently high performance? How would you define high performance: a perfect score or something less?

Jha: Even high-performing hospitals should be subject to ongoing performance measurement and monitoring. This performance measurement program is not particularly onerous and ensuring that there is universal participation is valuable.

Perfect scores are difficult to achieve and not necessarily the goal. Clinical documentation is rarely perfect and, occasionally, organizations will fail to achieve perfection due to issues of documentation. It is more important that an organization be near perfect consistently across conditions over a long period of time.
