Friday, November 20, 2009

Has public reporting impacted health care at all?

Public reporting of health care performance measures has been a hot topic for several years. It already had a good head of steam in 2005, my first year of blogging, when policy mavens hoped it would be transformative. Last year Bob Wachter even wondered if we were entering a golden age of transparency.

But how was it supposed to work? The conventional wisdom was that it would work by influencing consumers' choices. That never came to pass, though, and Bob suggested another effect:

Shockingly, in the past five years, these mantras have proven to be way off the mark. Instead,

1. Some rudimentary quality data has been placed on the Web.
2. Few people are looking at these data, and virtually no real patients are making their healthcare purchasing decisions based on them.
3. And yet… hospitals are doing organizational cartwheels trying to improve their performance on the publicly reported indicators.

Although #2 is surprising, #3 is truly flabbergasting -- it demonstrates the power of shame and embarrassment as motivating forces.


So, shame and embarrassment are motivating hospitals! Yes, it's true, but is it a good thing? In my view such motivation amounts to little more than a form of institutional narcissism: it's all about us (the institution) and our report cards, and has little to do with patients. (As regular readers know, I believe that, with few exceptions, performance measures are weak, sometimes non-evidence-based, and occasionally even harmful.)

OK, all cynicism aside: has this transparency movement had any meaningful impact at all? Up to now we've had no evidence one way or the other. That brings me to the EFFECT study, presented at AHA 2009 and published online in JAMA. The conclusion in the published abstract doesn't give you the real flavor of the study, but here's what it said:

Results The publication of the early feedback hospital report card did not result in a significant systemwide improvement in the early feedback group in either the composite AMI process-of-care indicator (absolute change, 1.5%; 95% confidence interval [CI], –2.2% to 5.1%; P=.43) or the composite CHF process-of-care indicator (absolute change, 0.6%; 95% CI, –4.5% to 5.7%; P=.81). During the follow-up period, the mean 30-day AMI mortality rates were 2.5% lower (95% CI, 0.1% to 4.9%; P=.045) in the early feedback group compared with the delayed feedback group. The hospital mortality rates for CHF were not significantly different.

Conclusion Public release of hospital-specific quality indicators did not significantly improve composite process-of-care indicators for AMI or CHF.


So, all in all, public reporting had little impact. I'll get back to that barely statistically significant improvement in MI mortality in a moment (note that the lower bound of the 95% confidence interval, 0.1%, just excludes zero, and P=.045 sits just under the conventional .05 threshold).

This study had a somewhat roundabout design. Both comparison groups of hospitals were subject to public reporting. The difference was that in the “intervention group” the reporting came early and was accompanied by a media blitz; in the comparison group the reporting came later, with no media blitz. According to post-reporting surveys the intervention hospitals did scramble to enhance their performance, but in a heterogeneous and disorganized manner, which accounts for the lack of a measured difference in overall adherence to the indicators between the two groups.

Now what about that improvement in MI mortality? It turns out it was largely attributable to an improvement in STEMI mortality. Although the percentage of hospitals achieving time-to-reperfusion benchmarks didn't differ between the two groups, the findings suggested that in the intervention group there may have been a shorter time to reperfusion. If so, it appears to have been driven entirely by time to administration of thrombolytic agents (PCI was used infrequently, but, hey, this was Canada). The only difference in process indicators attributable to the intervention was the frequency of thrombolytics administered in the ER, before transfer to intensive care. Of note, the survey indicated that ten hospitals in the intervention group decided, after the reporting, to allow ER physicians to administer thrombolytics without specialty consultation.

Although I believe performance measures tend to be weak, time to reperfusion is a notable exception. The process change that gave ER docs autonomy to administer thrombolytics may have made a difference.

So, after all these years and all this momentum, evidence for an impact of public reporting is slim. Institutional narcissism, driven as it is by weak and perfunctory process measures, has up to now been largely ineffective. The quality movement needs to adopt a more thoughtful and nuanced approach to improvement.

1 comment:

Anonymous said...

My hospital is "doing cartwheels" trying to get the foley out on post-op day #2 - even for pts in the ICU on the vent.

Do you really pick who should do your CABG or aortic valve replacement by how many of his patients get their foley out on hospital day #2? The data is meaningless...