According to Bob’s postmortem, what we really need is better science and a more circumspect approach; those who call for a moratorium on the performance movement, he argues, have gone too far:
And now we have Groopman and Hartzband arguing that we should take a “time out” on quality measures, leaving it to doctors to make their own choices since only they truly know their patients. Do we really believe that the world will be a better place if we went back to every doctor deciding by him or herself what treatment to offer, when we have irrefutable data demonstrating huge gaps between evidence-based and actual practice? Even when we KNOW the right thing to do (as in handwashing), we fail to do it nearly half the time! Do the authors really believe that the strategy should remain “Doctor Knows Best”; just stay out of our collective hair? Pullease...
Ouch. I kind of liked the Groopman and Hartzband piece. Yes, we need better quality. Yes, there is a huge gap between evidence and practice. The question is what we should do about it. That’s where Bob and I part company. I don’t believe the performance movement will get us there. I’ll go a step further and question what appears to be one of his basic premises: that the measurement and public reporting of performance have anything at all to do with real quality.
So why is the performance movement a failure? Let’s count the ways.
First, what about the science? In his post, Bob notes:
Some critics have even suggested that we put a moratorium on new quality measures until the science improves.
But the problem is not the science. The problem is its misappropriation by policy mavens, many of whom, I dare say, understand little about the real-world application of such science.
The science, for example, which informs us that the effectiveness of the adult pneumococcal vaccine is somewhere between slim and zilch, is just fine, thank you. What about the blood sugar debacle? There, the rush to clamp every hospitalized patient’s glucose between 80 and 110 mg/dL was based on the misinterpretation of perfectly sound science. But in their enthusiasm for a single study, the misguided policy wonks ignored one of the first principles of critical appraisal: ask yourself whether your patient would have been included in that study. Most hospitalists and intensivists working in the real world knew right off the bat that their patients were not represented by that study. In the case of perioperative beta blockers, the policy wonks extended the findings of good science far beyond what appropriate critical appraisal warranted.
In some cases science gave us an important, generalizable lesson for real-world care, but the policy wonks couldn’t see the utter folly of translating the evidence into a “metric.” Such was the case with the four-hour antibiotic rule.
Some performance measures were based on science that was strong, generalizable, mature, and straightforward to implement, yet they still failed. What could be more evidence-based and easier to implement, we thought, than the heart failure measures? So some of us were more than a little surprised when, in the OPTIMIZE-HF database, those measures turned out to be a bust! Why? I dissected the reasons here. The short version of that analysis is that while there’s no doubt the measures are evidence-based and underutilized, promulgating them as performance metrics may do all sorts of things that negate their effectiveness.
So what can we do about the horribly low uptake of evidence into practice? Bob cites data suggesting that it’s only around 50%; for many conditions it’s much lower. Multiple systemic and cultural barriers exist, and some of them have been created by the performance movement itself! (I explain here, here, and here.) The sheer number and complexity of those barriers should make it obvious that no easy fix exists. Any effective fix will be multifaceted and complex. It will involve education and the development of better tools to help doctors put evidence into practice. It won’t come from Washington in the form of more “metrics.”
Bob concludes his post with:
From where I sit, of all our options to meet the mandates to improve quality and safety, tenaciously clinging to a Marcus Welbian (and demonstrably low quality) status quo or creating tests that can be passed by appearing to be working on improvement seem like two of the worst.
I agree. But I wouldn’t stop at two. I’d add performance metrics to that list of failures.
1 comment:
Great post. As usual when this topic comes up, I am compelled to quote Goodhart’s Law, which states: “Once a measure is made a target it will lose the information content that would qualify it to play such a role.”