Tuesday, October 05, 2010

P4P will never work

--and this recent JAMA study highlights just one of the reasons: external factors, particularly patient demographic characteristics, influence scores:


Conclusion: Among primary care physicians practicing within the same large academic primary care system, patient panels with greater proportions of underinsured, minority, and non–English-speaking patients were associated with lower quality rankings for primary care physicians.


The authors, interviewed in the New York Times, offered some broader perspectives on why the measures fail. One is that payment incentives for some measures may divert attention and resources away from more important measures. Your hospital, for example, may have a great report card for pneumococcal vaccination (which doesn't work well at all), but does it have a well-developed post-resuscitation bundle? If not, it may be because that's not a performance measure. Another is that many important dimensions of quality are intangible and can't be measured well.


Ultimately, though, the authors, like many policy leaders, seem stuck on the idea that if we can just find the right measures and more sophisticated ways of measuring them, then the performance game will work:


“Pay-for-performance can work,” said Dr. Clemens S. Hong, lead author and a general internist at the Massachusetts General Hospital, “but we need more sophisticated measures to make sure we are actually measuring physician quality.”


But the very notion of performance measurement is flawed, and that's what the authors miss. DB sums it up nicely this way:


I do not understand the fascination with P4P. The accumulating data shows all the flaws of P4P. We should recall Onora O'Neill's words:

Yet faith in performance indicators is hard to dislodge. Every time one performance indicator is shown to be inaccurate, shown to encourage perverse behaviour, or shown to mislead the public, eager people imagine that they will find other performance indicators free of such adverse effects. Experience suggests that they are as mistaken as those who produced the last lot of indicators. 

The author of the JAMA paper has done a great service, but still has a delusion that if we just could find the right measures … We cannot find the right measures through audit. Perhaps we could use expert observation, but the bean counters would not like that. We want to achieve and judge excellence, but the task just is not amenable to audit data. We should drop that fantasy. P4P in medicine is, and will be, FLAWED.
