Brad Flansbaum addressed this topic recently, blogging at The Hospitalist Leader. The post, interestingly, had little to do with the blog's overall focus on leading hospitalist groups. One implication of the post seemed to be a pitch for a greater role of policy and expert panels weighing in on patient care decisions. Was he talking about government leaders, guideline authors or something else? He didn't say. If I knew of any particular political leanings on Dr. Flansbaum's part perhaps I could make a guess, but I don't.
Somehow the post rubs me the wrong way. It's difficult to criticize because, though it asks provocative questions and talks around and implies many things, it doesn't make a declarative statement about much of anything.
But at the risk of erecting the straw man (with apologies in advance if I do) I'll try to parse it out. First, the background for the post is a new study in the Journal of General Internal Medicine (JGIM) which compared the way a sample of doctors and a sample of patients looked at numbers from a hypothetical clinical trial:
Respondents were asked to interpret the results of a hypothetical clinical trial comparing an old and a new drug. They were randomly assigned to the following framing formats: absolute survival (new drug: 96% versus old drug: 94%), absolute mortality (4% versus 6%), relative mortality reduction (reduction by a third) or all three (fully informed condition). The new drug was reported to cause more side-effects...
RESULTS: The proportions of doctors who rated the new drug as more effective varied by risk presentation format (abolute survival 51.8%, absolute mortality 68.3%, relative mortality reduction 93.8%, and fully informed condition 69.8%, p < 0.001). In patients these proportions were similar (abolute survival 51.7%, absolute mortality 66.8%, relative mortality reduction 89.3%, and fully informed condition 71.2%, p < 0.001). In both doctors (p = 0.72) and patients (p = 0.23) the fully informed condition was similar to the absolute risk format, but it differed significantly from the other conditions (all p < 0.01). None of the differences between doctors and patients were significant (all p > 0.1).
There's a lot we could unpack here, but what are the essential findings of the study? First, we all, doctors and patients alike, are subject to framing bias. No surprise there. Then there was the fact that docs and patients responded similarly when presented only the relative risk reduction. No surprise there, either. In fact it says very little, because the information presented provided no test of critical analytic skill: with only the relative risk reduction in hand, the respondent has no way to assess the effect of framing. The astute response to such a scenario would be "it depends" (on the baseline risk, the raw numbers, etc.), but I'm sure the survey questions didn't offer that option. What might raise some eyebrows is that patients and doctors still responded similarly when the absolute risk reduction and the comprehensive information were presented.
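The point about "it depends" is easy to show with a few lines of arithmetic. Here is a minimal sketch (the function name and structure are mine, not the paper's) of how the study's single trial result yields all three framing formats, and how the same "reduction by a third" can mean very different absolute benefits at different baseline risks:

```python
def framing_formats(surv_new: float, surv_old: float) -> dict:
    """Express one trial result on absolute and relative scales."""
    mort_new = 1 - surv_new
    mort_old = 1 - surv_old
    return {
        "absolute_survival": (surv_new, surv_old),        # e.g. 96% vs 94%
        "absolute_mortality": (mort_new, mort_old),       # e.g. 4% vs 6%
        "absolute_risk_reduction": mort_old - mort_new,   # 2 percentage points
        "relative_mortality_reduction": (mort_old - mort_new) / mort_old,  # one third
    }

# The study's hypothetical trial: 96% vs 94% survival.
trial = framing_formats(0.96, 0.94)
print(f"ARR: {trial['absolute_risk_reduction']:.1%}")       # 2.0%
print(f"RRR: {trial['relative_mortality_reduction']:.1%}")  # 33.3%

# The same "reduction by a third" at a tenfold lower baseline risk
# (0.3% vs 0.2% mortality) buys far less in absolute terms:
small = framing_formats(0.998, 0.997)
print(f"ARR: {small['absolute_risk_reduction']:.1%}")       # 0.1%
print(f"RRR: {small['relative_mortality_reduction']:.1%}")  # 33.3%
```

A respondent shown only "reduction by a third" cannot distinguish the first scenario from the second, which is why that format tested nothing but susceptibility to framing.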
The authors of the paper were appropriately guarded in their conclusion:
CONCLUSIONS: Framing bias affects doctors and patients similarly. Describing clinical trial results as absolute risks is the least biased format, for both doctors and patients. Presenting several risk formats (on both absolute and relative scales) should be encouraged.
This paper addressed framing bias. It did not address whether doctors are better in the real world at critically appraising the medical literature than patients. It did not purport to do so. The hypothetical scenarios presented were too simple to be a meaningful test of critical appraisal. To conclude that doctors are no better at interpreting medical literature than the lay public is not only ridiculous on its face (a little Bayesian thinking might be in order here) but goes far beyond anything demonstrated by the study. Yet Dr. Flansbaum dances around that very idea.
So, again trying hard to avoid the straw man, here are my reactions to a couple of the other questions raised in Dr. Flansbaum's post:
Are auto mechanics more worthy of their fiduciary duty than doctors?
The opening sentences of Dr. Flansbaum's post read:
I assume, incorrectly perhaps, that mechanics have a basic knowledge of their craft such that routine auto repairs require little effort. The tasks do not supersede the expected competency of the repairperson, and the customer can expect a car that operates at the time of pick up. A small percentage of jobs may stretch that assumption, but that is okay by me.
But the snark in the next two paragraphs hints that he doesn't feel quite the same about doctors.
Do doctors need policy setting expert panels in order to practice EBM?
From Dr. Flansbaum's post:
I also observe that politicians object to “meddling” when EBM-based policies from expert committees passively (or actively) affect the doctor-patient relationship, especially as it relates to decision-making and the counsel we provide. Just watch the nightly news—sound bites abound. This relationship is sacrosanct after all, and our advice is authoritative and 98.7% correct. Who would question a physician after all?
Well, first, I know of no pundits or politicians who assert that we should never question doctors, or that doctors are authoritative or 98.7% correct. Now if Dr. Flansbaum is pitching for more policy-based control of physicians' practices it's strange that he would invoke EBM. EBM, much like "patient safety" and "antibiotic stewardship," is one of those catchy terms all too easily used to bolster a specious argument.
The true notion of EBM does not apply here at all. In fact, according to David Sackett's original definition, EBM can only be applied with individual clinical expertise at the level of the individual patient. So there can be no such thing as "EBM-based policies from expert committees." I've heard many arguments in favor of more central control of doctors' practices. Though I disagree with that position I respect many of the arguments. Just don't invoke EBM. However much there may be to like about central control of medical practice, it isn't EBM.
So back to the title of the post. What kind of help do we really need? As I've said before the focus needs to be on improving access to educational resources so doctors can more quickly and easily apply the best external evidence to individual patient care.
Oh, BTW---were the editors of JGIM in too big a hurry to get this paper out, or were the spell checkers at the Journal on the blink that day?