Friday, December 05, 2014

When it comes to Bayesian statistics are we dumb as rocks?

Apparently the bloggers at Emergency Medicine Literature of Note think so, based on this paper, in which three quarters of survey respondents across a wide range of training and experience got the answer to this problem wrong:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person's symptoms or signs?”

The question assumes the test has a sensitivity of 100%.

One of the bloggers surveyed his residents and got a similar rate of wrong answers.

The answer, as revealed in the paper, is 1.96%. The authors accepted any “ballpark” answer (2% or less) as correct for survey reporting purposes. I got the right answer but cheated a little by consulting this resource.
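For anyone who wants to see the arithmetic, here is a minimal sketch of the Bayes calculation, assuming the sensitivity of 100%, false positive rate of 5%, and prevalence of 1/1000 given in the question:

```python
# Bayes' theorem: P(disease | positive test) =
#   (sensitivity * prevalence) /
#   (sensitivity * prevalence + false_positive_rate * (1 - prevalence))

prevalence = 1 / 1000        # 1 in 1000 people have the disease
sensitivity = 1.0            # assumed: the test misses no true cases
false_positive_rate = 0.05   # 5% of disease-free people test positive

true_positives = sensitivity * prevalence
false_positives = false_positive_rate * (1 - prevalence)

ppv = true_positives / (true_positives + false_positives)
print(f"Probability of disease given a positive test: {ppv:.2%}")  # ~1.96%
```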

So what's the real problem here? I think it's a little over the top to say we're dumb as rocks about Bayesian statistics, but the blog author is correct, in my opinion, in his assertion that, as a profession, our overall foundation in EBM is poor. (I would digress for a second to add that it goes way beyond our inability to do the math; our misunderstanding of EBM is pervasive on many levels.)

I think most of us understand Bayesian principles qualitatively. We know, for example, not to rely on the D-dimer assay as a rule out for VTE in a high risk population. That's Bayesian thinking. But the math is not something we do every day. The challenge question set a trap for the survey respondents by applying a test with good inherent characteristics (a low false positive rate) to a population with low disease prevalence. Unless you really stop and think, you're tempted to jump to an inappropriately high probability of disease.
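To make the prevalence point concrete, here is a hypothetical sketch of how the same test performs in low- and high-risk populations (the 30% pretest probability for the high-risk group is an illustrative assumption, not a figure from the paper):

```python
def ppv(prevalence, sensitivity=1.0, false_positive_rate=0.05):
    """Post-test probability of disease given a positive result."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same positive result means very different things in the two groups:
print(f"Low-risk population  (prevalence 0.1%): {ppv(0.001):.1%}")  # ~2%
print(f"High-risk population (prevalence 30%):  {ppv(0.30):.1%}")   # ~90%
```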
