As I mentioned in the previous post, a non-quantitative description of Bayesian analysis is that new information must always be interpreted in light of prior information. That's intuitive to most clinicians when they think about clinical trial results: research findings that disagree with prior research are likely to be viewed with more skepticism. Although Bayesian analysis applies quantitative methods to the incorporation of prior knowledge, there are subjective elements. This subjectivity has been the basis for criticism of Bayesian analysis. Such criticism is misinformed, because current frequentist methods, with their well-known p values and confidence intervals, take no account of prior knowledge and are not as objective as they appear. In fact, conclusions about truth in the real clinical world based on the low p value of a single study may be little more than leaps of faith.

P values in research reports are analogous to the specificity of laboratory tests in clinical diagnosis, as discussed in the previous post. Neither specificity levels nor p values directly address whether a finding is true. In fact, both are based on negative assumptions (absence of disease and absence of treatment effect in a clinical trial, respectively). The p value calculation assumes no treatment effect (the null hypothesis); the value itself denotes the probability of obtaining the experimental result, or one more extreme, if the null hypothesis is true.
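To make the diagnostic analogy concrete, here is a minimal sketch (the function name and the illustrative numbers are my own, not from the post) applying Bayes' theorem to a laboratory test. Even a 95%-specific test yields mostly false positives when the disease is rare, which is why specificity alone, like a p value alone, does not tell you whether a positive result is true:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' theorem:
    P(disease | positive test) = true positives / all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: a sensitive (99%) and specific (95%) test
# for a disease with 1% prevalence.
print(round(ppv(0.01, 0.99, 0.95), 3))  # ~0.167: most positives are false
```

Despite the test's impressive specificity, a positive result here means only about a one-in-six chance of disease, because the low prior probability (prevalence) dominates.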
To estimate the probability that a research finding reflects generalizable truth, one must determine the posterior probability (the probability that the hypothesis is true given the prior knowledge and the new data). This is analogous to the positive predictive value I referenced in part 1 and is ignored by the current frequentist methods of evidence based medicine (EBM). EBM works fairly well most of the time, when research results are more or less in line with prior knowledge. But because it ignores prior knowledge, especially basic science, it fails in the evaluation of implausible claims. Consider some of the wooiest of woo, e.g. homeopathy and therapeutic touch. Occasional “research” studies will show positive treatment effects with statistically significant p values. But prior knowledge from basic science, bringing to light the implausibility of such claims, would result in a pre-test probability (prior probability) of virtually zero. Both quantitative Bayesian analysis and plain old common sense tell us that the degree of proof needed to overcome such overwhelming prior odds would have to shake the world of chemistry and physics, requiring a fundamental rewriting of the textbooks.
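The same arithmetic applies to a clinical trial if we treat the study's statistical power as its "sensitivity" and its significance threshold alpha as "1 − specificity." This is a sketch under those assumptions (the function name and priors are mine, chosen for illustration), showing why an identical p < 0.05 result means very different things for a plausible treatment and for something like homeopathy:

```python
def study_posterior(prior, power, alpha):
    """Probability the hypothesis is true given a 'significant' result,
    treating power as sensitivity and alpha as the false-positive rate."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

# Plausible claim (prior 0.50): a significant result is fairly convincing.
print(round(study_posterior(0.50, 0.80, 0.05), 3))   # ~0.941
# Implausible claim (prior 0.001): the same result barely moves the needle.
print(round(study_posterior(0.001, 0.80, 0.05), 3))  # ~0.016
```

With a prior of virtually zero, a single statistically significant trial leaves the posterior probability still close to zero; only evidence strong enough to overwhelm the prior odds could change the conclusion.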
Kimball Atwood, in a post from last year in Science Based Medicine, explained EBM's failure to evaluate the claims of complementary and alternative medicine this way:
Evidence-Based Medicine (EBM) is not up to the task of evaluating highly implausible claims. That discussion made the point that EBM favors equivocal clinical trial data over basic science, even if the latter is both firmly established and refutes the clinical claim. It suggested that this failure in calculus is not an indictment of EBM’s originators, but rather was an understandable lapse on their part: it never occurred to them, even as recently as 1990, that EBM would soon be asked to judge contests pitting low powered, bias-prone clinical investigations and reviews against facts of nature elucidated by voluminous and rigorous experimentation. Thus although EBM correctly recognizes that basic science is an insufficient basis for determining the safety and effectiveness of a new medical treatment, it overlooks its necessary place in that exercise.
In part 1 I gave the first corollary of Bayes' theorem for poets, surgeons and the rest of us: extraordinary claims require extraordinary proof. Now I'll give you the second corollary: evidence based woo is still woo.