At first glance that may seem odd. After all, it's often the critics of CAM who want it to be “evidence based,” and although one camp of woosters claims to be evidence based, a second camp complains that the methods of EBM are inadequate to evaluate their modalities! I maintain that the second camp happens to be correct, but for the reasons I explained in part 2, not for the reasons they claim.
Bayesian thinking addresses the problem by evaluating claims of CAM in light of prior science. In a quantitative sense, a scientifically implausible claim like homeopathy would have a prior probability P(A) of virtually zero. Because the prior sits in the numerator of Bayes' theorem and is infinitesimally small, the new (clinical trial) evidence would have to be overwhelming (sufficient to shake the foundations of basic science) in order to support the claim. In common sense terms that means “extraordinary claims require extraordinary proof.” So, for the outlandish claims of woo you can do Bayesian analysis without even bothering with the math. Before I was even aware of Bayesian analysis as a tool for evaluating clinical trials, I realized this principle intuitively; homeopathy and many of the other forms of woo just made my baloney detector go off.
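To see the arithmetic behind “extraordinary claims require extraordinary proof,” here is a minimal sketch of a Bayesian update. The numbers are hypothetical, chosen only for illustration: a tiny prior for an implausible therapy, and a single “positive” trial with conventional error rates.

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical, illustrative numbers -- not from any actual trial.
prior = 1e-6        # P(A): prior probability the implausible therapy works
sensitivity = 0.95  # P(positive trial | therapy works), i.e. trial power
false_pos = 0.05    # P(positive trial | therapy does not work), i.e. alpha

# P(B): total probability of observing a positive trial
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Posterior probability that the therapy works, given one positive trial
posterior = sensitivity * prior / p_positive
print(f"posterior = {posterior:.2e}")
```

Even after a statistically “significant” positive trial, the posterior remains on the order of 10⁻⁵: the trial barely moves the needle, because with a near-zero prior almost every positive result is a false positive. That is the quantitative content of the intuition above.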
Bayesian analysis requires appropriate selection of which prior evidence to use. This means including appropriate basic science (which is why systematic reviews and meta-analyses are not enough) and including only that prior information which is applicable. For the extremely implausible claims it's pretty easy. For some questions in mainstream clinical medicine it gets tricky.
An article on sepsis in PLoS ONE from May of last year illustrates some of the problems. The authors attempted to apply Bayesian methods to the Surviving Sepsis Campaign bundle and reached this conclusion:
Our results demonstrate that the strength of evidence (statistical and clinical) is weak for all trials, particularly for the Low-Dose Steroid and EGDT trials. It is essential to replicate the results of each of these five clinical trials in confirmatory studies if we want to provide patient care based on scientifically sound evidence.
They further stated that applying even a moderately skeptical analysis of prior evidence leads to the conclusion that the evidence is too weak to recommend early goal directed therapy (EGDT), low dose steroids, intensive insulin therapy, activated protein C, or low tidal volume ventilation. At least for EGDT and low tidal volume ventilation, that's a pretty surprising statement and bears examination.
For purposes of this discussion I'll focus on EGDT which, by most accounts, with the possible exceptions of early antibiotic therapy and source control, is the most robust of the sepsis bundle recommendations. Why did the authors give it such a low rating? I think they made two mistakes in their incorporation of prior evidence. First, while they looked at many clinical trials, they ignored the more basic evidence that establishes the background and biologic rationale for EGDT, such as the extensive research cited in this paper. Second, the negative trials on hemodynamic optimization which they cited (see references 40-47) did not examine whether early hemodynamic optimization in the ER is beneficial, which was the premise of EGDT explicitly laid out in Rivers' original paper.
I could go on at length about the use and misuse of EBM, how it fails in the evaluation of CAM modalities and how Bayesian methods can help when properly applied, but enough for now. Perhaps the best way to end is with this perspective from one of Kimball Atwood's posts at Science Based Medicine:
Thus although EBM correctly recognizes that basic science is an insufficient basis for determining the safety and effectiveness of a new medical treatment, it overlooks its necessary place in that exercise.