In a spirit of friendly debate, let me answer some of Nick’s objections. He takes a more optimistic view of the last decade of medical education, arguing that for some time after the Flexner report medical education remained dogmatic, and that only in recent years has it embraced the rigorous scientific principles of evidence based medicine (EBM). While this popular view of medical education would seem to be in diametric opposition to what I said, it’s not so simple.
So, if you subscribe to Dr. Donnell's narrative, you'd be inclined to believe that for nearly a century, physicians were consistently trained to critically evaluate scientific literature. You would think the tests that they employed, and the therapies that they prescribed, were based on a strong foundation of supporting evidence.
In fact, this was not the case. Medical education was dogmatic after Flexner. However, instead of calling upon the wisdom of the ancients, or their chakras, doctors relied on animal physiology experiments and the observations of a few brilliant, dead clinicians.
The treatments that medicine espoused throughout the 20th century had a basis in science, to be sure, but whether these therapies were really helping patients was unknown -- and often not even properly studied.
Unknown? What about antimicrobial agents, vaccines and CPR? Granted, clinical investigation early in the 20th century didn’t meet the standards of EBM as we know it today, but this was not the result of mainstream medicine’s adherence to dogma. According to M.L. Meldrum’s fascinating paper on the history of the randomized controlled trial, efforts were made in the early 20th century to garner evidence on the clinical effectiveness of therapies, but they were beset by a lack of funding and organization. The American Medical Association, at about the same time it helped commission the Flexner Report, began initiatives aimed at the evaluation of drug effectiveness, including publications such as Useful Drugs. Controlled studies became increasingly common in the 1930s. The publication in 1935 of Fisher’s The Design of Experiments was a landmark development in the progress towards clinical trials. In the late 1940s, shortages of streptomycin allayed ethical concerns about randomized trials of this agent in the treatment of tuberculosis. These examples from Meldrum’s paper comprise just a portion of the timeline of advances in clinical investigation in the first half of the 20th century.
When I was a medical student in the early 70s, many randomized clinical trials were completed, underway, or in planning, including the University Group Diabetes Program study, the Coronary Drug Project, the Lipid Research Clinics trial, several early hypertension trials, the early coronary artery bypass trials, and early trials of anticoagulation for myocardial infarction and thromboembolism. Although the term “evidence based medicine” was not popularized until 1992, history shows that throughout the 20th century medicine was indeed gradually becoming more and more evidence based.
Nick goes on to criticize what he apparently believes to be an overemphasis in 20th-century medical education on the basic sciences:
Of course, medical school also emphasizes scientifically determined biochemical pathways, with their opportunities for intelligent drug interventions. However, upon entering the wards, a significant student function is to push fluids and dole out cold remedies. Which ones? How much? Until recently, there was little scientific guidance for these decisions; students learned to simply do what their mentors and colleagues were doing.
No wonder CAM gained a foothold. If students were being made to learn arcane trivia and give time-honored but untested therapies, why not invoke energy fields and pressure points?
Well, the reason why not should be obvious.
The principal point of his argument is that medical education in the last decade has advanced critical thinking and evidence based medicine, not pseudoscience:
Students are now trained to critically appraise the literature. They can determine likelihood ratios for a diagnosis on the basis of a test result, and calculate how to properly judge a new therapy. They can point out the inherent biases and methodologic shortcomings in a study. Equipping future doctors with the tools of EBM has encouraged critical thinking about the way medicine is practiced, and has helped expose the inadequate underpinnings of 20th-century medicine's diagnostic and therapeutic modalities.
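For readers unfamiliar with the quantitative appraisal Nick describes, the likelihood-ratio calculation amounts to a small piece of Bayes’ rule in odds form. Here is a minimal sketch; the sensitivity, specificity, and pre-test probability below are hypothetical numbers chosen for illustration, not figures from any study cited here.

```python
# Hypothetical illustration of the likelihood-ratio arithmetic taught in EBM
# courses. All numbers are invented for the example.

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update a pre-test probability using a likelihood ratio (odds form of Bayes' rule)."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio          # multiply by the LR
    return post_odds / (1 + post_odds)               # odds -> probability

# Suppose a test has sensitivity 0.90 and specificity 0.80:
sensitivity, specificity = 0.90, 0.80
lr_positive = sensitivity / (1 - specificity)        # LR+ = sens / (1 - spec) = 4.5
lr_negative = (1 - sensitivity) / specificity        # LR- = (1 - sens) / spec = 0.125

# With a 20% pre-test probability, a positive result raises the
# probability of disease to about 53%; a negative result lowers it to about 3%.
print(round(post_test_probability(0.20, lr_positive), 2))
print(round(post_test_probability(0.20, lr_negative), 2))
```

The point of teaching this is that a test result never establishes a diagnosis by itself; it only shifts the pre-test odds, which is why the same positive result means very different things in low- and high-prevalence settings.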
But the evidence suggests that’s not true. A recent study published in JAMA demonstrated that medical house staff had a very poor working knowledge of the quantitative aspects of EBM. Here’s a link to a study showing a lack of proficiency among medical house staff in EBM searching.
So, there’s reason for concern that while medical schools talk a good game about EBM they’re not very effective in teaching it. But more striking to me is the irony of teaching EBM alongside the uncritical promotion of pseudoscience.
Orac weighs in here about the Roundtable.