Tuesday, January 11, 2011

The decline effect

---refers to a waning of the magnitude of positive research findings over time. In other words, while initial research may lead to a slam-dunk conclusion, subsequent attempts to replicate it, though sometimes positive, are often less robust. Other times, of course, they don't stand up at all. Such was the topic of a recent piece in The New Yorker.

Aside from a few flaws in the article (its implication that the early popularity of HRT was bolstered by randomized controlled trials, its credulity toward acupuncture, and its claim that research on cardiac stents suffered from the decline effect), it did a fair job of illustrating some of the booby traps of conducting and reporting research.

The article explored several explanations for the effect, including regression to the mean. No single one is adequate; there are probably many contributing factors. Initial research findings may be more likely to suffer from investigator and publication bias than those of subsequent investigations, which may bring more skepticism to the picture. Sometimes initial studies have design issues that make them unsuitable for generalization or for widespread change in practice, such as small size or anomalies in the sample studied. Early studies on glycemic control in hospitalized patients and on perioperative beta blockers come to mind.
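One of those mechanisms, publication bias acting on noisy early studies, is easy to demonstrate with a toy simulation. The sketch below (the effect sizes, noise level, and "only exciting results get published" threshold are all invented for illustration, not taken from any real study) shows how selecting initial studies for large observed effects, then replicating without selection, produces an apparent decline even though the true effect never changes:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_EFFECT = 0.2   # hypothetical true effect size (never changes)
NOISE = 0.5         # hypothetical sampling noise in each small study

def run_study():
    """One simulated study: observed effect = true effect + random noise."""
    return random.gauss(TRUE_EFFECT, NOISE)

# Early literature: only 'exciting' results (observed effect > 0.5)
# make it into print -- a crude model of publication bias.
initial_published = [e for e in (run_study() for _ in range(1000)) if e > 0.5]

# Later replications: published regardless of what they find.
replications = [run_study() for _ in range(1000)]

print(f"mean initial published effect: {statistics.mean(initial_published):.2f}")
print(f"mean replication effect:       {statistics.mean(replications):.2f}")
```

The published initial studies overstate the effect simply because they were selected for being extreme; unselected replications regress back toward the true value. Nothing about the underlying phenomenon "declined" at all.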

So do we throw evidence based medicine (EBM) and science based medicine (SBM) out the window? Not at all. This is just a reminder that though some science is for practical purposes settled, much of it is tentative. That's why EBM regards the most current evidence as an important attribute of best evidence.

Related content at DB's Medical Rants.
