As proponents of comparative effectiveness research (CER) point out, we have many treatments for the same illness, each known to work because it outperformed placebo in clinical trials. The clinician wants to know which among these treatments is best for a given condition. Although such questions are simplistic in the real clinical world, they serve as starting points for CER, of which we already have many examples. But the wealth of CER studies at our disposal contains lessons in clinical trial design which should sound a note of caution about an inherent vulnerability: CER is uniquely susceptible to design rigging.
Here’s how it works. Say you want to compare drug A with drug B, and suppose you have a conflict of interest that gives you an incentive to make drug A look better than drug B. There are several easy ways to rig the design to accomplish this and, as I will illustrate, it has been done many times. The simplest is to give the wrong (or a suboptimal) dose of drug B.
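To make the mechanism concrete, here is a toy simulation, not a reconstruction of any trial discussed here: the dose-response curve, doses, event rates, and sample size are all invented purely for illustration. Two pharmacologically equivalent drugs are compared, but the comparator is dosed below its optimum, and the "sponsor's" drug comes out looking superior.

```python
# Toy simulation (invented numbers, not real trial data): two drugs share an
# identical dose-response curve, but the comparator arm (drug B) is dosed
# below its optimum.
import numpy as np

rng = np.random.default_rng(0)

def event_rate(dose, optimal_dose, floor=0.02, ceiling=0.10):
    """Hypothetical dose-response: the VTE event rate falls from `ceiling`
    toward `floor` as the dose approaches the optimum (same curve for both drugs)."""
    protection = min(dose / optimal_dose, 1.0)
    return ceiling - (ceiling - floor) * protection

n = 2000  # patients per arm

# Drug A gets its optimal dose; drug B gets only two thirds of its optimal dose.
rate_a = event_rate(dose=10, optimal_dose=10)
rate_b = event_rate(dose=40, optimal_dose=60)

events_a = rng.binomial(n, rate_a)
events_b = rng.binomial(n, rate_b)

print(f"Drug A (optimal dose):    {events_a}/{n} events ({events_a / n:.1%})")
print(f"Drug B (suboptimal dose): {events_b}/{n} events ({events_b / n:.1%})")
# Drug A appears "superior" even though the drugs are pharmacologically
# equivalent here; only the comparator's dosing differs.
```

Run it and drug A shows a substantially lower event rate than drug B, a "superiority" manufactured entirely by how the comparator was dosed.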
Take, for example, the RECORD studies on VTE prophylaxis in patients undergoing hip and knee arthroplasty. In these studies rivaroxaban, developed by Bayer, the company which funded the trials, came out looking superior to enoxaparin. But look at the study design. The dose of enoxaparin given in the trials was 40 mg once daily. Dosing of enoxaparin for VTE prophylaxis following hip and knee surgery is somewhat controversial, but by most accounts 40 mg daily is below the optimal dose. Thus the design appears to have favored rivaroxaban.
Here’s another example. In a comparative effectiveness study concluding that enoxaparin was superior to unfractionated heparin for VTE prophylaxis in stroke patients, the investigators, who had ties to Sanofi-Aventis, the makers of enoxaparin, apparently forgot to give the optimal dose of unfractionated heparin.
Besides suboptimal dosing there are other ways to stack the deck in comparative effectiveness research. ALLHAT, which was spun as positioning thiazides as the unequivocal starting drugs of choice in hypertension, was designed in a way that placed lisinopril at a disadvantage compared to chlorthalidone. (Rather than elaborate myself, I’ll let you read hypertension expert and ALLHAT investigator Michael Weber’s analysis here.)
I could go on and on but will give just a couple more examples. Two comparative effectiveness megatrials of thrombolytic therapy in myocardial infarction, GISSI 2 and ISIS 3, designed adjunctive heparin use in a manner that placed TPA at a disadvantage.
Of course comparative studies are important. They answer a type of clinical question not addressed by placebo studies. Placebo controlled trials, on the other hand, tend to have cleaner and simpler designs because they are not exposed to the comparator-rigging opportunities I describe above.
I believe clinical researchers are basically honest. I don’t believe most of the design flaws in these studies resulted from deliberate, thoughtful efforts on the part of the investigators. But I do believe biases and conflicts creep in, especially when given opportunity by the unique vulnerabilities of comparative effectiveness research. And we know there are conflicts of interest, huge conflicts, in the new government funded program.
3 comments:
To think that government funded or managed comparative effectiveness research will automatically be free of biases is to ignore not only the many obvious (and some not so obvious) ways to stack the analysis but also the history of the many ways that special interests influence the activities of government organizations. Thanks for another great posting.
Couldn't a journal reviewer insist that, if a trial had been completed using what the literature regards as a sub-optimal dose of a drug, the trial be continued using a more optimal dosage?
Maybe the problem is that journals have no incentive to say "no" and ask for more work to be done in an (expensive and otherwise well-done) study? Lots of good but flawed studies sell more paper and bring in more fees, and it can even be argued that they usefully advance the field (after all, as an intellectual exercise, poorly done trials still tell us something: that they could be better done). The drug companies, who as a whole are mostly paying for this research, certainly don't have any motivation for there to be better-done comparative studies: fuzzy, confusing studies work better for them, marketing-wise.
The word bias has too many meanings, and is subject to rhetorical mismanagement when different kinds of bias are treated as equivalent. For example, drug makers as a whole have an incentive to keep questions of effectiveness as open as possible. (So do researchers as a group.) Individual companies may wish one drug to appear better than another. The government has an incentive to keep costs down. Those are three different motivations that may lead to completely different kinds of bias, not always usefully comparable.