As proponents of comparative effectiveness research (CER) point out, we have many treatments for the same illness that are known to work because, in clinical trials, they outperformed placebo. The clinician wants to know which of these treatments is best for a given condition. Although such questions are simplistic in the real clinical world, they serve as starting points for CER, of which we have many examples. But the wealth of CER studies already at our disposal contains lessons in clinical trial design that should sound a note of caution about an inherent vulnerability: CER is uniquely susceptible to design rigging.
Here’s how it works. Say you want to compare drug A with drug B. Suppose you have a conflict of interest that biases you toward drug A and an incentive to make drug A look better than drug B. There are several easy ways to rig the design to accomplish this and, as I will illustrate, it has been done many times. The simplest is to give the wrong (or a suboptimal) dose of drug B.
Take, for example, the RECORD studies on VTE prophylaxis in patients undergoing hip and knee arthroplasty. In these studies rivaroxaban, developed by Bayer, the company that funded the trials, came out looking superior to enoxaparin. But look at the study design. The dose of enoxaparin given in the trials was 40 mg once daily. Dosing of enoxaparin for VTE prophylaxis following hip and knee surgery is somewhat controversial, but by most accounts 40 mg daily is below the optimal dose. Thus the design appears to have favored rivaroxaban.
Here’s another example. In a comparative effectiveness study concluding that enoxaparin was superior to unfractionated heparin for VTE prophylaxis in stroke patients, the investigators, who had ties to Sanofi-Aventis, the maker of enoxaparin, apparently forgot to give the optimal dose of unfractionated heparin.
Besides suboptimal dosing, there are other ways to stack the deck in comparative effectiveness research. ALLHAT, which was spun as positioning thiazides as the unequivocal starting drugs of choice in hypertension, was designed to place lisinopril at a disadvantage compared to chlorthalidone. (Rather than elaborate myself, I’ll let you read hypertension expert and ALLHAT investigator Michael Weber’s analysis here.)
I could go on and on but will give just one more example. Two comparative effectiveness megatrials of thrombolytic therapy in myocardial infarction, GISSI-2 and ISIS-3, designed adjunctive heparin use in a manner that placed tPA at a disadvantage.
Of course comparative studies are important. They answer a type of clinical question not addressed by placebo studies. Placebo-controlled trials, on the other hand, tend to have cleaner and simpler designs and, for the reasons I point out above, are less susceptible to this kind of rigging.
I believe clinical researchers are basically honest. I don’t believe most of the design flaws in these studies resulted from deliberate, calculated efforts on the part of the investigators. But I do believe biases and conflicts creep in, especially when given opportunity by the unique vulnerabilities of comparative effectiveness research. And we know there are conflicts of interest, huge conflicts, in the new government-funded program.