Wednesday, June 16, 2010

Proponents of a ban on industry-supported CME won't accept their burden of proof

---because they have no evidence to sustain it. Instead they rely on a system of popular beliefs to support their arguments. That's what I argued in my Medscape Roundtable piece on this subject last year, and despite personal attacks and misrepresentation of that article, I stand by my premise.

Since that time two important studies, one from the Cleveland Clinic and the other from UCSF, have addressed this issue. Both were published in last January's issue of Academic Medicine, and both found no association between commercial support and content bias.

First the UCSF study:

Method: Cross-sectional study of 213 accredited live educational programs organized by a university provider of CME from 2005 to 2007.

Results: Mean response rate for attendee evaluations was 56% (SD 15%). Commercial support covered 20%-49% of costs for 45 (21%) educational activities, and 50% or more of costs for 46 activities (22%). Few course participants perceived commercial bias, with a median of 97% (interquartile range 95%-99%) of respondents stating that the activity they attended was free of commercial bias. There was no association between extent of commercial support and the degree of perceived bias in CME activities. Similarly, perceived bias did not vary for 11 of 12 event characteristics evaluated as potential sources of commercial bias, or by score on a risk index designed to prospectively assess risk of commercial bias.

Conclusions: Rates of perceived bias were low for the vast majority of CME activities in the sample and did not differ by the degree of industry support or other event characteristics. Further study is needed to determine whether commercial influence persisted in more subtle forms that were difficult for participants to detect.

The last sentence of the conclusion raises the question of whether participants' perceptions are valid outcome measures. These were large studies; if the pharmascolds contend that perceptions are not valid outcomes, they will have to premise that argument on credulity and a lack of critical skill on a massive scale among the participants.

The Cleveland Clinic study was even more impressive:

Method: The authors analyzed information from the CME activity database (346 CME activities of numerous types; 95,429 participants in 2007) of a large, multispecialty academic medical center …

When analyzed by type of funding relative to commercial support—none (149), single source (79), or multiple source (118)—activities were deemed to be free of commercial bias by 98% (95% CI: 97.3, 98.8), 98.5% (97.5, 99.5), and 98.3% (97.4, 99.1) of participants, respectively. None of the comparisons showed statistically significant differences.
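The non-significance claim is plausible on its face: the reported 95% confidence intervals overlap heavily. As a rough sanity check, one can back out the effective sample size implied by each interval and run a two-sample z-test on the proportions. This is my own sketch, not the study's analysis: it assumes simple Wald intervals, and the derived effective n values are reconstructions, not figures from the paper.

```python
# Sanity-check sketch for the reported "free of commercial bias" rates.
# Assumption: the 95% CIs are Wald intervals, so each half-width implies
# an effective sample size; the paper may have used another method
# (e.g. clustering by activity), so treat the numbers as illustrative.
import math

def effective_n(p, lo, hi):
    """Effective n implied by a Wald 95% CI with the given bounds."""
    se = (hi - lo) / 2 / 1.96
    return p * (1 - p) / se ** 2

# Reported rate, CI low, CI high (as proportions), by funding type
groups = {
    "none":     (0.980, 0.973, 0.988),
    "single":   (0.985, 0.975, 0.995),
    "multiple": (0.983, 0.974, 0.991),
}

ns = {k: effective_n(*v) for k, v in groups.items()}

def z_two_proportions(p1, n1, p2, n2):
    """Pooled two-sample z statistic for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = z_two_proportions(0.980, ns["none"], 0.985, ns["single"])
print(f"implied effective n: { {k: round(v) for k, v in ns.items()} }")
print(f"none vs single funding: z = {z:.2f}")  # |z| < 1.96 -> not significant
```

For the largest gap in the table (98.0% vs 98.5%), the statistic stays well inside the conventional 1.96 cutoff, consistent with the authors' statement that none of the comparisons reached statistical significance.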

One of the Cleveland Clinic investigators was interviewed. He had this to say:

We thought we would find the highest rates of bias in activities with a single funding source and the lowest rates of bias in activities with no commercial support, and that multiple-supported activities would fall somewhere in between. We also thought we would find a higher rate of bias in activities that were commercially supported than in those that didn’t have any commercial support.

While we are proud of the fact that we’re very strict about complying with the Accreditation Council for CME’s Standards for Commercial Support, we thought if we tested the data, we would probably find these hypotheses to be true. In fact, we found the opposite—that commercial support did not result in a perception that the activity was biased.

Sometimes data refute popular assumptions. He went on:

The most surprising was that activities that have absolutely no funding associated with them—activities for which the Cleveland Clinic as a CME provider absorbs the cost within its operations—had a higher level of perceived bias than the other two categories. It was surprising because 1) it indicates that perceived bias is not associated with industry support; and 2) it points to bias in content being caused by something other than the funding source of the activity.

Our data ranked single-funded activities—which we thought would be most at risk of bias—as being most free of bias. Activities that were multifunded fell in between.

It should be re-emphasized that these differences were small and that none was statistically significant. He concluded:

Policymakers need to pay attention to this sort of data because, if they’re looking to evaluate issues of bias, this shows that it’s not coming from the pharmaceutical industry’s provision of educational grants. What that funding is doing is allowing us to produce great education. Instead, policymakers should review what good CME providers do (the ones that follow the rules) and emphasize their best practices as examples to follow. Also, we all have to realize we’ll never be able to reduce bias to zero—everyone, faculty and learners alike, has some level of bias in their views. The results of our study show that the effect of industry support on participants’ perception of bias within CME activities is minimal. Further, CME providers that have suitable oversight to ensure compliance with the ACCME’s SCS can be successful in implementing commercially unbiased education, regardless of funding source. This quite conclusively shows that the prohibition of commercial support is not needed.

More from the Policy and Medicine blog here and here.
