Until recently there was no research-quality evidence to guide the discussion (yes, there was a lot of soft science on the psychology of influence and on performance indicators attributable to drug-rep promotion, but nothing about outcomes from CME). (For convenience here I'll loosely use the terms pharmascold and pharmapologist to describe the proponents and opponents, respectively, of restrictive policy change.) Absent such evidence, the debate usually followed a pattern. The pharmascolds appealed to popular belief (industry money is dirty, so if it supports CME it must be degrading) and to numerous anecdotes. (For a nice collection of anecdotes read Daniel Carlat's blog; he's got a bunch of them.) Aside from the fact that collections of anecdotes do not equal data, there are anecdotes to support either side. I could cite plenty of examples of non-supported CME offerings that reflect presenters' biases. The government-sponsored NCCAM CME offerings, for example, reflect a pervasive bias about what scientific standard ought to be applied to health claims. Even the highly respected UpToDate, from which I often obtain CME credit, is biased, containing many articles that conclude with authors' recommendations based on what they prefer at their own institutions. On the other side, the pharmapologists argued about unintended consequences and said, in effect, "show me the evidence that justifies policy change and its negative consequences."
Which leads me to a post from yesterday by Thomas Sullivan (for the sake of discussion we'll call him a pharmapologist) at Policy and Medicine concerning his point-counterpoint with Dr. Howard Brody, who takes the pharmascold position (his main post in question is here). Much of the discussion was over recent research data: a huge new database which, in the form of three published studies, looked to see whether there was a difference in perceived bias between industry- and non-industry-funded activities. (There wasn't. You can find links to the studies in the Thomas Sullivan post.) He provided a detailed point-by-point summary of the exchange, so I'll just make a couple of observations. I read with interest this paragraph from his post:
CME providers are concerned about the decreasing support for CME programs because it means fewer programs for health care practitioners, less innovative and collaborative programs, greater inconvenience for doctors in both timing and geography, larger and less interactive programs, and broader programs that do not address the specific needs of target audiences.
In addition, with waning industry support, offerings are becoming more expensive. Increasing distance to meeting sites inflates travel expenses; if you get an educational stipend, those expenses can eat it up pretty fast. In my own case the last 15 CME hours for 2010 were out of pocket at around $100 an hour. (For that particular meeting, which may be on its last legs, the course directors have had to fork over some of their own money just for its sponsoring institution to break even.)
Do I think I'm entitled to industry support? Not at all. But the fact remains that as support diminishes, my options for CME become more and more restricted. Like the health care system that provides my stipend, I'm on a budget. I have competing financial demands. When money's not an object, choices increase. When it becomes an object, they diminish. I live with that fact with humor, not rancor or self-pity. As a matter of fact I really, really get a laugh out of one tired pharmascold argument: that by giving up industry support doctors somehow "take control" of CME. I'm not sure how that's supposed to work, because as industry support slips away I'm less and less in control of my options.
And about those three studies? Dr. Brody dismisses them with a bit of circular reasoning:
What did the studies show? When physicians attend CME programs, they have to check off boxes on an evaluation sheet, stating whether they do or don't think that the presentation they just listened to showed inappropriate or excessive commercial bias. What all three studies showed is that the vast majority of docs, most all the time, check the NO box. To me that suggests that either the docs are lazy about what boxes they check, or else that they may be unable to detect bias when it might actually exist.
The reasoning is circular because its conclusion is assumed in its premise. It goes something like this: "Commercial CME is excessively biased compared to non-industry CME. These docs didn't report that. Ergo all these docs (well over a million in the studies, by the way) were either too lazy to give appropriate responses or unable to detect bias." It's as if the idea of inappropriate bias attributable to commercial support, as compared with non-supported CME, were so self-evident as to be axiomatic. External evidence be danged.
Dr. Brody then offers up a straw man and shifts the burden of proof with this:
To suggest that a study that consists of these data show positively that no bias exists in CME programs seems a far stretch. (There might in fact be no commercial bias in CME programs, but you'd need far better methods than in these three studies to know that.)
No one's claiming that no bias exists in CME programs. As to the burden of proof, shouldn't it rest on the shoulders of those who want major policy change, with all its potential unintended consequences?