Our pharma overlords will be displeased...

There's an oft-quoted saying that's become a bit of a cliché among skeptics that goes something like this: There are two kinds of medicine: medicine that's been proven scientifically to work, and medicine that hasn't. This is then often followed up with a rhetorical question and its answer: What do you call "alternative medicine" that's been proven to work? Medicine. Of course, being the kind of guy that I am, I have to make it a bit more complicated than that while driving home in essence the same message. In my hands, the argument goes like this: the whole concept of "alternative" medicine is a false dichotomy. There is no such thing. In reality, there are three kinds of medicine: medicine that has been shown to be efficacious and safe (i.e., shown to work); medicine that has not yet been shown to work (i.e., that is unproven); and medicine that has been shown not to work (i.e., that is disproven). So-called "complementary and alternative medicine" (CAM or, by its newer, shinier name, "integrative medicine") consists almost entirely of the latter two categories.

Part of the reason why this saying and its variants have become so commonplace among those of us who support science-based medicine is that they strike at a common truth about medicine, both science-based and "alternative." That common truth is that there should be one science-based standard of evidence for all treatments, be they "alternative" or the latest creation of big pharma. That point informs nearly everything I write here (well, maybe not the occasional fluff bit I post). What it means in practice is a single, clear set of standards for evaluating medical evidence, in which clinical evidence is coupled to basic science and scientific plausibility. Indeed, one of my main complaints against CAM and its supporters has been how they invoke a double standard, in which they expect their therapies to be accepted as "working" on the basis of a much lower standard of evidence. When they see high quality clinical trials demonstrating that, for example, acupuncture doesn't work, they will frequently advocate the use of "pragmatic" trials, lower quality trials of "real world effectiveness" that do not adequately control for placebo effects. It's putting the cart before the horse.

If there's another aspect to the tactics of those hostile to science-based medicine, it's that they really, really don't like big pharma. I realize that Lord Draconis will be displeased to hear this, praise be to his scaly self, but it's true. Or maybe Lord Draconis won't be displeased, because these people will never become part of the overall plan for universal subjugation. I kid, of course (or do I?), but what I'm not kidding about is that I can't recall how many times I've been called, in typical knee-jerk fashion, a tool or minion of big pharma for pointing out that this or that CAM modality is without a basis in science, so much so that I even coined a term for it: the pharma shill gambit. None of this is to say that big pharma is not guilty of abuses of science. I've written about them. Steve Novella's written about them. Our friend Ben Goldacre, who is well known as a critic of unscientific medicine, writes about them regularly and just released a book about them entitled Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. Indeed, he's written what is sure to become a widely cited article for The Guardian entitled The drugs don't work: a modern medical scandal, which makes me think that the alleged pharma conspiracy isn't doing such a good job or that Ben's a deep operative working for a plan that I, as a mere pharma minion, can't yet comprehend. Either way, the caricature of supporters of SBM as compliant pharma drones is just that, a caricature.

Admittedly, however, Steve, Ben, our SBM bloggers, and I are not typical physicians. We are active in the skeptical movement and spend considerable time and effort writing and speaking about issues of skepticism, science-based medicine (SBM), and bad science. What about more typical physicians? How do they view these issues? CAM supporters like to paint a picture of doctors as mindless drones who do whatever their big pharma paymasters tell them to do, prescribe what they're told to prescribe, and shun "alternative medicine" and CAM, just as they're told to do. (If you don't believe me, just peruse Mike Adams' NaturalNews.com or Joe Mercola's website for a while.) That's why I was particularly amused by a study that's hot off the presses in the New England Journal of Medicine by Kesselheim et al. entitled A Randomized Study of How Physicians Interpret Research Funding Disclosures.

After noting the trend towards increasing disclosure of funding sources in medical journals and the increasing rigor (and therefore frequency and prominence) with which these sources are disclosed (a trend, I note, that I fully support), Kesselheim et al. note:

The methodologic rigor of a trial, not its funding disclosure, should be a primary determinant of its credibility. Skepticism about results that is based on a trial's funding sources may be less appropriate for well-controlled, double-blind, randomized trials than for poorly controlled or unblinded trials, in which conflicts of interest may have a stronger effect on interpretation of the data. However, little is known about how information concerning study design interacts with information concerning funding sources to influence physicians' interpretation of research. We therefore conducted a randomized study involving simulated research abstracts to assess the role that disclosure plays in physicians' interpretation of the results of medical research.

Although I tend to agree that the design of a study should be the primary determinant of how that study is evaluated, one can't help but note that selective reporting of results can taint even the best designed trial. So, although I agree that the funding source might be less of a problem in large, well-designed trials, I think that Kesselheim et al. might be a bit too quick to dismiss it. Fortunately, that isn't the focus of their study. Their focus is to ask what the effect of funding source disclosure is on how physicians interpret the literature right now.

To do this, Kesselheim et al. decided to test their hypothesis that knowledge of the funding source of a study affects how physicians interpret that study. They thus developed hypothetical scenarios in which three new drugs were evaluated for the treatment of common disorders seen by primary care physicians: angina, diabetes, and hyperlipidemia. Three fake drugs were dreamed up, with a set of fake abstracts to go with them describing studies of various methodological rigor thusly:

“Lampytinib” would be used for dyslipidemia in patients who had unacceptable side effects from statins, “bondaglutaraz” would be used for diabetes and low levels of high-density lipoprotein cholesterol in patients who were taking metformin and a sulfonylurea and who were unwilling or unable to add insulin, and “provasinab” would be used for angina in patients with untreatable multivessel coronary disease who were taking maximal doses of beta-blockers. Since internists report that they frequently read only the abstracts when reviewing the medical literature, we created abstracts describing trials of these drugs in which we varied the drug being tested, the trial's methodologic rigor, and the funding source. For each drug, one trial had a high level of rigor, one had a medium level of rigor, and one had a low level of rigor. The features defining these levels were based on guidelines and on our experience in conducting randomized trials and in studying evidence-based drug-evaluation and prescribing practices...

Differences in methodologic rigor among the trials were consistent across drugs. All the trials had similar effect sizes, and statistically significant results.

We then added a variable describing the trial's funding status. Each abstract included one of three variations: no funding source mentioned, funding by the National Institutes of Health (NIH), or funding by a pharmaceutical company, with financial involvement in that company on the part of the lead author. (Companies named in the disclosure statements were randomly selected from the top 12 global pharmaceutical enterprises.)

This process resulted in 27 different abstracts with a variety of methodological rigor and different funding sources. The authors then recruited subjects from the American Board of Internal Medicine mailing list, starting with a potential sample of over 45,000 physicians and whittling it down to 503 who reported spending at least 40% of their time per month in patient care activities, less than 50% of which could be in the emergency department, intensive care unit, or cardiac catheterization lab. Physicians in the sample then received two postcards and three e-mail messages from the ABIM stating that they had been randomly selected to participate in a study investigating how physicians make prescribing decisions. The e-mails contained the names of the lead investigators and sponsoring institutions, a link to the online survey, an opportunity to opt out, and an offer of a $50 honorarium for completing the survey. These were followed up with a printed version of the invitation and a telephone reminder.
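To make the 3 × 3 × 3 structure of the design concrete, here's a minimal sketch (in Python; the study itself involved no code that I know of) that simply crosses the three hypothetical drugs, the three levels of methodologic rigor, and the three funding-disclosure variations to enumerate the 27 abstract versions. The drug names and the categories come straight from the paper; the short labels are my own.

```python
from itertools import product

# The three hypothetical drugs, rigor levels, and funding disclosures
# described in the paper; crossing them yields the 27 abstracts.
drugs = ["lampytinib", "bondaglutaraz", "provasinab"]
rigor_levels = ["high", "medium", "low"]
funding = ["no funding source mentioned", "NIH", "pharmaceutical company"]

abstracts = list(product(drugs, rigor_levels, funding))
assert len(abstracts) == 27

for drug, rigor, source in abstracts:
    print(f"{drug}: {rigor}-rigor trial, funding: {source}")
```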

Those who agreed to participate were presented with three abstracts:

Participants were presented with three abstracts, each of which described a trial of a different hypothetical new drug. Participants were told to assume that the hypothetical drug had recently been approved by the Food and Drug Administration and was covered by insurance and that the abstract was from a “high-impact” biomedical journal and written by academic physicians at established U.S. universities. We randomly varied the level of methodologic rigor and the disclosure so that the three abstracts that the physicians received described trials at each level of methodologic rigor and with each disclosure variation.

The selection of abstracts can be found in the Supplemental Data, along with the survey questions.

The response rate (53%) was surprisingly good for an online survey. (Quite frankly, I almost always ignore these things when I get them. I realize that I shouldn't, at least not always, but there always seems to be something I have to do.) Also, reassuringly, there was a strong correlation between the physician respondents' perception of the methodological rigor of the trials and the actual methodological rigor of the trials. (If there were not, it would be…bad.) Physicians were more likely to prescribe drugs supported by high-rigor trials than medium-rigor trials and less likely to prescribe drugs supported by low-rigor trials than by medium-rigor trials. So far, so good. There's nothing surprising here.

Next, the authors looked at how the funding source affected physicians' perceptions of the trials and their willingness to prescribe the new drugs. What do you think they found? Did the revelation that a trial was funded by pharma affect physicians' willingness to prescribe a new drug? If so, how? If you believe Mike Adams, it either shouldn't matter or physicians would eagerly compete to prescribe a new drug, just as their pharma masters tell them to. It's a good thing that most readers here don't believe Mike Adams (I would say all readers, except that there are a few trolls here who clearly subscribe to Mike Adams' silliness), or they'd be surprised at the results.

It turns out that physicians view pharma funding sources negatively, and the revelation of a pharma funding source for a trial resulted in the physicians reporting that they were less likely to prescribe a new drug than if the funding source was not reported or was reported to be the NIH:

We found clear associations between the funding disclosure variations and physicians' perceptions of a trial's rigor and results. Regardless of the actual study design, physicians were less likely to view a trial as having a high level of rigor if funding by a pharmaceutical company was disclosed than if no disclosure statement was included (odds ratio, 0.63; 95% CI, 0.46 to 0.87; P=0.006). Similarly, in comparisons with trials for which no funding was listed and regardless of the study design, physicians were less likely to have confidence in the results of trials funded by industry (odds ratio, 0.71; 95% CI, 0.51 to 0.98; P=0.04) (Figure 2B) and were less willing to prescribe drugs described in such trials (odds ratio, 0.68; 95% CI, 0.49 to 0.94; P=0.02) (Figure 2C). These effects were even greater when industry-funded trials were compared with trials described as having NIH support.
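(For readers who want a feel for what an odds ratio like 0.63 with a 95% CI of 0.46 to 0.87 means, here is a minimal, purely illustrative sketch. The counts below are invented, chosen only so that the result lands near the reported figure, and the simple 2×2 Wald calculation is a generic textbook method, not necessarily what the authors did with their data.)

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
         a / b : 'high rigor' vs. other ratings when industry funding is disclosed
         c / d : 'high rigor' vs. other ratings when no funding source is listed
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, NOT the study's data, chosen only so the odds
# ratio comes out near the reported 0.63.
print(odds_ratio_with_ci(100, 150, 130, 120))  # ~ (0.62, 0.43, 0.88)
```

An odds ratio below 1 simply means the odds of rating a trial as highly rigorous were lower when industry funding was disclosed, and a confidence interval that excludes 1 is what makes the finding statistically significant at the usual threshold.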

Not only did doctors tend to downgrade the rigor of a trial, their confidence in its results, and their willingness to prescribe the drug when pharmaceutical funding was revealed, they were only half as willing to prescribe drugs studied in industry-sponsored trials as they were to prescribe drugs studied in NIH-funded trials. True, this is a survey, with all the attendant difficulties and pitfalls of surveys, but it does "ring true" to me and, I bet, to many other physicians. Indeed, it almost reaches the level of a "Well, duh!" result. Most physicians do tend to be much more skeptical of pharma-funded trials than of other trials. I was, however, surprised by the strength of the effect in this particular survey. This is not by any means a weak effect. If it is in any way representative of the bulk of practicing internists out there, it actually represents a major problem for the pharmaceutical industry, as the authors of this study point out:

These findings have important implications. Despite the occasional scientific and ethical lapses in trials funded by pharmaceutical companies, it is also true that the pharmaceutical industry has supported many major drug trials that have been of particular clinical importance. Excessive skepticism concerning trials supported by industry could hinder the appropriate translation of the results into practice. For example, after publishing the results of a large, well-designed trial describing a new use for a widely prescribed class of drugs, a leading biomedical journal noted that many of its readers believed that the results of the trial did not justify a change in clinical management, citing industry funding as a key reason for this conclusion.

And I actually agree with this. Given the well-known lapses in ethics and deceptive research in the pharmaceutical industry, it is by no means a simple question to determine how much weight to assign to a funding source disclosure. If the magnitude of skepticism exhibited by the physicians in this survey — based solely on knowing that a study was funded by pharma — is anywhere near what typical physicians exhibit, it might be too much. My first thought reading this study was that perhaps the reluctance to change prescribing habits was nothing more than the well-known conservatism of physicians. Few of us change practice based on a single clinical trial. There are exceptions (for instance, the Z0011 trial about the need for axillary dissection after a positive sentinel lymph node biopsy in breast cancer resulted in a near-immediate change in practice in most academic medical centers), but in general it takes several studies before most physicians come to be convinced that a new treatment is better than the old treatments.

On the other hand, this study also made me wonder if perhaps we place too much confidence in research funded by the NIH. It is true that the NIH uses rigorous peer review to decide what research is funded. It is also true that, these days, competition for NIH grants is more intense than it's been in at least 20 years, if not more intense than it's ever been, because of the paucity of funding, a situation that looks as though it will not get better any time soon and might well become much worse very soon if the "fiscal cliff" issue is not resolved. Be that as it may, while the current tight NIH funding situation does tend to ensure that crappy studies are less likely to be funded, it also imposes a troublesome conservatism on science simply because there is no money to spare for "riskier" projects. Even so, NIH funding is not a guarantee of quality science by any means. Just peruse this blog for studies funded by the National Center for Complementary and Alternative Medicine (NCCAM) for ample evidence of this. (As an aside, CAM advocates know that doctors trust the NIH and NIH-funded studies more than other studies; that's the whole reason they supported the creation of NCCAM in the first place. But I digress.)

There is considerable evidence that pharma funding is associated with an increased likelihood of publishing results favorable to the company funding the trial and with contributing to publication bias, although this evidence is sometimes conflicting. For example, a recent study found a correlation between government funding of a trial and positive results, and another study found no higher likelihood of positive outcomes in industry-sponsored trials but a higher likelihood of reporting double-blinding, an adequate description of participant flow, and performance of an intent-to-treat analysis. (This latter study did, however, find a trend towards a higher probability of nonpublication of industry-sponsored trials, suggesting a contribution to publication bias.) Moreover, several studies have suggested that, from a methodological standpoint, industry-funded studies published in peer-reviewed journals are of equivalent or higher quality than clinical trials funded by other mechanisms. This is not surprising, given that many clinical trials funded by pharmaceutical companies are done for the purpose of winning FDA approval for a drug, either initially or for expanded indications, and the FDA has stringent requirements for these clinical trials.

There's also reason for hope, and that's where I would beg to differ with Ben Goldacre a bit when he writes:

When trials throw up results that companies don't like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug's true effects. Regulators see most of the trial data, but only from early on in a drug's life, and even then they don't give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion.

That is a problem that the Food and Drug Administration Amendments Act of 2007 (FDAAA) was designed to address, although whether it will or not remains to be seen, as I discussed a few years back. The law mandates publication of all trial results. Since 2008, the FDAAA has required that clinical trial results be made publicly available on the Internet through an expanded “registry and results data bank,” described thusly: under FDAAA, enrollment and outcomes data from trials of drugs, biologics, and devices (excluding phase I trials) must appear in an open repository associated with the trial’s registration, generally within a year of the trial’s completion, whether or not those results have been published.
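(As a purely hypothetical illustration of what that "within a year of the trial's completion" requirement amounts to, here's a small sketch that flags which of a few made-up trial records would count as compliant. The records and field names are invented; nothing here reflects the actual ClinicalTrials.gov data model.)

```python
from datetime import date, timedelta

# Invented trial records: completion date and the date results were
# posted to the registry (None if never posted). Not real trials.
trials = [
    {"id": "TRIAL-001", "completed": date(2009, 3, 1), "results_posted": date(2009, 9, 15)},
    {"id": "TRIAL-002", "completed": date(2009, 6, 1), "results_posted": None},
    {"id": "TRIAL-003", "completed": date(2010, 1, 10), "results_posted": date(2011, 8, 2)},
]

DEADLINE = timedelta(days=365)  # "generally within a year of the trial's completion"

def compliant(trial):
    posted = trial["results_posted"]
    return posted is not None and (posted - trial["completed"]) <= DEADLINE

for t in trials:
    print(t["id"], "compliant" if compliant(t) else "NOT compliant")
```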

In the meantime, what's a practitioner to do? There's no doubt that, to a large extent, pharmaceutical companies have brought this level of distrust on themselves. Indeed, as I mentioned, this NEJM study doesn't tell me anything that I didn't already know: namely, that most doctors read industry-sponsored studies much more skeptically than they do NIH-sponsored studies and are much less likely to have confidence in them and use them to justify a change in medical practice. It is nonetheless useful in that it provides some measure of hard evidence to support this observation. More importantly, it tells us that it probably isn't a small effect. (Indeed, it would have to be large to be noticed just in the daily practice and discussions of studies that doctors routinely engage in.) I also know that most doctors understand that much of what is published in the medical literature ends up being wrong, which contributes to the caution we all exercise in evaluating new medical literature.

All of this brings us back to the question of how much additional skepticism we should exercise toward new studies when those studies are pharma-funded, above and beyond the skepticism we normally exercise for studies funded by other sources. Thinking about it a bit more, I've come to the conclusion that most doctors, at least in this sample, probably have it about right or may even be a little more skeptical of pharma than necessary. I'm also satisfied to know that, contrary to Mike Adams' caricaturing, my colleagues are not all mindless pharma drones — not by any stretch of the imagination.

That's OK, though. It leaves more filthy pharma lucre for me.*

*In case it wasn't painfully obvious, that was a joke, people. Besides, my price would have been too high. I would have wanted a guarantee of a new iPhone 5 delivered to me at 9 AM on launch day. As it is, I actually had to wait until two days ago to have it delivered. I mean, what's a pharma conspiracy if it can't get its minions the latest tech toys right on launch date? Am I right, people? In fact, setting the damned thing up is the reason why there was no post yesterday; instead of setting it up from a backup of the old phone I decided to set it up as a new phone, and that took time. Even a pharma drone has his priorities.

Great study.

I ran across a wonderful claim in, of all things, the comments on a raw milk article.

"Vioxx killed 500,000 people and raw milk hasn't killed anyone!".

Well...
The second claim is quite likely, since it's the bacterial contamination in raw milk that has killed people for centuries - not the actual raw milk.

The first claim was...fascinating. Half a million people died as victims of a Big Pharma Holocaust? Where? In which countries? Vioxx was on the market from 1999 to 2004; that's 100k victims annually!

That's also a terrible business model. You want the people taking your drugs to stay alive so they can purchase more. The negative publicity is sure to impact sales as well.

I don't know if anyone else has run across this claim before, or if anyone knows the origin.

“Vioxx killed 500,000 people and raw milk hasn’t killed anyone!”.

Well…
The second claim is quite likely, since it’s the bacterial contamination in raw milk that has killed people for centuries – not the actual raw milk.

Actually, milk is a known allergen, and given its prevalence I'm sure a very small (but not zero) number of people are actually killed by anaphylaxis triggered by milk every year.

Just wanted to mention that although Bad Pharma hasn't been released in the U.S. yet, anyone can go on Amazon UK and order it.

By Marilyn Mann (not verified) on 27 Sep 2012 #permalink

Well, thinking things through has not, as far as I have seen, been a skill cultivated by those who espouse that sort of alt-med canard.

By Composer99 (not verified) on 27 Sep 2012 #permalink

Thanks Orac for the great post. There is a false dichotomy out there that implies that any doctor critical of unproven medicine must be in big pharma's pocket. I have been accused of this. Funny, I am also criticized for refusing to write for brand-name medicines (sometimes by the same patients).

Goldacre addresses the FDAAA compliance issue in his book, I believe, and also discusses it in a TEDMED talk on YouTube. It's not good. As I recall, compliance is only about 20%.

I don't know what you did wrong, but I got my iPhone 5 delivered to my door at 3 PM on launch day...

By Mephistopheles… (not verified) on 28 Sep 2012 #permalink