Factoid of the Day

From today's WSJ:

Studies of psychiatric drugs by researchers with a financial conflict of interest -- receiving speaking fees, owning stock, or being employed by the manufacturer -- are nearly five times as likely to find benefits in taking the drugs as studies by researchers who don't receive money from the industry, according to a review of 162 studies published last year in the American Journal of Psychiatry. Studies that the industry funded, but in which the researchers had no other financial ties, didn't have significantly different results from nonindustry-funded studies.

Score another one for unconscious biases...


The problem with any correlation is the assumption that the most obvious explanation for it is the correct one. Here the most obvious explanation is that those with a conflict of interest are fudging their studies, maybe only unconsciously, but somehow fudging them nevertheless. A lot is missing here before we can draw that conclusion. How many of these studies had results that later turned out to be in error? What would it mean if those with financial incentives actually had the more reproducible results? That they cared more about being careful? How might the study designs of the two categories of studies differ?

I think this is insufficient data to score one for anything. That's something newspapers do, announcing the next breakthrough on the basis of one study, and they're wrong to do it. It's not drug company perks that make me say this; I receive none. It's the memory of when, years ago, newspapers proclaimed that coffee drinking was a risk factor for heart attacks. Eventually it turned out that coffee drinkers smoked more, at least they did then, and the increased risk among coffee drinkers was completely explained by smoking. That example taught me once and for all to be careful with correlations. Tabloid journalists jump to conclusions. Better journalists should know better.

I agree with what you're saying, David, but I would also argue that enough correlations give you reason to suspect that, regardless of causation, if you're a drug company, it's in your best interests to fund as many studies as possible, for reasons more self-interested than just scientific benefit. And time and again, we've seen such instances in various journals across various disciplines. It reminds me of those people who refuse to accept poverty as a legitimate cause of illness. "Sure," they say, "the studies all point to that, but there are more variables to consider, so we shouldn't list poverty as a cause yet." The same logic can be applied ad nauseam until the end of time, regardless of how much evidence accumulates. And, of course, I'm not sure how you could get drug companies to go along with an experiment to test the hypothesis anyway.

It's true that when I used the word "correlation" before, in the back of my mind was the way tobacco companies used to say the link between smoking and lung cancer was "only" a correlation, as if some other factor could cause that much cancer without it being smoking. I'm just saying that a good scientist or a good journalist needs to consider other possibilities.

One thing I didn't mention before was that a red flag went up for me when I read that financial incentives increased the chance of a positive study fivefold. Five times?! Not 20%, which is what I might think unconscious fudging could do, but 400%? This is either outright fraud or someone is comparing apples and oranges. Now, that's my intuition. I hope it's experienced intuition, but even if it's raw intuition, a scientist needs such prodding to look at the quality of any study. What were the designs of these studies? There is a known effect where drug studies based on community use don't do as well as more formal trials where someone stays on the subjects to make sure they take their meds, even if it's a placebo. I can't imagine that could account for a fivefold difference, but it has to be something.

And this is an issue where long-term experience will show which studies were right. Who will look at it again in five years to see? The drug companies may not want to resurrect the issue, even if experience exonerates them. The authors of the review might have moved on.

I realize also that it's something of a moot point. I'm sure the drug companies will adapt to whatever the perception is. If they can make more money by controlling their studies, as a matter of quality control, to get them done faster, and to limit spurious findings, they'll do that. If they have to do that through "clean" investigators, they'll do that. If they have to give up control of evaluation to the government or academics, they'll do that.

I'm more interested in the general issue of reading something in the media where I have questions about the study and no one raises them. I don't think it's good to have a reporter gushing over what a good result this is instead of asking the questions that a good scientist knows to ask.