I've written before, both here and in print, about how FDA policy and drug-company practices have allowed drug makers to publish (and the FDA to base approvals on) only the most flattering drug-trial results while keeping less-flattering studies in the drawer. Today a New England Journal of Medicine report shows what happens when you include the results from the drawer: the effectiveness of many SSRIs dives to near placebo level. This despite the fact that the companies design and conduct most of these trials in a way calculated to produce positive results.
When I wrote on this for Scientific American Mind a couple of years ago, UCSF professor and Journal of the American Medical Association editor Drummond Rennie told me, "If a company does ten trials on a drug and two show it helps but eight show it works no better than Rice Krispies, I'm not exactly getting a scientific view if they publish only the two positive studies.... How can we practice sophisticated medicine if the drug companies are hiding their results? That's not science. That's marketing."
The problem remains. Depressing. I recommend a good run.
Benedict Carey has a good story on it at the Times. And here's the meat of the abstract from the NEJM:
Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.
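To see in rough terms why the hidden trials matter, here's a minimal sketch (in Python, with made-up effect sizes, not the NEJM data or its meta-analytic method) of how pooling only the published, positive trials inflates a drug's apparent effect size:

```python
# Illustrative sketch only: hypothetical standardized effect sizes
# (drug vs. placebo) for ten trials of one drug, not the NEJM data.
all_trials = [0.45, 0.40, 0.38, 0.05, 0.02, 0.10, -0.03, 0.08, 0.01, 0.04]

# Suppose only trials clearing a "positive" threshold ever get published.
published = [d for d in all_trials if d >= 0.30]

mean_all = sum(all_trials) / len(all_trials)
mean_published = sum(published) / len(published)

print(f"Mean effect size, all {len(all_trials)} trials:  {mean_all:.2f}")
print(f"Mean effect size, {len(published)} published trials: {mean_published:.2f}")
print(f"Apparent inflation: {(mean_published / mean_all - 1) * 100:.0f}%")
```

With these invented numbers the distortion is exaggerated for effect; the NEJM analysis found a more modest but still substantial 32% inflation overall, ranging from 11 to 69% for individual drugs.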
There is another depressing story (this study sought to examine how accurately the published literature conveys data on drug efficacy to the medical community):
A study of Food and Drug Administration (FDA)-registered clinical trials of 12 antidepressants found a bias toward publication of positive results. Almost all studies viewed by the FDA as positive were published. The clinical trials that the FDA deemed negative or questionable were largely not published or, in some cases, were published as positive outcomes.
For each of the 12 drugs, at least 1 study was not published or was reported in the literature as positive despite a conflicting judgment by the FDA.
The overall effect size of the antidepressants (vs placebo) that was reported in the published literature was nearly one-third larger than the effect size for these agents that was derived from FDA data.
"Selective reporting of clinical-trial results may have adverse consequences for researchers, study participants, healthcare professionals, and patients," they conclude.
These findings are published in the January 17 issue of the New England Journal of Medicine.
Evidence-Based or Biased Evidence?
"You might get the impression from the published literature that [these drugs] are consistently effective; however, the outcome of this study is that they are effective, but inconsistently so," lead study author, Eric H. Turner, MD, from Oregon Health and Science University, in Portland, Oregon, told Medscape Psychiatry.
"Evidence-based medicine is valuable to the extent that the evidence is complete and unbiased," he noted, adding that selective publication of clinical trials can alter the apparent risk/benefit ratio of drugs, which can affect prescribing decisions.