Are we playing it too safe in cancer research? (Oops, Orac missed one)

This is just a brief followup to my post this morning about yesterday's NYT article on cancer research. An excellent discussion of the NYT article can be found here (and is well worth reading in its entirety). In it, Jim Hu did something I should have done, namely check the CRISP database in addition to PubMed. A couple of key points follow about the examples cited in the NYT article.

Regarding Dennis Slamon:

I hate to criticize Dennis Slamon, because the HER2 to Herceptin story is a great one. But the image one gets of his research program being saved by a friend from Revlon while the NCI ignored him isn't consistent with what you get when you search for his grants in the CRISP database. Slamon got an NCI grant in 1984 to work on "oncogenes in physiologic and pathologic states". Two NCI grants are cited in the 1987 Science paper showing HER-2 amplification in breast cancer (one was probably for the collaborator's lab), and he's been pretty continuously funded by NCI since then. So I'd love to know what this story applies to.

Me too. Regarding Eileen Jaffe:

Eileen Jaffe has studied the enzymology of porphobilinogen synthase under a 20-year, multiply renewed grant from the National Institute of Environmental Health Sciences. Recently, she's been working on an idea called morpheeins, which she's patented as the basis for drug discovery. I have no idea what was in the grant, but what I see doesn't scream "missed opportunity to cure cancer" at me.

Which was my thought, too, looking at her publication record. Finally, regarding Louise R. Howe's studies on signaling and cancer:

The plan, said the investigator, Louise R. Howe, an associate research professor at Weill Cornell Medical College, is first to confirm her hypothesis about the pathway in breast cancer cells. But even if it is correct, the much harder research would lie ahead because no drugs exist to block the pathway, and even if they did, there are no assurances that they would be safe.

I have no idea what Kolata has against Dr. Howe's project. The same could have been said about HER2 in 1987.

Or about any number of oncogenes and targeted therapies. Yikes! The same could be said about what I'm working on. Oh, no, that must mean I'm not sufficiently innovative for Kolata's taste...

Hi Orac. Guess it betrays my (our?) age, but the first thing I thought of when I saw CRISP was my first time registering for classes at U of M. Long lines and punch cards with the warning not to "bend, fold, spindle or mutilate".

Ah the memories... :)

So what is this CRISP database?

CRISP is a database of NIH awards.

http://crisp.cit.nih.gov/

You can learn some information about a grant by knowing how the grant numbers are coded. CRISP doesn't have the dollar amounts; some of that information can be gleaned from

http://report.nih.gov/award/State/state.cfm

The state-by-state data only go back to 1992, however, while CRISP goes back much further. In both cases, the data will not show things like funding via subcontracts, where one PI is named but a chunk of the funding goes to a subaward.
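Since the comment notes that the grant numbers themselves are coded, here is a minimal sketch (mine, not part of the original discussion) of how the standard pieces of a full NIH award number break down. The example ID is hypothetical, and the pattern is a simplification; real activity and institute codes come in more shapes than this regex admits.

```python
import re

# Rough field layout of a full NIH grant number such as "5R01CA036827-25":
# application type, activity code, administering institute/center, serial
# number, and support year.
GRANT_RE = re.compile(
    r"(?P<app_type>\d)?"          # application type, e.g. 5 = non-competing continuation
    r"(?P<activity>[A-Z]\d{2})"   # activity code, e.g. R01 = research project grant
    r"(?P<institute>[A-Z]{2})"    # institute/center code, e.g. CA = NCI
    r"(?P<serial>\d{6})"          # serial number
    r"(?:-(?P<year>\d{2}))?"      # support year, e.g. -25 = 25th year of funding
)

def decode_grant_number(grant_id: str) -> dict:
    """Split an NIH grant number into its named parts (simplified)."""
    match = GRANT_RE.fullmatch(grant_id.replace(" ", ""))
    if match is None:
        raise ValueError(f"Unrecognized grant number format: {grant_id}")
    return match.groupdict()

if __name__ == "__main__":
    # Hypothetical example ID, for illustration only.
    print(decode_grant_number("5R01CA036827-25"))
```

Run on the hypothetical ID above, it reports a type 5 (non-competing continuation) R01 administered by NCI (CA), serial number 036827, in its 25th year of support, which is the kind of information you can read off a CRISP listing even without dollar amounts.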

There is a very interesting recent article in PLoS ONE which suggests that as research topics become more popular, the individual results become less reliable.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005…

I think this suggests that there is an optimum size of research efforts in a field, and that expansion beyond that size has adverse effects on research effectiveness.

In engineering, one can accelerate the schedule of a large engineering project by spending more money on it. The rule of thumb is that the schedule can be accelerated by 50% (that is, it takes half the time) by spending 10 times more on it (but this heuristic can only be invoked once). This is for engineering projects, where known principles are applied to known problems to produce (presumably) known outcomes. When there are significant unknowns, as are inherent in medical research (such as whether the fundamental problem is even capable of a solution), the cost to accelerate progress must be more than 10x. Until a program with an unknown outcome is successful, its outcome remains unknown.
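As a toy illustration of that rule of thumb (the numbers below are hypothetical, not the commenter's), applying it once to a notional $1 billion, ten-year effort yields roughly a $10 billion, five-year one:

```python
# Toy illustration of the 10x-for-half-the-time rule of thumb described above:
# spending ten times more buys roughly a 50% schedule cut, and the trick can
# only be invoked once.
def accelerate_once(baseline_cost: float, baseline_years: float,
                    cost_multiplier: float = 10.0, time_factor: float = 0.5):
    """Apply the heuristic a single time; returns (new cost, new duration)."""
    return baseline_cost * cost_multiplier, baseline_years * time_factor

if __name__ == "__main__":
    # Hypothetical baseline: a $1 billion project planned over 10 years.
    cost, years = accelerate_once(baseline_cost=1.0e9, baseline_years=10.0)
    print(f"Accelerated plan: ${cost:,.0f} over {years:.0f} years")
    # -> Accelerated plan: $10,000,000,000 over 5 years
```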

If the goal is to accelerate the transition from basic science to treatments, the engineering heuristic would seem to indicate that if you spend 10x more on transition stuff, you will cut the time in half. Where does that 10x more money come from?

Which basic science projects do you cancel (nearly all of which will produce incremental progress) to accelerate progress on what commercialization effort (some of which will completely fail)?

One of the least likeable things about many US citizens is their ability to absolutely ignore the rest of the world. Everything is compared in isolation.

There is a simple test of whether high-risk, high-reward medical projects get enough support in the US, and that is to compare the advances made in Europe, Japan, and elsewhere.

If breakthroughs are generally made somewhere else, then the US system is probably broken. If the majority of discoveries are made in the US, then it would seem, prima facie, that US research is on a reasonable path.

Compare the "War on Cancer" with two boondoggles mentioned in the previous article: the Star Wars money-sink and the "War on Drugs" fiasco (mentioned by a commenter). In both these two cases the US spends an incredible amount more than the rest of the world. Yet has pretty much zero to show for it. That suggests strongly that the money is being wasted.

Unsurprisingly, most of the rest of the world has noticed this and pointed out that the US policies on Star Wars and the War on Drugs are pointless (as have many US citizens, of course).

In contrast, everyone follows the US lead in medical research. The very top graduates of other countries -- the sort motivated by the quality of the research, not the money -- flock to work at US research institutions.

You'd need to come up with some hard evidence the US system was broken to overcome that comparison, in my opinion. Not some piffle based on speculation about what might be different.

Mark P -- nice argument, but no. You'd have to at least look at progress in biomedical research per dollar spent to have any idea whether money was being wasted.
Even that doesn't address a huge number of confounding factors that complicate analysis of different countries.

I totally agree we should look at other countries, but I don't buy that if we just obtain data suggesting we're doing something right then we don't need to think about improving the system any more.

Re: arguments that Dennis Slamon had grant funding to support his HER2 research. I googled "Dennis Slamon Revlon" and easily found an interview with him. He asserts that he wouldn't have been able to do what he did without the support of Revlon and some other foundations (emphasis mine):

DS: You go from the outside to the inside pretty quickly if you're fortunate enough to be involved in a success like Herceptin. So the fact that it worked so well in metastatic disease, and ultimately in early disease, means that a lot more people believe our ideas than when we started in 1986. The problem is, we needed the help back then, not now. The drug could've been and should've been available to patients seven years before it was, and if it weren't for donor money, it would've been another five to seven years beyond that. We're talking about not having Herceptin until maybe 2008 or 2010.

(http://www.standup2cancer.org/node/194?page=2)

And:

"If we hadn't had Revlon, the Entertainment Industry Foundation, Lilly Tartikoff and the Los Angeles community get behind us, I can assure you with a great deal of certainty that we'd never have been able to accomplish what we accomplished, and certainly not in the time we accomplished it. And that's what we want to do in Stand Up to Cancer."

(http://www.standup2cancer.org/node/194?page=5)

And:

Q: How much did the Revlon part of development help? Is part of the slowdown in some of these areas a lack of a new infusion of funding?

A: It's a huge part of it, and the Revlon funding, as I've said in the past, made all the difference in the world. Had we had to depend on federal funding to do this, we'd never have been able to get it done. The process by which grants are submitted, reviewed, approved and funded is incredibly long, and it reduces ideas to lowest common denominator approaches. Approaches that are innovative frequently don't get funded. They have to be vetted in a study section of a panel of twenty or twenty-five experts, all of whom have their own bias that they bring to the table, understandably.

(http://www.standup2cancer.org/node/194?page=3)

The entire interview is quite informative and supportive of the NY Times article.

I've been on federal grant review panels and can attest to the different opinions that all members bring, and how the composition of the panel plays a huge part in scoring. Two panels occasionally review the same application, often with very different results. We all have different areas of expertise that naturally influence how we analyze an application. I've also seen innovative ideas from newbies get shot down in favor of known researchers who've "done good work."

The federal grantmaking agencies need to fund a mixture of high-risk and low-risk ideas. Right now the balance is tipped way, way too far toward the low-risk ideas.

I've been on NIH study sections too, as well as study sections for private granting agencies, you know.

Actually, Slamon's interview is a bit schizophrenic. He goes on about how a "Dream Team" will have a track record, which he of course has now but did not have in 1986. So in my mind it's highly unlikely that he would have been part of any Stand Up To Cancer dream team back in 1986, had the organization existed back then. The interview also strikes me as a bit of apple polishing, if you know what I mean. SU2C is giving Slamon craploads of money, so he's not going to say that their approach won't work. Again, I hate to rag on Dennis Slamon. As I said in my previous post

http://scienceblogs.com/insolence/2009/06/are_we_playing_it_too_safe_in…

I admire him as a scientist, but I do think he's being a bit self-serving here.

No, I think the point stands. It's easy to talk about identifying "high risk, high impact" research, but it's very hard to do now, before the proposed approach demonstrates success. Hindsight is 20-20. We know Slamon's idea was good because it produced results. Back in 1986, no one knew with any certainty that Herceptin would be the success it was (which, let's not forget, only benefits around 20% of breast cancer patients, the ones whose tumors overexpress HER-2), and there were lots of other competing ideas that looked just as promising. It's all well and good to say that with more money Herceptin would have come to market several years earlier. It might even be true. But the flip side of backing speculative ideas is that you'll be wrong far more often than you're right. There's no real data to show which approach brings more impact to the problem of cancer; the notion that backing promising approaches without much data yet to support them will work better than what we do now is nothing more than an assumption.

Unfortunately, none of these advocates of "transformative" initiatives like SU2C have convinced me that they're any better at identifying projects that are more likely to lead to huge advances in cancer treatment than anyone else, including the NIH, is. They are good at making the rich get richer, though, by doling out money to established investigators in huge "Dream Team" projects. Young investigators, for all the protestations otherwise, don't have a track record and are unlikely to benefit. Time will tell if this approach yields any more "cures" than the current approach. I suspect it'll be a wash, but I'd love to be shown to have been wrong.

Ah the UM CRISP. The basement of Angell.

As for Slamon, he has a money pipeline; why would he slag it? Of course he would massively emphasize the importance of the Revlon money.

Kolata's piece was very shallow. Did she even include a line about cancer being not a single disease but many?

It's easy to talk about identifying "high risk, high impact" research, but it's very hard to do now, before the proposed approach demonstrates success. Hindsight is 20-20....

There's no real data to show which approach brings more impact to the problem of cancer; the notion that backing promising approaches without much data yet to support them will work better than what we do now is nothing more than an assumption.

Which is why both approaches need to be funded. We need to move away from focusing too much on low-risk proposals with buckets of preliminary data. Funding some high-risk projects is essential. Naturally, some will fail --- this is the price you pay for taking a chance. The assumption is that a few will pay off big.

Right now, NIH and many (most?) of the large private agencies fund pretty much only low-risk projects. The NSF is better but still tends to fall on the low-risk side. This creates an anti-risk culture that's toxic to serious innovation.

Lee Smolin wrote about the problem very eloquently in the last chapters of his book The Trouble with Physics. He was writing about theoretical physics, but the problem is a general one. He even made reference to this fact in the book when he described his attempt to publish a paper on the subject in the Chronicle of Higher Education. It was rejected --- not because he was too radical, but because the idea had been discussed ad nauseam in the humanities and social sciences (p. 345 of my edition).

Lack of risk-taking is a serious, widespread problem right now.

There's also a very recent paper on the subject in Medical Hypotheses called "Why are modern scientists so dull?"*

*Charlton, B. (2009) Med Hypotheses 72:237-243.

Yikes, the abovementioned Medical Hypotheses paper is lacking in substance...unless I missed the part where the author provides evidence for his assertions.

Which is why both approaches need to be funded.

So basically, we need a HARPA (health care version of DARPA)?