When human subjects protection stifles innovation, part II

Back in late December, I came across an op-ed piece in the New York Times written by Dr. Atul Gawande, general and endocrine surgeon and author of Complications: A Surgeon's Notes on an Imperfect Science and Better: A Surgeon's Notes on Performance, that struck me as a travesty of what our system for protecting human subjects should be, as it did fellow ScienceBlogger Revere.

In brief, the article described an action by the U.S. Department of Health and Human Services' Office for Human Research Protections (OHRP) that, on its surface, appeared to be a case of bureaucracy hewing to the letter of the law while totally ignoring its spirit. The case involved a quality improvement program implemented by Johns Hopkins University to reduce the rate of catheter infections in intensive care units throughout Michigan. The incredibly subversive and dangerous measure (yes, I'm being sarcastic) was to formalize the use of a checklist before central venous catheters were inserted that, among other incredibly reckless measures, required the physician inserting the catheter to wash his hands, don a sterile gown, carefully prep the patient's skin with standard antiseptics such as iodine or chlorhexidine, and drape the area with sterile drapes. The result, reported in the New England Journal of Medicine, was a massive and nearly immediate reduction in the rate of catheter-associated sepsis, from 2.7 infections per 1,000 catheter-days to zero (that's right, zero), where it remained for 18 months. Given that approximately 80,000 catheter-associated infections occur in U.S. ICUs each year, resulting in as many as 28,000 deaths and costing at least $2 billion, you'd think that studying how changing the systems by which large organizations (hospitals) work, so that best practices are rigorously followed, can improve outcomes would be just the sort of thing the NIH would want to encourage, especially since results are rarely that dramatic in medicine.
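For readers unfamiliar with the units, here is a minimal sketch in Python of how a rate per 1,000 catheter-days is calculated (the counts below are hypothetical numbers of my own for illustration, not the study's actual data); one catheter-day is one patient with a central line in place for one day:

# Hypothetical illustration only; these are not the study's actual counts.
def infections_per_1000_catheter_days(infections, catheter_days):
    """Catheter-associated infections per 1,000 catheter-days."""
    return infections / catheter_days * 1000

# An ICU averaging 20 central lines in place per day accumulates about
# 20 * 365 = 7,300 catheter-days per year.
catheter_days = 20 * 365
rate = infections_per_1000_catheter_days(20, catheter_days)
print(round(rate, 1))  # 2.7, the pre-checklist baseline rate

As a back-of-the-envelope check, the roughly 80,000 infections per year cited above, at a rate of 2.7 per 1,000 catheter-days, would correspond to something on the order of 30 million catheter-days nationwide.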

Not in this case.

What actually happened is that the OHRP received a complaint about this research, launched an investigation, and shut it down. At the time the story was reported, Revere considered this an example of the risk-averseness and "corporate legal" mentality inherent in a government bureaucracy, while I considered it an example of how bureaucracies tend to evolve over time toward interpreting the law, and the regulations derived from it, in the most rigid way possible. The key issue appeared to be the definition of what constitutes "human subjects research." Indeed, the OHRP originally ruled that, because this intervention constituted "research," it required full institutional review board (IRB) approval and informed consent from every patient involved in the study. This contradicted the IRB at Johns Hopkins, which had ruled that the study was exempt from IRB oversight. The first problem is that, on a strict reading, the OHRP was correct. The second problem is that the OHRP was correct only because the investigators had bothered to study the results of their quality improvement (QI) intervention. In essence, the ludicrous implication of this ruling was that it's acceptable to implement well-accepted QI interventions such as the checklist the Hopkins researchers used, but that it becomes human subjects research if the hospital bothers to try to figure out whether the QI intervention did what it was anticipated to do. To boil it down: make systemic changes that are likely to improve patient care, but if you try to figure out whether they actually do, you will be subject to the full weight of the government's regulations and protections for human research subjects. Or, as Ben Goldacre put it:

You can do something as part of a treatment program, entirely on a whim, and nobody will interfere, as long as it's not potty (and even then you'll probably be alright). But the moment you do the exact same thing as part of a research program, trying to see if it actually works or not, adding to the sum total of human knowledge, and helping to save the lives of people you'll never meet, suddenly a whole bunch of people want to stick their beaks in.

Ben is a little more sarcastic about it than I am, as I understand from professional experience the reasons for the rules and the importance of oversight to protect human subjects. There really do need to be a "whole bunch of people" sticking their beaks in. It is the implementation of those rules in recent years that I have a problem with, as well as the confusing and often contradictory way in which "human research" is defined for regulatory purposes.

Last Thursday, the New England Journal of Medicine weighed in on the controversy with two commentaries, Quality Improvement Research and Informed Consent by Frank G. Miller and Ezekiel J. Emanuel, and Harming Through Protection by Mary Ann Baily (Hat tip: Revere). What these two articles make clear is that our regulations and rules for human subjects protection are screwed up and in dire need of an overhaul. The reason is that, for quality improvement and other research with zero or minimal risk to research subjects, the regulations can be incredibly onerous. As Dr. Baily puts it:

The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.

In my experience, part of the problem is the risk-averseness of IRBs, which, even when they concede that a proposal qualifies for an exemption from IRB review under federal guidelines, are too cautious to rule that way for fear of sanctions if the ruling turns out to be wrong. Add to that "mission creep," in which IRBs insert themselves into research that was never intended to be considered "human subjects research," to the point of stifling such research, a problem I've complained about before. Indeed, in my institution this mission creep is not confined to the IRB; it extends to the scientific review board (SRB), whose purpose is to screen human subjects research protocols for scientific merit, not human subjects protection, and to make sure the institution has the resources to perform the research. Even so, our SRBs have a distressing tendency to start picking at the human subjects protections in protocols, something that is not their job. Uncertainty about federal regulations and how the OHRP will interpret them likely contributes to this mission creep, and this uncertainty is well described by Dr. Baily (emphasis mine):

The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn't find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.

After the report by Pronovost et al. was published, the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP's disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.

The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can't easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term "human-subjects research" and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.

Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.

If you want to get an idea of how complex it can be to determine whether research is considered "human subjects research" under federal law, all you have to do is head to this page and peruse the decision charts on, for example, whether a study is human subjects research or under what conditions the requirement for informed consent can be waived. It's no wonder that a conservative interpretation of the regulations led the OHRP to rule that this was indeed human subjects research. The problem is not entirely the OHRP; it's also the rules. Although a case could be made that the research was exempt from IRB review, under a strict interpretation of the rules that case would be weak, and therein lies the problem. Moreover, not all cases of QI research are as clear-cut as this one with regard to minimal risk.
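To give a sense of the branching involved, here is a deliberately oversimplified sketch in Python of the threshold question those charts start from. This is my own toy distillation of the definitions in 45 CFR 46.102, not the actual OHRP decision charts, which run through many more branches and exemption categories:

# Toy distillation of the 45 CFR 46.102 definitions, not the real charts.
def is_human_subjects_research(systematic_investigation,
                               aims_at_generalizable_knowledge,
                               living_individuals,
                               intervention_or_interaction,
                               identifiable_private_info):
    # "Research": a systematic investigation designed to develop or
    # contribute to generalizable knowledge.
    research = systematic_investigation and aims_at_generalizable_knowledge
    # "Human subject": a living individual about whom the investigator
    # obtains data through intervention or interaction, or obtains
    # identifiable private information.
    human_subjects = living_individuals and (intervention_or_interaction
                                             or identifiable_private_info)
    return research and human_subjects

Even in this cartoon version, the trap the Hopkins project fell into is visible: measuring and publishing the aggregated infection data arguably flipped aims_at_generalizable_knowledge to True, even though the bedside intervention itself never changed.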

Drs. Miller and Emanuel emphasize a different aspect of the case in their commentary. The fundamental question is, in reality, where systems research (studying how changing a system can improve the quality of care) ends and human subjects research (which is designed to produce generalizable knowledge that can be used to improve care) begins. They state the ethical conundrum thusly:

Such [quality improvement] research, however, poses an apparent ethical conundrum: it is often impossible to obtain informed consent from patients enrolled in quality-improvement research programs because interventions must be routinely adopted for entire hospitals or hospital units. When, for instance, research on a quality-improvement initiative that affects routine care is conducted in an intensive care unit (ICU), surgical suite, or emergency room, individual patients have no opportunity to decide whether or not to participate. Can it be ethical to conduct such research without informed consent?

They argue, and quite correctly in my opinion:

To judge whether quality-improvement research can be ethical without informed consent, it is necessary to examine particular studies in light of the ethical purposes of informed consent. Informed consent is meant to protect people from exposure to research risks that they have not agreed to accept, as well as to respect their autonomy. None of the quality-improvement interventions in this case were experimental. They were all safe, evidence-based, standard (though not always implemented) procedures. Consequently, patients were not being exposed to additional risks beyond those involved in standard clinical care. Using a protocol to ensure implementation of these interventions could not have increased the risks of hospital-acquired infection. Moreover, the participating hospitals could have introduced this quality-improvement protocol without research, in which case the general consent to treatment by the patients or their families would have covered these interventions. The only component of the project that constituted pure research -- the systematic measurement of the rate of catheter-related infections -- did not carry any risks to the subjects. Thus, the research posed no risks.

Although informed consent for research participation was not, and could not have been, obtained, the absence of such consent did not amount to any meaningful infringement of patients' autonomy. Consequently, there could be no reasonable or ethical grounds for any patient to object to being included in the study without his or her consent.

I agree that the Hopkins research was clearly about as close to zero risk as human research can get. In fact, I'd argue that it carried "negative risk," in that it's almost impossible to conceive of how a patient could be harmed by requiring, through a checklist, that well-established infection control methods be followed. Moreover, as both commentaries point out, there is already a mechanism in place by which the research performed by the Hopkins team in Michigan hospitals could have been approved by its IRB without the requirement of informed consent from each patient whose data was included in the study. It's a process known as "expedited review," which can be applied to minimal-risk research, and the Hopkins study clearly met its criteria of "collection of data through noninvasive procedures (not including anesthesia or sedation) routinely employed in clinical practice" and "research including materials (data, documents, records, or specimens) that have been collected or will be collected solely for nonresearch purposes (such as medical treatment or diagnosis)." Unfortunately, many IRB chairs are so risk-averse and so unsure of how the OHRP will interpret the rules that they take the safest path: requiring full IRB review and approval. Moreover, even meeting the criteria for expedited review will not necessarily absolve investigators of the requirement to obtain informed consent; that is a separate question. And people wonder why fewer physicians remain interested in doing clinical research.

This story does have somewhat of a happy ending, although only because the OHRP caved to the negative publicity the story generated, issuing a rather bizarre retraction. The introduction to Miller and Emanuel's article points out that the OHRP has ruled that the Hopkins QI project can be started up again:

The Office for Human Research Protections (OHRP) -- part of the U.S. Department of Health and Human Services -- has concluded that Michigan hospitals can continue implementing a checklist to reduce the rate of catheter-related infections in intensive care unit settings (ICUs) without falling under regulations governing human subjects research. Dr. Kristina C. Borror, director of the office's Division of Compliance Oversight, sent separate letters to the lead architects of the study, Johns Hopkins University and the Michigan Health & Hospital Association, outlining findings and offering researchers additional guidance for future work.

[...]

OHRP noted that the Johns Hopkins project has evolved to the point where the intervention, including the checklist, is now being used at certain Michigan hospitals solely for clinical purposes, not medical research or experimentation. Consequently, the regulations that govern human subjects research no longer apply and neither Johns Hopkins nor the Michigan hospitals need the approval of an institutional review board (IRB) to conduct the current phase of the project.

In other words, as Ben Goldacre put it, "now - since it turns out the research bit is over, and the hospitals are just putting the ticklist into practise - they may tick away unhindered." I couldn't have put it better. It just doesn't get any more Through the Looking Glass than that.

Having thought about this case for a while, I would not be quite as hard on the OHRP as Revere is, although I do believe that what seemed to have affected the OHRP is a hidebound bureaucratic mindset whose hypercautiousness didn't even allow the possibility of suggesting to the researchers that there was a mechanism under federal regulations for the research to continue without having to submit it to full IRB review and requiring informed consent from every patient whose infection data was tracked for the study. I also have to wonder who complained about the study to the OHRP. Now there's someone who really needs a lesson in common sense and critical thinking.

When this story first broke, there was some discussion that informed consent was also needed from the physicians who were being told to implement the checklist.

The rationale was something like the concern that patient infection rates might be compared for different MDs, and correlated to the MDs' compliance with the checklist. At least in theory, MDs with higher than average patient infection rates could incur sanctions or increased malpractice insurance premiums, putting them at "more than minimal risk." Thus, the argument went, this constituted human research on the physicians that required their informed consent.

I wonder what ever happened to that aspect of the dispute (or if it was never really one of OHRP's concerns at all)?

Based on my experience of human lab classes for science and medical students, and whether such classes need specific human IRB, or equivalent, approval, I could believe the "whistle-blower" was either a box-ticking member of the IRB peeved that they had been outvoted on deeming the thing "no detailed review reqd", or someone lower down the food-chain, e.g. "departmental human research consent adviser and IRB conduit person".

The Germans have a word, "Beamte", or functionary, for such box-ticking types, and they don't just turn up in strictly functionary (administrative) roles. A common theme, though, is to take umbrage when they feel that people have bypassed any part of the box-ticking chain over which they preside, and from which they derive some part of their influence or position.

By UK skeptic (not verified) on 25 Feb 2008 #permalink

Having thought about this case for a while, I would not be quite as hard on the OHRP as Revere is, although I do believe that what seemed to have affected the OHRP is a hidebound bureaucratic mindset whose hypercautiousness didn't even allow the possibility of suggesting to the researchers that there was a mechanism under federal regulations for the research to continue without having to submit it to full IRB review and requiring informed consent from every patient whose infection data was tracked for the study.

Phew..... 87 words. Not your longest sentence ever but what a doozy!

I get this weird image of someone asking a patient in the ER: "Do you want the doctor to wash his hands?"

You missed the next sentence in the OHRP ruling. They pretty much admitted they were wrong:

In addition, the letters offer new guidance for future quality improvement research that poses minimal risk to human subjects, such as the Johns Hopkins study. Dr. Borror wrote that such research would likely have been eligible for both expedited IRB review and a waiver of the informed consent requirement.

You know you are in the presence of dysfunctional regulations when people can't easily tell what they are supposed to do.

That quote from Dr. Baily's commentary in NEJM provoked a rueful laugh. I defy anyone to look at a federal regulation involved in the practice of medicine in the U.S. and "easily tell what [he/she is] supposed to do." (OK, I exaggerate - but not a lot.)

Regarding reasons why bureaucracies like OHRP come out with decisions like this -

I suppose one can invoke concepts like "mission creep," but that may be more sophisticated than actually required to explain the problem.

Thinking of local analogous examples, such as kids being suspended from schools for possessing "drugs" (aspirin), or for having a "weapon" (nail clipper, used for its intended purpose), I think the explanation can be boiled down to a word: Idiots. Every time you think you've constructed a regulation so painstakingly (part of the reason for abominably overcomplicated regulations) that implementation is foolproof, someone comes up with a better fool to put in charge of implementation.

In this case the "better fools" are people of surpassing academic accomplishment who are afflicted with terrible cases of can't-see-the-forest-for-the-trees-itis. These are the folks who are so bound up parsing subclauses that any difference between a kid with a headache taking aspirin and someone dealing crack on school grounds seems a minor point. Thus the reason to have HRP regulations in the first place - to protect people's health (and dignity) - retreats into obscurity in comparison to whether checking the success of measures that are plainly health-protecting might be considered "research" under the definition in sub-sub-paragraph (a)(1)(C)(iii).

We apparently have no word in either English or American to describe those bureaucratic monsters who put regulation before all else. I wish we did. We could then at least know who we were talking about. School administrations are full of these people as are help desks, government agencies and corporate hierarchies. These people literally can regulate someone to death without qualms knowing that "they followed the regulations." It's as if there is a certain class of humans who worship only "the word" and never "the spirit" or "the intent" and who are incapable of determining either.

I'll second what both Jud and Oldfart above said, and add another facet: institutional forgetfulness. When new protocols are developed, there's always some underlying rationale, some reason for implementing the thing in the first place.

But over time, the people who put the thing together originally will move up or on, and the institutional memory is lost. The newer people who come along and end up in charge of overseeing the protocol have no connection to the original thinking that preceded the thing; they only know that it exists as a rule that has to be followed, and probably don't have the imagination to ask "what was the original point of this thing?" or even look it up, and so you get the kind of folks Oldfart talks about, who worship only the word and never the spirit or the intent.

Or as Jud says, idiots.

By North of 49 (not verified) on 26 Feb 2008 #permalink

I also have to wonder who complained about the study to the OHRP. Now there's someone who really needs a lesson in common sense and critical thinking.

You never know (so I'll play devil's advocate): it might have been someone who sees that the rules are a hideous mess that gets in the way of sensible research, and who provoked the idiots in charge into exposing the mess, so as to kick up the fuss needed to steer the process back towards somewhere at least vaguely within sight of sanity. Precisely because this was such a cut-and-dried case of no risk of harm to patients (e.g. there was no control group of hospitals being held back from implementing the checklist, so as to be able to compare statistics), it was the perfect case to force the petty bureaucrats to rule on. If they let it pass, precedent; if (as happened) they don't, outcry.

This is a tactic sporadically used by magistrates in Britain: when the law is an ass, they'll rule as sensibly as they can get away with until they get a case where the law's flaws are so blatant that, by ruling to its letter, they can provoke the outcry that'll make parliament change the asinine law, or enable higher courts to overturn it on appeal. It doesn't happen every day, but it's been used.

We apparently have no word in either English or American to describe those bureaucratic monsters who put regulation before all else.

Petty functionary or tin-pot bureaucrat. Not a single word in either case, but they're standard expressions. Still, as Jud said, idiot'll do fine.

(Why, oh why, do I need to enable JavaSpit to make this form work?)