
When human subjects protection stifles innovation

The other day, I happened across an Op-Ed article in the New York Times that left me scratching my head at the seeming insanity of the incident it described. The article, written by Dr. Atul Gawande, author of Complications: A Surgeon’s Notes on an Imperfect Science and Better: A Surgeon’s Notes on Performance, described what seemed on the surface to be an unbelievable travesty:

In Bethesda, MD, in a squat building off a suburban parkway, sits a small federal agency called the Office for Human Research Protections. Its aim is to protect people. But lately you have to wonder. Consider this recent case.

A year ago, researchers at Johns Hopkins University published the results of a program that instituted in nearly every intensive care unit in Michigan a simple five-step checklist designed to prevent certain hospital infections. It reminds doctors to make sure, for example, that before putting large intravenous lines into patients, they actually wash their hands and don a sterile gown and gloves.

The results were stunning. Within three months, the rate of bloodstream infections from these I.V. lines fell by two-thirds. The average I.C.U. cut its infection rate from 4 percent to zero. Over 18 months, the program saved more than 1,500 lives and nearly $200 million.

Yet this past month, the Office for Human Research Protections shut the program down. The agency issued notice to the researchers and the Michigan Health and Hospital Association that, by introducing a checklist and tracking the results without written, informed consent from each patient and health-care provider, they had violated scientific ethics regulations. Johns Hopkins had to halt not only the program in Michigan but also its plans to extend it to hospitals in New Jersey and Rhode Island.

Having allowed myself, in my usual inimitable way, to be distracted from this article to blog about “natural cures” and some antivaccination lunacy, I found that fellow ScienceBloggers Revere and Mike the Mad Biologist had beaten me to the discussion, one doing a good job and one using the incident as nothing more than a convenient excuse to indulge his penchant for attacking the Bush Administration. Now don’t get me wrong. Over the last seven years, I’ve come to detest the Bush Administration, which combines hubris and incompetence into a toxic brew unlike any that I’ve seen since I started following politics some 30 years ago. No one but our current President could have altered my politics so quickly. However, it’s definitely going overboard to blame this incident on the Bush Administration. More than likely it would have happened no matter who was President. The reason is that, as Revere realizes, this is far more consistent with government bureaucracy run amok, with its attendant risk aversion combined with the natural tendency of bureaucracies to widen the scope of their mission and the areas they regulate. This tendency is largely independent of the executive or legislative branches of government and appears to be common to virtually all government agencies.

There is no doubt that the Office for Human Research Protections (OHRP) and the Institutional Review Boards (IRBs) that operate under its rules (or institutions very much like them) are absolutely essential to the protection of human subjects. In the decades following the horrific medical experiments performed by scientists in Nazi Germany and physicians in Japan on prisoners, as well as disturbing experiments performed in the United States itself, such as the Tuskegee syphilis experiments, it became clear that rules for the protection of human subjects needed to be codified into law and an office set up to enforce these protections. Thus was born in 1979 (unbelievably late, I know) a document entitled “Ethical Principles and Guidelines for the Protection of Human Subjects of Research” (otherwise known as the Belmont Report). Based on the Belmont Report, the Common Rule was codified in 1991 (less than 17 years ago!) and presently serves as the basis for all federal rules governing human subjects research. All federally-funded research must abide by the Common Rule, and many states have laws requiring that even human subjects research not funded by the federal or state government must also abide by the Common Rule, which regulates the makeup and function of the IRBs.

So far, so good. Unfortunately, like all government bureaucracies, the OHRP has had a tendency in recent years to insert itself into areas that it formerly left alone. Indeed, over a year ago, in response to an article in Inside Higher Ed, I wrote about this very problem, namely the recent tendency of IRBs and the OHRP to expand their purview in ways that are increasingly bizarre and arguably do not further their mission to ensure the protection of human subjects involved in clinical research. For example, IRBs have been requiring researchers to get approval for projects in which the chance of any sort of harm coming to the subjects is so vanishingly small that requiring IRB approval borders on the ludicrous. I’m talking about examples like these:

  1. A linguist seeking to study language development in a preliterate tribe was instructed by the IRB to have the subjects read and sign a consent form before the study could proceed.
  2. A political scientist who had bought a list of appropriate names for a survey of voting behavior was required by the IRB to get written informed consent from the subjects before mailing them the survey.
  3. A Caucasian PhD student, seeking to study career expectations in relation to ethnicity, was told by the IRB that African American PhD students could not be interviewed because it might be traumatic for them to be interviewed by the student.
  4. An experimental economist seeking to do a study of betting choices in college seniors was held up for many months while the IRB considered and reconsidered the risks inherent in the study.
  5. An IRB attempted to block publication of an English professor’s essay that drew on anecdotal information provided by students about their personal experiences with violence because the students, though not identified by name in the essay, might be distressed by reading the essay.
  6. A campus IRB attempted to deny an MA student her diploma because she did not obtain IRB approval for calling newspaper executives to ask for copies of printed material generally available to the public.

In light of examples such as the ones documented above in a report by the American Association of University Professors (whose link, sadly, appears to have expired), what happened at Johns Hopkins University over this infection control quality improvement (QI) initiative does not appear so beyond the pale or surprising. After all, there were actual patients involved here. As tempting as it is to label the behavior of the OHRP as idiotic, boneheaded, or incomprehensible, the cancellation of this research becomes somewhat more understandable if one keeps in mind the tendency of bureaucracies to interpret rules in the most conservative and expansive way possible, often with the input of lawyers who tell them to do everything possible (whether it makes sense or not) within the very letter of the law to minimize risk. Even so, I see this as a case where caution and increasingly hidebound rigidity overruled even the most basic common sense, or even science. Indeed, if this ruling stands, I pity my poor colleagues involved in trying to do outcomes or QI research, which often involves just this sort of thing: setting up guidelines based on what we already know to be best practices and then observing whether implementing these guidelines improves overall outcomes in the treatment of different diseases.

There are multiple reasons to conclude that the OHRP overreached in this case to a ridiculous extent, and much of the confusion comes from failing to understand (or accept) the difference between tinkering, innovation, and research, with these guidelines arguably better viewed as “tinkering.” In fact, it could be argued that these guidelines aren’t even tinkering. First, let’s look at the OHRP’s rationale for its action:

The government’s decision was bizarre and dangerous. But there was a certain blinkered logic to it, which went like this: A checklist is an alteration in medical care no less than an experimental drug is. Studying an experimental drug in people without federal monitoring and explicit written permission from each patient is unethical and illegal. Therefore it is no less unethical and illegal to do the same with a checklist. Indeed, a checklist may require even more stringent oversight, the administration ruled, because the data gathered in testing it could put not only the patients but also the doctors at risk — by exposing how poorly some of them follow basic infection-prevention procedures.

Yes, I do have to admit that there was a certain warped logic to it, the sort that could only seem compelling from within a protective bubble, safely isolated from the real world. Here’s the main problem. As MedInformaticsMD pointed out, nothing on this checklist was anything that couldn’t be found in any basic introductory surgical or medical text. Making sure that the physician washes his hands and gowns up before doing an invasive procedure such as inserting a central venous catheter? How radical! How dangerous! Come on, this is Infection Control 101, the remedial session for morons. It’s the sort of thing that was first described by Ignaz Semmelweis 150 years ago. There’s nothing “experimental” about the intervention, at least not with respect to patients. Rather, it’s just a test of whether altering the system to require physicians to do what they know they should do anyway would produce a measurable decrease in infectious complications. As a physician, I can understand the concern that such a study might provide ammunition for trial lawyers to go after physicians or hospitals that may not have been as vigilant as they should have been about infection control. I can also understand how some physicians might view such guidelines as intrusive or mindless. (I myself have said on occasion that once there’s a protocol, common sense gets thrown out the window.) However, neither of these is reason enough for the OHRP to stop such a study. Moreover, from the perspective of protecting patients, this was nothing more than a chart review, which is usually considered very low-risk research, especially when the patients’ identifying information is anonymized. As Dr. Roy Poses put it:

The decision to shut down this observational research project appeared to be extreme and based on, to be charitable, exceedingly narrow and nit-picking ground. The data collected did not appear to be sensitive; there was no question about protection of its confidentiality; the QI intervention could have been carried out without associated research and without patient informed consent (since the intervention affected physicians directly, not patients); and the study was apparently approved by local institutional review boards (IRBs).

Worst of all, the action of the OHRP, if it stands as a precedent, is likely to have a chilling effect on huge swaths of the discipline known as outcomes research. Here’s where Mike, despite his gratuitous and (in this case, at least) probably unjustified Bush-bashing, made one good point. Since the OHRP’s main objection seemed to be that investigators did not obtain informed consent from patients to use information from their medical records, this is a potential outcome of the decision:

Not only is this an awful decision as it relates to this particular program, and the potential to prevent 30,000 people from dying annually, but as construed, almost any public health intervention to reduce contamination that is not disclosed to patients will be shut down. What happens if a patient decides to object to this hospital-wide study? There are a lot of patients out there, and some of them are fucking morons. Is a data collection pilot program subject to this (beyond the usual HIPAA and IRB concerns)?

Actually, what would happen is that that patient’s data could not be included in the study, nor could that of any patient who refused to sign informed consent. While this wouldn’t shut down the study for an entire hospital, it would have the potential to introduce unknown biases that would be difficult, if not impossible, to control for. (If, for example, sicker patients were systematically more likely to refuse, the measured infection rates could be skewed in ways that would be hard to adjust for.) At the very minimum, it would increase the cost and difficulty of such studies, meaning that fewer studies of this type would be done. Moreover, as pointed out by Jim Sabin, there are ways of ethically doing such quality improvement or outcomes research in a manner that does not require obtaining informed consent from each patient, nurse, and doctor, as well as criteria for differentiating systems interventions from research. The OHRP action is even more bizarre when you consider that JCAHO requires surveillance and tracking of nosocomial infections as the basis for implementing evidence-based infection control programs in hospitals. In other words, inadequate surveillance for nosocomial infections will result in loss of JCAHO accreditation. It would not be unreasonable for a hospital to justify such a program ethically as part of its ongoing efforts to decrease the rate of line infections and to evaluate whether its interventions are working.

Although I do not discount the possibility that there may have been political interference or that, as Maggie Mahar speculated, someone “was worried that the checklist program would draw too much attention to just how prone to error our healthcare system is,” barring further evidence, my take on this issue is that it’s far more likely to be just another incident consistent with the institutional tendency of OHRP and IRBs to expand their reach and interpret rules ever more conservatively, to the point where ridiculous decisions like this come about. A less charitable way of putting it would be, “Never blame malice when incompetence will explain an action.”

I would be the last person to say that human subjects research should be easy. Any ethically performed research project can’t be easy, and there will always need to be strong safeguards to prevent another Tuskegee experiment. However, there comes a point at which making compliance with regulations harder and more onerous yields diminishing returns in protecting patient safety, while making research so costly and time-consuming that it actually causes harm, usually by allowing more deaths or adverse outcomes to occur while research on interventions drags on interminably. It isn’t always easy to tell where the proper balance lies, but in this case it was a no-brainer. Shutting down this infection control initiative was bureaucratic boneheadedness at its most egregious.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35-year-old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]
