Institutional review boards overreaching?

Institutional review boards (IRBs) are the cornerstone of the protection of human subjects in modern biomedical research. The federal government mandated them in the 1970s in the wake of the research abuses of the 20th century, in particular the horrors of the infamous Nazi biomedical experiments during World War II, documented during the Nuremberg trials, and the Tuskegee syphilis experiment, in which black men with syphilis in rural Alabama were followed without treatment in order to study the natural course of the disease, a study that lasted into the early 1970s. In response to these abuses, and based on the work of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1974-1978), the Department of Health and Human Services (HHS) published “Ethical Principles and Guidelines for the Protection of Human Subjects of Research” (otherwise known as the Belmont Report) in 1979. Based on the Belmont Report, the Common Rule was codified in 1991 and serves as the basis for all federal rules governing human subjects research. All federally funded research must abide by the Common Rule, and many states have laws requiring that even research not funded by the federal or state government abide by it as well. It is the Common Rule that regulates the makeup and function of the IRBs that oversee human subject research.

No one would argue that we should go back to the bad old days, before the Common Rule, when the rules governing human research were vague to nonexistent, and human subjects relied on the ethics of individual researchers, which would, quite naturally, vary from researcher to researcher. IRBs, as flawed as they sometimes are, represent the most potent patient advocacy and protection mechanism at present. However, in academic medicine, there has been a perception for a while now that IRBs are expanding their reach and making the approval of human subject research more and more onerous, that they are putting up unnecessary roadblocks to research with minimal or even no risk of harm to human subjects, and that they are expanding their purview to areas into which they were never intended to go. Now the American Association of University Professors is echoing the same concern that I’ve been hearing for a while:

Institutional review boards — never designed for oversight of journalism programs or surveys by sociology majors — have gone way beyond their mandates and purpose, to the detriment of scholarship, says a new report from the American Association of University Professors.

IRB’s serve an important purpose when people who are the subjects of research can face real harm, said David Hyman, an author of the report and a professor of law and medicine at the University of Illinois at Urbana-Champaign. But in cases where the chance for harm is quite low, the IRB process is not needed, he said. IRB review, he pointed out, has been required for projects such as journalism study, oral history research, and simple surveys of family members.

“That’s just nutty,” he said. “People talk to their parents and relatives all the time without IRB approval.”

The report recommends that IRB’s cease reviewing a number of projects where the chance for physical injury to a human subject is slim to nonexistent. When adults are the subject of surveys, interviews, or publicly observed, there is no need for an IRB process, said Jonathan Knight, the AAUP’s point person on academic freedom.

The report lists a number of “more or less familiar horror stories” to back up the claim that the process has gotten out of hand. In one case, a linguist had to get signed approval from the participants of a study who were not literate. In another, a white graduate student was told that he could not interview African-American students on career expectations because the interview might cause trauma.

None of this is surprising to anyone involved in clinical research. Over the last decade or so, IRBs have made the requirements for doing any sort of clinical trial progressively more onerous, in some cases going far beyond what is required to guarantee human subjects protection. You may think this is a good thing, and it is–to a point. However, there comes a point when requirements pass beyond ensuring patient safety and autonomy and into the realm of stifling research, or at least making it far more difficult than it already is. It is not clear to me that we have reached that point, but if things keep going the way they are, that point cannot be far off. Indeed, now virtually any study, even one that involves nothing more than patient questionnaires, must receive IRB approval, and, at least at our institution, getting that approval is becoming more and more difficult. The AAUP report notes that, as any human institution with a lot of power tends to do, IRBs appear to be asserting power over areas that they were never intended to regulate:

A linguist seeking to study language development in a preliterate tribe was instructed by the IRB to have the subjects read and sign a consent form before the study could proceed.

A political scientist who had bought a list of appropriate names for a survey of voting behavior was required by the IRB to get written informed consent from the subjects before mailing them the survey.

A Caucasian PhD student, seeking to study career expectations in relation to ethnicity, was told by the IRB that African American PhD students could not be interviewed because it might be traumatic for them to be interviewed by the student.

An experimental economist seeking to do a study of betting choices in college seniors was held up for many months while the IRB considered and reconsidered the risks inherent in the study.

An IRB attempted to block publication of an English professor’s essay that drew on anecdotal information provided by students about their personal experiences with violence because the students, though not identified by name in the essay, might be distressed by reading the essay.

A campus IRB attempted to deny an MA student her diploma because she did not obtain IRB approval for calling newspaper executives to ask for copies of printed material generally available to the public.

These horror stories are no surprise to academic physicians involved in clinical research. No one argues that IRBs shouldn’t have jurisdiction over clinical trials, that they shouldn’t zealously guard patient safety, or that they shouldn’t make sure that the risks of the research do not outweigh its potential benefits. No one is saying that IRBs shouldn’t make sure that informed consent is truly informed. However, in other sorts of research, such as outcomes research involving chart reviews, where the potential for harm is minimal to nonexistent given that it is a review of cases after the fact and that data are pooled, IRBs have developed a distressing tendency to question every detail of the proposed protocol. Even protocols that involve nothing more than periodic blood draws, for example, are not uncommonly questioned and dissected ruthlessly. It is not at all surprising that IRBs would behave similarly as they move into regulating non-biomedical research. Part of the problem, as the AAUP recognizes, is that the power of IRBs is absolute. There is no appeal:

Under the IRB review procedure, an investigator must obtain prior IRB approval of his or her research protocol before the research can be undertaken. Members of a campus IRB are instructed by the regulations to decide, among other things, whether the risks the research would impose on its “subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result.” Thus IRB members are instructed to form their own view of the risks their colleagues’ research would impose on its subjects, and on the importance of the results that might be obtained from the research, and to deny permission to conduct the research if in their view the risks are not reasonable relative to the value of the likely results. There could hardly be a more obvious potential threat to academic freedom.

Moreover, no provision is made in the regulations for an appeal process in case a research protocol is rejected by a campus IRB. It is consistent with the regulations for an institution to provide an appeal process, but where the research is to be federally funded, or the institution has opted for a single review procedure that requires IRB approval, the appeal process would have to be to yet another IRB. We do not in fact know of any institution that makes explicit formal provision for such an appeal.

Lack of an appeal process is relevant in another way. An IRB may demand that a change be made in a research protocol as a condition of approval. Prospective researchers are given an opportunity to try to convince the IRB that the change need not be made, but scheduling difficulties often cause lengthy delays; and in any case, unless the prospective researcher is able to convince the IRB to rescind its demand, the IRB’s demand settles the matter.

The consequences for research can be profound. For example, this occurred in a proposed study on substance abuse:

Nearly eighteen months and 17 percent of the total research budget had to be spent on obtaining the nine IRB approvals that were required for the study to be undertaken. The IRBs demanded many changes in the formatting and wording of the consent and survey forms, and each change demanded by one IRB had to be approved by all the others. The researchers claim that by the end of the process, no substantial change had been made in the protocol, and that the changes demanded had no discernible impact on the protection of human subjects.

It’s a lament heard time and time again regarding clinical trials. One could make a crack about “absolute power corrupting absolutely,” but this clearly isn’t a matter of corruption. It’s more about the all too human tendency of such regulatory bodies to expand their reach, even with the best of intentions. It’s a classic case of “mission creep.” Indeed, members of IRBs truly want to fulfill their charge of protecting human research subjects. (And, in fact, they are told again and again that any doubts they have must be aired, no matter how trivial.)

The worst thing is, the increased vigilance doesn’t necessarily add to the protection of human subjects. In my personal experience observing what has occurred in the clinical trials in which my colleagues and I have been involved, most of the demands the IRB has imposed have involved questioning every sentence of the informed consent that must be signed and harping on points that are only tangentially related to human subjects protection. In one protocol, the need for even relatively minor core needle biopsies was questioned as being totally unnecessary, even though such biopsies caused minimal pain, involved minimal risk, and would provide invaluable information about whether the study drug was working or not. All of this involved a lot of rewriting and a lot of argument with the IRB. Again, one can certainly argue that doing human subjects research should be difficult, but it’s getting to the point that researchers are avoiding doing minimal risk human subjects research because they simply don’t think it’s worth it to have to deal with the IRB. Any sort of tinkering with the rules governing IRBs is also fraught with risk. No government official or university president wants to be seen as advocating the loosening of patient protections, which is how any reform runs the risk of being perceived.

So what can be done? The AAUP quite correctly points out that simply exempting “social sciences” and humanities research from IRB oversight is not the right answer, and its reasoning rings true:

We believe that recommendation to be a mistake, on two counts. (1) It is arguable that some social science research has the potential to cause serious psychological harm. An example that generated public anger, and that has come in for much discussion since, is the experiment conducted by Stanley Milgram at Yale in the early 1960s. (In that experiment, the subjects were ordered to do what they were falsely told would cause pain to others as part of a study of learning; the aim of the experiment was to find out how many of the subjects would obey the orders.) We do not address this argument here. We point to it merely in order to bring out that an across-the-board exemption for all social science research is arguably overbroad. (2) Some biomedical research does not impose a serious risk of harm on its subjects–for example, biomedical research that involves no bodily interventions and consists entirely of an effort to acquire survey data.

Instead, the AAUP recommends exempting straightforward questionnaires and interviews, as well as observation of behavior in public places, both of which seem like a reasonable start, as long as such studies are conducted so that results can’t be linked with the identities of individual subjects. Indeed, even in biomedical research, we already have such an exemption for studies involving only human tissue (blood, pathology specimens, etc.) that has been deidentified, such that investigators can’t link it to the patient from whom it came. Such studies do not undergo full IRB review, but are instead granted an administrative exemption, usually by the head of the IRB.

Another step that should be taken is to commission an objective study of IRBs and their efficacy and efficiency. Remember, the plural of “anecdote” is not “data,” and we have almost no solid, objective data regarding IRBs and how much delay they introduce into the clinical research enterprise. What is even worse, however, is that we have very little objective data to show that IRBs actually do protect human research subjects (other than in comparison with egregious abuses that happened decades ago) and some disturbing anecdotes that suggest that harm to human subjects is more common than we would like. For instance, presumably the gene therapy study that resulted in the death of Jesse Gelsinger had full IRB approval. Moreover, current law seems impotent when mercury militia activists like Mark and David Geier set up their own dubious IRB stacked with their associates and ideological compatriots to rubber stamp their scientifically worthless and ethically challenged clinical trial using Lupron, a drug that shuts down sex hormone synthesis, to treat autistic children. (Indeed, the Geiers even have a blatant conflict of interest in that it is a “treatment” that they are trying to patent.) Certainly, no federal or state regulatory body seems to have called them on it, despite Kathleen Seidel’s tireless efforts to publicize the abuse. Meanwhile, big pharmaceutical companies can go “IRB shopping” for the most lenient IRB to oversee their trials (Evans, D., M. Smith, and L. Willen, Big Pharma’s Shameful Secret, Bloomberg Markets, December 2005).

Under the current system, those who play by the rules (namely, the vast majority of university-based researchers) have to deal with an increasingly onerous set of expectations and requirements, while those who do not (such as the Geiers) or who try to game the system by using for-profit IRBs (some pharmaceutical companies) seem somehow able to bypass the increasingly zealous IRB or render it tame. Clearly, a system will always be needed to protect human subjects from overzealous, unethical, or just plain incompetent clinical researchers. The question is: How can we keep and build on what is good and what works in the present system while decreasing the burden on researchers and cracking down on those who attempt to game or bypass this important system to protect patients? Now, more than ever, we need good objective data about how well IRBs are actually fulfilling their charge, and about the costs involved in both lost time and additional expense, upon which to base recommendations for reform that protect patients but do not burden investigators unnecessarily with requirements that have little or no relevance to protecting human subjects. There is no doubt that IRBs (or some similar mechanism to review human subjects research) are necessary; the question is how to make them more effective at protecting human subjects with as little impediment to valuable research as possible. It’s a difficult balancing act in the best of times, but impossible without better information.