Cancer research and clinical trials

During the month of June on this blog, I got annoyed not once, but twice. First, I got annoyed at Sharon Begley for a truly annoying and evidence-free (other than cherry-picked anecdotes) broadside against the NIH for its “culture of caution,” which, according to her, is largely responsible for the “lack of progress” against cancer in the 38 years since President Richard Nixon declared “war on cancer.” In essence, Begley blamed scientists’ need to publish in the highest-impact journals they can get their manuscripts into for “delaying” cures or, as I put it, “keeping teh curez from teh sick babiez!!!” To her, the emphasis at the NIH on basic research is keeping translational research from actually being translated to patients. Unfortunately, Begley appeared not to understand that translational research requires a healthy pipeline of basic science discoveries to feed it, or that the NIH actually does emphasize translational research. Mike the Mad Biologist also pointed out that Begley appeared to be mistaking a symptom for the real problem.

A couple of weeks later, Gina Kolata of the New York Times picked up the gauntlet and ran with it in a similar direction, only with a different twist. In essence, Kolata, using a few cherry-picked anecdotes and no objective evidence or study, blamed instead the way grants are awarded. To her, it was all those nasty scientists who demand, you know, evidence to support grant applications and can’t see the vision behind “high risk” research. Of course, far be it from me to deny that the NIH peer review system is all too often risk-averse, but the level of caution is more or less inversely proportional to the level of funding. Also, what non-scientists (and a lot of scientists) seem not to realize is that the “transformative” and “risky” ideas that paid off are (1) far outnumbered by risky ideas that never paid off and (2) usually only recognized as transformative in retrospect. As we used to say in residency, the retrospectoscope’s vision is 20-20, but at the time scientists are trying to choose among “risky” projects, there is often no good way to distinguish between multiple ideas that all seem promising but of which perhaps only one (or even none) will ever actually result in a major improvement in cancer treatment.

Over the weekend, Gina Kolata wrote a followup. This time, she’s a lot more on target (but still manages to lay a big egg at one point) with an article entitled Lack of Study Volunteers Hobbles Cancer Fight:

Not long ago, at a meeting of an advisory group established by Congress to monitor the war on cancer, participants were asked how to speed progress.

“Everyone was talking about expanding the cancer work force and getting people to stop smoking,” said Dr. Scott Ramsey, a cancer researcher and health economist, who was participating in that January 2008 meeting of the President’s Cancer Panel. “Lots of murmurs of approval.”

Then it was his turn.

The biggest barrier, in his opinion, was that almost no adult cancer patients — just 3 percent — participate in studies of cancer treatments, mostly new drugs or drug regimens.

“To me it was obvious,” Dr. Ramsey said. “We can’t improve survival unless we test new treatments against established ones.”

The room fell silent.

“It was one of those embarrassing moments,” said Dr. Ramsey, an associate professor at the Fred Hutchinson Cancer Research Center in Seattle. He had brought up the subject he said no one wanted to touch.

Forty years after President Richard M. Nixon declared war on cancer, death rates have barely changed. “Why aren’t we getting cures?” Dr. Ramsey said. “This is one of the biggest reasons.”

Right on! I can tell you that this is a huge problem. I’ve now been on the faculty at two major NCI-designated comprehensive cancer centers in my career. Yes, this is anecdotal, but at both of them keeping our clinical trials going by accruing enough patients has been a major concern. It’s one of the criteria by which we judge our clinical research programs, and we absolutely can’t make any progress against cancer without well-designed clinical trials that accrue enough patients to have the statistical power to tell whether a new treatment works better than the standard of care. Overall, the article provides a good explanation of why accrual is such a problem. The barriers exist at every level: patient concerns, roadblocks to physicians enrolling patients in clinical trials, and systemic issues. Many of us in the oncology field are stymied and frustrated as well, because we believe that clinical trials represent some of the best medicine patients can receive. Nowhere else is care so closely supervised, do deviations from protocol require such detailed explanation, or do adverse events trigger so much reporting.
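As an aside for non-statisticians, here is a minimal sketch of what “enough patients to have the statistical power” actually means in practice, namely the kind of back-of-the-envelope sample size calculation a trial statistician does when designing a two-arm study. The response rates below are hypothetical numbers I picked purely for illustration; nothing here comes from Kolata’s article.

```python
from math import ceil
from scipy.stats import norm

def patients_per_arm(p_control, p_new, alpha=0.05, power=0.80):
    """Approximate patients per arm needed to detect a difference between two
    response rates with a two-sided test (simple normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided alpha of 0.05
    z_b = norm.ppf(power)           # ~0.84 for 80% power
    p_bar = (p_control + p_new) / 2
    return ceil((z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar)
                / (p_control - p_new) ** 2)

# Hypothetical numbers: detecting an improvement in response rate from
# 20% to 30% already requires roughly 300 patients per arm.
print(patients_per_arm(0.20, 0.30))  # -> 295
```

And that is for a fairly sizable difference between arms. Look for a smaller improvement, or demand more power, and the number of patients needed climbs quickly, which is exactly why poor accrual kills trials.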

There are many patient concerns and misconceptions out there, one of the most common of which is the fear of getting a placebo. If there’s one thing I want to emphasize right here, right now, it’s that placebos are rarely used in cancer clinical trials. The far more common design is to add the new drug being tested to the existing standard of care or to compare the new drug combination being tested against the standard of care. Another concern, which is more difficult to overcome, is that patients do not like the loss of control that comes with joining a clinical trial. The randomization process requires that the patient agree to accept its result, which means that any subject in a clinical trial might get the standard of care or might get the new drug. Those of us involved in clinical trials know that one of the factors motivating patients to enter a clinical trial is the hope of receiving a newer, more effective drug; they don’t want the “old” drug. Of course, the novelty effect leads most patients to downplay the potential risks of a new drug and overestimate its potential benefits, but, distorted assessment of the risk-benefit ratio or not, we as clinical researchers can’t ignore these concerns. Not surprisingly, it is patients with the worst disease and the bleakest prognosis who are usually the most willing to “try anything” and sign up for even the riskiest clinical trials, namely phase I trials, sometimes called “first in human” trials.

To give you an idea of how difficult it can be to persuade a patient to accept randomization, look back 30 years or so to breast cancer clinical trials. This was when lumpectomy was coming into vogue, and surgical trials were being done to compare modified radical mastectomy (mastectomy plus removal of the lymph nodes under the arm) with lumpectomy and axillary dissection followed by radiation therapy. Imagine being a patient asked to accept randomization to either losing your breast or keeping it. Remember, at the time we didn’t know whether the “lesser” surgery of lumpectomy plus radiation therapy would be as likely to cure the disease as mastectomy. Giving up that choice required a selflessness that few can manage, and it is doubtful that such a trial could be done today.

There’s yet another issue that concerns me because I work in a highly urban cancer center. Among some groups of African Americans, there is a huge mistrust of doctors. It’s not entirely unjustified, either; well do many remember the Tuskegee syphilis experiment and other abuses. Many of these patients are very resistant to signing up for a clinical trial because they honestly believe that human experimentation will be as exploitative of them as the Tuskegee syphilis experiment was. It’s very hard to overcome this at times, but it’s important to try, mainly because African American women tend to be diagnosed at a younger age with a more aggressive form of breast cancer and consequently have a lower survival rate.

From the physician side, there are huge impediments to enrolling patients in clinical trials. Not all physicians can design and carry out clinical trials, but many who are not affiliated with major academic medical centers can still enroll their patients in trials. The problem is that there are few incentives, other than a desire to help advance the field, for more physicians to enroll patients in clinical trials. First off, there is no reimbursement for it. Before the first patient is even considered for a trial, it takes a lot of work to get the trial approved by an institutional review board (IRB), as well as a significant infrastructure to monitor patients for adverse reactions and response to therapy. Enrolling a patient is also a lot of work. A lot of work. It takes far longer to explain a clinical trial to a patient and obtain informed consent than it does to offer standard therapy, and there is no additional reimbursement for doing so. Moreover, unlike with conventional chemotherapy, oncologists are not reimbursed more for administering trial regimens. They lose money:

For 15 years, Dr. John M. Rainey at Louisiana Oncology Associates in Lafayette did his best to enroll patients in clinical trials. He believed in research and thought doctors like him should do their part. But he finally had to stop. Every study was costing his group a few hundred dollars to $1,500 and the bureaucratic requirements were getting out of control.

“When we put a pencil to it, it didn’t make economic sense,” Dr. Rainey said.

First was the institutional review board, the committee that reviews the trial to make sure patients are protected from harm. Every time a study patient had an adverse reaction, every participating medical center had to notify patients and respond to the review board. Many reactions had nothing to do with the drugs, Dr. Rainey said, and instead were related to the patients’ illnesses.

“It became a hassle factor,” he said. “We didn’t have the manpower.” One of the five doctors in the group was spending three to five hours a week filling out forms.

“You could see five or six or seven patients in that time,” Dr. Rainey said.

Few private oncology groups have the manpower or resources, and even in large academic centers, supporting the expensive infrastructure and bureaucracy necessary to run clinical trials can be a problem. True, the NIH and other sources fund clinical trials, but even at large cancer centers enrolling patients in clinical trials can often be a major challenge logistically, financially, and in terms of persuasion.

This brings me to systemic issues, which include issues of science. As the article mentions, a lot of clinical trials are too small to produce definitive answers, and some are redundant. Many end up closing because of insufficient accrual. Last fall, I saw a depressingly amazing talk by David Dilts, PhD, MBA of the Center for Management Research in Health Care, who pointed out how inefficient and crude our current clinical trial mechanism is. Indeed, so impressed was I that I still have some of my notes. It turns out that 29% of clinical trials never accrue a single patient, while 31% accrue fewer than five. Overall, 63% of clinical trials fail to reach their accrual target, and it costs an institution between $700 and $4,000 a month to keep each low-accruing trial open. The system is hugely inefficient (a quick back-of-the-envelope calculation below shows what those figures can add up to). Worse, the emphasis on comparative effectiveness research in the health care reform bill wending its way through Congress right now will only exacerbate the problem. While such research is useful from a practical standpoint to determine whether drug combination X is better than drug combination Y, or whether dosage schedule A is better than dosage schedule B, it will not result in breakthroughs. Quite frankly, as useful as it is in practice, from a strictly scientific standpoint comparative effectiveness research is about as uninteresting as it gets.
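Going back to Dr. Dilts’ figures for a moment, here is the back-of-the-envelope calculation I promised above. The portfolio size of 200 open trials is a purely hypothetical assumption on my part; the percentages and per-trial costs are the ones from his talk.

```python
# Back-of-the-envelope arithmetic using the figures from Dr. Dilts' talk.
# The portfolio size (200 open trials) is my own hypothetical assumption.
open_trials = 200
low_accruing_fraction = 0.63             # 63% of trials fail to reach their accrual target
cost_per_trial_per_month = (700, 4_000)  # dollars per low-accruing trial per month

low_accruing = open_trials * low_accruing_fraction
low_cost, high_cost = (low_accruing * c for c in cost_per_trial_per_month)
print(f"{low_accruing:.0f} low-accruing trials cost roughly "
      f"${low_cost:,.0f} to ${high_cost:,.0f} per month "
      f"(${12 * low_cost:,.0f} to ${12 * high_cost:,.0f} per year)")
```

Under those assumptions, carrying a large stable of low-accruing trials can easily consume a seven-figure sum every year, money that buys essentially no new knowledge.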

It’s toward the end of the article that Kolata goes a little off the rails. Given the structure of a typical news story, it’s not very satisfying to present just a problem, as this story does up to this point. It’s far more satisfying to present a potential solution, and that is what Kolata does:

Donald Berry, a statistician at the M.D. Anderson Cancer Center in Houston, wants to use resources more efficiently. To do so, he designed a new sort of study to test experimental drugs for breast cancer.

The study, starting this fall, is a departure from traditional notions of drug testing and cancer treatment.

Participants will be women who are newly diagnosed with breast cancer and at high risk that it will spread in their bodies.

Ordinarily, women with breast cancer have surgery first to remove the tumor in their breast and then have chemotherapy. The problem with removing the tumor right away is that it can take 5 to 10 years to know whether an experimental drug killed any remaining cancer cells. It is easier and much faster to assess an experimental drug’s effects on tumors that remain in the body. So in this study, women will get standard chemotherapy and experimental drugs first. Researchers will do MRI scans to see whether the tumors are responding.

Then, six months later, surgeons will remove the tumor or, if the tumor is gone, tissue from where it used to be, to determine how the cancer responded to the drugs.

The idea of leaving a cancer in place for six months can sound shocking, even dangerous. But cancer researchers say it actually makes no difference whether chemotherapy comes before or after surgery.

The idea of this trial, known as I-SPY, is this:

The study involves the use of a contrast-enhanced breast MRI for the evaluation of locally-advanced breast cancer patients undergoing neoadjuvant treatment. The I-SPY study aims to correlate MRI results with molecular markers to identify the right surrogate marker for early response.

The I-SPY informatics effort involves providing informatics support for the I-SPY trial. This involves the integration and analysis of diverse data types including clinical, MRI imaging, gene expression, Comparative Genome Hybridization (CGH), immunohistochemistry (IHC), Fluorescent In Situ Hybridization (FISH), and cell lysates throughout the breast cancer treatment cycle. By providing an integrative platform designed to correlate molecular data with MRI patterns, study researchers will be able to more effectively identify surrogate markers for early response which will ultimately result in more effective therapies for breast cancer patients.

It’s a very impressive “big science” project, the very sort of thing that the nominee to be the new director of the NIH, Francis Collins, is known for promoting. I’m of two minds on this. On the one hand, there could well be major efficiencies to be had in sharing data. Also, our current methodology for predicting response to chemotherapy is crude at best, even with innovations such as the Oncotype DX assay finding their way into the clinic. This question has been the Holy Grail of oncology for a very long time, decades even. While the I-SPY trial could provide us with a large amount of new information, I highly doubt it is, as Kolata’s story portrays it, definitively the answer to getting answers with fewer patients. The reason is simple: the more markers for response to chemotherapy there are, be they molecular biomarkers, MRI, or other imaging studies, the more permutations of potential therapies there will be. The more genomic subtypes of breast cancer there turn out to be, each requiring a different treatment, the more permutations again. That’s always been the problem with “personalized medicine.” Validating the correlations between various combinations of biomarkers and response to therapy has always been problematic. Don’t get me wrong; trying to figure out these questions the traditional way would have required ridiculous numbers of patients. Moreover, initial tumor response doesn’t always correlate with a prolongation of survival. True, lack of initial tumor response correlates with a lack of improvement in survival due to treatment, but correlating tumor response to survival has always been dicey.
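To put some rough numbers on the “permutations” problem I just described, consider how fast the number of patient subgroups grows as markers are added. The marker counts below are purely illustrative, and I am treating each marker as a simple positive/negative call, which is itself a gross oversimplification.

```python
# Purely illustrative: if each biomarker is scored as a simple positive or
# negative call, k independent markers partition patients into 2**k distinct
# marker profiles, each of which is its own candidate subgroup to validate.
for k in (3, 5, 10, 20):
    print(f"{k:2d} binary markers -> {2 ** k:>9,} possible marker profiles")
```

Each of those profiles is, in effect, its own small subgroup whose correlation with response to therapy has to be validated, and no trial, adaptive or not, accrues a million patients.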

Another thing that has to be understood is that the final trials, phase III trials based on the results of the earlier parts of the I-SPY program, are not likely to take any less time than any other phase III trial. The reason is that, unless the differences in survival between the new treatment group and the standard of care group are truly dramatic, it will still take between five and ten years to detect them definitively, particularly if only 300 patients are used in these final trials.
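For those who want to see why that is, here is a minimal sketch of the standard Schoenfeld approximation statisticians use to estimate how many deaths (events) a phase III survival trial has to observe. The hazard ratio and the alpha/power choices are typical values I picked as assumptions, not numbers from the I-SPY protocol.

```python
from math import log, ceil
from scipy.stats import norm

def events_needed(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld's approximation: number of deaths needed to detect a given
    hazard ratio with a two-sided logrank test and 1:1 randomization."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(4 * z ** 2 / log(hazard_ratio) ** 2)

# Hypothetical but typical effect size: a hazard ratio of 0.75 (a 25%
# reduction in the risk of death) requires roughly 380 observed deaths.
print(events_needed(0.75))
```

A trial that enrolls only 300 patients cannot observe 380 deaths no matter how long the follow-up, and even with more patients the events accumulate only as fast as the disease progresses, which is to say over years.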

Human subjects and human tissue are the most precious resources in translational science. All the elegant preclinical science in the world, with its careful biochemical and genomic studies, cell culture correlates, and animal models, goes for naught if the ultimate result doesn’t translate into an effective therapy in humans. The only way to validate a therapy that has managed to jump through all the preliminary preclinical hoops is to test it in humans and follow the responses that occur in human tissue. Not only can such studies tell us whether a new therapy works or not, but they can also reveal the biology behind the disease and its response to therapy. To complicate matters, all of this has to be done under the highest scientific and ethical standards that protect patient autonomy and minimize the risks of harm to research subjects. It’s no wonder that efforts are under way to use fewer human subjects. Who knows? I-SPY may even be a major advance. I just don’t think it will be a panacea for this problem, and that’s what Kolata seems to imply it is.