You can tell I’m really busy when I fall behind in my reading of the scientific literature to the point where I miss an article highly relevant to the topics I’m interested in, be they my laboratory research, clinical interests, or just general interests, such as translational research. As you know, I like to think of myself as a translational researcher. Translational research is research that spans both basic science and clinical science (or at least tries to); i.e., it bridges the gap between basic and clinical science. Now don’t get me wrong; I don’t devalue basic science, and I’ve said so many times before. Without a robust pipeline of basic science developments to try to translate, translational research grinds to a halt. On the other hand, the NIH emphasizes translational research these days. In any grant application, if the applicant can’t articulate a reasonable (or at least reasonable-sounding) rationale by which the results could lead to a treatment, or to a greater understanding of disease that could lead to a treatment, that grant’s chances of being funded drop like a rock.
Even though I believe translational research is extremely important, sometimes I think that it’s a bit oversold. For one thing, it’s not easy, and it’s not always obvious which basic science findings can be translated. For another thing, it takes a long time. The problem is that the hype about how much we as a nation invest in translational research leads to an expectation, not unreasonable in itself, that there will be a return on that investment. Such an expectation is often not realized, at least not as quickly or as often as we would like, and the reason has little to do with the quality of the science being funded. It has arguably more to do with how long it takes a basic science observation to follow the long and winding road to a viable therapy. But how long is that long and winding road?
A lot longer than many people, even many scientists, realize. At least, that’s the case if the latest paper by John Ioannidis in Science is any indication. The article appeared in the Policy Forum of the September 5 issue and is entitled “Life Cycle of Translational Research for Medical Interventions.” As you may recall, Dr. Ioannidis made a name for himself a couple of years ago by publishing a pair of articles provocatively entitled “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research” and “Why Most Published Research Findings Are False.” I’ve blogged about both before.
Dr. Ioannidis lays it out right in the first paragraph:
Despite a major interest in translational research (1-3), development of new, effective medical interventions is difficult. Of 101 very promising claims of new discoveries with clear clinical potential that were made in major basic science journals between 1979 and 1983, only five resulted in interventions with licensed clinical use by 2003 and only one had extensive clinical use (4). Drug discovery faces major challenges (5-8). Moreover, for several interventions supported by high-profile clinical studies, subsequent evidence from larger and/or better studies contradicts their effectiveness or shows smaller benefits (9). The problem seems to be even greater for nonrandomized studies (9).
In order to figure out how long translational research can take to come to fruition, Dr. Ioannidis looked at some fairly high-profile therapies (defined as therapies claimed to be effective in at least one study cited 1,000 or more times) between 1990 and 2004. This particular definition was chosen to provide a milestone at which a therapy is widely recognized to be effective and safe. That, by the paper’s definition, is the point at which a basic science finding has finally been completely “translated” into an accepted therapy. The problem then becomes to identify exactly when such therapies began as the germ of a basic science finding or an idea based on basic science observations. This is not always easy to do. In the case of one of my major areas of research interest, tumor angiogenesis, it’s relatively easy to cite Dr. Judah Folkman’s famous 1971 paper proposing the targeting of tumor angiogenesis as a therapy for cancer. Oh, we could quibble over whether that was truly the beginning. After all, it had been hypothesized decades earlier that angiogenesis was important to tumor growth. However, the 1971 paper was the first explicit proposal to develop strategies to target tumor angiogenesis to treat cancer. Be that as it may, I’ve mentioned before, as an example of just how long translation can take, that it was thirty years before an actual therapy used in humans was developed.
Dr. Ioannidis has tried to quantify this gap in a more general manner, which he refers to as the “translation lag.” What he found is quite sobering for people who are anxious for the rapid translation of basic science:
To place each discovery in time, we identified the year when the earliest journal publication on preparation, isolation, or synthesis appeared or the earliest patent was awarded (whichever occurred first). Overall, the median translation lag was 24 years (interquartile range, 14 to 44 years) between first description and earliest highly cited article (see the chart). This was longer on average (median 44 versus 17 years) for those interventions that were fully or partially “refuted” (contradicted or having initially stronger effects) than for nonrefuted ones (replicated or remaining unchallenged) (P = 0.004).
In a secondary analysis, we defined the time of discovery as the first description (publication or awarded patent) of any agent in the wider intervention class (those with similar characteristics and mode of action). Early translational work may be performed with different agents in the same class compared with those that eventually get translated into postulated high-profile clinical benefits. Analyses using information on the wider class of agents showed even longer translation lag, with median of 27 (interquartile range, 21 to 50) years and similar prolongations of the translation lag for refuted interventions.
This is represented in this table:
One thing you may notice in the table above is that Dr. Ioannidis also looked at some treatments or interventions that had been highly cited and were later refuted. What makes this interesting is that therapies that were never refuted by clinical trials after the highly cited clinical trial used to support their use had a markedly shorter translation lag: 16.5 years (range 4 to 50 years) in the main analysis and 22 years (range 6 to 50 years) if the wider class of drugs is considered. In contrast, for remedies whose efficacy was later refuted, the translation lag was 44 years, as mentioned in the passage cited above. What could be the explanation? Dr. Ioannidis speculates:
We observed that most highly cited claims that were eventually refuted had a very slow translation history preceding them [e.g., flavonoids, vitamin E, and estrogens were already available for many decades before observational (nonrandomized) studies claimed implausibly large survival benefits in the 1990s]. We conclude that claims for large benefits from old interventions require extra caution as they are likely to be exaggerated.
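Out of curiosity about the arithmetic behind these numbers, here is a minimal sketch (in Python) of how a “translation lag” calculation along these lines could be run. The interventions and years in it are purely hypothetical placeholders I made up for illustration, not data from the paper, and the quartile convention used is just one of several in common use.

```python
# Minimal sketch of a translation-lag calculation in the spirit of the paper.
# The interventions and years below are hypothetical placeholders, NOT the
# actual data analyzed by Contopoulos-Ioannidis et al.
from statistics import median

# (intervention, year of first description or patent, year of highly cited article)
interventions = [
    ("drug A", 1962, 1994),
    ("drug B", 1975, 1992),
    ("drug C", 1948, 1998),
    ("drug D", 1981, 1996),
]

# Translation lag = years from first description to the highly cited article.
lags = sorted(cited - first for _, first, cited in interventions)

def quartiles(values):
    """Return (Q1, median, Q3) using the median-of-halves convention."""
    mid = len(values) // 2
    lower = values[:mid]
    upper = values[mid + 1:] if len(values) % 2 else values[mid:]
    return median(lower), median(values), median(upper)

q1, med, q3 = quartiles(lags)
print(f"Median translation lag: {med} years (interquartile range {q1} to {q3})")
```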
I’m half tempted to mention that there is a bit of an elephant in the room here, in that it makes one wonder about complementary and alternative medicine and how the existing studies would fare in this sort of analysis. Of course, one would be unlikely to find a paper on a CAM intervention cited 1,000 times showing efficacy. (In fact, for most CAM therapies it would be hard to find a paper strongly demonstrating efficacy at all.) But think of it this way: Many of these therapies have been around for hundreds of years, and there still hasn’t been compelling evidence of their efficacy. Think of homeopathy, for instance. Samuel Hahnemann thought of it over 200 years ago, and we still don’t have evidence of its efficacy.
I know, I know, I’m dragging one of my personal peeves into this, and it is ridiculous to use the term “science” or “translational research” in referring to something like homeopathy, but I couldn’t resist. Mea culpa.
We can quibble about whether his methodology was the most appropriate or whether he picked the correct milestones to compare, but what Dr. Ioannidis shows is that, in essence, a lot of “translational” research takes close to two decades to bear fruit, and it’s fairly uncommon for it to take less than a decade. Moreover, as Dr. Ioannidis points out, less than 5% of promising claims based on basic science ever come to fruition as actual therapies. In other words, translational research is hard. Few promising ideas make it to therapies, and those that do take a long time. Indeed, Dr. Ioannidis makes this recommendation:
Our analysis documents objectively the long length of time that passes between discovery and translation. As scientists, we should convey to our funders and the public the immense difficulty of the scientific discovery process. Successful translation is demanding and takes a lot of effort and time even under the best circumstances; making unrealistic promises for quick discoveries and cures may damage the credibility of science in the eyes of the public.
His other recommendations are rather obvious: multidisciplinary collaborations with focused targets; incentives for testing claims in high-quality, unbiased research; and a recognition that large, replicated clinical trials are required to demonstrate the efficacy of therapies. One recommendation, however, caught my eye:
New drug discovery is probably essential for common diseases where the existing drug armamentarium has been already extensively screened. Conversely, for uncommon and neglected diseases, the existing drug options may remain largely untested, and old drugs may find interesting new uses (12-14).
Actually, that’s a recommendation that’s been pretty obvious for a long time. For common diseases, existing drugs and related drugs have been tested and retested, and many, many variations on the old have been tried. For substantial breakthroughs, something new is required. Unfortunately, the profit incentive in drug manufacturing tends too much toward “me-too” drugs. The second part is actually very relevant to what I do. Testing such “off-label” uses of drugs can yield surprising results. Indeed, one of my two research projects is based on exactly such an off-label use. Ironically, however, it is a treatment for a very common disease, breast cancer, that I am testing, so Dr. Ioannidis is not entirely correct: interesting new uses for drugs do not always make themselves known in uncommon and neglected diseases.
Regardless, however, what the public needs to understand is that translational research takes a long time. What they also need to understand is that the foundation upon which translational research rests is a strong core of basic science research.
REFERENCE:
Contopoulos-Ioannidis DG, Alexiou GA, Gouvias TC, Ioannidis JPA (2008). Life Cycle of Translational Research for Medical Interventions. Science 321(5894):1298-1299. DOI: 10.1126/science.1160622