Oh, no! Bad research is killing science!

Over the last few decades, there has been an explosion in the quantity of scientific journals and published papers. It’s a veritable avalanche. Part of the reason is simply the increase in the number of scientific researchers. Another reason I’d suggest is that there are now whole fields of science that didn’t exist 30 years ago, fields such as genomics, HIV/AIDS, and angiogenesis, along with various technologies that have come into their own in the last decade or so. It’s not surprising that these disciplines would spawn their own journals. Whatever the causes of the proliferation of scientific and medical journals and the explosion of the scientific literature in recent years, it has resulted in several problems that those of us in medicine and science have experienced firsthand. These include the extreme difficulty of keeping up with the medical and scientific literature (which is even worse for those of us trying to practice medicine and do scientific research at the same time), the difficulty of finding reviewers for all the submitted manuscripts, and large numbers of papers that few people ever read or cite.

Sadly, for every problem, there is at least one proposed solution, and often one or more of those proposed solutions are incredibly misguided or even wrong, sometimes painfully wrong. I saw just such a proposed solution to the very problem described above. Well, actually, only some of the proposals are painfully wrong, but the entire list of them strikes me as quite wrong-headed. I’m referring to an article published a week ago in The Chronicle of Higher Education by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne Grody, Bill McKelvey, and Stanley W. Trimble, entitled We Must Stop the Avalanche of Low-Quality Research. The authors lay out what they perceive as the “problem” thusly:

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field and library or laboratory research. It also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.

Yes, and if only all those average and below-average researchers would stop muddying up the scientific literature with the fruits of their labors! I mean, what are they thinking? That they have something to contribute too? After all, shouldn’t science be like Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average? All that dross cluttering up the scientific literature is making it harder for the truly great scientists to distinguish themselves and for their brilliant work to be noticed. Now, I’m as much for meritocracy in science as anyone else, but deciding that whole swaths of the scientific literature are useless dreck that we’d be better off without? Even I wouldn’t go that far!

If we’re to believe this article, the effects on science are devastating. They include, but are not limited to, prominent scientists being besieged by requests to be on editorial boards, foundations and grant agencies scrambling for reviewers for grant applications, a huge demand on graduate students and postdoctoral fellows to publish early and often, and increasing difficulty for reviewers to know enough about the background of a topic to do an adequate review of manuscripts submitted to a journal. Some of these do seem to be problems. Certainly, the increasing cost to university libraries of subscribing to an ever-escalating number of journals is a serious problem. Unfortunately, the apocalyptic tone of the article makes it seem as though these problems are destroying science when they are not.

Of course, the profoundly misguided basis of the authors’ complaint is the assumption that citation counts must correlate with importance and quality. This assumption underlies virtually every aspect of their discussion and every one of their proposed solutions, of which there are three, all designed, if we’re to believe the authors, to accomplish this:

Only if the system of rewards is changed will the avalanche stop. We need policy makers and grant makers to focus not on money for current levels of publication, but rather on finding ways to increase high-quality work and curtail publication of low-quality work. If only some forward-looking university administrators initiated changes in hiring and promotion criteria and ordered their libraries to stop paying for low-cited journals, they would perform a national service. We need to get rid of administrators who reward faculty members on printed pages and downloads alone, deans and provosts “who can’t read but can count,” as the saying goes. Most of all, we need to understand that there is such a thing as overpublication, and that pushing thousands of researchers to issue mediocre, forgettable arguments and findings is a terrible misuse of human, as well as fiscal, capital.

You know, it strikes me that it must be really, really nice not to be one of those plebeians churning out “forgettable” and “mediocre” articles. It must be so gratifying to be one of the elite whose papers would never, ever be affected by solutions that seek to stem the tide of such allegedly awful, or at least below-average, research.

Unfortunately, there is a germ of a reasonable point buried in all the hyperbole. I can’t comment so much on basic science departments, but in clinical departments there does appear to be a tendency to look at quantity of publications more than quality. I’ve had colleagues who’ve published scads and scads of papers, perhaps four times as many as I have, solely by mining surgical databases for correlations and publishing articles on surgical techniques. In contrast, I seldom publish more than two papers a year (the sole exception being one year when a lucky confluence of events led to seven publications), and sometimes there are years during which I publish no papers at all. Part of this is a function of having a small laboratory, usually with only one or two people working for me. Part of it is that doing fairly hard-core molecular biology combined with xenograft experiments in mice takes a long time. In other words, I’m living the dream, so to speak, publishing in excellent, albeit not top-tier, journals like Molecular and Cellular Biology, Blood, and Cardiovascular Research, and not “flooding the literature” with mediocre research. Oddly enough, I sometimes have a hard time convincing my surgical colleagues that this is every bit as good as publishing two or three times as many papers in the surgical journals. It would be one thing if these publications in the surgical literature represented clinical trial publications. Those take as much time as, and sometimes a lot more time than, a basic science paper to go from idea to publication. But that’s not what I’m talking about.

So there is a grain of truth in the complaint. The problem is that the authors equate the quality and importance of papers almost exclusively with citations and journal impact factors (IFs). Yes, citations probably do correlate somewhat with the importance of a paper to its field, but, as you will see, the authors of this critique take the emphasis on IFs to a ridiculous extreme.

On to the proposed solutions:

First, limit the number of papers to the best three, four, or five that a job or promotion candidate can submit. That would encourage more comprehensive and focused publishing.

This is actually not a bad idea. In fact, with the new NIH biosketch format, the NIH strongly suggests that applicants for grants only list the 15 best and most relevant publications on their biosketches for purposes of supporting their application. Universities already appear to be moving in this direction, at least when it comes to reviewing faculty for promotion and tenure. Sadly, then the perversity begins:

Second, make more use of citation and journal “impact factors,” from Thomson ISI. The scores measure the citation visibility of established journals and of researchers who publish in them. By that index, Nature and Science score about 30. Most major disciplinary journals, though, score 1 to 2, the vast majority score below 1, and some are hardly visible at all. If we add those scores to a researcher’s publication record, the publications on a CV might look considerably different than a mere list does.

This is just plain bizarre for a number of reasons. Unless you happen to be a scientist doing work that will be of wide interest to many disciplines of science, it’s unlikely that you’ll ever be published in Science or Nature. I’ve had a paper in Nature, but only as the third author and only back in the 1990s. I don’t expect ever to have another paper in Nature or a first paper in Science unless a crack opens in the sky and lightning hits my lab, causing spontaneous generation of life or something like that. I’m OK with that. I’d be more than happy to publish in quality journals in my field, such as Cancer Research or Clinical Cancer Research, or in surgery journals like Surgery, Annals of Surgical Oncology, and the Journal of the American College of Surgeons.

Perhaps the biggest problem I see with tying solutions to IFs comes in subspecialties. If I want to publish in, for example, breast cancer journals, none of these will ever have a really high IF because they cater to a relatively specialized readership (the quick sketch below shows why the arithmetic works out that way). In many cases, I’d reach more readers who are genuinely interested in the work I do by publishing in the specialty literature those scientists actually read. True, there are journals that border on being “throwaway” journals, but everyone in the field knows which journals those are. More importantly, when it comes to reviewing articles, there’s no way to know which ones will still be cited a year or five years from now and which will never be cited at all. As FemaleScienceProfessor mockingly put it, it’s a great excuse to turn down requests to review manuscripts. After all, we can assume that the vast bulk of papers will have little impact and only rarely, if ever, be cited.
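For anyone who hasn’t spent much time on bibliometrics, the standard two-year impact factor is just a ratio: citations received this year to a journal’s articles from the previous two years, divided by the number of citable items it published in those two years. The little sketch below illustrates the arithmetic; the journal sizes and citation counts are numbers I made up purely for illustration, not real data for any journal.

```python
# Rough sketch of the standard two-year journal impact factor calculation.
# All counts below are invented for illustration only.

def impact_factor(citations_to_prior_two_years: int, citable_items_prior_two_years: int) -> float:
    """IF for year Y = citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A big multidisciplinary journal: huge citing audience, many citations per paper.
print(impact_factor(25_000, 800))   # ~31, roughly Nature/Science territory

# A specialty journal: solid papers, but a small readership doing the citing.
print(impact_factor(1_200, 600))    # 2.0, typical of a good disciplinary journal
```

The arithmetic makes the point: a journal serving a few thousand subspecialists simply doesn’t have the citing audience to generate a score of 30, no matter how good the papers in it are.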

The third proposed solution definitely rubs me the wrong way:

Third, change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal’s Web site. The two versions would work as a package. That approach could be enhanced if university and other research libraries formed buying consortia, which would pressure publishers of journals more quickly and aggressively to pursue this third route. Some are already beginning to do so, but a nationally coordinated effort is needed.

This strikes me as a very bad idea indeed. For one thing, journals already consign too much data and too many figures to online supplemental files, and it’s damned distracting when I read a paper. Moreover, Nature and Science papers are already hard enough to read because of how short they are. I would argue that it’s not shorter papers that we need. It may well be a matter of personal preference, but I much prefer longer, meatier papers with more detail, like those in Cell or Genes & Development. Such papers give authors more room to explain the significance of their findings and put them in context, something that’s damned near impossible in the super-short, attenuated format used by journals such as Science and Nature. It’s a format I’ve always hated.

Balancing quality and quantity in academia is a problem, but it’s not a new problem. What irritates me about the “solutions” proposed in this article is that they represent elitism, and not a good form of elitism at that. Rather, they represent a top-down, mandated form of elitism that presumes to be able to predict which science will pan out and which journal articles will turn out to be important. Science doesn’t work that way.