
Oh, no! Bad research is killing science!

Over the last few decades, there has been a veritable explosion in the number of scientific journals and published papers; it’s an avalanche. Part of the reason is simply the growth in the number of scientific researchers over that same period. Another reason I’d suggest is that there are now whole fields of science that didn’t exist 30 years ago, fields such as genomics, HIV/AIDS research, angiogenesis, and various technologies that have come into their own in the last decade or so. It’s not surprising that these disciplines would spawn their own journals. Whatever the causes of this proliferation of scientific and medical journals, it has resulted in several problems that those of us in medicine and science have experienced firsthand: the extreme difficulty of keeping up with the medical and scientific literature (which is even worse for those of us also trying to do research), the difficulty of finding reviewers for all the submitted manuscripts, and large numbers of papers that few people ever read or cite.

Sadly, for every problem there is at least one proposed solution, and often at least one of those proposed solutions is misguided or even painfully wrong. I recently saw just such a proposed set of solutions to the very problem described above. Well, actually, only some of the solutions are painfully wrong, but the entire list strikes me as wrong-headed. I’m referring to an article published a week ago in The Chronicle of Higher Education by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne Grody, Bill McKelvey, and Stanley W. Trimble, entitled We Must Stop the Avalanche of Low-Quality Research. The authors lay out what they perceive as the “problem” thusly:

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field and library or laboratory research. It also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.

Yes, and if only all those average and below-average researchers would stop muddying up the scientific literature with the fruits of their labors! I mean, what are they thinking? That they have something to contribute too? After all, shouldn’t science be like Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average? All that dross cluttering the scientific literature just makes it harder for the truly great scientists to distinguish themselves and have their brilliant work noticed. Now, I’m as much for meritocracy in science as anyone else, but deciding that whole swaths of the scientific literature are useless dreck that we’d be better off without? Even I wouldn’t go that far!

If we’re to believe this article, the effects on science are devastating. They include, but are not limited to, prominent scientists being besieged by requests to serve on editorial boards, foundations and grant agencies scrambling for reviewers for grant applications, enormous pressure on graduate students and postdoctoral fellows to publish early and often, and increasing difficulty for reviewers to know enough about the background of a topic to do an adequate review of manuscripts submitted to a journal. Some of these do seem to be real problems. Certainly, the increasing cost to university libraries of subscribing to an ever-escalating number of journals is a serious problem. Unfortunately, the apocalyptic tone of the article makes it seem as though these problems are destroying science when they are not.

Of course, the profoundly misguided premise of the authors’ complaint is the assumption that citation counts must correlate with importance and quality. This assumption underlies virtually every aspect of the discussion and every one of the three proposed solutions, all designed, if we’re to believe the authors, to accomplish this:

Only if the system of rewards is changed will the avalanche stop. We need policy makers and grant makers to focus not on money for current levels of publication, but rather on finding ways to increase high-quality work and curtail publication of low-quality work. If only some forward-looking university administrators initiated changes in hiring and promotion criteria and ordered their libraries to stop paying for low-cited journals, they would perform a national service. We need to get rid of administrators who reward faculty members on printed pages and downloads alone, deans and provosts “who can’t read but can count,” as the saying goes. Most of all, we need to understand that there is such a thing as overpublication, and that pushing thousands of researchers to issue mediocre, forgettable arguments and findings is a terrible misuse of human, as well as fiscal, capital.

You know, it strikes me that it must be really, really nice not to be one of those plebeians churning out “forgettable” and “mediocre” articles. It must be so gratifying to be one of the elite, whose papers would never, ever be affected by solutions that seek to stem the tide of such allegedly awful, or at least below-average, research.

Unfortunately, there is a germ of a reasonable point buried in all the hyperbole. I can’t comment so much on basic science departments, but in clinical departments there does appear to be a tendency to look at the quantity of publications more than their quality. I’ve had colleagues who’ve published scads and scads of papers, perhaps four times as many as I have, solely by mining surgical databases for correlations and publishing articles on surgical techniques. In contrast, I seldom publish more than two papers a year (the sole exception being one year when a lucky confluence of events led to seven publications), and some years I publish none at all. Part of this is a function of having a small laboratory, usually with only one or two people working for me. Part of it is because doing fairly hard-core molecular biology combined with xenograft experiments in mice takes a long time. In other words, I’m living the dream, so to speak, publishing in excellent, albeit not top-tier, journals like Molecular and Cellular Biology, Blood, and Cardiovascular Research, and not “flooding the literature” with mediocre research. Oddly enough, I sometimes have a hard time convincing my surgical colleagues that this is every bit as good as publishing two or three times as many papers in the surgical journals. It would be one thing if those publications in the surgical literature were clinical trial reports; those take as much time as a basic science paper to go from idea to publication, and sometimes a lot more. But that’s not what I’m talking about.

So there is a grain of truth in the complaint. The problem is that the authors equate the quality and importance of papers almost exclusively with citations and journal impact factors (IFs). Yes, citations probably do correlate somewhat with the importance of a paper to its field, but, as you will see, the authors of this critique take the emphasis on IFs to a ridiculous extreme.

On to the proposed solutions:

First, limit the number of papers to the best three, four, or five that a job or promotion candidate can submit. That would encourage more comprehensive and focused publishing.

This is actually not a bad idea. In fact, with the new NIH biosketch format, the NIH strongly suggests that grant applicants list only the 15 best and most relevant publications on their biosketches to support their applications. Universities already appear to be moving in this direction, at least when it comes to reviewing faculty for promotion and tenure. Sadly, that’s where the perversity begins:

Second, make more use of citation and journal “impact factors,” from Thomson ISI. The scores measure the citation visibility of established journals and of researchers who publish in them. By that index, Nature and Science score about 30. Most major disciplinary journals, though, score 1 to 2, the vast majority score below 1, and some are hardly visible at all. If we add those scores to a researcher’s publication record, the publications on a CV might look considerably different than a mere list does.

This is just plain bizarre for a number of reasons. Unless you happen to be a scientist doing work that will be of wide interest to many disciplines of science, it’s unlikely that you’ll ever be published in Science or Nature. I’ve had a paper in Nature, but only as the third author and only back in the 1990s. I don’t expect ever to have another paper in Nature, or a first paper in Science, unless a crack opens in the sky and lightning hits my lab, causing spontaneous generation of life or something like that. I’m OK with that. I’d be more than happy to publish in quality journals in my field, such as Cancer Research or Clinical Cancer Research, or in surgery journals like Surgery, Annals of Surgical Oncology, and the Journal of the American College of Surgeons.

Perhaps the biggest problem I see with tying solutions to IFs comes in the subspecialties. If I want to publish in, for example, breast cancer journals, none of these will ever have a really high IF because they cater to a relatively specialized readership. In many cases, I’d do better, and reach more readers with a strong potential interest in the work I do, if I were to publish in the literature that those scientists actually read. True, there are journals that border on being “throwaway” journals, but everyone in the field knows which journals those are. More importantly, when it comes to reviewing articles, there’s no way to know which ones are going to continue to be cited a year or five years from now and which will never be cited. As FemaleScienceProfessor mockingly put it, it’s a great excuse to turn down requests to review manuscripts. After all, we can assume that the vast bulk of papers will have little impact and only rarely, if ever, be cited.
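For anyone who hasn’t had to care about impact factors before, the metric itself is nothing mysterious: it’s just the number of citations a journal’s articles from the previous two years received this year, divided by the number of citable items the journal published in those two years. Here’s a minimal sketch of the arithmetic, with the journal and the numbers invented purely for illustration:

    # Two-year journal impact factor: citations this year to articles from the
    # two previous years, divided by citable items published in those two years.
    def impact_factor(citations_to_prior_two_years, items_published_prior_two_years):
        return citations_to_prior_two_years / items_published_prior_two_years

    # A hypothetical specialty journal: 180 citations to 150 recent articles
    # gives an IF of 1.2, squarely in the range the authors dismiss as barely visible.
    print(impact_factor(180, 150))  # 1.2

Which is to say, the numbers being used to write off entire swaths of the literature are driven as much by the size of a field’s readership as by the quality of its science.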

The third proposed solution definitely rubs me the wrong way:

Third, change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal’s Web site. The two versions would work as a package. That approach could be enhanced if university and other research libraries formed buying consortia, which would pressure publishers of journals more quickly and aggressively to pursue this third route. Some are already beginning to do so, but a nationally coordinated effort is needed.

This strikes me as a very bad idea indeed. For one thing, journals already consign too much data and too many figures to online supplemental files, which I find damned distracting when I read a paper. Moreover, Nature and Science papers are already hard enough to read because of how short they are. I would argue that shorter papers are not what we need. It may well be a matter of personal preference, but I much prefer longer, meatier papers with more detail, like those in Cell or Genes & Development. Such papers give authors more room to explain the significance of their findings and put them in context, something that’s damned near impossible in the super-short, attenuated format used by journals such as Science and Nature. It’s a format I’ve always hated.

Balancing quality and quantity in academia is a problem, but it’s not a new problem. What irritates me about the “solutions” proposed in this article is that they represent elitism, and not a good kind of elitism: a top-down, mandated elitism that presumes to be able to predict which science will pan out and which journal articles will turn out to be important. Science doesn’t work that way.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35-year-old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low-budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

31 replies on “Oh, no! Bad research is killing science!”

You know, prehospital medicine would gladly take some of those researchers who have nothing better to do. I’m sure some good could come of it, since most research tends to be focused on in-ER resuscitation and post-arrival care.

The state of research at all levels of science in the prehospital arena is weak at best, and only recently making a comeback. Heck, I was surprised to find out that our local trauma center was even doing trials on drug therapies for prehospital hemorrhagic shock in males.

I’m with you on this one Orac.

In addition to the points you’ve made I’d raise the issues of publication and citation bias. I don’t see how a move to reduce the number of publications would be likely to help.

Making it more difficult to publish would only reduce the chances that researchers will go to the effort of submitting negative findings for publication, and increase the chances that such negative studies are binned in favour of “sexier” studies with positive outcomes. We all know that publication bias is already a big problem in science; let’s not make it worse.

Citation bias on the other hand might explain why some studies are not cited; negative “disappointing” studies, unless they overturn a previously held assumption, are usually not cited as often as positive studies. These studies are important though, especially as the value of conducting systematic reviews of pre-clinical studies before embarking on clinical trials is being recognized more widely in the translational research community.

While important, the frequency of citation should not be the only measure used to evaluate scientific research.

I do sometimes worry that quite a lot of research gets buried in low-tier journals; perhaps in the long term, as scientific databases grow in size and capability, they might offer a more accessible outlet for some of this work.

The way to cut down on the number of low quality papers is for reviewers to step up and say, “This paper isn’t interesting enough to be published.”

One problem that we are having now is that too often, reviewers shrug and say, “There is nothing wrong with it, so ok.” However, sometimes the thing that is wrong is that it just isn’t cutting edge enough to be interesting.

It’s been impossible to keep up with the literature for decades. There is too much.

Most people now use computer and search engine techniques. It is important to know where and how to find information and that is the best one can do.

There is some stuff that gets published that isn’t really very important, and a lesser amount that has real weaknesses. But a lot of published research is low impact but still worth publishing, because it adds to the weight of evidence for some conclusion. We have a bias against replication and negative findings, and those papers, when they do get published, aren’t going to be cited very much but they still need to be published.

I have been asked to review quite a lot of low-quality manuscripts, and a) the result is that they either are not published or are greatly improved, and b) I learn from the experience and really don’t mind doing it. It keeps me sharp and improves my own thinking and writing. Plus I get brownie points with the editors.

I’m with you on this one Orac.

In addition to the points you’ve made I’d raise the issues of publication and citation bias. I don’t see how a move to reduce the number of publications would be likely to help.

Making it more difficult to publish would only reduce the chances that researchers will go to the effort of submitting negative findings for publication, and increase the chances that such negative studies are binned in favour of “sexier” studies with positive outcomes. We all know that publication bias is already a big problem in science; let’s not make it worse.

D’oh! When I decided I was going to deconstruct this article, I was going to mention publication bias and how decreasing the number of publications would likely exacerbate publication bias in clinical trials. Somehow, between the conception and the writing that part got left out. Damn. Maybe I’ll go back and add a paragraph or two about publication bias….

Many journals have fairly tight limits on the number of publications that you are allowed to cite in a paper. I often have to make hard choices as to which papers to cite, citing reviews instead of the original publications, citing only one paper to support a point when in fact there are a half-dozen that contributed to my understanding, citing the most recent paper on a topic rather than the one that I consider to be ground-breaking, and so forth. If I can cite one paper that supports two conclusions, I may have to do that rather than citing two individual papers that I consider to be stronger.

I strongly disagree with the notion that a paper should be “interesting” to be published. Interesting papers are papers with surprising results, or papers that happen to be in “hot” fields. But a result is surprising precisely because there are no other papers to support it, which also means that it may turn out to be wrong. And sometimes which fields are hot is more a matter of fashion than of science.

There are many types of papers that are “boring,” but are important to science:

A paper that confirms something that most everybody believes to be true, but that has not been adequately tested.

A paper that presents evidence against a hypothesis that few people believe, but that has not been rigorously tested.

A paper that confirms a published result, but from another laboratory using somewhat different or improved methodology, or (for clinical trials) a different group of subjects.

A paper that tests a novel hypothesis, which turns out to be wrong.

A paper that fills in fine details on a mechanism that is mostly well understood.

There is certainly a place for high-profile journals that focus on “hot” results, but the journals that publish any high-quality science, interesting or not, are extremely important.

Yeah, that last one really rubs me the wrong way too. It’s idiotic. “We need to trim down on sloppy research, so we should give scientists less space to report on their research.” WTF? That’s practically guaranteed to make the research *worse*, not better. Good science reports all of the nuance needed to understand it. Limiting articles to less than the length of an average undergrad term paper is not going to help.

How offensive can they be? What’s getting accidentally dropped in their coffee at the next departmental seminar by “minor” research colleagues? Is it ironic that they are publishing this in a low IF journal?

It should be possible to quantify all the major discoveries that were buried in third-tier journals because of confusion over their meaning or application.

Forgive me a cut and paste, but Trimble, the last author, lists this on his CV:
“AWARDS

….(many silly honors omitted)
1995-present. Listed in Marquis’ WHO’S WHO IN AMERICA
1994 Frost Lectureship, British Geomorphological Research Group
1993-present. Listed in Marquis’ WHO’S WHO IN AMERICAN EDUCATION
1992-present. Listed in Marquis’ WHO’S WHO IN THE WEST
SELECTED PUBLICATIONS

S. W. Trimble, “ Fluvial processes, morphology, and sediment budgets in the Coon Creek Basin, WI, USA, 1975-1993”. Geomorphology 108: 8-23 (2009).

S. W. Trimble, Man-Induced Soil Erosion on the Southern Piedmont. Ankeny, Iowa: Soil and Water Conservation Society. New, Enhanced Edition of the 1974 edition with a Forward by Andrew Goudie (Oxford) and an introductory essay by S. W. Trimble. (x + 70 pages)(2008).

S. W. Trimble (ed), Encyclopedia of Water Science, 2nd Ed. Boca Raton: CRC Press. (xlvi+1370 + 59 pages, 2 volumes) (2008).

A.Ward and S.W.Trimble,ENVIRONMENTAL HYDROLOGY,CRC-Lewis Press Boca Raton,Fl,( 475pp+c.25p) Jan.2004 (Winner of American Society of Agricultural Engineers Blue Ribbon Award, 2004)

S.W.Trimble,”Effects of riparian vegetation on stream channel stablility and sediment budgets in S. Bennett and A. Simon(eds.) S. RIPARIAN VEGETATION and FLUVIAL GEOMORPHOLOGY.Washington, D.C., American Geophysical Union, 2004 pp 153-169.

S.W.Trimble,”Historical hydrographic and hydrologic changes in the San Diego Creek watershed,Newport Bay,California”,JOURNAL of HISTORICAL GEOGRAPHY 29:422-444 (2003). ”

SERIOUSLY! Marquis’ Who’s Who? We need to limit the honors that people are able to list on their CVs. Journal of Historical Geography? RIPARIAN VEGETATION and FLUVIAL GEOMORPHOLOGY?

The authors state:

If only some forward-looking university administrators initiated changes in hiring and promotion criteria and ordered their libraries to stop paying for low-cited journals, they would perform a national service.

Sadly, that’s already happening, due to budgets. When a library has a static collections budget for years, but journal subscriptions (I’m looking at you, Elsevier) are increasing at an average of 9%, journals w/ low impact factors & low usage statistics get cut. And when you get to a situation like we’ve had in the past few years, when we’re being told to cut anywhere from 5-30% in costs, those small specialty journals start getting put on the block.
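To put a rough number on that squeeze: a flat budget facing 9% annual price increases loses about half its purchasing power in eight years. A back-of-the-envelope sketch, with an entirely made-up starting budget:

    # Back-of-the-envelope: flat collections budget vs. 9% annual subscription inflation.
    budget = 1_000_000.0   # hypothetical flat collections budget, in dollars
    cost = 1_000_000.0     # cost of the same journal list today

    for year in range(1, 9):
        cost *= 1.09
        print(f"year {year}: the same money now covers {budget / cost:.0%} of the list")

    # After eight years of 9% increases, the budget covers only about half the list,
    # which is why the low-IF, low-usage specialty titles are the first to go.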

trrll #9 “There are many types of papers that are “boring,” but are important to science:

A paper that confirms something that most everybody believes to be true, but that has not been adequately tested.

A paper that presents evidence against a hypothesis that few people believe, but that has not been rigorously tested.

A paper that confirms a published result, but from another laboratory using somewhat different or improved methodology, or (for clinical trials) a different group of subjects.

A paper that tests a novel hypothesis, which turns out to be wrong.

A paper that fills in fine details on a mechanism that is mostly well understood.”

Excellent points.

I’d add that it is also beneficial to have an unbiased set of several published studies that test a particular hypothesis, in order to have sufficient evidence to make a decision about subsequent studies or trials. This is particularly true where individual studies are small and perhaps underpowered.

The problem I see in the papers that are most often important to me (e.g. biology data sets with hundreds or thousands of assays per sample) is that review sucks, and so papers are full of fudging and false demonstrations. It makes me angry. It does feel like science is going to hell sometimes. Competent review seems very rare. I admit the burden of providing thorough review for the papers I am thinking about is fairly great, measured in tens of hours.

While Science and Nature articles may be too short, I find big supplementary material often very welcome – sometimes it lets me tell whether the claims are crap or not, or I can learn things from it that the authors weren’t trying to teach. Many of the details I want to know are not important to most readers, but that does not mean they are not important. Sometimes they are simply tables that are too big for print. In writing, I may have things to say that are important for (and may impress, by competence, completeness, or honesty) wonky folks like me, that most readers won’t have time for.

Oh, the data – if it’s not publicly available, complain, and that includes the plate-read ELISA, tissue array quantities, clinical variables, follow-up, and not just the giant proteomics, mRNA or ChIP data. This would help increase honesty. It is usual that I cannot check if a suspicious demonstration is fudged in some manner. Also, I often don’t care one whit about what the authors chose to tell or sell about their data, I just want the data.

When a library has a static collections budget for years, but journal subscriptions (I’m looking at you, Elsevier) are increasing at an average of 9%, journals w/ low impact factors & low usage statistics get cut.

Actually, Elsevier has huge problems with their subscription services. They structure their bundles in such a way that it’s cheaper to keep the whole subscription than to try to save money by excising idiotic journals like the Journal of Homeopathy.

Hmmm, I guess this won’t make me popular here, but I feel like Bauerlein and friends have a point, albeit argued poorly. There IS a lot of “redundant, inconsequential, and outright poor research”. A lot of it is taxpayer funded too, and I wonder if productivity could be improved with better organization of research efforts.

Obviously scientists need freedom to be creative, imaginative, follow lines of inquiry that may ultimately lead nowhere. But as a staff scientist at a UC laboratory, I see a great deal of wasteful, poor, aimless and nonsensical research. I can’t believe that grant money cannot be used more wisely than that.

@anonymous

What’s more wasteful? Everyone abandoning their line of research once one lab publishes a paper on it, or those labs finishing up their newly redundant work and publishing it?

What is wrong with verifying someone’s results? Hell, if that didn’t happen no one would be getting vaccinated, and stupid-ass Dr. Wakefield would be a hero because no one would have wasted their time or money publishing the outcomes of their attempts to validate his crappy science.

Anyhow, if you have some capacity to determine what research is worthwhile, what will pan out and what won’t, you may have missed your calling and the NIH could probably use your help in determining what projects should get funded and what won’t.

I agree with you about how poorly specialist publications fare in terms of IF. Even primary entomology journals from the Entomological Society of America have IFs between 1 and 2. These are journals that entomologists, and not many others, will want to read, and thus they are among the first places checked for relevant material.

Anyhow, if you have some capacity to determine what research is worthwhile, what will pan out and what won’t, you may have missed your calling and the NIH could probably use your help in determining what projects should get funded and what won’t.

Oh, rest assured that the research that anonymous was doing was important and significant. It was the others’ work that was trivial.

@ JohnV.
At no point did I say replication of work is not important.
At no point did I say it’s wrong to verify results.
And I certainly don’t have a way to determine what research will lead somewhere and what will not.

But what I see at work every day are research groups throwing ideas around and then sending someone to pursue an idea without really thinking it through. Six months later it turns out to have been a dumb idea for reasons which, if they had truly thought about it before, would have been apparent. But no-one really cares about the waste, because at some point that year, the PI will get a paper from something.

I have a hard time believing that there isn’t a better way to organize research. The work I see done all around me is mostly wasteful, and in terms of return on the investment of dollars from the grants that come in, it is pitiful. Just because a few dozen great papers come out every year doesn’t change that. Maybe there could’ve been more if they’d planned their work better.

@ Pablo.
Nope. The research I was doing was totally trivial, basic, cited once and forgotten. Nice try though.

I agree that impact factor is a horrible metric for determining the relative contribution of a given work to science. I mainly work in mathematical modeling of drug disposition, metabolism, and efficacy. The premier journal in my field is probably the Journal of Pharmacokinetics and Pharmacodynamics (JPKPD). I had no idea what its impact factor was until I just checked (2.055), which pales in comparison to Nature, Cell, or even Blood. But if you want someone who is at the forefront of this field, that is where they are likely to publish.

I’ve read some papers from Nature and thought: That looks neat, I wonder how they implemented that. Answering that question is very difficult because those journals are written for a much broader audience. Ironically enough, to write for a broad audience the authors tend to sacrifice reproducibility — calling into question the utility of introducing results only the authors can demonstrate.

A critique I have heard of the American system of grants (I am north of the border in Canada) is that the system divides available funds into fewer, but larger grants. The critique is that it encourages an “all or nothing” approach to research where PIs with big labs will condone basically any project on the off chance that it will produce usable results, in order to maintain their large grant. In comparison, a system giving smaller grants to more people encourages them to use that money wisely, as they can’t afford to have projects that don’t work out.

This is sheer hearsay, as I have never worked in a lab south of the border. Maybe there is someone reading this who has worked on both sides and can comment?

I haven’t gotten through more than the first couple of paragraphs past the fold, but already it seems this paper is suffering from a serious “hindsight is 20/20” problem.

Indeed, if only those silly scientists would have the good sense to avoid all of those avenues of research that turn out to be dead-ends, and instead concentrate all of their efforts on the scientific breakthroughs that nobody saw coming, then we could save boatloads of money!

I’ll bet these guys have also noticed that their lost car keys are always “the last place they look”. So logically, wherever they intended to look last, they should have looked there first. QED!

At the risk of invoking the scientific equivalent of “whoever smelt it dealt it,” it occurs to me that there’s something terribly hypocritical about publishing a research paper complaining about how “other” researchers are known to publish pointless crap for the express purpose of inflating their publication numbers….

A comment I heard in a science fiction panel years and years ago, in defense of publishing shoddy material: Aspiring writers benefit from reading it and being able to say, “I could write better than this!”

They completely skipped the #1 source of journal proliferation: non-faculty researchers. Since the vast majority of grad students, postdocs, and non-tenure track faculty will never receive tenure, it makes sense to cut back on the graduate admissions to limit the number of people competing to crank out volumes of publications.

Cutting back from the current bloat to a more realistic 2:1 vs replacement would do wonders to reduce the number of journal articles, and would do it at the low-quality end of the pipeline as well.

Are you sure he’s talking about improving the quality of science? It sounds to me like he’s asking to improve the entertainment value of science.
I agree on the “need data” thing.

My supervisor is always trying to get me to publish “LPUs” (Least Publishable Units), and I have to fight him every time to try to write higher-quality papers. Yes, I want to have lots of papers out there so that people are more inclined to give me money, but I also want to have some sort of standards.
I hate having this constant battle.

Hang on a sec –

I published in Genes and Dev twice, once as a grad student and once as a postdoc. Does this mean that, even though no one wanted to hire me on the tenure track, at least to these four (idiots) I am not a worthless failure after all? [A tear of joy slips quietly down his cheek]

Seriously, someone needs to remind these folks that it is the *question* that drives the science, and how you address the question that determines its contribution to the field. Not the impact factor or number of citations said publication receives. One of my pubs in G & D has hardly been cited at all, because it is in such a specialized niche and reported a finding that is difficult to tie in to a lot of other work in the field. Does this mean it is inferior science?

I will disagree with you on one point, Orac: I think the Science and Nature style of Letters has its place. I have seen a number of failed submissions to Cell/G & D rewritten as Letters and accepted – once you cut away a lot of the chaff, the ideas can often come across with a lot more clarity than in the longer manuscripts.

Didn’t someone demonstrate that the distribution of citations could be modelled by assuming that people simply take a random selection of the citations of other vaguely related papers?
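If I recall correctly, there has indeed been work along those lines (Simkin and Roychowdhury’s studies of misprints propagating through reference lists come to mind), modeling citation as authors copying references from papers they’ve read rather than reading everything they cite. Whether or not that’s the study being remembered, the basic idea is easy to sketch; the toy model below is purely illustrative and not anyone’s published algorithm:

    import random

    # Toy "citation copying" model, purely illustrative: each new paper cites a
    # few earlier papers chosen at random and also copies a couple of references
    # from one randomly chosen earlier paper. The copying step skews the
    # distribution, so a handful of papers pile up citations while others languish.
    random.seed(0)
    references = [[]]   # references[i] = papers cited by paper i
    citations = [0]     # citations[i] = times paper i has been cited

    for new_paper in range(1, 5000):
        cited = set(random.sample(range(new_paper), k=min(3, new_paper)))
        template = random.randrange(new_paper)
        cited.update(random.sample(references[template], k=min(2, len(references[template]))))
        references.append(sorted(cited))
        citations.append(0)
        for p in cited:
            citations[p] += 1

    never_cited = sum(1 for c in citations if c == 0)
    print(f"{never_cited / len(citations):.0%} of papers were never cited")
    print("most-cited paper:", max(citations), "citations")

The point, as it bears on the article Orac is discussing, is that citation counts can be heavily shaped by mechanics like this rather than by quality alone.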

