Effect Measure on “Science-based Medicine 101”: FAIL

Effect Measure is a site I highly recommend with experienced epidemiologists in charge. In other words, it’s run by adults. But scientists often disagree about things. This is apparently a secret to non-scientists and many reporters who assume that when two scientists disagree, one is lying or wrong. But it’s true nonetheless. Whatever the subdiscipline, there are disagreements. If you pick up almost any issue of Science or Nature you will find plenty of them, usually (but not always) couched in polite language in the Introduction or Discussion section of a paper or in the Letters. So it’s not surprising that I disagree with revere’s piece from yesterday: “Science-Based Medicine 101”: FAIL. Actually I don’t just disagree. I think it is quite wrong-headed.

Sorry, but I couldn’t resist appropriating revere’s very own words and twisting them to my nefarious purpose. As much as I like Effect Measure in general and respect the reveres who put it together, it’s no secret that at least one revere and I have had our differences in the past, for instance on BPA. (And don’t get either of us started on the Israeli-Palestinian conflict.) However, I like to think that we share far more areas of broad agreement than areas of disagreement over which we butt heads. In fact, I even mostly agree with much of what he said in his post. That being said, I nonetheless think that revere is being quite wrong-headed (word choice intentional) in how he said what he said and thus earned a FAIL, perhaps even an EPIC FAIL. Instead of producing a helpful caveat and clarification to what was intentionally a simple introduction for a lay audience, he instead chose the path of pedantry, and, given my involvement with SBM, I don’t think I can just sit back and let it stand unanswered, because, well, that’s just how I roll.

First, let’s look at what Val Jones, the target of revere’s pedantry, prefaced her post with:

I thought I’d do a little SBM 101 series for our lay readers. Forgive me if this information is too basic… at least it’s a place to start for those who are budding scientists and critical thinkers.

So, right from the start, there are many hints that Val’s post was intended to be very basic. It was clearly meant as an introduction to evaluating the reliability of a source for people who haven’t thought about evaluating sources of medical and scientific information before. So, what does revere respond with? The viewpoint of a senior epidemiologist with numerous grants and publications to his credit, as well as many years of experience in medical and scientific academia:

Oh, my. Where to begin? Let’s start with track record of the researcher. More and more the real work in science is being done by post docs and graduate students who often get their names first (which is fair). But somewhere in the list (usually last, the second best place to be in biomedical articles) is usually the head of the lab or the principal investigator of the grant and they are often listed as “corresponding author” (because they’ll still be there after the post doc moves on). They are also often the ones who get the credit (“Dr. Smith’s lab”). How much guidance and input they had in the work depends. Sometimes it’s a lot. Sometimes they barely know what’s in the paper. One thing for sure. Looking at the name and record of the first author or the “senior author” is not a sure way to gauge credibility. Ask the numerous lab heads who have had to retract papers after fraud or misconduct by one of their students or post docs was uncovered.

Oh, my. Where to begin? Rarely have I seen a post that makes good points but still manages to be a FAIL because it so obtusely misses the point.

Let’s start with the first question that came to mind after I read revere’s complaint: So frikkin’ what? This is in essence a huge red herring. And if we’re going to go on anecdotal evidence, maybe I should throw my anecdotes in, which go counter to this. Maybe it’s because I just haven’t reached those rarefied heights where principal investigators have so many grants, publications, and minions working for them that they don’t know what’s going on in their own labs, but I actually have never even met such a person. All the senior leadership with whom I’ve ever had dealings not only know what’s going on in their labs but are heavily involved in designing experiments, data analysis, and writing the papers. Anecdotal? Hell, yes, but if we’re going to rely on anecdotal evidence, then I don’t see why my anecdotes are any less reliable than revere’s. revere’s also burning a straw man. Val never said that the track record of the researcher was a “sure way to gauge credibility,” merely a rough guideline. Again, remember that she’s dealing in Science-Based Medicine 101. So, while revere is not incorrect and even makes good points, he’s dealing with SBM 801, a graduate level course. He’s focusing on the details, while Val is trying to give a broad brush introductory picture to start from.

And, guess what? Well-published, well-funded researchers with a good track record usually get to that position for a reason. Not always, to be sure, but as a general rule, they get there by doing good science. It may not be revolutionary (indeed, some labs get where they are by doing fairly pedestrian science, but pedestrian science is almost by definition “reliable”), but it’s usually more reliable than the work of those without such a record. Here’s a hint: It’s a rule of thumb. Rules of thumb usually have exceptions, sometimes a fair number of exceptions. That doesn’t make them useless or “wrong-headed.” Moreover, when revere goes on to write about assistant professors and junior professors doing fine science, my first reaction was to scratch my head. It’s true, but so what? It doesn’t really have much bearing on Val’s point other than to dilute a valid point that does admittedly have some caveats. I can’t see how, at the level of an introductory post, pointing such things out would do anything more than confuse.

I could go on and complain about revere’s post point by point, as he did about Val’s post, but that’s not my purpose in writing this. I have bigger fish to fry. Even so, before I get to that purpose, I will concede that revere is correct that top tier journals often publish science that is later refuted, because such journals publish the most bleeding-edge science, and that science is almost by definition more tentative and more frequently refuted. Two years ago, I even wrote a long post about it (do I ever write a short post?) entitled Frontier Science Versus Textbook Science that explains why the cutting-edge research published in top tier journals is frequently later shown not to have been correct, and more recently I’ve also discussed why the popularity of a research topic can actually lead to less reliability in individual studies in that field. So I don’t feel a compelling need to rehash that territory, other than to say that such considerations would only muddy up an introductory post such as the one Val wrote. Why should I, when I can hawk old posts of mine instead?

revere’s displeasure with Val’s attempt at science communication for the masses strikes me as the perfect embodiment of the “framing” kerfuffle that consumed ScienceBlogs a couple of years ago, embers of which occasionally reignite and start conflagrations even now, most recently in the form of some rather nasty blogospheric histrionics over Chris Mooney and Sheril Kirshenbaum’s Unscientific America. revere’s FAIL is not because he’s wrong; it’s because he’s entirely missed the point, which is that Val was in essence trying to teach part I of what she admittedly called a “101” class, and in doing so she mentioned plenty of caveats about having simplified things. Instead, he launched straight into a graduate-level discussion of the ins and outs of scientific publishing, clinical trials, and research. A lot of it is true, but it’s also, sadly, beside the point and nearly completely unhelpful in trying to educate a lay person who doesn’t know whom to trust about which sources are reliable in discussing science-based medicine. In other words, as Randy Olson would put it: “revere, don’t be such a scientist.” (And, yes, I know that I’ve been frequently guilty of doing exactly the same thing, which is one reason why I can so readily recognize this failing when I see it.)

Back when I was in junior high, our physical science class taught us atomic orbitals, you know, the s, p, and d orbitals, by having us draw pictures of them, each orbital carefully drawn with its requisite number of electrons. Later, when I was in college and took chemistry, physics, and physical chemistry, I learned that this picture of orbitals was hopelessly simplistic. I learned the Schrödinger equation. I learned elementary quantum mechanics. I learned that an orbital is in reality a wave function and that the electron’s location can never be precisely determined. In short, I learned the advanced course, which built upon the basics.

Were those simplistic pictures of orbitals that I learned in seventh or eighth grade wrong? Were they “wrong-headed”? Were they useless? By revere’s apparent criteria, they were. Never mind that my mind and knowledge base weren’t sufficiently developed then to understand the more advanced version. How about another example? My first basic physics courses in high school and in college taught simple Newtonian mechanics. Later, I took advanced classical mechanics and learned how to deal with complexities I had never appreciated before. Was what I learned in my basic classes wrong or wrong-headed?

How about a very practical matter for physicians like me? Take the question of how I have to explain complex biology to my patients with breast cancer. How do I do that for patients who don’t understand biology and indeed may not have even graduated from high school? How am I to explain to such a patient what needs to be done and why? One possible course is simply to don the mantle of paternalism and rely on my authority. I could just say that this operation is what needs to be done and that’s that. Not surprisingly, most women, even those who have little education in science, don’t like that approach. So I use another approach. I simplify. For example, when telling such a patient that we need to do a sentinel lymph node biopsy to check whether the tumor has gone to the lymph nodes under her arm, I tell her that breast cancer goes first to the axillary lymph nodes. I don’t tell her that this isn’t the case a certain percentage of the time, when breast cancer can skip the axillary lymph nodes and go to the rest of the body. I don’t tell her that sometimes the cancer goes to other lymph node beds. I don’t get into the issue of what happens if there are isolated tumor cells in a single lymph node, a question that is currently under active study and evolving, as this study (which I may have to blog about) shows. In other words, to get my message across, I have to tailor it to what I perceive to be the level of education of my patient. Usually, that involves considerable simplification. It involves leaving a lot of nuance out. It involves leaving out complexities that I, as a physician and researcher, understand.

Am I being “wrong-headed” by not explaining in excruciating detail all the complexities and controversies in breast cancer treatment? revere’s argument suggests that he thinks I might be. Yes, I know that we’re talking about different situations, but the principle is the same. Education involves first learning the basics, devoid of much of the nuance that is the lifeblood of scientific debate at the higher levels. Only once a learner grasps the basics can more detail be added, so that the student knows enough to understand what the controversies are even about. Again, Val was addressing a 101-level class; revere was addressing a graduate-level class. The difference was intent and the intended audience. revere’s little rant is, in essence, like saying that a grade school textbook on motion and mechanics is wrong because it does not go into relativity and quantum mechanics. This “being such a scientist” leads revere to a conclusion that is correct but utterly unhelpful to the intended audience of Val’s post:

So if these aren’t the right indicia of reliability, what are? There is no answer to this question (and certainly not the answer given in the post in question). Science is a process of sifting and winnowing and often whether work is reliable or not isn’t known for some time. It has to be tested, cross-checked and fit into an existing body of information. As one of my colleagues is fond of saying, “Real peer review happens after publication.” Most science reporting these days is quite terrible, little more than regurgitating the press release from a university’s media relations outfit. If you are a lay reader interested enough to look at the actual paper, then you are very far ahead of the game. Most lay readers are at the mercy of a reporter or a press release and there is no good way to tell which of these are credible.

That means most lay readers have to depend on others who look at the literature with a critical and informed eye. There are some extraordinary science journalists out there who are able to do this by providing the reactions of others in the field. The Perspectives, Commentaries and News sections of the top tier journals are very good at that, as well. Then, there are the science blogs, of which Science Based Medicine is one of the best. We try to do the same kind of critical appraisal here at Effect Measure on certain subjects like influenza, and there are many, many more science blogs (such as those produced by our publisher, Seed Media Group at scienceblogs.com).

While I appreciate the hat tip and feel almost churlish in having to respond (after all, I try to do just what revere describes in not just one but two places, and Effect Measure usually succeeds at doing just this, just not this time), I am troubled by the apparent implication that lay people have to be utterly at the mercy of scientists, science journalists, and bloggers. The reason is that this nearly completely ignores the question of which scientists, journalists, and bloggers are reliable and how a lay person can tell. This is more than just an academic question. For example, yesterday I described a disturbing new website. It’s slick, written by physicians, and so utterly wrong-headed in every sense of the term as to be downright dangerous. How would a lay person realize that its contents are not just a load of hooey, but a load of profoundly dangerous hooey?

In the end, revere criticizes Val for being so simplistic as to be “wrong-headed” in her primer, but, instead of offering an alternative that actually might help the confused layperson with little scientific background, he simply confuses the issue further because he can’t lower himself below the stratosphere, ignore complexities that, while mostly correct, do not help explain the issue to the beginner, and boil down the question Val was trying to answer to its barest essence in terms that a beginner should be able to understand:

Whom do you trust to provide reliable science information and why?

Which is why I must reluctantly characterize revere’s critique as a FAIL. Wrong-headed, even. He might have provided a nice counterpoint with an additional layer of complexity, but instead he chose to miss the point. There are always deeper levels of complexity in any topic to be mined, whether you are a beginner or at the pinnacle of your field. Always. But as an educator you can’t start with those deeper levels and leap, as revere did, into complexities and nuances that, in order to be understood, require background knowledge the audience doesn’t have.