Effect Measure is a site I highly recommend, with experienced epidemiologists in charge. In other words, it’s run by adults. But scientists often disagree about things. This is apparently a secret to non-scientists and many reporters who assume that when two scientists disagree, one is lying or wrong. But it’s true nonetheless. Whatever the subdiscipline, there are disagreements. If you pick up almost any issue of Science or Nature you will find plenty of them, usually (but not always) couched in polite language in the Introduction or Discussion section of a paper or in the Letters. So it’s not surprising that I disagree with revere’s piece from yesterday: “Science-Based Medicine 101”: FAIL. Actually, I don’t just disagree. I think it is quite wrong-headed.
Sorry, but I couldn’t resist appropriating revere’s very own words and twisting them to my nefarious purpose. As much as I like Effect Measure in general and respect the reveres who put it together, it’s no secret that at least one revere and I have had our differences in the past, for instance on BPA. (And don’t get either of us started on the Israeli-Palestinian conflict.) However, I like to think that we share far more areas of broad agreement than areas of disagreement over which we butt heads. In fact, I even mostly agree with much of what he said in his post. That being said, I nonetheless think that revere is being quite wrong-headed (word choice intentional) in how he said what he said and has thus earned a FAIL, perhaps even an EPIC FAIL. Instead of producing a helpful caveat and clarification to what was intentionally a simple introduction for a lay audience, he instead chose the path of pedantry, and, given my involvement with SBM, I don’t think I can just sit back and let it stand unanswered, because, well, that’s just how I roll.
First, let’s look at what Val Jones, the target of revere’s pedantry, prefaced her post with:
I thought I’d do a little SBM 101 series for our lay readers. Forgive me if this information is too basic… at least it’s a place to start for those who are budding scientists and critical thinkers.
So, right from the start, there are many hints that Val’s post was intended to be very basic. It was clearly meant as an introduction to evaluating the reliability of a source for people who haven’t thought about evaluating sources of medical and scientific information before. So, what does revere respond with? The viewpoint of a senior epidemiologist with numerous grants and publications to his credit, as well as many years of experience in medical and scientific academia:
Oh, my. Where to begin? Let’s start with track record of the researcher. More and more the real work in science is being done by post docs and graduate students who often get their names first (which is fair). But somewhere in the list (usually last, the second best place to be in biomedical articles) is usually the head of the lab or the principal investigator of the grant and they are often listed as “corresponding author” (because they’ll still be there after the post doc moves on). They are also often the ones who get the credit (“Dr. Smith’s lab”). How much guidance and input they had in the work depends. Sometimes it’s a lot. Sometimes they barely know what’s in the paper. One thing for sure. Looking at the name and record of the first author or the “senior author” is not a sure way to gauge credibility. Ask the numerous lab heads who have had to retract papers after fraud or misconduct by one of their students or post docs was uncovered.
Oh, my. Where to begin? Rarely have I seen a post that makes good points but still manages to be a FAIL because it so obtusely misses the point.
Let’s start with the first question that came to mind after I read revere’s complaint: So frikkin’ what? This is in essence a huge red herring. And if we’re going to go on anecdotal evidence, maybe I should throw my anecdotes in, which run counter to this. Maybe it’s because I just haven’t reached those rarefied heights of principal investigators who have so many grants and so many publications that they have so many minions working for them that they don’t know what’s going on in their own labs, but I actually have never even met such a person. All the senior leadership with whom I’ve ever had dealings not only know what’s going on in their labs but are heavily involved in designing experiments, analyzing data, and writing the papers. Anecdotal? Hell, yes, but if we’re going to rely on anecdotal evidence, then I don’t see why my anecdotes are any less reliable than revere’s are. revere’s also burning a straw man. Val never said that the track record of the researcher was a “sure way to gauge credibility,” merely a rough guideline. Again, remember that she’s dealing in Science-Based Medicine 101. So, while revere is not incorrect and even makes good points, he’s dealing with SBM 801, a graduate-level course. He’s focusing on the details, while Val is trying to give a broad-brush introductory picture to start from.
And, guess what? Well-published, well-funded researchers with a good track record usually get to that position for a reason. Not always, to be sure, but as a general rule, they get that way by doing good science. It may not be revolutionary (indeed, some labs get where they are by doing fairly pedestrian science, but pedestrian science is almost by definition “reliable”), but it’s usually more reliable than the work of those without such a record. Here’s a hint: It’s a rule of thumb. Rules of thumb usually have exceptions, sometimes a fair number of exceptions. That doesn’t make them useless or “wrong-headed.” Moreover, when revere goes on to write about assistant professors and junior professors doing fine science, my first reaction was to scratch my head. It’s true, but so what? It doesn’t really have much bearing on Val’s point other than to dilute a valid point that does admittedly have some caveats. I can’t see how, at the level of an introductory post, pointing such things out would do anything more than confuse.
I could go on and complain about revere’s post point by point, as he did about Val’s post, but that’s not my purpose in writing this. I have bigger fish to fry. Even so, before I get to that purpose, I will point out that it is correct, as revere points out, that top tier journals can often publish science that is later refuted because such journals often publish the most bleeding edge science, and that science is almost by definition more tentative and more frequently refuted. Two years ago, I even wrote a long post about it (do I ever write a short post?) entitled Frontier Science Versus Textbook Science that explains why the cutting edge research published in top tier journals is frequently later shown not to have been correct, and more recently I’ve also discussed why popularity of a research topic can actually lead to less reliability in individual studies in that field; so I don’t feel a compelling need to rehash that territory other than to say that such considerations would only muddy up an introductory post such as what Val wrote. Why should I when I can hawk old posts of mine instead?
revere’s displeasure with Val’s attempt at science communication for the masses strikes me as the perfect embodiment of the “framing” kerfuffle that consumed ScienceBlogs a couple of years ago, embers of which continue to reignite and start conflagrations from time to time even now, most recently as some rather nasty blogospheric histrionics over Chris Mooney and Sheril Kirshenbaum’s Unscientific America. revere’s FAIL is not because he’s wrong; it’s because he’s entirely missed the point, which is that Val was in essence trying to teach part I of what she admittedly called a “101” class and in doing so mentioned plenty of caveats about having simplified things. Instead, he launched straight into a graduate-level discussion of the ins and outs of scientific publishing, clinical trials, and research. A lot of it is true, but it’s also, sadly, beside the point and nearly completely unhelpful in trying to educate a lay person who doesn’t know whom to trust about which sources are reliable in discussing science-based medicine. In other words, as Randy Olson would put it: “revere, don’t be such a scientist.” (And, yes, I know that I’ve been frequently guilty of doing exactly the same thing, which is one reason why I can so readily recognize this failing when I see it.)
Back when I was in junior high, our physical science class taught us atomic orbitals, you know, the s, p, and d orbitals, by having us draw pictures of the shapes of the orbitals with the requisite number of electrons in each carefully drawn orbital. Later, when I was in college and took chemistry, physics, and physical chemistry, I learned that this picture of orbitals was hopelessly simplistic. I learned the Schrödinger equation. I learned elementary quantum mechanics. I learned that orbitals are in reality wave functions and that the electron’s location can never be precisely determined. In short, I learned the advanced course, which built upon the basics.
Were those simplistic pictures of orbitals that I learned in seventh or eighth grade wrong? Were they “wrong-headed”? Were they useless? By revere’s apparent criteria, they were. Never mind that my mind and knowledge base weren’t sufficiently developed then to understand the more advanced version. How about another example? My first basic physics courses in high school and in college taught simple Newtonian mechanics. Later, I took advanced classical mechanics and learned how to deal with complexities I had never appreciated before. Was what I learned in my basic classes wrong or wrong-headed?
How about a very practical matter to physicians like me? Take the question of how I have to explain complex biology to my patients with breast cancer. How do I do that for patients who don’t understand biology and indeed may not have even graduated from high school? How am I to explain to such a patient what needs to be done and why? One possible course is simply to don the mantle of paternalism and rely on my authority. I could just say that this operation is what needs to be done and that’s that. Not surprisingly, most women, even those who have little education in science, don’t like that approach. So I use another approach. I simplify. For example, when telling such a patient that we need to do a sentinel lymph node biopsy to check whether the tumor has gone to the lymph nodes under her arm, I tell her that breast cancer goes first to the axillary lymph nodes. I don’t tell her that this isn’t the case a certain percentage of the time, when breast cancer can skip the axillary lymph nodes and go to the rest of the body. I don’t tell her that sometimes the cancer goes to other lymph node beds. I don’t get into the issue of what happens if there are isolated tumor cells in a single lymph node, a question that is currently under active study and evolving, as this study (which I may have to blog about) shows. In other words, to get my message across, I have to tailor it to what I perceive to be the level of education of my patient. Usually, that involves considerable simplification. It involves leaving a lot of nuance out. It involves leaving out complexities that I, as a physician and researcher, understand.
Am I being “wrong-headed” by not explaining in excruciating detail all the complexities and controversies in breast cancer treatment? revere’s argument suggests that he thinks I might be. Yes, I know that we’re talking about different situations, but the principle is the same. Education involves first learning basics, devoid of much of the nuance that is the lifeblood of scientific debate at the higher levels. Only once a learner grasps the basics can more detail be added, so that the student knows enough to understand what the controversies are even about. Again, Val was addressing a 101-level class; revere was addressing a graduate-level class. The difference was intent and the intended audience. revere’s little rant is, in essence, like saying that a grade-school textbook on motion and mechanics is wrong because it does not go into relativity and quantum mechanics. This “being such a scientist” leads revere to a conclusion that is correct but utterly unhelpful to the intended audience of Val’s post:
So if these aren’t the right indicia of reliability, what are? There is no answer to this question (and certainly not the answer given in the post in question). Science is a process of sifting and winnowing and often whether work is reliable or not isn’t known for some time. It has to be tested, cross-checked and fit into an existing body of information. As one of my colleagues is fond of saying, “Real peer review happens after publication.” Most science reporting these days is quite terrible, little more than regurgitating the press release from a university’s media relations outfit. If you are a lay reader interested enough to look at the actual paper, then you are very far ahead of the game. Most lay readers are at the mercy of a reporter or a press release and there is no good way to tell which of these are credible.
That means most lay readers have to depend on others who look at the literature with a critical and informed eye. There are some extraordinary science journalists out there who are able to do this by providing the reactions of others in the field. The Perspectives, Commentaries and News sections of the top tier journals are very good at that, as well. Then, there are the science blogs, of which Science Based Medicine is one of the best. We try to do the same kind of critical appraisal here at Effect Measure on certain subjects like influenza, and there are many, many more science blogs (such as those produced by our publisher, Seed Media Group at scienceblogs.com).
While I appreciate the hat tip and feel almost churlish in having to respond (after all, I try to do just what revere describes in not just one but two places, and Effect Measure usually succeeds at doing just this, just not this time), I am troubled by the apparent implication that lay people have to be utterly at the mercy of scientists, science journalists, and bloggers. The reason is that this nearly completely ignores the question of which scientists, journalists, and bloggers are reliable and how a lay person can tell. This is more than just an academic question. For example, yesterday I described a disturbing new website. It’s slick, written by physicians, and so utterly wrong-headed in every sense of the term as to be downright dangerous. How would a lay person realize that its contents are not just a load of hooey, but a load of profoundly dangerous hooey?
In the end, revere criticizes Val for being so simplistic as to be “wrong-headed” in her primer, but, instead of offering an alternative that actually might help the confused layperson with little scientific background, he simply confuses the issue further because he can’t lower himself below the stratosphere, ignore complexities that, while mostly correct, do not help explain the issue to the beginner, and boil down the question Val was trying to answer to its barest essence in terms that a beginner should be able to understand:
Whom do you trust to provide reliable science information and why?
Which is why I must reluctantly characterize revere’s critique as a FAIL. Wrong-headed, even. He might have provided a nice counterpoint with an additional layer of complexity, but instead chose to miss the point. There are always deeper levels of complexity in any topic to be mined, whether you are a beginner or at the pinnacle of your field. Always. But as an educator you can’t start with those deeper levels and leap, as revere did, into complexities and nuances that, in order to be understood, require background knowledge the audience doesn’t have.
29 replies on “Effect Measure on “Science-based Medicine 101”: FAIL”
Hmmm…very uncharacteristic of revere. I agree with both of you, but you’re right he missed the point — within all of that rebuttal he provides, the question remains: how are people supposed to tell who to trust? The whole issue is that lay-people shouldn’t rely on scientists to parse out the information because, like anything, some people are just bad at their jobs. I look forward to seeing revere’s response, if any.
Sure, simplification is a necessary part of pedagogy, but we still have to figure out what the right simplifications are. Of all the possible “lies to children” we could tell, which are useful in their own right as approximations? Which best prepare the way for the next level of accuracy? Newtonian mechanics can be taught well or poorly; the cartoon version of electron orbitals can be worthless (as it was in my junior-high science class) or informative (in my freshman chemistry class at university).
In the present case, one might ask about the “published in a top-tier journal” heuristic. Is that the most effective cut, the best first-order approximation? What if instead we advised the reader to check that the research was not published in a known crank journal, like Medical Hypotheses or the Journal of American Physicians and Surgeons? It’s a matter of trading off false negatives for false positives. If your smoke alarm goes off every time you make toast, you can pull out the batteries; this will significantly lower the rate of false positives (alarms without fire) but ups the risk of false negatives (fires without alarm). Telling people, “If it wasn’t in Science or Nature or the NEJM, it’s probably worthless” will cut out a great deal of bullshit, but in all the material that rule excludes, there’s gonna be a great deal of legitimate work.
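To make that trade-off concrete, here is a toy sketch in Python (entirely made-up numbers and placeholder journal names, not real data) of how a stricter “trust only top-tier journals” filter cuts false positives at the cost of more false negatives:

```python
# Toy illustration (hypothetical numbers): a stricter journal filter lets
# less junk through (fewer false positives) but excludes more legitimate
# work (more false negatives).
def screen(papers, trusted_journals):
    """Accept a paper only if it appeared in a journal on the trusted list."""
    accepted = [p for p in papers if p["journal"] in trusted_journals]
    false_pos = sum(1 for p in accepted if not p["legit"])  # junk accepted
    false_neg = sum(1 for p in papers
                    if p["legit"] and p["journal"] not in trusted_journals)  # good work rejected
    return false_pos, false_neg

# Hypothetical mix of papers; journal names are placeholders, counts are invented.
papers = (
    [{"journal": "Nature", "legit": True}] * 8
    + [{"journal": "Nature", "legit": False}] * 1
    + [{"journal": "SpecialtyJournal", "legit": True}] * 30
    + [{"journal": "SpecialtyJournal", "legit": False}] * 10
    + [{"journal": "CrankJournal", "legit": False}] * 15
)

strict = {"Nature"}
lenient = {"Nature", "SpecialtyJournal"}
print("strict filter  (FP, FN):", screen(papers, strict))   # few false alarms, many misses
print("lenient filter (FP, FN):", screen(papers, lenient))  # more false alarms, fewer misses
```

Run on this made-up mix, the strict filter admits almost no junk but throws away most of the legitimate specialty-journal work, which is exactly the batteries-out-of-the-smoke-alarm problem in the other direction.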
Indeed, “Was the ‘research’ even published at all?” is a worthwhile question in its own right. We know that the media are not above taking an unfinished master’s thesis and spinning it beyond all recognition, until the story in the newspaper couldn’t see the land of truth with a telescope.
It’s never easy to pack a complicated subject into a pamphlet. When we try to invent general guidelines and rules-of-thumb, we’ll inevitably make mistakes. Simplification is necessary, but not all simplifications are therefore good, or even adequate, and we have to deploy hard-earned expertise to figure out which are worthwhile.
(There I go again, talking about “first-order approximations” and tradeoffs between Type I and Type II errors. I’m being such a scientist — Randy Olson will hate me.)
Not to stir up more trouble, but the current furor over Mooney and Kirshenbaum’s book is NOT an extension of the framing debate. While people have differing opinions of that aspect of their book and whether they actually say anything of value, the thing that really gets people pissed off is M&K’s assertion that some people need to keep their mouths shut about what they actually think, lest the poor vulnerable churchies get their sensitive wittle feelings hurt and are driven into the arms of the Creationists.
To put it another way: It’s one thing for M&K to lecture others on how to communicate science to the public. It’s another thing entirely for M&K to say that scientists aren’t even allowed to voice certain opinions. That’s not framing, it’s not even the dreaded “spin” — it’s a call for censorship.
I’m fine with Mooney saying that science and religion are compatible, and that pushing this opinion is effective at convincing the public (even though the overwhelming historical evidence is against the effectiveness of this approach, but whatever). What I have a problem with is when Mooney says that I can’t say what I think about the compatibility (or lack thereof) of science and religion.
Eugenie Scott represents the position Mooney ought to take on this if he really believes what he says. Scott has said time and time again that she believes science and religion are compatible — but she doesn’t seem to be particularly concerned when some scientists voice a legitimate divergent opinion. Can’t say the same about M&K…
Where have M&K ever said that scientists aren’t allowed to voice certain opinions? Direct quotes are preferred.
You’re wrong, by the way. This latest kerfuffle is most definitely an extension of the framing debate. Go back and read the posts from two years ago, and you’ll see a depressingly similar set of rhetoric unfolding using the very same talking points. The whole bit about religion and science and whether they’re compatible or not was right at the forefront two years ago, just as it is now in the latest incarnation of the same tired battle. True, it didn’t start out that way (much), but it rapidly “evolved” into the same thing: P.Z. et al saying religion and science aren’t compatible and that they were being told to “shut up” by the “framers.”
OK, people are talking about “framing” now. I’m giving up and going to watch Star Trek. Hmmm, The One With Twentieth-Century Rome, or the One Where Spock Gets a Romulan Girlfriend?
A characteristically thoughtful, feisty and spirited defense of a colleague. Good on you, Orac, as they say. And by saying it’s a defense of a colleague I am not implying you don’t believe she was right, although you voice some hesitations here and there and acknowledge you feel it is incumbent on you as a member of the SBM team to respond. No. You buy it. But I don’t. Nor do I confess to missing the point. The point, at least as I understand it, is to provide some helpful heuristic guidance to people who need it because they don’t have the specialized knowledge needed to evaluate a scientific paper critically. That means the guidance should be good and it should be helpful and it should, dare I say it, be reliable. Notice I didn’t say “credible,” the word Val uses. She is most certainly credible. But her advice in this case is not reliable in the most important sense: that if you relied upon it you would be better off.
Your argument seems to boil down to the fact that with introductions you have to simplify. You don’t jump in medias res. Quite true. But you don’t jump in just anywhere either. You talked about atomic orbitals. Suppose your science teacher had taught you, as was commonly taught just before Rutherford, that electrons were like raisins in a doughy material? Is that the right place to start to explain atomic structure? It’s simplified and visualizable. But it starts at the wrong place.
Then you jump to another metaphor, patient education:
This seems quite inapt but convenient for my purposes. You don’t say to your patient, “I have published in journals whose names you don’t know (or how papers get into them or what they mean), I have such and such a degree from such and such a prestigious place, and I have been employed by this or that cancer center.” Instead you explain to her what is involved. Nor do you say to her, “Take my explanation and see if it makes scientific sense to you. Have I taken into account other therapies and competing explanations? Is my reasoning sound?” It’s not a matter of nuance and complexity. It’s a matter of appropriateness and teaching objectives. It wasn’t that Val was giving simplified advice that I objected to. It was that she was giving bad advice. I also won’t go point by point through your long response except to say that it wasn’t a question of anecdote but of principle.
Let’s parse this a little because I concede my final paragraph was lacking in detail. But there is certainly no implication that lay people are utterly at the mercy of scientists, science journalists, and bloggers, any more than Val’s piece implies that. All of her advice is about scientists and the science but I didn’t think she was implying that’s all there was and neither am I. I deal on a regular basis with communities that have environmental problems. I am continually impressed by the amount of what a colleague of mine calls “raw brain power” out there. On purely technical matters many community people with no specific scientific education can run circles around technical experts from industry and public agencies. The reason is quite clear. They are intensely motivated, they spend a great deal of time learning about their specific subject and community problem and they are bright to begin with. They don’t need quantum mechanics or molecular toxicology or epidemiology to figure things out. They put it together using native intelligence, intense engagement and help from many sources. They don’t use guides like Val’s, which would be totally useless to them.
At issue is what kind of help we give laypeople who are motivated to know or understand more. I think your approach to your patients exemplifies it. You don’t start with a high school description of the scientific method. You explain to her what you think she needs to know. You explain the subject in terms that are appropriate for the setting and the person. You don’t try to come up with a general formula that is likely to mislead in place of addressing the subject matter.
Because there is no such formula, and to imply so is wrong-headed.
Orac: I don’t think they ever said “scientists aren’t allowed to voice certain opinions” but implying it is another matter…
Would “In Chapter 8 of Unscientific America, just 12 pages of a broader book, we argue that an entire movement attached to ‘science’ today is not really much invested in effectively reaching the U.S. public, but rather, has become radicalized around the counterproductive project of blasting other Americans’ religious faith. This movement is most vociferous on the Internet and, more particularly, on science blogs like Pharyngula, where its adherents seem unswervingly certain their way is the right way, and seem to see little value in civil dialogue with those who might disagree. (For one seconding of this opinion, see here.)” qualify? I’m not a huge fan of PZ, but repeatedly attacking him, Coyne, and Dawkins as being a significant part of the problem, with the implication that life would be easier if they would just go away, does sound like a call for censorship.
revere: “And by saying it’s a defense of a colleague I am not implying you don’t believe she was right[.]”
If you’re not implying that, what are you doing? “Framing” it? Wonderful …
revere: “Suppose your science teacher had taught you, as was commonly taught just before Rutherford, that electrons were like raisins in a doughy material? Is that the right place to start to explain atomic structure? It’s simplified and visualizable. But it starts at the wrong place.”
Actually, she did. She went through the entire procession of models of the atom, explaining why each was proposed and subsequently found to be wrong. She ended with the QM-based model, explaining that we think it’s correct, but a simplified version will work for our purposes.
But then, she was an excellent teacher. I guess you can’t expect such excellence from all people …
revere: “At issue is what kind of help do we give laypeople who are motivated to know or understand more. I think your approach to your patients exemplifies it. You don’t start with a high school description of the scientific method. You explain to her what you think she needs to know.”
I don’t think this could be more wrong. Probably the biggest problem with promoting science on the internet is horribly misguided motivated people. These folks go bonkers spouting all sorts of nonsense, precisely because they lack what you dismissively call “a high school description of the scientific method.” (Or, because someone else “explained” to them “what they need to know,” and so the woo spread.)
Big swing and a miss here, revere.
Probably shouldn’t comment without reading both posts thoroughly, but I want to give props to revere for asking that simplified explanations be GOOD explanations. And I agree that Val Jones’s guide may be difficult to implement.
But I agree with Orac that revere didn’t help us much in that department. Truly, how does the educated and interested layperson evaluate claims and counterclaims among scientists? Is relying on trusted opinion the only avenue open to us? Because sooner or later, that reliable source is going to be wrong.
I totally agree. Furthermore, not only does Val’s preface state that she’s giving an overview (and thus what will follow will be general guidelines for which there are always valid exceptions), she states that it’s for the lay-readers of SBM. The raison d’etre of SBM, as revere must know, is to debunk pseudoscience under the guise of CAM. This is achieved almost exclusively by masterful deconstruction of the claims made on websites, in books, or in research papers. Being able to determine the validity of these various sources is a very useful tool, especially as so many CAM purveyors are singing out about the ‘peer-reviewed literature’ backing their particular woo. Like the scientific studies themselves, not all peer-reviewed journals are created equal, and citations from JPANDS or Medical Hypotheses, etc. should be viewed a bit more circumspectly than those from Neuron or Lancet. I think this is the valuable take-home message from Val’s original point three.
The same essential point can be made about the authors and the studies themselves. Knowing how to weight them is certainly not an easy task–goodness knows it’s too much for the MSM most of the time–but the dialogue at SBM and similar sites is invaluable in this endeavor. Thanks to Val, the lay-readers at SBM now at least have some benchmarks to go on, some questions to ask, some backgrounds to investigate, to help them interpret the claims they might hear in their own communities.
bob: We disagree on this, but one thing I want to comment on particularly: the idea that “the scientific method” is well understood by scientists. As I’ve had occasion to say more often than I care to count, expecting scientists to understand the scientific method is like expecting fish to understand hydrodynamics. Most descriptions scientists give are grotesque comic book versions of what really goes on in science (hypothesis-experiment-observation-redo hypothesis, etc.). That’s what happens in the minority of cases. Even distinguishing scientific from pseudoscientific has turned out to be such a difficult problem (it even has a name, The Demarcation Problem) that most philosophers of science have stopped addressing it as both uninteresting and fruitless. If you ask scientists for their philosophy (if they have one at all) it is almost always some combination of naive realism (no problem; I am myself what would be called a naive realist, although unlike most scientists I know what this means) and both a logical positivist form of verificationism and a Popperian falsificationism. No matter that these two are incompatible (one was constructed to demolish the other). Both are also inconsistent with what happens in science. You think that a theory that says all ravens are black will be “refuted” by observing a white raven? Fat chance. The black raven theorist will just reply, “Raven? You call that a raven?”
Jennifer: My experience with many EBM proponents is that they actually understand very little of what they read. If it is an RCT in a “good” journal then it’s right. Many don’t even understand randomization or its purpose or how such a study is done or its pitfalls. They just think that because it’s a “top tier” study design they should believe it. Using the MSM as a straw man here doesn’t work for me. The 101 post wasn’t about the MSM. It was about what the telltale signs of a “credible” piece of scientific work are. Yes, there’s a lot of crap out there, most of it not CAM, just lousy science that gets published anyway. I can spot it in my field but Val probably couldn’t, even with all her experience as a scientist. The same is true of my judgment of other people’s areas of expertise, where I am pretty much a layperson. And none of her criteria would be reliable. I read all sorts of crap in the MSM that got there through a media relations press release from a “top tier” journal. Some of these things I talk about on the blog. When it comes to “woo,” people go to SBM and Respectful Insolence to get the scoop, a much surer and more efficient way than to decide on the basis of whether it was in a “top tier” journal. As for Val’s piece giving general guidelines for which there are exceptions, that’s not the problem. She is giving highly special cases for which almost everything else is an exception. Most science is not in one of those journals, nor is it by someone whose reputation and contribution to the work can be judged with any ease, nor can the injunction to see if the design is good be carried out by people who don’t know the field. And if you confine yourself to only science that meets the first two criteria you will ignore most science and most science that makes a real difference.
revere, thanks for your response. First off, I hardly think my passing mention of the MSM presents anything like a straw man. There is a lot of crap to be found in print and on the airwaves, and much of it is crap precisely because the findings of a particular study are overstated, sensationalized or otherwise misrepresented. As such, I only brought up the MSM as an example of the difficulties the lay public might encounter in trying to accurately interpret published data.
Beyond that, I’m not really sure what you’re advocating here. You obviously disagree with Val that journal reputations (and PI reputations) are a reliable metric by which to judge the strength of a study. You further contend that no one other than an expert within the field in which a given study has been conducted is suitable to judge the robustness of the methods, thus disputing Val’s final piece of advice. Fine. But your alternative recommendations are a little contradictory. On the one hand, you seem to be advocating listening to the experts via SBM, RI, EM and so forth, and while these are indeed all awesome resources, I do worry about the paternalistic hue here, as Orac pointed out. (And parenthetically, I would dispute that the ‘experts only’ edict holds up even to this point. Neither David Gorski nor Steve Novella is an epidemiologist or an immunologist, but they can fisk the hell out of any anti-vax doctor who comes along, and a whole passel of other woo besides.) On the other hand, you refer to the ‘raw brain power’ within your community of can-do environmental problem solvers. They, evidently, don’t need the experts around at all. What gives?
Further, I might characterize that latter example as a bit of a straw man, as it seems only superficially comparable to the complexity of debunking medical claims. There may be minor disputes over particular strategies, but does your community brain trust have to navigate around environmental snake oil salesmen? Are they confounded by the placebo effect when they put their ideas into practice to solve these problems? Are they receiving conflicting advice from apparently equally credentialed ‘experts’?
Hmmm, something in my response to revere triggered the moderation filter. Hopefully it’ll be worth waiting for.
I don’t have time to respond in detail, revere, but I will say two brief things: (i) I’m disappointed that you resorted to philosophy so quickly, and (ii) individual scientists’ failings do not undermine the entire enterprise of science.
Number two is an especially serious error that you’re making (assuming I read you correctly), because that is indeed the POINT of science. I fear it’s YOU who doesn’t know the scientific method that well, after all. (Or, at least you assume that it’s totally bastardized in practice, which isn’t true.)
As for your comments about “EBM proponents,” I disagree completely. Perhaps we’re reading different blogs, and you don’t read the folks at Science-Based Medicine that often. Do you really think Novella, Orac, Crislip, et al don’t understand the studies they discuss?
bob: I’m very puzzled by your comment that I “resorted to philosophy.” Since I believe we are talking about epistemology (how do we know and what warrants our beliefs), you have left me baffled. It sounded like you were accusing me of something disreputable in argumentation, like making an ad hominem argument.
I also do not understand your second point: “that is indeed the POINT of science.” What exactly is the referent for the demonstrative pronoun “that”? If you are talking about “the scientific method,” maybe you’d like to tell us how you understand it. I think I can do science and others think so, too, if getting grants and publishing papers is a measure. But I couldn’t give you an easy definition of scientific method. I’ve thought about it quite a bit and written about it a number of times but the essence of it is very hard to express. I’d be interested in your version. I don’t think there is a definition, although there have been some good explications (I find Susan Haack’s version close to my own, although we use different metaphors). She is a philosopher, of course, but I don’t think that invalidates her views. Insofar as a scientist has an idea of scientific method, it is based on some philosophy of science or at least, the epistemology of science.
Regarding the EBM comment, I hope you understand that SBM and Orac’s blog are not the only proponents of EBM. Indeed they are somewhat unusual in that they actually think about it (and writing is thinking). Many reflexive proponents merely pay lip service to it. I doubt that one in ten could even give a correct account of a confidence interval as a coverage probability. One of the most egregious but regrettably common errors one hears from knee-jerk EBM proponents (not SBM or RI, at least as far as I am aware) is that differences that are not statistically significant are evidence of no effect. But now we’re getting into the weeds and I gather you don’t wish to follow me there.
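To make that last point concrete, here is a minimal simulation sketch in Python (the effect size, sample size, and significance threshold are assumed, purely illustrative numbers): with a small trial and a modest but real effect, a non-significant result is the most common outcome, so “not statistically significant” cannot be read as “no effect.”

```python
# Illustrative simulation (assumed numbers): an underpowered two-arm trial
# of a treatment with a real but modest effect. Most simulated trials give
# p >= 0.05 even though the effect truly exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.3      # assumed true difference, in standard-deviation units
n_per_arm = 40         # assumed (small) sample size per arm
n_trials = 10_000

nonsignificant = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p >= 0.05:
        nonsignificant += 1

# With these assumed numbers, roughly three quarters of the simulated trials
# "fail to find" an effect that is, by construction, really there.
print(f"share of trials with p >= 0.05: {nonsignificant / n_trials:.2f}")
```

Whether that share comes out at three quarters or one half depends entirely on the assumed effect and sample sizes; the point is only that absence of significance is not evidence of absence.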
bob: I believe I neglected to respond to your comment about the starting point and the raisins in the dough explanation of the atom. To recap the exchange for other readers:
Let me understand you here. She went through the entire procession of models, you say. But we aren’t talking about the entire procession. We are talking about the starting point. By your reasoning it was fine to start there (or earlier) and then pick up in the next class with, say, the Rutherford atom, etc. I am giving you the benefit of the doubt here, but I don’t know where she started.
So that means your biology teacher, in teaching evolution could just as well start with Creationism or the Bible and then move on? Or perhaps Francis Collins’s version? Since the starting point doesn’t matter in your version, why not? Would that still be excellent teaching? How about the kids that didn’t make it to the next class (or the next post; I’m not even sure another post is planned). No, bob, I’m afraid that’s not an answer to my point. Or as you might say, my POINT.
I recommend the following sources to laypersons who want to be able to read primary sources, or even to have some understanding of what they read in the media.
Know Your Chances: Understanding Health Statistics, by Woloshin, Schwartz and Welch
This book is accessible to just about anyone.
Clinical Epidemiology: The Essentials, by Fletcher and Fletcher
This book is a little more advanced, but still accessible to motivated laypersons.
The entire procession?
(And, of course, Schrödinger wasn’t the end of the story, either. His solution of the hydrogen atom in terms of nonrelativistic wavefunctions had to be corrected for relativistic effects; to calculate the Lamb shift in the hydrogen spectral lines, one has to confront the self-energy of the electron, and so forth. . .)
Dear Orac,
I have a question concerning this statement:
“I tell her that breast cancer goes first to the axillary lymph nodes. I don’t tell her that this isn’t the case a certain percentage of the time, when breast cancer can skip the axillary lymph nodes and go to the rest of the body.”
If you do it like this, isn’t there the danger that the patient thinks: “O.k., if this biopsy comes out o.k., then I’m 100% sure that the cancer did not spread”? So what do you tell the patient after the outcome was negative? Won’t she feel in some way confused if you seem to retract the definiteness of your statement?
Sorry I haven’t been around to participate in this discussion much. I was working, and on Friday nights I usually try hard not to look at the blog. Gotta have one night away, at least.
I think what this is boiling down to is not so much whether significant simplification is necessary to explain complex science concepts to the public but rather what is appropriate to be left out. I still think revere came at this question from a very elevated level, a level so high above what the intended audience is ready for that his message came across as, in essence, that there is no good way to assess reliability and that the public is stuck with relying on experts. That may not be what he meant, but that’s how it came across to me, and, of course, that always leaves the question of which experts can be trusted, perhaps an even thornier question. His subsequent clarifications seem to me not to have made things much clearer, although it could just be because we disagree on what’s important. (Really, revere, that stuff about assistant professors and postdocs was pretty far afield and not very helpful at all, as I pointed out in my post.)
When it comes to clinical trials, I think Val has a reasonable point, as do most of the editors of SBM. In general, the bigger and better clinical trials show up in the higher tier journals and the smaller and less rigorous ones show up in lower tier journals. I may also have been swayed not to feel obligated to point out the number of exceptions to that general observation because I know that Val’s going to do another part that discusses replication, which, of course, trumps where the first study shows up.
Orac: I think you did boil down the essence of my argument correctly. It wasn’t against simplification. It was against Val’s simplification, which I think was, well, I’ll say it again, wrong-headed. I do in fact believe there are no reliable indicia of which studies to depend upon, although there may be a number of indicia of which ones not to depend on (although that is always chancy). The Assist. Prof./post doc comment was in direct relation to using track record as a guide. Pardon me if I think it much more salient than the patient education example, but I guess there we disagree.
However, the second part of your version of what I was maintaining — that laypeople must resign themselves to being at the mercy of experts — is not the case. Motivated laypeople can ferret out an excellent understanding of complex science by cobbling together help of all kinds, including blogs, excellent science reporters and educating themselves formally or through texts. I’ve seen it countless times so it is not speculation on my part. But of all the things that can help them and advice I would give, Val’s would be very low on the list as helpful or even valid. That was the burden of my remarks.
Which brings up another thing. Why do you call what you do science-based medicine if what you really mean is medical science that uses the clinical trial as its main tool? That is only a tiny fraction of the science of medicine. It also elevates one kind of evidence to a level that is both misleading and often wrong. Clinical trials are not always, perhaps not even usually, the best study design for answering important questions in medicine. There are many reasons for this and that would be a long blog post in itself, but I would point out that many clinical trials are very badly done and badly interpreted and many are essentially worthless. Big trials are expensive and expensive trials are more likely to be published in a top tier journal because they deal with a subject someone thought important enough to spend money on. But many of them are or were badly done and some even fraudulent, for the same reason: money. So that’s a risk factor for a paper like this in a top tier journal.
Replication trumps where the first study shows up? Why? It depends on what you mean by replication (very few studies can be exact duplicates), but on its face there is no reason why the second study should be better than the first, so you would have to use Val’s criteria all over again and no trumping would be involved.
This is a complex subject but my readers were interested because it promised to give them a key to read “science.” In fact only a tiny slice of medical science was involved and the criteria offered were not only far short of a key but highly problematic. It also seems that they were framed (I am using that in the naive sense) with clinical trials in mind. Why not just call the site, “Clinical trial based medicine” and get it over with? That way those of us who do science not involving clinical trials (which is most scientists in medicine) won’t get confused and neither will lay people.
My impression has been that “evidence-based medicine” is something of a term of art (confusingly enough: what else would you like medicine to be based on, Vicodin-fuelled hallucinations?). EBM advocates have been criticized for emphasizing clinical trials above all other forms of data and ignoring prior probabilities, leading to spurious credibility for things like homeopathy. The term “science-based medicine” was intended to be more inclusive than EBM, which in practice equates to “clinical-trial-based medicine”.
Blake is correct. EBM deemphasizes prior plausibility based on basic science to the point where clinical trials are all that matter. The reason I brought up clinical trials, though, is because that’s what’s reported the most in the news and that’s what people are in general most interested in regarding science and health.
That’s not what the MSM actually writes about. Clinical trials come up relatively infrequently. Most science reporting in health and medicine is about things for which clinical trials are irrelevant because they can’t be done. It is a special interest of yours and your colleagues but it isn’t most of what can legitimately be called science based medicine.
I just visited CNN, MSNBC and FOX, and fully 40% of the headlines under ‘Health’ were reporting on study results. Am I misinterpreting your statement somehow?
I’m glad I’m not the only one shaking her head over revere’s comment about clinical trials coming up rather infrequently. My experience is that clinical trials and epidemiological studies (you know, the ones that say this or that does or doesn’t cause cancer, heart disease, etc.) make up a large chunk of health reporting when it comes to reporting on science.
I find myself in virtually complete agreement with Revere on this.
The quintessence of being a scientist is to understand the questions being asked and then to understand the factual basis of the answers and the chain of logic that ties them all together. If you don’t have that, all you can have is a faith-based understanding of something; in other words, no understanding at all. Using Feynman’s term, a “cargo-cult understanding.”
We bring non-scientists no closer to understanding science by telling them to substitute one type of faith for another. Relying on peers, relying on editors, relying on research proposal reviewers is substituting faith for personal experience. If one understands the underlying science, then one can verify the claims, but if one doesn’t understand the science one is limited to faith.
Understanding is always and can only be a personal understanding. If you are a scientist, you can’t take someone’s word for something. It is always “trust but verify”. Trust that other scientists are not lying to you (i.e. are being as honest with you as they are being with themselves), but verify that they haven’t made any mistakes. If you don’t understand something, you can’t have an opinion on it as a scientist. You might have an opinion as a lay-person but you cannot have a scientific opinion unless you understand the science and have a chain of logic leading from facts to support your opinions. A scientist’s opinion on something he/she is not knowledgeable about is not a scientific opinion.
If we take Val Jones’ advice, how does a non-scientist evaluate the science? First she says evaluate the reputation of the scientist reporting the results. How? By evaluating the reputation of the scientist’s mentors, journals, body of other research, etc. How are those reputations evaluated? By application of the same method of reputation evaluation to those at the next level. If one lacks the ability to evaluate the reputation of any party, one is unable to proceed, or one follows the reputation chain out until one reaches someone whose reputation can be or has been evaluated: a non-scientist, perhaps a parent, a politician, a member of the clergy, an acquaintance.
How does one evaluate a reputation in the non-scientific world? Much the same way as Val Jones suggests. One measure of credibility is popularity, using the wisdom of crowds. If many people have found a source to be credible, then using that as a default is a heuristic that many people use. Popular people are popular so they must have high reputations. Venerable institutions are venerated so they must have high reputations. People go with their “gut” and can then be defrauded by people good at lying. People bad at lying (such as people with Asperger’s) are not believed even when they are correct because social skills are such an important part of what makes someone “credible”. Social skills have no relationship to whether someone is speaking correct facts and is using correct logic and reaching correct conclusions. Social skills have everything to do with having a certain “reputation”. Belief based on “reputation” is how the anti-vaccine message is being propagated. The leaders of cargo cults had their position based on reputation in unrelated areas too.
I have found errors in “top tier journals”, and when I have written in to correct the errors, the corrections have not been accepted. In one case the editor explicitly said that they only wanted positive comments and didn’t want the debate to become contentious. I presume the editor didn’t understand how wrong the entire premise of the paper was (it was about extending homeostasis) and defaulted to the reputation of the “expert” who had written the paper. The editor had defaulted to the Val Jones model: the reputation of his expert, published in a top tier journal, is higher than that of someone who hasn’t been, so who was I to say the emperor had no clothes?
At TAM7, Randy talked about how the easiest people to fool were those who were “half smart”, and that the hardest people to fool were children. Half smart people are everyone who is not an expert in the field, and if the field is cutting edge, then no one is an expert and everyone is “half smart”, or less. The leaders (and followers) of cargo cults were “half smart”. They knew enough to be able to go through the motions, but lacked the essential intellectual integrity to test their fundamental assumptions. You can read a paper, but unless you know enough background to understand it, you are doing “cargo-cult reading”.
It has been said that it takes about 10,000 to 20,000 hours of study to become an “expert” in a field. There are very few of us who can be expert in many fields 😉 . There are no short cuts that can get you there without the very large store of facts and the logic connecting them together into a coherent whole. If you are not careful, and let wrong “facts” get in there, the “coherent whole” you force them to fit is wrong and will lead to wrong conclusions. Garbage in equals garbage out applies to conceptualizations of reality too.
Jennifer, Orac: Well, this is slipping into the battle of the anecdotes. I gather none of us have done a real survey of topics the MSM treats (I plead guilty to not having data) but are operating from general impressions, no doubt reinforced by our separate interests. However, I note that Jennifer’s single sample indeed suggests that the majority of stories are about something other than clinical trials. Of course we’d have to take into account that when a story hits, there will be multiple reports of the same study, but that holds true for all the subjects. So there are two parameters here: of the number of different subjects/papers discussed in the MSM, what proportion are trials; and of all the stories, how many discuss trials. I don’t have the data but would be willing to bet that in both cases most articles are not about trials (there is also the question of the sample space to consider and many other details).
That aside, I think my point still stands: most (I will now refine to mean “the majority”) of articles in the MSM on health and medicine don’t relate to clinical trials.
daedalus2u makes an important point we have not emphasized. Most of science depends on trust. We don’t verify every finding or read every reference or check every number. The skein of scientific reputation is important and Val’s credibility criteria implicitly incorporate this aspect. But trust is not the same as reliability. They may be correlated in some instances and not others, and it is still a task to make the connection (again, consider the grant review process). We rely on people who have looked into this on our behalf. Those people might be experts, bloggers, colleagues, your own ingenuity, etc. But there is no mechanical process that does it with adequate sensitivity and specificity that we can or should rely on for anything really important, either as scientists or laypeople.