Having taken note of my little missive yesterday about New York Times health reporter Tara Parker-Pope and her utter credulity towards the woo that is acupuncture, Dr. R. W. makes an observation:
A number of years ago I ran across Science Education in Preparation for the Ministry. The premise of the document, written by pathologist and teacher Ed Friedlander, MD, was that because members of the clergy are often called on to speak in areas where morality and ethics interface with science, they should have some prerequisite knowledge. Orac’s latest example of credulous and sloppy medical reporting in the New York Times got me to thinking that maybe there should be similar learning objectives for journalists.
An excellent idea, and Dr. R.W. has a list of some things that every medical journalist should know. My favorites? These:
- Outline the scientific method. (I’m betting there are a lot of journalists out there reporting on medicine who can’t outline the scientific method.)
- Explain why consideration of biologic plausibility is important in the evaluation of health claims and why evidence-based medicine often fails when biologic plausibility is not taken into account. (This one is hard, but knowing the answer would eliminate a lot of truly ignorant articles like the one Parker-Pope wrote yesterday.)
Overall, it’s a good list, although the questions about Medicare’s prospective payment system belong in the realm of politics and business, not in the realm of medicine. I could have done without that little political aside.
I humbly suggest that Dr. R. W. has missed at least one really, really important one:
Explain what placebo effects are and why placebo controls are so important, particularly for health outcomes with a large subjective component, like pain or anxiety. If I really wanted to be nasty, I’d insist on certain acupuncture studies as the examples the journalist must use to explain this.
In any case, let’s see what else we can come up with that every journalist covering medicine and science should know. Especially Tara Parker-Pope, who clearly needs a remedial class.
39 replies on “Some excellent questions for medical reporters”
Yeah, I couldn’t resist throwing in a policy item or two. Medicare’s PPS is one about which there’s pervasive ignorance. Your mention of the placebo effect is well taken and I think a thread here with additional suggestions from your readers is a great idea.
1) Kinda goes with placebo, but unblinded vs. single-blind vs. double-blind
2) A general understanding of therapeutic indexes and why they vary, such as why a chemotherapy drug will generally have a narrow index, but an antibiotic will have a broad one (see the sketch just after this list)
3) Briefly explain why model systems are used, their advantages and drawbacks
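To make the therapeutic-index point in (2) concrete, here’s a minimal sketch in Python. The drugs and numbers are hypothetical, purely for illustration; it uses the common textbook definition of the index as the median toxic dose over the median effective dose:

```python
# Therapeutic index (TI) = TD50 / ED50: the median toxic dose divided
# by the median effective dose. All drugs and doses below are
# hypothetical, chosen only to illustrate narrow vs. broad indexes.

def therapeutic_index(td50_mg, ed50_mg):
    """Ratio of the median toxic dose to the median effective dose."""
    return td50_mg / ed50_mg

# Hypothetical chemotherapy agent: the effective dose is
# uncomfortably close to the toxic dose -> narrow index.
chemo_ti = therapeutic_index(td50_mg=15.0, ed50_mg=10.0)         # 1.5

# Hypothetical antibiotic: toxicity appears only far above the
# effective dose -> broad index.
antibiotic_ti = therapeutic_index(td50_mg=5000.0, ed50_mg=50.0)  # 100.0

print(f"chemo TI: {chemo_ti}, antibiotic TI: {antibiotic_ti}")
```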
That “balance” doesn’t always mean giving both sides of a discussion equal weight as if every quasi-scientific maverick was Galileo reincarnate.
Okay, I give up.
What’s the answer to the first question? I haven’t got a clue what you mean by the “scientific method.”
Placebo effect. One interesting exercise for the average journalist would be to analyse a discussion I heard on the radio (sorry, I was driving and didn’t follow up) about why it is sometimes unethical to use placebos in double-blind drug trials.
The speaker was talking about the practice of outsourcing drug trials to hospitals in developing countries (and a whole lot of ethical issues arise apart from the trial design itself). Mainly, he was concerned that the drugs being trialled were mere replacements for existing well-known drugs, not novel treatments at all.
His view was that the trials for new versions of standard heart medications should not be against placebo but against current medications. They should have to prove that they are not only effective but better. Better enough to justify the additional cost involved for patients and governments buying a newly patented version of something equally effective they could get more cheaply.
Or would that just lead to a journalistic rant against money-grubbing companies rather than an understanding of the worth and the limitations of placebos in investigations?
An excellent site reviewing media coverage of medicine (and things mistaken for medicine) is Gary Schwitzer’s HealthNewsReview Blog.
http://www.healthnewsreview.org/blog/
Describe the process for developing a new drug or vaccine, including target ID and validation, candidate evaluation, preclinical and clinical testing and regulatory approval, as well as the timelines involved.
Describe the benefits (and potential drawbacks?) of this process.
Describe how treatments which are approved and in use are monitored for side effects.
Contrast the above with the testing done for “alternative” treatments, and describe the risks resulting from that.
@adelady,#5
Research ethics boards won’t approve a study that tests a medication against placebo in cases like those described (heart medication, for example).
You say that they should have to prove that they are “better,” but what does this mean? For example, some antidepressants may not be more effective, but are effective in people for whom another type isn’t. Or chemotherapy regimens for those who have failed the gold standard. Or merely having a different set of side effects that might even be worse, but can be tolerated by a class of people that can’t tolerate the side effects of an existing therapy? Or a different mechanism of action, potentially allowing combination with drugs that were previously contra-indicated due to drug interactions?
Beyond this, approval of a drug doesn’t mean that it will be used. In Canada we have a common drug review process to evaluate the cost-effectiveness of drugs, whether they will be put on the formularies for the various provinces, and from this process each province (and other groups) can decide whether to reimburse the drug. We also have a price control, in the form of the Patented Medicine Prices Review Board, which examines the evidence for the effectiveness of a drug and sets the maximum price that can be charged for it, based on both international pricing and the prices of comparable therapies.
I agree that many of these drugs are simply replacements – they may be longer acting, they may have a different set of side effects, they may simply offer an alternative at a better price – but is there a reason that they shouldn’t be approved?
As to Orac’s request, I think that journalists should be taught logical fallacies and critical thinking, that it is impossible to prove that something doesn’t have an effect, and the concepts of type 1 and type 2 errors.
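A minimal sketch of those two error types, with made-up effect sizes and a deliberately crude test statistic (nothing here comes from a real study):

```python
# Type I error: rejecting a true null hypothesis (a false positive).
# Type II error: failing to reject a false null (a false negative).
# All numbers are made up for illustration.
import random
import statistics

random.seed(0)

def crude_test(sample, mu0=0.0, threshold=2.0):
    """Reject H0 if the sample mean is more than `threshold`
    standard errors away from mu0 (a rough t-test stand-in)."""
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return abs(statistics.mean(sample) - mu0) / se > threshold

trials = 2000
# No real effect: how often do we falsely "discover" one? (Type I)
type1 = sum(crude_test([random.gauss(0.0, 1.0) for _ in range(20)])
            for _ in range(trials)) / trials
# Real effect of 0.3: how often do we miss it? (Type II)
type2 = sum(not crude_test([random.gauss(0.3, 1.0) for _ in range(20)])
            for _ in range(trials)) / trials

print(f"Type I rate ~ {type1:.1%}, Type II rate ~ {type2:.1%}")
```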
I thought those questions could be subdivided into groups. Some are questions that should be answerable by anyone claiming to report on science (e.g., the definition of the scientific method), as they are equally relevant whether you are reporting on exoplanets, global warming, or medicine. Some are questions that should be answerable by anyone claiming to report on biological sciences (e.g., the difference between DNA and RNA; Mendelian genetics). Others should be answerable by anyone claiming to report on modern medicine (e.g., the questions about placebo effects, heart disease, pandemics), and finally there are those about medical practice in the U.S., such as the one about Medicare or the one about HMOs.
I don’t find the question about conflicts of interest to be objectionable. Any time you have clients and providers, you have such conflicts (even pregnant women and foetuses). Having some understanding of where they lie, how serious they are, and how they arose would make a medical reporter better able to assess the kinds of claims that pseudoscientists raise against modern medicine — and maybe even to spot the similar conflicts of interest that the pseudoscientists themselves have.
The problem with this line of thinking is that efficacy and side effects vary for different people. Even if the new drug is only equally good in aggregate – for that matter, even if it’s overall inferior – it may still be superior for some people. It shouldn’t be a first choice if it’s more expensive, but if the existing options haven’t worked well for somebody, having another option to try is a good thing.
How about knowing what bias is and is not, and how the different types of biases affect opinions?
For example: Anti-vaccine groups and Recall Bias
That “balance” doesn’t always mean giving both sides of a discussion equal weight as if every quasi-scientific maverick was Galileo reincarnate.
They laughed at Galileo. They laughed at Einstein. They laughed at Bozo the clown.
Most mavericks will have more in common with Bozo than with Galileo or Einstein. In many cases, the comparison to Bozo is completely unfair–to Bozo.
#8 Aha! Research ethics boards.
The doctor in this case was talking about companies deliberately using hospitals in developing countries which tout themselves as being able to run large trials – for *reasonable* cost. The big issue for him was that many of these institutions did not have ethics procedures of the same rigour that we expect in OECD countries.
So the design of many trials was not only inadequate but contrary to accepted good practice. And his ethical concern was that trial subjects were being deprived of routine care for well-understood conditions. Hence the question journalists should ask themselves: why should, or should not, a placebo be used in certain circumstances? Probably far too sophisticated for most.
The difference between statistically significant and clinically significant.
Related to that, the different ways of reporting the numbers: “twice as risky” can mean increasing the risk of death from .05% to .1%, for example.
Come to think of it, a basic overview of statistics, including things like the importance of sample size. (At this point I’m going to plug The Cartoon Guide to Statistics, by Larry Gonick and Woollcott Smith, which is a gentle introduction but does include some of the basic math.)
@Vicki,
One of the shortcomings of all of the Times’ health reporting is an apparent misunderstanding of relative vs. absolute risk. As in, saying that “Wonder Drug X cuts your risk of developing Dread Disease Y by 50%!”, when what it does is lower the risk from 2% to 1%, all while disregarding the side effects and long-term risks associated with the drug.
Arrgh!
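For anyone who wants that arithmetic spelled out, here’s a minimal sketch using the made-up 2%-to-1% figures from the comment above (not data from any real drug):

```python
# Relative vs. absolute risk, with made-up illustrative numbers.
control_risk = 0.02   # 2% of untreated patients develop Dread Disease Y
treated_risk = 0.01   # 1% of treated patients do

absolute_reduction = control_risk - treated_risk        # 1 percentage point
relative_reduction = absolute_reduction / control_risk  # the headline "50%!"
number_needed_to_treat = 1 / absolute_reduction         # patients per benefit

print(f"Absolute risk reduction: {absolute_reduction:.1%}")  # 1.0%
print(f"Relative risk reduction: {relative_reduction:.0%}")  # 50%
print(f"Treat {number_needed_to_treat:.0f} patients to spare one case")
```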
How about adding the ability to do some basic cursory research/fact-checking, so someone doesn’t report cryptosporidiosis as a bacterial infection?
http://blog.cordialdeconstruction.com/2010/08/25/cryptosporidiosis-is-not-a-bacterial-infection/
I’m pretty sure Ben Goldacre (of Bad Science fame) has run workshops/classes for journos on this kind of stuff before.
Of course most major newspapers in the UK now have Health/Wellness sections that seem to be bastions of woo, so the NY Times looks tame by comparison. Part of the problem here is that at least some science/health stories are covered by non-science/health journos who haven’t a clue.
For example, my last blogpost looked at a story about berries preventing Alzheimer’s. The Telegraph journo covering it was the arts correspondent, ffs.
Interesting. In the UK:
National coordinator appointed to improve science journalism training (via journalism.co.uk)
“The role was created to coordinate existing efforts and then work with other organisations (e.g. BBC College of Journalism) to develop new courses and training material according to the needs that have been identified,” [appointee Martin] Griffiths told Journalism.co.uk.
“The training will be aimed more at non-specialist journalists, who need to write about science (science in its broadest sense – including health, environment, engineering) or use statistics, than at specialist science journalists. The three main elements of this will be: helping universities include more science or stats content in their journalism courses; running training courses for working journalists; and developing an online resource to host all this material and link to other resources.”
“We’re very clear that it’s not about slapping journalists on the wrist for bad science reporting – there are already plenty of people doing that. Part of it is about helping journalists get science right, which will include basic numeracy, reporting risk accurately and learning how to read a scientific research paper,” he said.
Outline the scientific method.
Now here, Orac, you have lost me. There is no one “scientific method”; that’s nothing but a myth. Science is a broad philosophy of knowledge that uses eclectic methodological approaches and tools — a vast array, in fact.
You may ask more specifically that medical journalists understand the randomized controlled trial and the various threats to validity which careful trial design is intended to address, but the RCT is not equal to “the scientific method.”
Puhhleeze.
I’m interested in your thoughts on this educational cancer program for journalists. It is supported by an unrestricted educational grant from Pfizer.
http://nationalpress.org/programs-and-resources/program/cancer-issues-2010/
cervantes, Orac is not referring to the RCT; he’s referring to the basis of science, which boils down to: observe, hypothesise, experiment, draw conclusions. Of course, there are many ways to go about this, but the essentials remain pretty much fixed.
The wikipedia article on the scientific method might be helpful. The section on the elements of the scientific method gives four characteristics, and an example of a linear approach, such as is taught in elementary school science classes.
Oh, and before you get upset at the mention of elementary school science, I wasn’t making a snide remark – I was pointing out that simplified, linear versions are introduced early as part of science education, but that they don’t necessarily capture the whole. It is nonetheless a decent way to teach the concept, and the one most people would be familiar with, even if they didn’t recognise it by name.
Thanks for the plug, Rogue Medic.
See our review of the NYT acupuncture story – an analysis by 3 of our reviewers.
http://www.healthnewsreview.org/review.html?review_id=3099
Gary Schwitzer
Publisher
The same need exists for education of lawyers and judges (not to mention politicians, who are frequently lawyers).
I once served on a jury where one of the panelists was a molecular biologist (with a PhD). The attorney asked her “whether she could set aside the scientific method in considering the evidence to be presented.” I wanted to raise my hand, but decided not to.
Epinephrine says,
I dispute that there is anything that could be called a “scientific method” capable of describing how a scientific approach leads to knowledge. One of the things that is certainly NOT required is “experiment.” Furthermore, much of legitimate science these days consists of data collecting (e.g. sequencing the human genome) and not drawing conclusions.
I think of science as a way of knowing that requires rational thought and evidence, tempered with a healthy skepticism. It can be applied to history and philosophy as well as the traditional studies of the natural world (geology, biology, chemistry, physics). There is no one “method” to describe how one thinks rationally and relies on evidence.
“Scientific method” is for grade school, not for adults.
Oh, spare me your condescension. Your definition of the “scientific method” is so vague and broad as to be in essence meaningless. Moreover, history and philosophy are distinct from science; their methods sometimes overlap with those of science, but much of the time not.
What you are describing is critical thinking. Science is a subset of critical thinking, a methodology utilizing critical thinking, but they are not the same thing. Maybe I’ll explain why later, but I have a lab meeting now.
I won’t contest that the very simplified version taught in elementary school isn’t the only approach. I believe that there really is a difference between science and non-science, and that the difference comes down to the way in which knowledge is built. Popper has falsification as one of the great differences between science and non-science, and while he doesn’t insist on experiment, the elimination of erroneous theory is vital to science. The issue of whether there exists a “scientific method” is discussed in philosophy of science at the university level; I don’t think it is “for grade school.”
I would add that IMO merely collecting data is not science. It may be valuable to science in that the data can be used to further scientific inquiry, but the mere act of collecting it, with no theories being generated or challenged, is not science.
Indeed. Collecting data for the human genome project was not exactly science; rather it was more technology being used to collect what would become hypothesis-generating data that could be used for science. In other words, I tend to view it as being one step removed from the actual science. What was and is being done with the data and the testing of the hypotheses generated from that data, now that’s more like science.
That aside, Larry has decided to get his knickers all in a wad over what I considered to be a fairly necessary simplification in order to ask a question and get a point across. Let’s put it this way: What I expect of my science journalists is that they at least understand the “grade school” version of the scientific method, as it is clear to me that a depressingly large number of them don’t even understand that much. If they can go up a level and talk about the ambiguities of what is and isn’t science or what the scientific method is, that’s pure bonus.
Orac says,
What journalists, and everyone else, need to understand is critical thinking and skepticism. They need to understand the importance of evidence based reasoning. They need to appreciate how all new bits of information must fit into a larger model of how the universe works (e.g. consistency).
Teaching them an incorrect grade school version of some non-existent “scientific method” is not only a waste of time, it’s counter-productive. It’s the opposite of critical thinking. It’s not based on evidence. It suppresses skepticism. And it’s not consistent with current views of epistemology.
In other words, it’s only fit for grade school … and even that’s debatable.
None of us, as far as I can tell, is arguing about the importance of critical thinking, skepticism, and consistency. Seriously. Jumpin’ Jesus on a pogo stick, have you even read any of my posts emphasizing prior probability in evaluating pseudoscience in medicine? I’ve ranted endlessly about how purveyors of pseudoscience in medicine ignore prior probability in, for example, homeopathy, and don’t concern themselves with how homeopathy is completely inconsistent with huge swaths of well-established science. My guess is that you have not, because you’re emphasizing something here that no one–and I mean no one–other than the woos who occasionally infest my comment threads would argue with. But I presume that you read my brief post above, and I mentioned biological plausibility as another question for journalists–which is more or less the same thing you chastised me for supposedly not including in my offhand remark about the scientific method, given that biological plausibility is based on how new observations fit in with what is already known.
Here’s the problem. Science is not critical thinking and skepticism. Science is one method of applying critical thinking and skepticism. You seem to be conflating science with all critical thinking. The two are very much related and overlapping, but they are not identical.
As for your claim that trying to get journalists to understand at least a simplified version of the scientific method “suppresses skepticism,” well, I call bullshit on that, particularly since you provide as little evidence to back up your assertion as you accuse me of providing to back up mine, namely none.
We can get very deep into the weeds here, but I’m not going to let the crass and condescending insults go unanswered. I believe that Wikipedia has a simple outline of what somebody who posts there thinks is “the scientific method,” but I happen to be an actual scientist and I know there isn’t any such thing as a “scientific method” that can be described in a simple outline. Telling journalists that they will be competent if they can “outline the scientific method” is nonsense. Sorry, Orac. You should not have said that. You should have said they ought to understand science, and how one develops confidence in conclusions. But there isn’t a simple “outline” for that. It’s complicated.
And Popper’s idea that science is reducible to falsification has long since been found inadequate. There are many reasons for this, but I’ll just give you a couple. Existential statements are often impossible, in principle, to falsify. Many, if not most, assertions in science nowadays are probabilistic in nature. Entities often are highly variable, and “falsification” can often be dealt with by expanding definitions or other tricks. Theories are often perfectly adequate under some circumstances but imprecise in others. Etc.
Oversimplifying the nature of science is counterproductive.
What “crass and condescending insults”? Personally, I found Larry’s comments to be much more condescending than anything I’ve written in this thread, and I do know that I can be very condescending when I want to be. I wasn’t trying here. If I were, believe me, you’d know it.
Nice straw man ya got there, because I didn’t say anything of the sort. Asking journalists to outline the classical scientific method was simply one suggestion I threw out to be added to a much longer list, after having noted a lack of any questions in that list related to the nature of science. In other words, it was one suggested criterion for competence thrown out for consideration in a quickie post.
OK, OK, science is more complicated than that. We get it. Ad nauseam. I’m a scientist, too. I’ve even been funded by the NIH and various other granting agencies; so it must be true. (That’s self-deprecating sarcasm, there, in case you didn’t recognize it.) And no one’s arguing that science isn’t more complicated than that. Really. No one is. However, you have to crawl before you can walk, and all too many “science” journalists appear not even to be able to creep yet.
Prof. Larry Moran: “‘Scientific method’ is for grade school, not for adults.”
I hope you are fucking joking, Professor! Seriously. I hope you are fucking joking!
That’s the kind of thinking that has allowed Douglas Biklen to fleece many hundreds of parents of autistic children with this Facilitated Communication shit. Before anyone had scientifically validated the technique, it was out in the world, being used, and nobody knew it was as obviously faulty as it is. Because nobody assessed its value using scientific method! The same kind of thinking allowed Dr. Roy Kerry to get away with the senseless killing of an autistic child some years ago, chelating the kid until the chelating agent leached enough fucking calcium from his blood to stop his heart! Scientific method is not just for grade school: it’s for life. You as a professor should know that.
Orac says,
There is genuine controversy over the existence of something called “the scientific method.” As you can see, I for one don’t believe in anything that could be remotely described as a particular method of doing science. And I’m not alone.
Philosophers have been publishing articles on this topic for many decades and they haven’t arrived at any kind of consensus. Conclusion: there is no such thing as a scientific method that we can all agree on.
Therefore, if you teach journalists and students your particular version of “the scientific method” without mentioning that it’s only your personal opinion, you are not thinking critically. You are not being skeptical. And you are not doing journalists or the general public any favors by misrepresenting science.
I’m perfectly well aware of your efforts to promote critical thinking. That’s exactly why I was so shocked when you suggested that there was something called “the scientific method” that you wanted journalists to know about. I’m even more shocked to learn that it’s the grade school version that you support.
Dear David N. Andrews, M.Ed. C.P.S.E.,
Please post a brief outline of the scientific method. As you know, the scientific literature is full of scientific papers that are wrong or seriously misleading. Science journalists write about these papers all the time because they often sound quite spectacular (“Darwin was wrong!” “A new cure for cancer!” etc. etc.). These papers follow the grade school “scientific method.” How does knowing this method enable us to distinguish good science from bad science?
Do you have a better version of “the scientific method” in mind?
Personally, I was shocked to see you argue so vehemently that there’s no such thing as “the scientific method” based on an offhand remark I made. I’m fully aware of the debates about what science is and what the scientific method might be. Yes, I’m aware that “the scientific method” may be too narrow a definition. However, you went beyond arguing that science and the scientific method are more complex than the commonly taught “scientific method.” If that were all you had done, perhaps we could have had a productive and fun debate.
Instead you chose to deny completely that there’s such a thing as the scientific method and to go on a rampage in my comments over the issue, contemptuously dismissing those who argue otherwise. All I can say is that I’m very, very disappointed in you. I expected better.
I know, I know, you’re disappointed in me, too. Frankly, I don’t care about that any more than you probably care that I’m disappointed in you. So let’s just leave it at that.
“As you know, the scientific literature is full of scientific papers that are wrong or seriously misleading. Science journalists write about these papers all the time because they often sound quite spectacular (“Darwin was wrong!” “A new cure for cancer!” etc. etc.). These papers follow the grade school “scientific method.” How does knowing this method enable us to distinguish good science from bad science?”
Moran, what you are describing there is what happens when authors do not follow scientific method. They are not even following what you laughably refer to as ‘grade school scientific method’. If you cannot understand that, how come you’re a professor?
As for saying that the scientific literature is full of scientific papers that are wrong or seriously misleading… did you not see the straw man you created there? If it really were full of said papers, we would have no science and you’d have no bloody job. Shitting Jesus… I’m glad I went to a UK university for my degree, rather than Toronto. And even the Finnish get scientific method better than you seem to do. We’re done.
How about common statistical fallacies and why they’re fallacies? I’m thinking of ones like correlation =/= causation or the prosecutor’s fallacy. And, you know, seconding the difference between clinical significance and statistical significance.
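To make the prosecutor’s fallacy concrete, here’s a minimal sketch with made-up numbers (a hypothetical forensic-match scenario, purely for illustration): the fallacy is reading P(match | innocent) as if it were P(innocent | match).

```python
# Prosecutor's fallacy: confusing P(evidence | innocent) with
# P(innocent | evidence). All numbers below are made up.
population = 1_000_000   # plausible pool of alternative suspects
match_prob = 1 / 10_000  # chance an innocent person matches by coincidence

# Fallacious reading: "only a 0.01% chance of an innocent match,
# so the defendant is 99.99% likely to be guilty."
# Correct reading: count how many matches the whole pool would produce.
innocent_matches = (population - 1) * match_prob  # ~100 innocent matches
total_matches = innocent_matches + 1              # plus the true source

p_guilty_given_match = 1 / total_matches
print(f"P(guilty | match) ~ {p_guilty_given_match:.1%}")  # ~1%, not 99.99%
```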