
Fun with phone surveys and vaccines

It never ceases to amaze me how much J. B. Handley is willing to torture me with his abuses of science, never mind his childish attempts to annoy me by cybersquatting domain names that he thinks I want. So there I was, all set to blog about a rather amusing homeopath I’d come across, when what should come to my attention, whether I want it to or not?

Yes, hot on the heels of its reinvention of itself from being all about mercury all the time into a kinder, gentler entity pushing an intentionally much more difficult to test and falsify hypothesis that, oh, by the way, lots of other “environmental” factors besides mercury in vaccines cause autism too, Generation Rescue has finally released its long-promised “study” comparing vaccinated versus unvaccinated children. Not surprisingly, the very same day, Dan “Quixote” Olmsted, who’s never met a scientific windmill he didn’t like to tilt at when it comes to pseudoscientific claims that vaccines or the thimerosal in them cause autism, and who has certainly never met a dubious claim he wouldn’t trumpet as science “proving” a link between vaccines and autism, popped up like the good lapdog he is with a story heralding the release of GR’s “study.”

Not surprisingly, given the source, the “study” turns out to be totally underwhelming, nothing more than a phone poll, really. (Amusingly, David Kirby has said that he doesn’t consider phone surveys to be “data.”) Even so, expect to see it trumpeted all over antivaccination websites and blogs as “proof” that vaccines cause autism or, at the very least, as “evidence” that compels a study. It might be, if it weren’t so poorly designed and analyzed and if it actually showed what GR claims it shows.

Kevin Leitch has already done an excellent job of deconstructing the numerical shenanigans in the poll that led GR to boldly claim things like:

All vaccinated boys, compared to unvaccinated boys:

  • Vaccinated boys were 155% more likely to have a neurological disorder (RR 2.55)
  • Vaccinated boys were 224% more likely to have ADHD (RR 3.24)
  • Vaccinated boys were 61% more likely to have autism (RR 1.61)

Older vaccinated boys, ages 11-17 (about half the boys surveyed), compared to older unvaccinated boys:

  • Vaccinated boys were 158% more likely to have a neurological disorder (RR 2.58)
  • Vaccinated boys were 317% more likely to have ADHD (RR 4.17)
  • Vaccinated boys were 112% more likely to have autism (RR 2.12)

(Note: RR means “relative risk,” which in this poll is the ratio of the percentage of the condition of interest in the group of interest to the percentage found in the control population, in this case the allegedly unvaccinated.)
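For those who like to see the arithmetic spelled out, here’s a minimal sketch of how a relative risk of that sort is calculated. The prevalence figures and the function name below are my own illustrative placeholders, not numbers from the poll’s crosstabs:

```python
# Minimal sketch of how a "relative risk" like the ones above is computed.
# The prevalences here are illustrative placeholders, not figures from the poll.

def relative_risk(p_exposed: float, p_unexposed: float) -> float:
    """Ratio of the prevalence in the exposed (vaccinated) group to the
    prevalence in the comparison (unvaccinated) group."""
    return p_exposed / p_unexposed

p_vaccinated = 0.025    # e.g., 2.5% of vaccinated boys reported with a diagnosis
p_unvaccinated = 0.010  # e.g., 1.0% of unvaccinated boys

rr = relative_risk(p_vaccinated, p_unvaccinated)
print(f"RR = {rr:.2f}")                      # 2.50
print(f"{(rr - 1) * 100:.0f}% more likely")  # 150% more likely
```

That “X% more likely” framing is just (RR − 1) × 100, which is why an RR of 2.55 gets reported as “155% more likely.”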

And that led J. B. Handley to declare, to the breathless admiration of Dan Olmsted:

“No one has ever compared prevalence rates of these neurological disorders between vaccinated and unvaccinated children,” said J.B. Handley, father of a child with autism and co-founder of Generation Rescue, which commissioned the $200,000 survey conducted by SurveyUSA, a respected marketing firm. “The phone survey isn’t perfect, but these numbers point to the need for a comprehensive national study to gather this critical information.

“We have heard some speculation that unvaccinated children would be difficult to locate,” Handley said. “But we were able to find more than enough in our sample of more than 17,000 children to establish confidence intervals at or above 95 percent for the primary comparisons we made.”

Let’s take a look at the poll, shall we? I’m not going to go over the same ground that Kevin Leitch has so ably covered, specifically the numbers. Kevin does a good job with the details, showing that the “findings” are not nearly as impressive as they are represented to be. Particularly amusing is the observation that, for several of the groups, the “partially vaccinated” (whatever that means; it’s not defined, as we will see later) had apparently higher numbers of parents reporting autism or ASD, while parents of fully vaccinated children reported numbers the same as or lower than the unvaccinated, leading him to drolly observe:

There’s no getting away from this. This is a disaster for Generation Rescue and the whole ‘vaccines cause autism’ debacle. Generation Rescue’s data indicates that you are ‘safer’ from autism if you fully vaccinate than partially vaccinate. It also indicates that across the spectrum of autism, you are only 1% more likely to be autistic if you have had any sort of vaccination as oppose to no vaccinations at all – and thats only if you are male. If you are a girl you chances of being on the spectrum are less if you have been vaccinated! Across both boys and girls, your chances of being on the spectrum are less if you have received all vaccinations.

A disaster indeed. My only quibble with Kevin is that it’s hard to blame J. B. for using relative risks in the way that he did for small numbers when lots of medical studies do the same thing (the recent Avandia fiasco, for example). On the other hand, most medical studies have a much sounder methodology and a much better grasp on the true values of those small numbers. Of course, what the results actually suggest is a reporting bias in the phone survey in which parents whose children developed autism or an ASD and who believed vaccines might have been responsible stopped vaccinating, thus falling into the “partially vaccinated” group.

Kevin has also kindly converted the locked PDF with the raw data supplied by J. B. Handley into an Excel spreadsheet that allows easier analysis of the figures. Some commenters ran statistical tests on the raw data for various groups. Not surprisingly, the results were not statistically significant in nearly all cases. I looked at a few groups myself and ran a few chi-squared tests, and I failed to find any statistically significant differences either. I will admit that I did this only to sample the data; I have better things to do with my time than an exhaustive analysis of it.
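If you want to try this yourself, here is a rough sketch of the kind of chi-squared test being described, run on a hypothetical 2×2 table. The counts are made up for illustration (the real numbers live in the spreadsheet Kevin posted), and SciPy is simply one convenient tool for the job, not necessarily what anyone in the comments actually used:

```python
# Chi-squared test of independence on a hypothetical vaccinated-vs-unvaccinated
# 2x2 table. Counts are invented for illustration only.
from scipy.stats import chi2_contingency

#                 diagnosis   no diagnosis
table = [
    [45, 1455],   # "vaccinated" (hypothetical counts)
    [ 4,  196],   # "unvaccinated" (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# With unvaccinated cells this small, p-values like this one generally fail to
# reach the conventional 0.05 threshold, which is the point being made above.
```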

Given that background, I’d like to step back a bit and look at the big picture. This phone poll is fatally flawed as a medical study for several reasons. First, note how there are now more diagnoses being looked at. Given GR’s previous concentration on autism almost exclusively, I found it striking that ADD and ADHD were in the mix. In one row, ADD, ADHD, autism, and ASDs were all combined into one group, even though there is no etiological or logical reason to do so. GR refers to this as “neurologic” diagnoses, but then why did the survey limit itself to just the above? After all, mental retardation is a neurological diagnosis. Seizure disorders are neurological diagnoses. It makes very little sense.

In reality, the whole enterprise is nothing more than one huge case of doing multiple comparisons and seeing if anything shakes out. Remember, at the 95% confidence level there’s still a 5% chance that any seemingly “positive” result that is found is in reality due to chance alone. The more groups looked at and compared, the more chances of a spurious result that isn’t “real.” There are statistical methods for controlling for multiple comparisons, but there’s no evidence that I can find that they were done for this poll. My best guess is that this survey was nothing more than a sloppily conducted fishing expedition where they didn’t even bother to control for multiple comparisons. Moreover, doing what is in essence subgroup analysis is dubious when only around 6% of the children (a reassuring figure, actually) were totally unvaccinated, as it makes the numbers of unvaccinated in many of the subgroups too small to be statistically useful (not that that stopped GR from slicing and dicing the group any which way it could to extract more probably spurious “correlations”). I’m not even entirely convinced that the 991 unvaccinated children supposedly identified by this poll represent a big enough sample. Moreover, the poll concludes that there is no increased risk of autism or other ASDs in girls due to vaccination, which makes me wonder if J. B. will change his tune and urge girls, at least, to be vaccinated.
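To make the multiple-comparisons point concrete, here is a purely illustrative sketch of how quickly the chance of a spurious “hit” grows, and what a simple Bonferroni correction (one of the standard fixes, not something the poll used) would demand. The number of comparisons below is my own guess at the order of magnitude, not a count taken from the poll:

```python
# Why running many comparisons at the 95% level is a problem, and what a
# simple Bonferroni correction does about it. Purely illustrative numbers.

alpha = 0.05           # per-comparison false-positive rate
n_comparisons = 20     # e.g., several diagnoses x several age/sex subgroups

# Chance of at least one spurious "positive" if all null hypotheses are true
# and the comparisons are independent:
family_wise_error = 1 - (1 - alpha) ** n_comparisons
print(f"P(at least one false positive) ~ {family_wise_error:.2f}")  # ~0.64

# Bonferroni correction: demand a much smaller per-comparison p-value instead
bonferroni_alpha = alpha / n_comparisons
print(f"Corrected per-comparison threshold: {bonferroni_alpha:.4f}")  # 0.0025
```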

Another problem with the poll is how the vaccinated group was divided into “partially vaccinated” versus “fully vaccinated.” For one thing, the definition of “fully vaccinated” would not be the same in all age groups, given that the recommended vaccination schedule changes periodically based on new recommendations from the CDC and the American Academy of Pediatrics. Second, “partially vaccinated” would encompass a huge range of possibilities, from children who received only one or two recommended vaccinations to those who received all but one. It’s an almost meaningless distinction, particularly in a phone survey. The only sort of study for which separating the vaccinated into two groups like that might be useful is one in which investigators can review the vaccination records of the subjects polled and know exactly which vaccines each child got. No doubt J. B. was hoping to find some sort of dose-response curve, with increasing levels of neurologic diagnoses as one goes from unvaccinated, to partially vaccinated, to fully vaccinated. That makes it all the more hilarious that in many groups the results show equal percentages of diagnoses in the unvaccinated and fully vaccinated groups, with the peak percentages reported in the partially vaccinated group. It also suggests that the real comparison that should have been made was one that wasn’t: between the completely unvaccinated and children who had received any vaccines at all (partially vaccinated plus fully vaccinated), as sketched below. Of course, again, this also suggests a reporting bias, where parents who think their children’s problems stem from vaccines stop vaccinating. Or it could just be spurious. Either way, it suggests that there is a serious problem with the poll.
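Here is what that combined comparison would look like in miniature. Again, every count below is hypothetical; the point is only the arithmetic of folding the two vaccinated groups together before comparing against the unvaccinated:

```python
# Fold "partially" and "fully" vaccinated together and compare against the
# completely unvaccinated. Counts are hypothetical, not taken from the poll.

groups = {
    "unvaccinated": {"diagnosed": 4,  "total": 200},
    "partially":    {"diagnosed": 20, "total": 500},
    "fully":        {"diagnosed": 30, "total": 1000},
}

any_dx = groups["partially"]["diagnosed"] + groups["fully"]["diagnosed"]
any_total = groups["partially"]["total"] + groups["fully"]["total"]

p_any = any_dx / any_total
p_none = groups["unvaccinated"]["diagnosed"] / groups["unvaccinated"]["total"]

print(f"any vaccination: {p_any:.1%}, unvaccinated: {p_none:.1%}, RR = {p_any / p_none:.2f}")
```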

The biggest problem, however, the one that makes me doubt this survey the most, is the questionnaire and how the study was conducted. J. B. makes a big deal about how their methodology “mirrored” the methodology that the CDC used to establish estimates of autism prevalence. The two are only superficially comparable. The CDC used very simple methodology:

The surveys were independently conducted but both were conducted during the same time period, 2003 to 2004. Both were based on a nationally representative sample of non-institutionalized U.S. children and in both surveys parents or guardians of the sampled children were asked about a range of different health issues.

Autism prevalence was estimated from the question asking parents if they were ever told by a doctor, or other health care providers, that their child had autism.

Leaving aside that GR doesn’t describe how their sample was chosen, what measures were taken to make sure it was representative, and what the response rate was, compare this to the questionnaire that GR tried to get parents to answer. Here’s a sample:

5) If this child has ever been diagnosed with asthma, juvenile diabetes, autism, Asperger’s Syndrome, ADD, ADHD, or PDD-NOS, press 1 (continue to next Q)
Otherwise, press 2 (skip to closing language “B”)

6) OK, then, let’s go through each condition one at a time.
Has this child been diagnosed…
With Asthma?
Yes, press 1
No, 2
Not sure? 3

7) With Juvenile Diabetes?
Yes, press 1
No, 2
Not sure? 3.

8) With Autism?
Yes, press 1
No, 2
Not sure? 3.

9) With Asperger’s?
Yes, press 1
No, 2
Not sure? 3.

10) With ADD?
Yes, press 1
No, 2
Not sure? 3.

11) With ADHD?
Yes, press 1
No, 2
Not sure? 3.

12) With PDD-NOS?
Yes, press 1
No, 2
Not sure? 3.

PDD-NOS? How many parents know what PDD-NOS is? (It’s pervasive developmental disorder, not otherwise specified, by the way.) Heck, I didn’t know what PDD-NOS was until I became interested in the vaccine-autism hysteria. The only parents who are going to know what it is are the ones whose child has the diagnosis. Even though most parents know what autism is, a lot of parents don’t know what Asperger’s is unless their child has it.

Moreover, the CDC study produced estimates of autism prevalence that were consistent with previous studies. In contrast, J. B.’s survey produced estimates of autism and ASDs of 3% in the aggregated data. That’s 1 in 33, roughly 4.5 to 5 times more prevalent than the usually cited estimates of 1 in 150 or 1 in 166. This, too, suggests reporting bias, in which parents who have a child with autism or an ASD would be more likely to complete the survey. J. B. Handley’s response to this criticism is, as one would expect, lame:

The survey does not attempt to newly establish the prevalence of autism in the general population. The survey attempts only to shed preliminary light on any relationship between vaccination status and diagnosis.

Straw man. The concern is that the high prevalence found by this phone survey strongly suggests either response bias or some sort of nonrandom selection of survey subjects.

It is not surprising and not unexpected that parents with children who have received a diagnosis may have been more willing to complete the health battery included in this survey than parents of children who have not been diagnosed. However, that does not make the parents who did participate in this study likely to lie about, or forget about, the vaccination status of their children. The only way a possible “response bias” in favor of those households with a diagnosed child would invalidate the results of this research is if asking about vaccination status of a child independently produced a bias and that bias interacted with the bias caused by asking about NDs. For the concern to be valid: somehow, the main group of vaccinated families would have to be more likely to respond if there was an ND in the family, without also affecting the response of unvaccinated families in the same way. While such an interaction is possible, this criticism can be addressed by further, more elaborate research. Such a potential interaction does not invalidate this research.

Actually, when coupled with all the other problems in the alleged “statistical analysis,” it likely does. Any major interaction between the two main sets of questions that might produce different biases in the different groups would, unless carefully controlled for, invalidate the study. There’s no evidence that SurveyUSA did any such controlling. Moreover, SurveyUSA is known for asking very concise questions in automated telephone polls, an approach that has been known in the past to produce divergent results and that has a number of problems, not the least of which is a much lower response rate than traditional polls. Again, the issue of reporting bias comes up. That’s not to say that it’s impossible to do accurate polls with automated technology, but asking about health problems is difficult, leading me to take any such poll looking for correlations between vaccines and anything with a huge grain of salt. Moreover, no evidence about response rates or about how many parents answered “not sure” for each question is presented, nor is any evidence that the sample chosen is representative. I also can’t help but note the introduction, in which it is stated that “we’ve been hired by a private organization to study the relationship between vaccines and the health of Sonoma County children” and in which it is denied that SurveyUSA is “working for the government,” a school system, or a public agency. That sets up an association right there that could lead to bias.
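To see why differential willingness to respond matters so much, here is a toy simulation of the concern. Every parameter in it is invented; the only point is that if parents of diagnosed, vaccinated children are more likely to finish the survey than parents of diagnosed, unvaccinated children, an apparent “risk” appears even when the true prevalence is identical in both groups by construction:

```python
# Toy simulation of response bias: identical true prevalence in both groups,
# but diagnosed + vaccinated households are the most likely to complete the
# survey. All parameters below are invented for illustration.
import random

random.seed(0)
TRUE_PREVALENCE = 0.01          # same in both groups by construction
RESPONSE_RATE = {               # probability a household completes the survey
    ("vaccinated", True):  0.40,   # diagnosed + vaccinated: most motivated
    ("vaccinated", False): 0.10,
    ("unvaccinated", True):  0.20,
    ("unvaccinated", False): 0.10,
}

def simulate(group: str, n: int) -> tuple[int, int]:
    diagnosed = responded = 0
    for _ in range(n):
        has_dx = random.random() < TRUE_PREVALENCE
        if random.random() < RESPONSE_RATE[(group, has_dx)]:
            responded += 1
            diagnosed += has_dx
    return diagnosed, responded

dx_v, n_v = simulate("vaccinated", 200_000)
dx_u, n_u = simulate("unvaccinated", 20_000)
rr = (dx_v / n_v) / (dx_u / n_u)
print(f"apparent RR ~ {rr:.2f} despite identical true prevalence")
```

With those made-up response rates, the apparent relative risk among responders comes out to roughly 2 even though nothing about vaccination changes the underlying prevalence at all.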

I could go on, given what a target-rich environment this survey is, but between Kevin and me, I hope you get the idea. There’s a lot less to this study than meets the eye, and it certainly doesn’t represent particularly compelling evidence of a link between vaccines and autism. Given that GR paid $200,000 for this poll, if I were J. B., I’d be asking for my money back.

In the meantime, feel free to analyze the data for yourself…

By Orac

