Why do cranks favor ad hominem attacks over scientific arguments? They work!

Anyone who routinely engages in public science communication in areas where cranks and denialists try to discredit scientific consensus, such as the consensus about the safety and efficacy of vaccines, the safety of genetically modified organisms (GMOs), or the ineffectiveness of alternative medicine, has likely experienced an unfortunate phenomenon as a result of those activities. That phenomenon is the tendency of the cranks, such as those who fear monger about vaccines or GMOs or who demonize science-based medicine while promoting quackery, to go for the ad hominem attack first rather than attempting to refute the scientific argument. Certainly, I’ve experienced this. Indeed, Mike Adams once launched a months-long campaign of defamation against me, in which he published at least three dozen articles full of lies about me, including the claim that I had once worked with cancer chemotherapy fraudster Dr. Farid Fata, and even claimed that he had reported me to the FBI, as well as to my state’s medical board and attorney general. Not surprisingly, I have heard nothing from any of these entities, but posts claiming that I’m under investigation still linger. Unfortunately, so do the effects on my Google reputation.

Ad hominem attacks—attacking the person to discredit the person’s argument instead of attacking the argument itself—are generally considered a logical fallacy. I’ve always wondered why cranks go for the ad hominem attack above all else. I had always thought that it was due to intellectual laziness and to their inability to win on the science, and I have no doubt that there is a strong element of both in the explanation. I also speculated that it was a tactic to intimidate critics into silence, and I’m sure that there is a strong element of this in the explanation as well. However, maybe these cranks instinctively understand something that we who advocate for good science either do not understand or recoil from: ad hominem attacks work. They are very effective. Indeed, a new study published in PLOS One suggests this is true. The study’s results suggest that attacking the motives of scientists is just as effective in undermining acceptance of their scientific findings as attacking the science itself as flawed based on facts and evidence. Interestingly, according to this study, it’s not just any kind of ad hominem attack that works; rather, specific kinds of ad hominem attacks work much better than others.

Let’s take a look, shall we?

The lead author of this study is Ralph Barnes, an assistant professor of psychology at Montana State University, who noted, “I think scientists don’t yet have a complete understanding of how the public reacts to scientific claims, and I wanted to contribute (even if in a small way) to that effort.” But did he? Maybe.

The study consisted of two experiments designed to test the effects of different kinds of attacks on science claims: direct attacks and indirect (ad hominem) attacks. The authors explain their rationale thusly, noting that most people, lacking the expertise and knowledge to evaluate scientific claims, often rely on heuristics to evaluate the credibility of those making such claims:

Numerous studies have shown that scientific information may not have as much impact on the public’s attitude as trust in scientists and government policy-makers [13–15]. Given the evidence for a link between trust and public opinion, cases of fraud and misconduct, and conflicts of interest may play a powerful role in shaping the public’s trust in scientists and the ability of scientists to influence the public. The popular media sometimes covers stories involving scientific incompetence (e.g. the Fleischmann and Pons affair) and fraud and/or misconduct committed by scientists [16–18]; and there is no shortage of reporting on scientists with conflicts of interest [19–22].

And:

Although we are interested in factors that reduce the public’s confidence in science claims, we are not concerned with the issue of trust per se. Rather, our focus is on the specific methods that can be used to attack and undercut science claims and the relative effectiveness of those methods. One method for attacking a science claim is a direct attack on the empirical foundation of the claim. The ad hominem attack is a more indirect method for attacking a science claim. Here we are concerned with three forms of ad hominem attack: allegations of misconduct, attacks directed at motives, and attacks directed at competence. Seen through the lens of the Mayer et al. model [9], misconduct and motive-related attacks are related to benevolence and integrity, while attacks directed at competence are related to ability.

The authors note that ad hominem attacks, even though they are usually fallacious, can be very effective. So here’s how they set about examining the effect of attacks on science claims based on the science itself compared to different kinds of ad hominem attacks. First, they hypothesized that the greatest degree of negative attitude change would occur with accusations of misconduct because, in this condition, both the science and the researcher are explicitly criticized, and that the second greatest degree of attitude change would be associated with attacks on the actual science and data behind the claim as flawed, reasoning that attacks on the empirical foundation of a claim are always relevant. Finally, they predicted that attacks in the other four conditions would have a lesser effect because they were only ad hominem attacks.

To test their hypotheses, the researchers carried out two experiments involving a total of 638 participants. In the first experiment, they enrolled 480 undergraduate student volunteers from two community colleges, a private research university, a private liberal arts college, and a state college. After excluding results from participants who failed to finish the questionnaire, skipped one or more of the items in the questionnaire, or failed to follow instructions, 439 participants remained, whose average age was 24.1 years and who included 312 women. The initial section of each of eight questionnaire variants contained a series of 24 science claims, and the final section contained several demographic questions. Of the 24 claims, half were “distractor” items designed to prevent participants from detecting the purpose of the study. They were similar to the critical items, but not all of them included challenges to the credibility of the researcher or attacks on the scientific claim made. The remaining 12 items all contained a science claim. These claims were all either fictitious claims generated by the researchers or references to phenomena likely to be unfamiliar to the subjects. Each science claim was attributed to a specific scientist. Six of these items presented a claim in isolation. The remaining six contained additional information, specifically a sentence that attacked the researcher and/or the science. That additional information either pointed out a flaw in the initial research (empirical) or contained an ad hominem attack on the researcher who made the claim (past misconduct, conflict of interest, education, sloppy methods) or both (relevant misconduct).

Here’s an example of one of the claims, along with the types of criticisms leveled:

Science Claim 4
Dr. Doyle from the Children’s Hospital of Pittsburgh claims that the chances of a child being diagnosed with Prudar-Wein syndrome decreases by over 20% if their diet includes niacin enriched baby food.

4 Empirical
Dr. Doyle’s research on the effect of niacin on Prudar-Wein syndrome only included children ages 28 to 34 months of age. However, Prudar-Wein syndrome is normally diagnosed by 18 months of age.

4 Relevant misconduct
Recently a team of investigators from the National Science Foundation’s ethics committee found that Dr. Doyle fabricated some of the data in her published research on Prudar-Wein syndrome.

4 Past misconduct
Recently a team of investigators from the National Science Foundation’s ethics committee found that Dr. Doyle fabricated some of the data in one of her earlier papers.

4 Conflict of Interest
Dr. Doyle is an employee of the only baby food company that adds niacin to its baby food.

4 Education
Dr. Doyle received her advanced degree from a university with a reputation for having very low standards.

4 Sloppy
Many of the researchers in Dr. Doyle’s field feel that she is a sloppy researcher.

After each item, be it the isolated science claim or the claim paired with the additional information, respondents used a six-point scale to indicate their attitude towards the claim, ranging from strongly favor (1) to strongly oppose (6). The instructions and an example provided to participants clearly indicated that responses should reflect attitude towards the truth of the claim itself rather than attitude towards the researcher or the manner in which the research had been carried out. For each subject and each paired-information trial, an attitude change score was calculated by subtracting the attitude score for the claim plus the additional derogatory information from the mean attitude score for the corresponding claim presented in isolation. Thus, negative attitude change scores meant that participants found a claim less convincing when it was followed by the additional information.
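To make that scoring concrete, here’s a minimal sketch of the arithmetic as I read it from the methods; the function name and the example ratings are my own illustration, not the authors’ code:

```python
# Attitude change score as described above. Ratings use the study's
# six-point scale: 1 = strongly favor, 6 = strongly oppose.

def attitude_change(isolated_ratings, paired_rating):
    """Mean rating of the claim in isolation minus the rating of the
    same claim paired with the derogatory information. Negative values
    mean the added attack made the claim less convincing."""
    mean_isolated = sum(isolated_ratings) / len(isolated_ratings)
    return mean_isolated - paired_rating

# Hypothetical numbers: the isolated claim averages 2.5 (mildly favorable),
# but after a conflict-of-interest attack one subject rates it 4 (opposed).
print(attitude_change([2, 3, 2, 3], 4))  # -1.5: the attack eroded acceptance
```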

Here’s the result:

Figure 1, Experiment 1

As you can see, attacks based on researcher conflicts of interest or scientific misconduct were just as potent as “empirical” criticism of the science in lowering the participants’ acceptance of the scientific claim. By comparison, ad hominem attacks based on sloppiness or education were far less effective. No, strike that. The change in attitude due to attacks on researcher sloppiness or education was not statistically significantly different from zero, meaning these attacks had no detectable effect.
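For readers wondering what “not significantly different from zero” means in practice, here’s a hedged sketch of the kind of one-sample test that question implies; the numbers are invented for illustration, and this is not the authors’ analysis code:

```python
# Sketch: testing whether the mean attitude change in one attack condition
# differs from zero. Invented data; the paper reports its own statistics.
from scipy import stats

# Hypothetical per-subject attitude change scores for the "sloppy" condition
sloppy_changes = [0.2, -0.3, 0.1, 0.0, -0.1, 0.3, -0.2, 0.1]

t_stat, p_value = stats.ttest_1samp(sloppy_changes, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value means we can't conclude the attack shifted attitudes at all.
```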

The researchers were concerned that the first experiment used a population that was too homogeneous, so they carried out a second experiment. Experiment #2 had 224 adults, recruited from an opt-in Internet panel managed by a survey research firm, take the survey. After exclusions, there were 199 subjects who completed the entire survey as instructed. This group was much more varied as well. Their ages ranged from 23 to 83, with a mean of 48.5 and a median of 47. Thirty-nine states were represented, and 47% of the respondents were female. Nearly 77% of the respondents identified themselves as non-Hispanic white, while 13.8% and 9.2% identified themselves as black and Hispanic, respectively. Finally, 40.4% of respondents had earned at least one college degree, and 46.2% were from households with an annual income below $50,000.

The results were nearly identical:

Figure 2, Experiment 2

So, basically, as the researchers noted, neither of their main predictions was supported by the data:

Neither of our main predictions for Experiment 1 were supported by the data. For instance, we found that combining ad hominem attacks with direct attacks on the empirical foundation of the claim was no more effective than an empirical attack in isolation. In contrast to our second prediction, Experiment 1 revealed that some strictly ad hominem attacks (specifically the conflict of interest and past misconduct attacks) are just as effective as attacks on the empirical foundation of a claim. Our only prediction for Experiment 2 was that the result of Experiment 2 would replicate those of Experiment 1, and that prediction was confirmed. The similarity between the results of Experiments 1 and 2 increased our confidence in the pattern of results we found in Experiment 1. The results of Experiment 2 were based on a sample that, relative to Experiment 1, was much more representative of the US population. This indicates that our findings are not specific to a college student population.

As expected, information that a study was critically flawed was associated with negative attitude change towards a claim based on that study. What was not expected was that an ad hominem attack (in the form of an accusation of misconduct) coupled with an explicit attack on the research itself was no more influential than an attack on the research alone.

I’m not sure why the researchers were so surprised by these results. Intuitively, most of us know that ad hominem attacks can be very effective in eroding acceptance of a claim made by the person being attacked. Indeed, the authors themselves noted that, when they carried out the experiment, they had not been aware of another study that found that the effects of message persuasiveness and source credibility were not additive, but rather substitutive. Thus, their results were consistent with at least some previous research. They are not, however, consistent with a previous study of marine scientists, which found that the quality of the methodology employed by a researcher mattered much more than the source of funding in establishing researcher credibility. Of course, the subjects in that study were scientists, not lay people, so the two studies are probably not comparable.

Of course, this is only one study and certainly not the final word. However, its results pass the “smell test,” at least for me, and seem plausible. As the authors note, the effectiveness of attacks on scientists’ conflicts of interest or misconduct, equal in this study to that of attacks on the science itself, could be part of the reason for the success of certain varieties of quacks and cranks. Antivaccine websites, for instance, are rife with claims of conflicts of interest and scientific misconduct. Most of the claims are false, but they don’t have to be true to be effective as long as they sound plausible to most people. Ditto anti-GMO websites. Basically, cranks seem to just instinctively “know” the effectiveness of various ad hominem attacks. An instinctive knowledge that certain types of ad hominems work so well against scientific claims likely also fuels a lot of conspiracy mongering, such as the “CDC whistleblower” conspiracy theory, which posits research misconduct by the investigators of a major MMR safety study, coupled with a coverup by the CDC.

There’s also an example that cuts the other way: Andrew Wakefield. Ever since Brian Deer’s investigations revealed his massive conflicts of interest and credible evidence of scientific misconduct, leading to his being struck off the UK medical register and to the retraction of his original MMR/autism paper in The Lancet, Wakefield’s name has become shorthand for dismissing antivaccine claims that the MMR vaccine can cause autism. With the discrediting of Wakefield, the claim that the MMR vaccine causes autism has, in the public mind (antivaxers aside), been effectively discredited as well. As I’ve said many times before, I wish it were otherwise. I wish that the science by itself had been persuasive enough to win out, but I guess we science advocates have to take our victories where we can find them.

In the end, anyone thinking about getting into science communication has to be aware that there is a fairly high probability that he or she will sooner or later be slimed by someone like Mike Adams. The likelihood of that happening will correlate with how effective or widely known the science communicator becomes, of course, but even nanocelebrities like myself are at risk. Sadly, this study suggests that there’s a good reason why cranks attack the communicator before the science: it is often effective persuasion directed at their target audiences.