
Why do cranks favor ad hominem attacks over scientific arguments? They work!

Cranks, quacks, and pseudoscientists favor ad hominem attacks against scientists over arguments based in science. Unfortunately, new research suggests that ad hominem attacks against scientists making a scientific claim can be as effective as attacks based on science and evidence.

Anyone who routinely engages in public science communication in areas where cranks and denialists try to discredit the scientific consensus, such as the consensus about the safety and efficacy of vaccines, the safety of genetically modified organisms (GMOs), or the ineffectiveness of alternative medicine, has likely experienced an unfortunate phenomenon as a result of those activities: the tendency of the cranks, such as those who fear monger about vaccines or GMOs or who demonize science-based medicine while promoting quackery, to go for the ad hominem attack first rather than attempting to refute the scientific argument. Certainly, I’ve experienced this. Indeed, Mike Adams once launched a months-long campaign of defamation against me, in which he published at least three dozen articles full of lies about me, including the claim that I had once worked with cancer chemotherapy fraudster Dr. Farid Fata, and even claimed that he had reported me to the FBI, as well as to my state’s medical board and attorney general. Not surprisingly, I have heard nothing from any of these entities, but posts claiming that I’m under investigation still linger, and, unfortunately, so do the effects on my Google reputation.

Ad hominem attacks—attacking the person to discredit the person’s argument instead of attacking the argument itself—are generally considered a logical fallacy. I’ve always wondered why cranks go for the ad hominem attack above all else. I had always attributed it to intellectual laziness and to their inability to win on the science, and I have no doubt that there is a strong element of both in the explanation. I also speculated that it was a tactic to intimidate critics into silence, and I’m sure that there is a strong element of that as well. However, maybe these cranks instinctively understand something that those of us who advocate for good science either do not understand or recoil from: ad hominem attacks work. They are very effective. Indeed, a new study published in PLOS One suggests as much. Its results suggest that attacking the motives of scientists is just as effective in undermining acceptance of their scientific findings as attacking the science itself as flawed based on facts and evidence. Interestingly, according to the study, it’s not just any kind of ad hominem attack that works; specific kinds of ad hominem attacks work much better than others.

Let’s take a look, shall we?

The lead author of this study is Ralph Barnes, an assistant professor of psychology at Montana State University, who noted, “I think scientists don’t yet have a complete understanding of how the public reacts to scientific claims, and I wanted to contribute (even if in a small way) to that effort.” But did he? Maybe.

The study consisted of two experiments designed to test the effects of different kinds of attacks on science claims, direct and indirect (ad hominem) attacks. The authors explain their rationale thusly, noting that most people, lacking the expertise and knowledge to evaluate scientific claims, often rely on heuristics to evaluate the credibility of those making science claims:

Numerous studies have shown that scientific information may not have as much impact on the public’s attitude as trust in scientists and government policy-makers [13–15]. Given the evidence for a link between trust and public opinion, cases of fraud and misconduct, and conflicts of interest may play a powerful role in shaping the public’s trust in scientists and the ability of scientists to influence the public. The popular media sometimes covers stories involving scientific incompetence (e.g. the Fleischmann and Pons affair) and fraud and/or misconduct committed by scientists [16–18]; and there is no shortage of reporting on scientists with conflicts of interest [19–22].

And:

Although we are interested in factors that reduce the public’s confidence in science claims, we are not concerned with the issue of trust per se. Rather, our focus is on the specific methods that can be used to attack and undercut science claims and the relative effectiveness of those methods. One method for attacking a science claim is a direct attack on the empirical foundation of the claim. The ad hominem attack is a more indirect method for attacking a science claim. Here we are concerned with three forms of ad hominem attack: allegations of misconduct, attacks directed at motives, and attacks directed at competence. Seen through the lens of the Mayer et al. model [9], misconduct and motive-related attacks are related to benevolence and integrity, while attacks directed at competence are related to ability.

The authors note that ad hominem attacks, even though they are usually fallacious, can be very effective. So here’s how they set about to examine the effect of attacks on science claims based on the science compared to different kinds of ad hominem attacks. First, they hypothesized that the greatest degree of negative attitude change would occur in the case of accusations of misconduct because, in this condition, both the science and the researcher are explicitly criticized and that the second greatest degree of attitude change would be associated with attacks on the actual science and data of the claim as flawed, reasoning that attacks on the empirical foundation of a claim are always relevant. Finally, they predicted that attacks based on the other four conditions would have a lesser effect because they were only ad hominem attacks.

To test their hypotheses, the researchers carried out two experiments involving a total of 638 participants. In the first experiment, they enrolled 480 undergraduate student volunteers from two community colleges, a private research university, a private liberal arts college, and a state college. After excluding participants who failed to finish the questionnaire, skipped one or more of its items, or failed to follow instructions, 439 participants remained, with an average age of 24.1 years, 312 of them women. The initial section of each of eight questionnaire variants contained a series of 24 science claims, and the final section contained several demographic questions. Of the 24 claims, half were “distractor” items designed to prevent participants from detecting the purpose of the study; they were similar to the critical items, but not all of them included challenges to the credibility of the researcher or attacks on the scientific claim made. The remaining 12 items all contained a science claim, either a fictitious claim generated by the researchers or a reference to a phenomenon likely to be unfamiliar to the subjects. Each science claim was attributed to a specific scientist. Six of the claims were presented in isolation. The remaining six contained additional information, specifically a sentence that attacked the researcher and/or the science: the additional information either pointed out a flaw in the initial research or contained an ad hominem attack on the researcher who made the claim (past misconduct, conflict of interest, education, sloppy methods) or both (relevant misconduct).

Here’s an example of one of the claims, along with the types of criticisms leveled:

Science Claim 4
Dr. Doyle from the Children’s Hospital of Pittsburgh claims that the chances of a child being diagnosed with Prudar-Wein syndrome decreases by over 20% if their diet includes niacin enriched baby food.

4 Empirical
Dr. Doyle’s research on the effect of niacin on Prudar-Wein syndrome only included children ages 28 to 34 months of age. However, Prudar-Wein syndrome is normally diagnosed by 18 months of age.

4 Relevant misconduct
Recently a team of investigators from the National Science Foundation’s ethics committee found that Dr. Doyle fabricated some of the data in her published research on Prudar-Wein syndrome.

4 Past misconduct
Recently a team of investigators from the National Science Foundation’s ethics committee found that Dr. Doyle fabricated some of the data in one of her earlier papers.

4 Conflict of Interest
Dr. Doyle is an employee of the only baby food company that adds niacin to its baby food.

4 Education
Dr. Doyle received her advanced degree from a university with a reputation for having very low standards.

4 Sloppy
Many of the researchers in Dr. Doyle’s field feel that she is a sloppy researcher.

After each item, be it the isolated science claim or the claim paired with the additional information, respondents used a six-point scale to indicate their attitude towards the claim, ranging from strongly favor (1) to strongly oppose (6). The instructions and an example provided to participants clearly indicated that responses should reflect attitude towards the truth of the claim itself rather than towards the researcher or the manner in which the research had been carried out. For each paired-information trial, a preference score was calculated by subtracting the attitude score for the claim plus the additional derogatory information from the mean attitude score of the corresponding claim presented in isolation. Thus, a negative preference score meant that participants found a claim less convincing when it was followed by the additional information.
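The difference-score arithmetic can be sketched in a few lines. This is a hypothetical illustration only, not the paper’s actual analysis code; the function name and example ratings are my own.

```python
# Hypothetical sketch of the preference-score arithmetic described above.
# Ratings use the study's six-point scale: 1 = strongly favor ... 6 = strongly oppose.

def preference_score(isolated_ratings, paired_rating):
    """Preference score for one claim/attack pair.

    isolated_ratings: ratings given to the claim presented in isolation
    paired_rating:    a rating given when the claim was paired with an attack

    A negative score means the attack made the claim less convincing
    (the rating shifted toward "strongly oppose").
    """
    mean_isolated = sum(isolated_ratings) / len(isolated_ratings)
    return mean_isolated - paired_rating

# Example: the isolated claim averaged 2 ("favor"); with the attack attached,
# a respondent rated it 4, yielding a negative preference score.
print(preference_score([2, 2, 2], 4))  # -2.0
```

Note that because higher numbers on the scale indicate more opposition, an attack that works produces a paired rating above the isolated mean, and hence a negative score.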

Here’s the result:

Figure 1, Experiment 1

As you can see, attacks based on researcher conflicts of interest or scientific misconduct were just as potent as “empirical” criticism of the science in lowering the participants’ acceptance of the scientific claim. By comparison, ad hominem attacks based on sloppiness or education were far less effective. No, strike that: the changes in attitude produced by attacks on researcher sloppiness or education were not statistically significantly different from zero, meaning those attacks had no effect.

The researchers were concerned that the first experiment used a population that was too homogeneous, so they carried out a second experiment, in which 224 adults recruited from an opt-in Internet panel managed by a survey research firm took the survey. After exclusions, there were 199 subjects who completed the entire survey as instructed. This group was much more varied as well. Ages ranged from 23 to 83, with a mean of 48.5 and a median of 47; 39 states were represented; and 47% of the respondents were female. Nearly 77% of the respondents identified themselves as non-Hispanic white, while 13.8% and 9.2% identified themselves as black and Hispanic, respectively. Finally, 40.4% of respondents had earned at least one college degree, and 46.2% were from households with an annual income below $50,000 per year.

The results were nearly identical:

Figure 2, Experiment 2

So, basically, as the researchers noted, neither of their main predictions was supported by the data:

Neither of our main predictions for Experiment 1 were supported by the data. For instance, we found that combining ad hominem attacks with direct attacks on the empirical foundation of the claim was no more effective than an empirical attack in isolation. In contrast to our second prediction, Experiment 1 revealed that some strictly ad hominem attacks (specifically the conflict of interest and past misconduct attacks) are just as effective as attacks on the empirical foundation of a claim. Our only prediction for Experiment 2 was that the result of Experiment 2 would replicate those of Experiment 1, and that prediction was confirmed. The similarity between the results of Experiments 1 and 2 increased our confidence in the pattern of results we found in Experiment 1. The results of Experiment 2 were based on a sample that, relative to Experiment 1, was much more representative of the US population. This indicates that our findings are not specific to a college student population.

As expected, information that a study was critically flawed was associated with negative attitude change towards a claim based on that study. What was not expected was that an ad hominem attack (in the form of an accusation of misconduct) coupled with an explicit attack on the research itself was no more influential than an attack on the research alone.

I’m not sure why the researchers were so surprised by these results. Intuitively, most of us know that ad hominem attacks can be very effective in eroding acceptance of a claim made by the person being attacked. Indeed, the authors themselves noted that, when they designed the experiment, they were not aware of another study that had found that the effects of message persuasiveness and source credibility were not additive, but rather substitutive. Thus, their results were consistent with some previous research. They are not, however, consistent with a previous study among marine scientists, which found that the quality of the methodology employed by a researcher was much more important than the source of funding in establishing researcher credibility. Of course, those research subjects were scientists, not lay people, so the studies are probably not comparable.

Of course, this is only one study and certainly not the final word. However, its results do pass the “smell test,” at least to me, and seem plausible. As the authors note, the effectiveness of attacks on scientists’ conflicts of interest or misconduct, which is equal to attacks on the science itself in this model, could be part of the reason for the success of certain varieties of quacks and cranks. Antivaccine websites, for instance, are rife with claims of conflicts of interest and scientific misconduct. Most of the claims are false, but they don’t have to be true to be effective if they sound plausible to most people. Ditto anti-GMO websites. Basically, cranks seem to instinctively just “know” the effectiveness of various ad hominem attacks. An instinctive knowledge that certain types of ad hominems work so well against scientific claims likely also fuels a lot of conspiracy mongering, such as the “CDC whistleblower” conspiracy theory, which posits research misconduct by investigators of a major MMR safety study, coupled with a coverup by the CDC.

There’s also an example going the other direction, namely Andrew Wakefield. Ever since Brian Deer’s investigations revealed his massive conflicts of interest and credible evidence of scientific misconduct, leading to his being struck off the UK medical register and to the retraction of his original MMR/autism paper in The Lancet, Wakefield has become shorthand for dismissing antivaccine claims that the MMR vaccine can cause autism. With the discrediting of Wakefield, the claim that the MMR vaccine causes autism has, in the public mind (antivaxers excepted), been effectively discredited as well. As I’ve said many times before, I wish it were otherwise. I wish that the science had by itself been persuasive enough to win out, but I guess we science advocates have to take our victories where we can find them.

In the end, anyone thinking about getting into science communication has to be aware that there is a fairly high probability that he or she will sooner or later be slimed by someone like Mike Adams. The likelihood of that happening will correlate with how effective or widely known the science communicator becomes, of course, but even nanocelebrities like myself are at risk. Sadly, this study suggests that there’s a good reason why cranks attack the communicator before the science: it is often effective persuasion directed at their target audiences.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

55 replies on “Why do cranks favor ad hominem attacks over scientific arguments? They work!”

I hadn’t thought of it that way. Attacking antivaccine “science” might actually work. I suppose that is somewhat reassuring.

Conspiracy theories are one of my hobbies (ever since reading Vincent Bugliosi’s amazing takedown of the JFK conspiracy theories) and I have noticed many parallels between conspiracy theorists and the alternative health folks.

This is the biggest one, by far. Now, I’m talking here about the hard-core conspiracy theorists. It never ceases to amaze me that they can never say “ok, this guy/gal disagrees with me, I’ll try to persuade him” or “Hmm… I’m not convincing them, I’ll try to make a different/better argument”; instead they often say “ahh… you must be part of the cover-up”. I’ve watched this reach farcical proportions. In a comment section on our main online news website here, I had once mentioned, offhand, that I worked in a security job.

Later, when the Ebola epidemic was going crazy and the buffs were on the news website saying it was “a bio-engineered plague that will soon be sweeping Europe and North America”, I tried to explain that the factors driving the virus in Africa (washing of the dead, distrust of medicine, reuse of needles, etc.) don’t exist in the west and that their hysterical calls for a ban on flights to Ireland from Africa (which didn’t exist….) were unnecessary.
This guy comes on and accuses me of “admitting” in a past post (another trope of conspiracy theorists is the claim that xyz agency/agent “admitted” the conspiracy, for sh1ts n giggles apparently…) that I “worked for the security services”, and why was I trying so hard to play this thing down? The security job I actually had? A part-time college job doing security at international rugby matches in a stadium. This guy read this as me being one of the Illuminati, though I did wear a dark suit, sunglasses and an earpeice, so thank christ a work photo was not the profile picture or the guy’d really have lost it.
He finished his post with “money’s great Ryan, isn’t it?”

ban on flights to Ireland from Africa (which didn’t exist….)
Ah, but there were “PLANS” to secretly start those flights. The conspiratorial mind is quite agile and unrestrained by the need for facts.

a dark suit, sunglasses and an earpeice
Sounds pretty sinister to me.

a dark suit, sunglasses and an earpiece*

USUALLY sans the earpiece but .. I do that all of the time in airports and no one ever notices or calls me “sinister”

Denice, even if it wasn’t standard, it was a requirement: the police Special Branch and the Secret Service protecting Clinton wanted us to look similar to them when he visited my campus, so people would think there were way more agents than there were (private security cannot have guns here, though oddly civilians can, just not all types). I was excited to be in real danger for a change. I was standing there as the motorcade arrived, all happy, looking at my bad-ass reflection in the tinted SUV windows, shades on, white earpiece in, suit nicely tailored to my gym-worked torso, ‘yeh i look bad—-’… I look around and it hits me… the Special Branch and Secret Service guys (we can tell who’s who based on lapel pins)… the armed guys are BEHIND us; we’re the closest to the cars.

I waited for them to get out of the cars doing my usual watching of the crowd body language (look at their eyes; where is each persons hands, facial expression etc) and off they went. I leaned back to the secret service and special branch guy behind me “why are we up front? shouldn’t you guys be up front?”
“the protectee has their own close protection detail”
“yeh but why are we in front?”
“well if someone shoots us first and then the first line goes down, if you’re the second line of defense and you’ve no guns how does that help? so has to be the opposite” he explained matter of fact.
“so…. we’re here to buy you time as human shields? thanks…”

As to the African flights, one of them got me with an “AHHH–HAAAAA”: they’d found out about the flights. One “gotcha!” was a flight from Morocco to Dublin once a week. I tried to explain that the outbreak was in sub-Saharan Africa and this was like suggesting that because there was an outbreak in Mexico, flights from New York should be banned. The second one was an Ethiopia-LA transatlantic flight that does a refueling stop where nobody gets off. That outbreak was a real wake-up call on how easily people will panic and follow these woo-tastic types.

I did wear a dark suit, sunglasses and an earpiece

Argggh! It’s one of the Men in Black!!!! The horrors! Aliens!!! Illuminati!!!!!!!

I don’t think these results are particularly surprising, but it is nice to see some empirical results on the issue, especially if one needs to consider defence or counter-attacks.

These results would also seem to suggest that the “wounded zebra” approach, where one uses these techniques, especially the ad hominem attack, on a few selected individuals to discredit an entire field, makes sense.

Orac, I’d suggest heavy sun screen to hide your stripes before you leave home.

I got a kick out of this line in the paper’s abstract
Four hundred and thirty-nine college students (Experiment 1) and 199 adults (Experiment 2)

I imagine the students are protesting

re Mike Adams’ reportage / attacks on our benevolent leader:

Interestingly, he today retracted what he wrote yesterday about the high school shooter- he called him a “Democrat” because he mixed him up with another man having a similar name who was registered as such.

I wonder why he did that? My best guess is that he doesn’t want to offend potential customers who may identify with the party, although he certainly attacks moderates and liberals on a regular basis.

HOWEVER he doesn’t seem to ever let up on our spectacularly talented host, Orac, because – most likely- his customer base includes few scientists or people who value reality and who would defend Orac.

About the study:
it would be interesting if they looked at sources of information as reliable or not.

As someone who studies conspiracy theories for fun let me tell you, you will notice a self-repeating phenomenon about this shooting when it is inevitably called a false flag by Alex Jones types.
They’ll say it’s a false flag by the elite (and the deep state; even though there is a deep state, they are not cartoon villains like these guys think, and they have to be able to rationalize what they do for a greater good, and murdering American kids they can’t rationalize) to take away guns. Nobody will wonder why, every time the elite apparently sets up one of these false flags, the actual gun seizures never happen, and even laws for future regulation don’t get passed.

The mainstream news has this bizarre notion that they’ll get more eyeballs if they’re seen as the “first” to scoop something, so they end up broadcasting a lot of detail (a bomb at the State Dept on 9/11, for example) that is not confirmed, and then conspiracy theorists go back to it later as evidence for a conspiracy. BTW, as your new ambassador to the land of conspiracy crazy as opposed to pseudoscience crazy, I’ll tell you, since you may ask, how will they say it was the elite when Trump’s president? Ahh, that’s easy. Jones says that they have a way of cowing new presidents like Obama or Trump who might come in wanting to shake things up: they sit him down and show him never-released footage of the JFK assassination taken from the grassy knoll, telling him to play ball or end up like JFK. And instead of blowing the whistle on this cabal undermining American democracy by addressing the nation that night, uncovering the entire grand unified conspiracy, having his AG prosecute them for accessory after the fact to murder, treason, and rebellion and having them executed, thus ending up carved into Mt. Rushmore as one of the greatest heroes who ever lived… they don’t do any of these things a normal person, or a person with political instincts, would do… they just shrug and say “yes sir, I’ll do what I’m told: big war is great, big pharma is great, big agri is great, I surrender my will, as of this date”.

Very interesting post, but I must admit, it was extremely difficult to inhibit speed reading.

Although, Orac’s last paragraph in each post effectively summarizes his point of view.

Getting more personal, it’s hurtful when Orac’s minions harshly criticize the efforts of a scientist/researcher/individual based on their level of education, institution of higher learning, and the strength of the Science Journal used to communicate their data or hypothesis.

To my knowledge, Orac has never practiced or encouraged this type of behavior.

@ Orac’s minions,

Be more professional, minions.

Getting more personal, it’s hurtful when Orac’s minions harshly criticize the efforts of a scientist/researcher/individual based on their level of education, institution of higher learning, and the strength of the Science Journal used to communicate their data or hypothesis.

Do you also use Random Capitalization in your vanity books, Doucheniak?

I, for one, don’t consider proven incompetence to be an ad hominem. Turing as the “Father of AI,” my ass. I’d bet my bottom dollar that you haven’t even read the paper in question and, on the “speed reading” front, managed to miss the punchline if you did. Just go away.

MJD telling Orac’s minions to be more professional is like…………………………………………………………………………..

Fill in the blanks, people, I can’t provide all the snark myself- I even set it up for you- It writes itself

( I can’t help it- it just comes naturally to me.

As Stephen King once said (paraphrasing), if you drove past an old country house you might think about how old it was or who had lived there, whereas HE would automatically envision decaying corpses hidden in walls and mediaeval torture devices, etc.

WE are all servants of our art)

MJD telling Orac’s minions to be more professional is like

Someone who murders his parents and then throws himself on the mercy of the court because he is an orphan?

Interestingly, all of the ad hom examples shown have some degree of valid applicability to evaluating the claim. And, the relative effect sizes are broadly in line with how applicable. It would be interesting to know where a truly irrelevant but nasty ad hom, say an allegation of child abuse, would fit in.

That was my thought as well. One would naturally find data from a known fraud to be less compelling.

I think the real flaw is that the attacks were presented as truth from on high. The premise is that the doctor really is sloppy and dishonest and compromised. Mike Adams’ attacks are just lies, and some of them are trivially easy to verify as such.

So it’s reasonable both ways: I ignore Mike’s science because he’s a known liar with a conflict of interest, and I ignore his ad hominem attacks as well.

Yes, I think they could have chosen some better ad hominem examples that were a little more far-fetched. The sort of six-degrees of separation stuff you see on the internet. Somebody worked for Merck 20 years ago, for example.

I guess I can keep using my snarky tone whilst dispelling pseudo-scientific claims and know that it is empirically effective. Anecdote alert: I have been told by numerous people, either vaccine hesitant or anti-vaxxers, whom I have “converted” that it was my style and information that got them to reconsider their position. Of course, I have also been told by numerous anti-vaxxers that I am a meanie-pants. Scientific communication for public consumption may be too squishy. After some more in-depth analysis of this observation, we as the scientific community should re-evaluate our communication. I think that’s being done now, but I don’t see any changes where there should be.

Very interesting.

One speculation, maybe, to add: laymen mostly make their arguments from authority (“He said, she said… but that guy has a PhD, so I’ll believe him”), because many of them don’t understand the science and can’t make very good arguments based on it. Scientists are authorities due to expertise, but society is built to decouple the one from the other: society makes scientists into authorities on science with the expectation that nobody else needs to understand the science. The natural attack of a layman is to question the basis of the authority, which they can understand, as opposed to the science, which they probably can’t. It’s easier to attack the tangible than the abstract, and authority is tangible. I think attacks directed at the basis of authority, namely the quality of the person, such as conflicts of interest that explicitly undermine their assumed authority, are naturally the strongest attacks for an audience that really can’t argue on the merits.

I think that that’s very important:
the basis of the authority

which might be different due to the perspective of the subject:
to MOST people, if you cite a governmental agency, professional association, elite university, or well-known broadcaster/newspaper, they’ll probably accept the results as reasonable

HOWEVER there is a growing vocal minority that fails to accept these sources: IN FACT, their very powerful positions make them immediately suspect** whether this is political reporting or science news.

** although one loon I survey scoffs at Harvard/ Yale/ Stanford ( often mispronounced as “Stamford” )/ Oxbridge -BUT accepts results/ personnel and lauds them if he can twist their studies into woo-supports-

Those elite universities, he complains, are automatically accepted by the mainstream whilst his studies are rejected because he didn’t study there:
it’s PREJUDICE.

Right, prejudice against BS.

Politicians pre-internet, and those still with us who haven’t quite gotten used to the idea that everything they say is recorded; things like the Rockefeller Commission, the Church Committee, Watergate, TrumpGate (whichever it turns out to be), Obama’s entire slogan being “yes we can” as an answer to the “no we can’t change anything” attitude, then getting in and only tinkering around the edges in the very way he said Clinton would, no big bold ideas. All this kind of stuff undermined public confidence in government and institutions, and it needs to be brought back.

The only way is to teach people to reason and think for themselves from elementary school, really early; teach them to be critical, but not so critical that they don’t accept ANYTHING ever, no matter what the evidence. There is a phenomenon among Millennials I’m worrying about (other ones anyway, I don’t have it): they are basically going into their bubbles with their politics, and it’s the same with science now. They only read or watch things that confirm existing beliefs and deride shows that claim to try to be objective (which is not the same as neutral, which calls everything 50/50, COUGH CNN COUGH; don’t tell me the GOP says the sky’s blue and the Dems say it’s red, tell me which it is, and if it’s not one or the other, tell me that). When you combine this trend with the fact that so many political arguments are PAST each other rather than TO each other, it’s a disaster; what I mean is they rarely address each other’s points head on, they respond but with something else.

For example, I’m dreading an abortion referendum we’re gonna have in Ireland in a few months. The main point of the anti-abortion folk is that it’s murder: that a termination at 10 weeks is no different from smothering a 6-week-old. My counter-argument would be that death comes in phases, not at a single definable moment, and so does life; we have to draw a line somewhere, so for me that’s where what science would call the soul, consciousness, starts, and since there is no brain or nervous system that early, you’re not at the personhood stage yet. That would be my argument, but instead many on my side will hit back with “MY BODY MY CHOICE!!!”, which does not address the anti-abortion argument; it ignores their central accusation against you. This, plus people living in echo chambers, means they never hear the other side, and when you don’t read or hear what the other side is saying to convince people to go against you, you find it harder to convince them to join YOUR side. That’s why I read all the anti-LGBT literature, and even those Catholic newspapers the crazy old women barely read, during our marriage referendum, so I could know thy enemy.

It’s unfortunate. Some forms of research can really only be done if you have a huge amount of money and manpower. People who don’t accept the word of such sources are essentially prohibiting us from asking some of the really really hard questions. Skepticism without some threshold criteria for acceptance isn’t skepticism anymore.

The only way is to teach people to reason and think for themselves from elementary school, really early, teach them to be critical but not so critical they don’t accept ANYTHING ever no matter what the evidence.

I think there also needs to be some means of teaching people from an early age that for any reason, at any time, no matter how competent they grow at some skill, they still might fail and be wrong about something. Every crank enters into conversation with the blithe certainty of their own infallibility. I wish there were some way of instilling a thought process that gave people a release valve here where they could afford somehow to change their own minds and be wrong. There’s nothing wrong with being wrong, it only hurts if you realize you are and fail to correct it, I think.

This is something I’ve thought about often the longer I’ve been in science: how many times I’ve made a hypothesis about which I was absolutely certain, only to have some result pull the rug out from under me. Realizing my view of reality was absolutely wrong, a few times over, was one of the hardest things I’ve ever had to do. Knowledge of personal fallibility is important knowledge.

only tinkering around the edges

I find having health insurance to represent some mighty fine tinkering.

Bingo! Science advocates primarily rely on argument from authority when it comes to communication with the general public and policy makers. Many posts here and at SBM come down to “trust us, we’re the experts.” This is just the way things have to work, really, since contemporary science is far too arcane for laymen to understand, which is only exacerbated by the specialist language in which it is written. The problem here, which several comments have identified, is distinguishing a legitimate attack on authority from an illegitimate one. The examples of criticism of “Dr. Doyle’s” credibility ARE legitimate and do have “true merit” in the evaluation of her claims. In that sense, they are either not really ad hominem, or not ad hominem to the point of logical fallacy in real-world practical reasoning (even if they may be in some abstract formal logic sense).

One way to think about this is that ‘science’ in our times will always necessarily be internal to the scientific community, and when scientists seek to address audiences outside their communities they are not engaging in ‘science’ but ‘rhetoric’. Which is to say ‘communication’ and ultimately ‘persuasion’.
The principles of effective rhetoric were outlined by Aristotle, and haven’t really changed since: a combination of Logos (logic and evidence), Pathos (emotional stakes: why the issue matters in human terms), and Ethos (authority and credibility). Fall down in any of these parts, and your message will likely fail to be persuasive, or maybe even really get ‘heard’.

One difficulty arguments encounter these days on the credibility front is the sheer size and complexity of social organizations. Entities that may be perceived as ‘sources’ actually are composed of a number of different parts, which typically hold differing and sometimes contradictory positions, behaviors, etc. For example, it’s ridiculously simplistic to say ‘you can’t trust the government’ as some sort of global principle, because the trustworthiness of government varies dramatically depending on what part of government or what individuals you are talking about. The same is true of multinational corporations en masse and of most specific multinationals individually. I guess in logic terms you’d call this a composition fallacy: what is true of one part of some whole is not necessarily true of the whole or of any other part within the whole. The problem is that it takes work and detail to distinguish one part from another. Take the question of vaccines, and the invocation ‘you can’t trust Big Pharma’. Well, there are lots of grounds on which distrust of pharmaceutical companies is perfectly reasonable, or even wise, but this applies only to a minority of pharma actions, and vaccines aren’t by any evidence one of them. But it is just so much easier to take the easy intellectual route and damn the whole apparatus indiscriminately by virtue of the worst actors within it.

When antivaccine cranks accuse me of being a pharma shill and ‘only protecting profits’ etcetera, I often concede right away that I get these juicy big paychecks from Big Pharma, Monsanto, Big Telecom (‘cell phone radiation’!), Big Food, the oil industry, etcetera. I also tell them that I’d be happy to shill on their behalf too, if they simply pay me enough. I sometimes even offer to set them up with a shill deal for themselves, so that they can expect big fat paychecks spreading the word for the tobacco industry, land mine manufacturers and some other grubby businesses that are below even my ethical standards.

Funny thing is, they never take me up on my generous offers. They actually never even respond. One would almost be tempted to think that they feel a bit silly… But nah, that’s probably my imagination. Still, it appears to be quite an effective way to shut them up.

I don’t find the results surprising at all. As Willie Stark said in a different context, the right kind of argumentum can scare the hominem into a laundry bill he didn’t expect.

Which is why, despite the glass houses issue, so many alt-med types deploy the pharma shill gambit. I have observed that this is frequently projection: alt-med type who has a profitable business selling supplements accuses legitimate researchers of being compensated by the Big Pharma companies whose products the researchers find efficacious or otherwise recommend. Sometimes it is even true. But the ethics of the research business demand that researchers disclose their funding sources, so that anybody can see when the claim is true, and that most of the time it is not. Alt-med types labor under no such ethical standard.

In some circumstances projection seems to be exactly right. In the olden days of my being a shill for Monsanto (that is a joke by the way), I was accused of all sorts of malfeasance. Then it dawned on me one day that the activities I was being accused of were exactly the way the accusers were behaving. It was almost as if they could not accept that people behaved in different ways. Most importantly they failed to grasp the concept that the means mattered rather than the desired outcome being everything.

I have some problems with the empirical argument:
“Dr. Doyle’s research on the effect of niacin on Prudar-Wein syndrome only included children ages 28 to 34 months of age. However, Prudar-Wein syndrome is normally diagnosed by 18 months of age.”
If you don’t know the study, which is my case, isn’t it possible that Dr. Doyle performed a retrospective study on children who had the diet vs. a control group? Or a prospective study enrolling healthy children, which had limitations due to the age of the children but could still be interpreted?
Therefore, this argument is weak.
By contrast, misconduct and conflicts of interest constitute strong arguments. That you don’t see this and call these arguments ad hominem attacks is a problem.

Um, dude. Read the rest of the examples. The article is open access. Most of the examples are made up for purposes of the study—as I explained. (I don’t know if this one is.) Bloody hell.

Most of the examples are made up for purposes of the study—as I explained. (I don’t know if this one is.)

Given that there’s no such thing as “Prudar-Wein syndrome,” I’m thinking yes.

Perhaps a play on Prader-Willi syndrome (the subject of Mayim Bialik’s dissertation, BTW).

@ Orac
“To test their hypotheses, the researchers carried out two experiments involving a total of 638 participants. In the first experiment, they enrolled 480 undergraduate student volunteers from two community colleges, a private research university, a private liberal arts college, and a state college. After excluding results from participants who failed to finish the questionnaire, skipped one or more of the items in the questionnaire, or failed to follow instructions, 439 participants were left, whose average age was 24.1 and who included 312 women.”
“The researchers were concerned that the first experiment used a population that was too homogeneous. So they carried out a second experiment. Experiment #2 had 224 adults recruited from an opt-in Internet panel managed by a survey research firm take the survey. After exclusions, there were 199 subjects who completed the entire survey as instructed. This group was much more varied, as well. Their ages ranged from 23 to 83, with a mean of 48.5 and a median of 47. 39 states were represented, and 47% of the respondents were female. Nearly 77% of the respondents identified themselves as non-Hispanic white, while 13.8% and 9.2% identified themselves as black and Hispanic, respectively. Finally, 40.4% of respondents had earned at least one college degree, and 46.2% of the respondents were from households with an annual income below $50k per year.”
This is irrelevant information, whereas the relevant information for judging the quality of the paper should be what the authors call “empirical” information.
As it appears, your article looks like a defense of misconduct. I hope that was not your intention.
@ Narad
I don’t think that any of the subjects could find out that Prudar-Wein syndrome does not exist without an Internet connection.

Daniel, the researchers describe their study population because it may have an impact on the results, since they are looking at the response of the general public to ad hominem arguments as compared to empirical arguments against a stated piece of (made-up) science.

Part of the reason the authors of this study did the second round with different participants is that college students, though easy to study, are not generalizable to the national population.

I don’t think that any of the subjects can find that Pruder-Wein syndrome does not exist without an Internet connection.

I suspected it enough that I looked, but I don’t see the relevance. I was replying to Orac, in any event, hence the quoted text.

Daniel, the researchers describe their study population because it may have an impact on the results

Wait, public-opinion researchers collect demographic data now? Wonders never cease.

@ JustaTech
What I mean is that what you need in order to understand the study is what the authors call an empirical argument, rather than the precise description of the population studied.
If I were a participant, I would have rated this particular conflict of interest, and the evidence of present scientific misconduct, much higher as considerations affecting the credibility of the study, in comparison to this particular “empirical argument”.
If the study finds 2 + 2 = 5, the empirical argument that 2 + 2 = 4 is enough; I don’t need to know that the study has been funded by the Higher Number Society.

The most common ad hominem I’ll get is the “you’re a greedy rich pediatrician because of vaccination” COI claim, which makes me laugh. As pediatricians are the lowest paid of physicians, yet do far and away the most immunization, this claim makes no sense. Most adult doctors who could vaccinate their patients often do not (and simply tell their patients to get their shots at the pharmacy) because the profit margin on vaccines is so low for physicians. More recently, a claim that Blue Cross is paying pediatricians a $400 bonus for each child that is fully vaccinated at 2 years of age has been circulating the nets, which is also laughable. First, I’ve never been offered such a bonus for keeping vaccination rates high (and I don’t know anyone else who has had this offer either), and second, earning it would require staying up on vaccines over at least 10 well checks, which wouldn’t make it a huge bonus per visit anyhow.

The irony is that most of the anti-vaccine figureheads profit much more from selling their untested products than the average pediatrician or internist does in routine clinical practice.

Ad hominem attacks seem to work very well not just against scientific arguments, but against other forms of criticism as well. Look at the US government’s attempts to discredit people critical of their policies by tarring them with accusations of paedophilia (e.g. Scott Ritter), rape (e.g. Julian Assange), etc. These accusations may or may not stick but the damage to their credibility is done.

“If the facts are against you, argue the law. If the law is against you, argue the facts. If both the facts and the law are against you, abuse the other side’s attorney.”

I am not sure that this type of accusation would damage the credibility of the arguments. It would certainly damage the authors, but not their arguments. This is different from scientific misconduct and conflict of interests, where the accusations damage the credibility of their authors.

It doesn’t damage the credibility of their arguments. It shoots the messenger rather than the message, which is what makes ad hominem a fallacy.

I found it interesting that the argument “fabricated data in the past” was found to be more persuasive than “fabricated data in the present study.” To my mind the reverse would be more likely the case.

If someone fabricated data in their past research, they may have since reformed their ways, and their present research may be sound. By analogy, someone was convicted of a robbery many years ago, but then straightened out and got a job. If their present status is not mentioned, one can reasonably question whether it is being left unsaid in order to attack them unfairly.

If they fabricated data in their present research, that invalidates their present research entirely. And, there has not been time for them to reform. By analogy someone was convicted last week for a robbery they committed a month ago: we don’t know if they’re eventually going to straighten out or commit more crimes. We can reasonably assume that there is not a more timely/immediate update (since “last week”) that sheds sufficient light on their future intent.

In any case, any research with fabricated data is worthless and should be dismissed a priori. To my mind, “fabricated in the present” should be the top criterion for throwing out a study or findings and dismissing an author’s work as worthless for some period of time thereafter (until/unless they demonstrate reform).

“University with low standards” does not necessarily entail that all of its graduates have low standards; some may turn out to be quite good. Civilized societies should not judge individual merit on the basis of group characteristics; doing so leads down the road to numerous pernicious prejudices.

“So-and-so’s research is sloppy” is a matter of opinion that invites examination of their work in more detail to determine whether or not one agrees with the criticism. “Poor methodology (etc.)” is also subject to legitimate debate.

I’m asking some close friends & fellow rationalists how the various arguments stack up for them, to see if their inclinations are similar or different to those in the study.

“University with low standards” does not necessarily entail that all of its graduates have low standards; some may turn out to be quite good.

“Standards” also seems to be ill defined; back in the day, it was easy to get into the University of Chicago, but the education itself was rigorous. It seems like something that the undergraduates might at least pick up on, but something something affirmation bias something.

Narad, re. “ill-defined.” When I was an undergrad, the term “competitiveness” was used as a synonym for “rigorous education.”

For one thing, that’s an obvious category error and an unproven implied correlation: highly competitive admissions or academic culture does not necessarily equate to a “good education.” A scale based on “selectivity” is more useful for admissions policy, and an actual percentage of applicants admitted is best because it’s an interval-scale variable rather than a categorical or qualitative one.

But here’s something else that deserves to be studied further if it hasn’t already: “competitiveness” as an academic culture, meaning, competition between students in their courses, is a cultural biasing factor that is more productive in some fields, less so in others, and potentially detrimental in others. In some fields, the opposite value, “cooperation,” is more productive: not only of learning, but on the job. One size does not fit all.

We like to say as a value statement, that science is a cooperative endeavor: many workers around the globe, working together, contribute to emerging knowledge. And it is also noted widely, that the culture of science in the universities has become impaired as it has become more “cut-throat” and dominated by relentless demands for fund-raising and publicity. Clearly, that is an excess of “competitiveness” in the academic culture, where more “cooperative” values would be more productive.

I would argue that when excessive competition for funding and publicity coincide with the teaching of quackademic medicine, the result is to fund and promote questionable work and overt quackery.

I would also argue that these factors are at work in some of the more egregious cases we have seen, of the penetration and prominence of quackademic medicine in medical schools that formerly knew better than to touch that stuff.

In related news: Crépeaux, Shaw, Exley and Gherardi have been forced to withdraw allegations about other researchers’ affiliations, which turn out not only to be untrue but laughably easy to have checked – https://www.sciencedirect.com/science/article/pii/S0300483X17302901?via%3Dihub.

I shall leave other readers to draw their own conclusions as to what this says about C, S, E and G’s competence and diligence as researchers…

PS Thanks to Smut Clyde for drawing attention to this elsewhere.

How incredibly lame is it to have your letter to the editor retracted?

On the other hand, it’s a way to bulk up your CV if it has a separate heading for RETRACTIONS. 🙂

Reading the RetractionWatch article on the Exley, Shaw et al letter to the editor, I see that there’s a controversy over an upcoming scientific conference in Italy that’s hosting antivax speakers. Set to appear at the conference in Rome next month are Yehuda Schoenfeld and Luc Montagnier.

“Many Italian scientists have distanced themselves from the “New Frontiers of Biology” conference. But the president of the order, Vincenzo D’Anna, who has rejected the nickname AntiVaxxer, defended the conference for questioning the safety of scientific developments.

He told The BMJ that he could not understand “all the fuss” about the conference. “Should someone have something to complain about, there will be an hour for open questions at the end of it,” he said.

But Pier Luigi Lopalco, an epidemiologist at the University of Pisa, called the conference unscientific, claiming that “the theme of safety is the common mystifying argument used by no-vax activists.”

Conference participants will include Yehuda Schoenfeld, a controversial immunology researcher from Tel Aviv, Israel, who claims that vaccines contain nanoparticles that cause immune pathologies.

Also attending is Luc Montagnier, who won the Nobel Prize along with two other researchers for discovering HIV. More recently he is known for backing the now discredited purported link between vaccines and autism…
Roberta Villa, a press officer at Asset (Action Plan on Science in Society Related Issues in Epidemics), said, “The problem is not just about vaccines. Italy has been facing a contrarian culture from some doctors and scientists for some time. New Frontiers of Biology is only one example.”

http://www.bmj.com/content/360/bmj.k711

I guess the “contrarian culture” in Italy might also include the court ruling that cellphone use caused a brain tumor, and prosecutors getting manslaughter convictions of scientists who failed to predict earthquakes.
