Categories
Antivaccine nonsense, Autism, Clinical trials, Complementary and alternative medicine, Medicine, Science, Skepticism/critical thinking

“There must be a reason”

For a change of pace, I want to step back from medicine for this post, although, as you will see (I hope), the study I’m going to discuss has a great deal of relevance to the topics covered regularly on this blog. One of the most frustrating aspects of being a skeptic and championing critical thinking, science, and science-based medicine is just how unyielding belief in pseudoscience is. Whatever realm of pseudoscience Orac happens to wander into, he finds beliefs that simply will not yield to science or reason. Whether it be creationism, quackery such as homeopathy, the anti-vaccine movement, the “9/11 Truth” movement, moon hoaxers, or any of a number of pseudoscientific movements and conspiracy theories, any skeptic who ventures into discussions of such a topic with believers will become very frustrated very fast. It takes a lot of tenacity to keep going back to the well to argue the same points over and over again and refute the same nonsense you’ve refuted over and over again. Most don’t have sufficient intestinal fortitude to keep at it, leading them to throw up their hands and withdraw from the fight.

Although on occasion I’ve blamed this phenomenon on “cultishness” and, make no mistake, I still think that there is an element of that in many of these movements, particularly the anti-vaccine movement, cultishness alone can’t explain why people hold on so tightly to beliefs that are so clearly and unequivocally not supported by science, such as the belief that vaccines are responsible for an “autism epidemic.” Then last week, what should pop up in the newsfeeds that I regularly monitor but a rather interesting article in Science Daily entitled How We Support Our False Beliefs. It was a press release about a study1 that appeared a few months ago in Sociological Inquiry, and the study was described thusly:

In a study published in the most recent issue of the journal Sociological Inquiry, sociologists from four major research institutions focus on one of the most curious aspects of the 2004 presidential election: the strength and resilience of the belief among many Americans that Saddam Hussein was linked to the terrorist attacks of 9/11.

Although this belief influenced the 2004 election, they claim it did not result from pro-Bush propaganda, but from an urgent need by many Americans to seek justification for a war already in progress.

The findings may illuminate reasons why some people form false beliefs about the pros and cons of health-care reform or regarding President Obama’s citizenship, for example.

Fortunately, this analysis has very little to do with whether invading Iraq preemptively was a good idea or not, whether it was right or not. I am not even going to address that question or my opinions on it, because doing so would distract from what is important, namely what mechanisms people use to hold onto beliefs that are clearly mistaken. The study uses as its model the widespread belief, from 2001 to 2004, that Iraq’s ruler Saddam Hussein had aided and abetted the terrorists who on 9/11/2001 crashed jetliners into the World Trade Towers, the Pentagon, and into a field in Pennsylvania when the passengers fought back. When I read this study, I immediately realized that its results might be more generalizable and in particular apply to the movements to promote unscientific medical practices that I routinely write (or rant) about. The widespread belief, persisting even to this day, that Saddam Hussein somehow had a hand in 9/11 resembles, in its resistance to evidence, the beliefs of the anti-vaccine movement and belief in quackery and other pseudoscience. As one of the investigators, Steven Hoffman, put it, “This misperception that Hussein was responsible for the Twin Tower terrorist attacks was very persistent, despite all the evidence suggesting that no link existed.”

One problem with this study that struck me right from the start is that the authors conveniently dodge the question of whether this mistaken belief was due to propaganda or not. Although it has been argued by many that the Bush administration promoted such a connection by innuendo and misinformation, this study suggests that there was a lot more to it than that. However, in a way, sidestepping the question of where the belief that Saddam Hussein was involved in 9/11 came from is not a huge problem, because the study is not so much about how false beliefs come to be widespread in the first place but rather why human beings keep holding on to them after evidence showing them not to be true is presented to them. As in the case of quackery, how belief in pseudoscience comes to be and why humans continue to cling to that belief even in the face of overwhelming contradictory evidence are two separate, but related, problems. Right in the introduction, the authors of this study lay the conflict on the line in a way that one could see as relevant to science-based medicine as well:

Explanations for this have generally suggested that the misperception of a link resulted from a campaign of innuendo carried out by the Bush administration that explicitly and implicitly linked Saddam with Al Qaeda. For example, Gershkoff and Kushner (2005:525) argue that “the Bush administration successfully convinced [a majority of the public] that a link existed between Saddam Hussein and terrorism generally, and between Saddam Hussein and Al Qaeda specifically.” We characterize this explanation as being about the information environment: it implies that if voters had possessed the correct information, they would not have believed in the link. Underlying this explanation is a psychological model of information processing that scholars have labeled “Bayesian updating,” which envisions decision makers incrementally and rationally changing their opinions in accordance with new information (Gerber and Green 1999).

In this article we present data that contest this explanation, and we develop a social psychological explanation for the belief in the link between Saddam and Al Qaeda. We argue that the primary causal agent for misperception is not the presence or absence of correct information but a respondent’s willingness to believe particular kinds of information. Our explanation draws on a psychological model of information processing that scholars have labeled motivated reasoning. This model envisions respondents as processing and responding to information defensively, accepting and seeking out confirming information, while ignoring, discrediting the source of, or arguing against the substance of contrary information (DiMaggio 1997; Kunda 1990; Lodge and Tabor 2000).
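For those who want to see what “Bayesian updating” amounts to formally (the notation here is mine, not the paper’s), a Bayesian updater revises confidence in a belief according to Bayes’ rule:

P(belief | evidence) = P(evidence | belief) × P(belief) / P(evidence)

In plain English: evidence that would be very unlikely if the belief were true should drive confidence in that belief down. Motivated reasoning, by contrast, describes what happens when the left-hand side is simply not allowed to move, no matter what evidence comes in.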

It was with a bit of trepidation that I decided to delve into this study because (1) the subject used as the topic of the surveys and interviews in the study is from conservative politics, which may bring charges of politicization of this study, particularly since other examples mentioned in the article are from the current debate about health care (the idiocy known as the charge of “death panels,” anyone?) or the “birther” movement, namely that merry band of conspiratorial morons who believe that Barack Obama is not a natural born citizen of the United States and therefore is ineligible to be President, a belief that defies all evidence and reason; and (2) (and much more importantly) I’m not a psychologist or sociologist. After all, since when have I shied away from taking on idiocy, either right wing or left wing? Be that as it may, again, one can see how this sort of a model could apply very well to belief in all manner of pseudoscience, the most prominent at the moment being the anti-vaccine movement. In fact, this model applies to all of us, and the choice of subjects was more for convenience than out of bias, as the authors take pains to point out that these sorts of flaws in reasoning are widespread among all political ideologies:

Our choice of subjects should not be taken to imply that the processes we are examining here are particular to conservatives: we expect that, had we conducted this study in the late 1990s, we would have found a high degree of motivated reasoning regarding the behavior of President Clinton during the Lewinsky scandal. Previous research on motivated reasoning has found it among respondents of all classes, ages, races, genders, and affiliations (see Lodge and Tabor 2000).

But what is this “motivated reasoning” about which the authors write and how did they look at it? What exactly did the authors do to test their hypothesis? The authors base their concept on that of cognitive dissonance. Delving back into my medical school psychology courses, I remember that cognitive dissonance derives from the observation that people do not like to be aware when they hold contradictory beliefs. Indeed, it is the name given to the feeling we have when we are made aware that we are holding two contradictory thoughts at the same time, and the strength of the dissonance depends upon the importance of the subject to an individual, how sharply the dissonant thoughts conflict, and how much the conflict can be rationalized away. Cognitive dissonance theory thus posits that, when faced with evidence or occurrences that challenge their beliefs, people will tend to minimize the dissonance any way they can without giving up those beliefs.

One classic example of cognitive dissonance often cited is that of smokers who try to rationalize their unhealthy habit. An even more classic example, described by Leon Festinger in the book When Prophecy Fails, is the existence of “end of the world” cults, who believe that the world will end on a certain date. Festinger infiltrated a cult that believed the world was going to be destroyed and its leader and followers would all be taken away by an extraterrestrial visitor before dawn on December 21, 1954. Dawn came and went with no visitor and no world-destroying cataclysm. Did the group disintegrate? That’s what you might think it would do, but instead it concluded that the cataclysm had been called off and that God had stayed his hand because the cult had “spread so much light.” Thus, the strategy for eliminating the cognitive dissonance between the cult’s beliefs and the undeniable fact that the world had not ended, nor had an alien visitor come, at the predicted time was not to conclude that the cult’s beliefs had been wrong, but rather to conclude that the cult had somehow stopped the catastrophe through its righteousness.

Now, on to the study. The authors chose a study population from precincts in counties that had voted most heavily for Bush in 2000, identifying them through voter registration records. Surveys were mailed to 1,062 voters, of which 12 were returned to sender. Of the remaining 1,050, 267 responded, for an overall adjusted response rate of 25.4 percent. Of these surveys, 21 were unusable in this study, so the analysis of the surveys is based on 246 respondents. Subjects who agreed to be interviewed (84, of whom 49 met the study criteria of having voted for Bush and believing that Saddam Hussein was somehow involved in the 9/11 attacks) were then subjected to what the authors termed a “challenge interview” to determine whether they were exhibiting Bayesian updating (the willingness to change one’s mind in the face of contradictory information from a trusted source) or motivated reasoning (resisting contradictory information). The exact wording of the challenge was, “…let’s talk about Iraq. As you see in these quotes, the 9/11 Commission found that Saddam Hussein was not behind the September 11 attacks. President Bush himself said, ‘This administration never said that the 9/11 attacks were orchestrated between Saddam and Al Qaeda.’ What do you think about that? [show newspaper clips]”
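As a quick sanity check, the reported numbers are internally consistent (this little script is mine, not part of the study):

```python
# Sanity check of the survey figures reported in the study.
mailed = 1062            # surveys mailed
undeliverable = 12       # returned to sender
responded = 267          # completed surveys returned
unusable = 21            # responses that could not be used in the analysis

deliverable = mailed - undeliverable         # 1,050
response_rate = responded / deliverable      # ~0.254
analyzed = responded - unusable              # 246

print(f"Adjusted response rate: {response_rate:.1%}")  # 25.4%
print(f"Respondents analyzed: {analyzed}")             # 246
```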

Data were analyzed as follows:

First, we examined whether our respondents deflected the information, and we categorized the strategies that they used to do so. Second, to conduct a more stringent test of the motivated reasoning hypothesis, we examined whether respondents attended to the contradictory data at all. Lupia (2002) argues that Bayesian updating happens in three stages: to successfully change opinion, a piece of information must be attended to, remembered, and used in decision making. The first stage, attention, is a prerequisite for the second and third stages. By coding whether our respondents attended to the information we produced a minimum estimate for motivated reasoning, which can also happen at the second or third stages.
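Just to make that coding logic concrete, here is how I would sketch it; this is my own illustration, not the authors’ actual coding instrument:

```python
# Hypothetical sketch of the coding scheme implied by Lupia's three stages.
# A respondent counts as a Bayesian updater only if the contradictory
# information is attended to, remembered, and used in decision making;
# coding on attention alone thus yields a minimum estimate of motivated
# reasoning, since failure at the later stages also counts.
def classify_respondent(attended: bool, remembered: bool, used: bool) -> str:
    if not attended:
        return "motivated reasoning (information never attended to)"
    if not remembered:
        return "motivated reasoning (information not remembered)"
    if not used:
        return "motivated reasoning (information not used in decision)"
    return "Bayesian updating"

# Example: a respondent who hears and recalls the challenge but deflects it.
print(classify_respondent(attended=True, remembered=True, used=False))
```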

What did the authors find? Basically, only 2% (of 49 interviewees, that amounts to a single respondent) used Bayesian updating, changing their minds about the Saddam Hussein-9/11 link in response to the information provided that challenged their belief. 14% denied that they actually believed in a link at all, even though they had responded in writing in the survey that they believed there had been a strong link between Saddam Hussein and 9/11. Most of the others ignored the evidence against their belief and launched straight into arguments about why the war against Iraq was justified, thereby reducing their cognitive dissonance by asserting, in essence, that the war was a good idea even if Saddam Hussein had not been involved in 9/11. This study had a fair number of weaknesses inherent in studies of this sort, although overall it’s not bad. For example, it didn’t look at whether the survey and challenge interviews changed any minds later, after the respondents left. Nor is it clear, given that the interviews were carried out in 2004, whether this study accurately showed the origin of the erroneous beliefs about Saddam Hussein having helped al Qaeda. Rather, it probably showed more of the reasons for the persistence of such beliefs, regardless of how they formed. However, as I mentioned before, this is probably not a huge flaw, in that the study was designed more to look at how people support their false beliefs than at how they formed those beliefs in the first place.

From their interviews, the authors postulated a mechanism they refer to as inferred justification, which they describe thusly:

Finally, our interviews revealed an interesting and creative reasoning style that we call inferred justification: recursively inventing the causal links necessary to justify a favored politician’s action. Inferred justification operates as a backward chain of reasoning that justifies the favored opinion by assuming the causal evidence that would support it. As with the situational heuristics described above, respondents begin with the situation and then ask themselves what must be true about the world for the situation to hold.

None of this should come as any surprise to skeptics and supporters of science-based medicine. It’s nothing more than a fancy way of describing the flaws in reasoning that virtually all boosters of unscientific medicine use: post hoc ergo propter hoc reasoning and cherry picking the small amount of evidence that supports their belief while ignoring the vast majority of the evidence that does not. Forget politics. Forget “liberal” versus “conservative.” All this study is saying is that people who have deep emotional ties to a belief will try very hard not to have to give up that belief. Regardless of ideology, the results and conclusions shouldn’t come as any surprise. In fact, we see the same sorts of “inferred justification” all the time across the political spectrum. If Tom Harkin, for example, were to take a similar survey followed by a challenge interview chock full of posts from SBM showing that the “alternative medicine” he believes in doesn’t work, I have no doubt that it would have no effect on his advocacy for NCCAM (in fact, it didn’t; when confronted with a string of negative studies from NCCAM, Harkin simply complained that NCCAM’s studies are all negative) or on his trying to slip provisions into the health care reform bill that would require the government to pay for “alternative” medicine.

Be that as it may, let me give you an excellent example in a realm we deal with regularly: the antivaccine movement. Let’s get a bit more specific than that. In the U.S., from the late 1990s through the middle part of this decade, the anti-vaccine movement latched on to the mercury-containing preservative (thimerosal) used in many childhood vaccines as a cause of autism. They based this belief on the correlation between a rise in the number of autism diagnoses beginning in the early to mid-1990s and the expansion of the vaccine schedule to include more vaccines, even going so far as to call the rise in autism diagnoses an “autism epidemic.” In 1999, despite little evidence that thimerosal-containing vaccines (TCVs) were in any way associated with autism or other harm, authorities called for the phase-out of TCVs in favor of thimerosal-free alternatives. By the end of 2001, the only childhood vaccines still containing thimerosal were flu vaccines, and few children received them. The total dose of mercury received by children from vaccines plummeted to levels not seen since the 1980s. And what happened to autism diagnoses?

They continued to rise.

I’ve blogged about this topic many times. It’s unequivocal that autism rates have continued to rise since 2001, and alternative explanations for the rise in autism rates are quite compelling, specifically broadening of the diagnostic criteria, diagnostic substitution, and increased awareness, along with evidence that, correcting for these changes, the “true” prevalence of autism has probably not changed much over decades; i.e., there is no “epidemic” of autism. In response, some who cling to the mercury hypothesis claim that even a trace of mercury would still cause this autism “epidemic,” failing to explain why we didn’t see such an epidemic decades ago, when thimerosal was first used in a relatively small number of vaccines. Other members of the anti-vaccine movement have moved on to more difficult-to-falsify hypotheses, such as different “toxins” in vaccines (the latest of which is squalene) or the concept that when it comes to vaccines we are giving “too many too soon.” The reason, of course, is that it is the belief that vaccines cause autism and all sorts of other harm that is driving the anti-vaccine movement. When more and more evidence fails to support this belief, rather than giving up the belief, the anti-vaccine movement either ignores the data and moves on to another hypothesis, expecting science to play Whac-A-Mole with each new outlandish claim; dismisses or minimizes the data as being due to a conspiracy by big pharma to hide The Truth About Vaccines; or finds a way to twist the science to be either neutral or even supportive of its ideas. In the process, the anti-vaccine movement does what this study describes very well: It infers justification by recursively inventing links between vaccines and autism.

Another good example is the reaction to revelations about Andrew Wakefield. Andrew Wakefield, as you may recall, is the British gastroenterologist who in 1998 published a study in The Lancet that claimed to find a link between the MMR vaccine and “autistic enterocolitis.” This study, aided and abetted by truly irresponsible journalism, launched a panic in the U.K. that is only now starting to abate. In the interim, measles, once thought conquered, has become endemic again in the British Isles. In any case, it matters not to the anti-vaccine movement that (1) Wakefield’s Lancet study was poorly designed and utterly refuted by later studies; (2) investigative journalist Brian Deer discovered and published that Wakefield received £435,643 in fees, plus £3,910 in expenses, from lawyers trying to show that the MMR was unsafe; (3) the PCR laboratory that Wakefield used for his work was so poorly run that it apparently had no knowledge of the concept of a negative control; and (4) in 2009 Brian Deer unearthed and published evidence strongly suggesting that Wakefield had falsified data in his 1998 Lancet paper. Instead of abandoning the hypothesis that the MMR vaccine somehow causes autism, adherents cling all the more tightly to it, claim that the contradicting data were all a plot by the government and pharmaceutical companies to discredit Wakefield and suppress The Truth About Vaccines, and even go so far as to circulate “We support Dr. Andrew Wakefield” petitions around the Internet. Meanwhile, concerned that a TV special report by NBC’s Dateline would show Wakefield in an unfavorable light (they needn’t have worried overmuch), the anti-vaccine propagandists at Age of Autism preemptively launched a strike in the form of more of the fawning posts about Wakefield and attacks on Brian Deer that they’ve published over the last couple of years. Indeed, after the special, they’re continuing to spew praise of Andrew Wakefield, attacks on Brian Deer, and sliming of Paul Offit, as well as serving as a mouthpiece for Wakefield’s attempts at spin (1, 2, 3). Such is the cognitive dissonance that must occur as each new revelation about Wakefield’s incompetence, conflicts of interest, and scientific fraud leaks out. In fact, this is the usual M.O. when it comes to science looking at the claims of any pseudoscientist, be it Hulda Clark, Mark and David Geier, Tullio Simoncini, or whoever the woo-meister du jour is.

Finally, I think it’s worth looking at what the authors concluded in this study:

The main theoretical implication of our research is that “knowledge” as measured on surveys is partly a by-product of the attempt to resolve the social psychological problem of cognitive dissonance. The practical implication of this is that, although scholars have shown a correlation between the perception of links between Iraq and Al Qaeda and support for the war in Iraq, we cannot conclude from this correlation that misinformation led to support for the war. Rather, for at least some respondents, the sequence was the other way around: support for the war led to a search for a justification for it, which led to the misperception of ties between Iraq and 9/11. This suggests a mechanism through which motivated reasoning may be strongest when the stakes are highest. It is precisely because the stakes of going to war are so high that some of our respondents were willing to believe that “there must be a reason.”

In other words, the stronger the emotion behind the belief, the more likely a person is to fall into the trap of using cognitive errors to justify that belief. The key phrase is in the title of the article and in the conclusion, and that phrase is “there must be a reason.” Think about it and how often we hear that sort of a statement in the context of topics relevant to SBM. For example, “there must be a reason” that:

  • my child has autism (it’s the vaccines).
  • there are so many children with autism (it’s the vaccines).
  • there is not yet a cure for cancer (big pharma’s holding out on us to protect its profits).
  • my back pain got better (it must be the acupuncture, not placebo).
  • I rejected chemotherapy for my breast cancer and I’m still alive (chemotherapy is useless and “natural healing” is better).

The list goes on, and all of these are very emotionally charged topics. After all, in the case of the vaccine-autism belief, what could be more emotional than the bond of a parent to her child? This study doesn’t really break any major new ground, but it does remind me of what I’ve long known, namely that, as Richard Feynman once famously said, “The first principle is that you must not fool yourself – and you are the easiest person to fool.” Besides all the ways we can fool ourselves listed by Harriet Hall, trying to minimize cognitive dissonance and using inferred justification are but two more.

It also makes me wonder about two things. First, is there a way of taking advantage of these psychological mechanisms to persuade people who hold pseudoscientific views to accept science? In other words, can cognitive dissonance be reduced in a way that doesn’t require a person to reject science, cling ever more tenaciously to pseudoscience, and invent conspiracy theories? Second, what beliefs do I hold that are more akin to inferred justification or strategies to reduce cognitive dissonance than beliefs based on strong science and firm evidence? I’m just as human as any of the participants in this study. Indeed, any skeptic who thinks he or she is not just as prone to such errors in thinking is not a skeptic but suffering from self-delusion. The only difference between skeptics and non-skeptics, scientists and nonscientists, in this regard is that skeptics try to make themselves aware of how human thinking can go wrong and then act preemptively to try to keep those normal human cognitive quirks from leading them astray. Indeed, guarding against these normal human failings when it comes to drawing conclusions about the natural world is the very reason we need science and why we need to base our medicine on science. If we do not, and if we further do not at every turn gird ourselves with science, skepticism, and critical thinking against pseudoscience and the need to believe, it won’t be long before we are indistinguishable from what we oppose.

It’s because there must be a reason.

REFERENCE:

1. Prasad, M., Perrin, A., Bezila, K., Hoffman, S., Kindleberger, K., Manturuk, K., & Powers, A. (2009). “There Must Be a Reason”: Osama, Saddam, and Inferred Justification. Sociological Inquiry, 79(2), 142-162. DOI: 10.1111/j.1475-682X.2009.00280.x

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

33 replies on “There must be a reason”

This makes me wonder how many erroneous beliefs I have that, given the evidence of their errors, I am still keeping and trying to justify.

Brilliant post… as somebody who has traveled amongst political and religious realms, I’ve found that eventually the voices all start to sound familiar no matter the topic. Liberals, conservatives, scientists, woomongers – most seem to seek out that which reinforces their world view.

Reference – anything by PZ Myers or Christians. They both content themselves with low hanging intellectual wins to puff up their self-confidence.

As the other poster says, the proper response to this is silence as we ponder how many of our beliefs are being held together by shreds of emotion, rather than any sort of logic or evidence. Sobering.

Reference – anything by PZ Myers or Christians. They both content themselves with low hanging intellectual wins to puff up their self-confidence.

If that was true, then it would also apply to Orac, and pretty much anyone else that you care to mention.

Who gets to decide which issues are worth highlighting? And is it not the case that most of the issues that Orac blogs about — scientifically, at least — are “low hanging”, intellectually speaking? Perhaps you simply don’t understand the reason for that?

Also, who/what exactly would you consider as worth highlighting? Is education in and of itself not worthy? Are the dangers of building an epistemic foundation on faith — be it in religion, pseudoscience, whatever — not worth highlighting, even as a constant reminder of where that can lead?

I’d suggest that if you are having difficulty discerning between “scientists [and] woomongers”, the problem is perhaps your own. While I doubt that you meant to suggest that you really cannot tell the difference between science and woo, the vast majority of people really can’t. That is why people like Orac and PZ, as well as many others, spend so much time highlighting the kind of reasoning, and consequently the outcomes, that are all too common.

And it is demonstrably false that “most seem to seek out that which reinforces their world view.” Some people certainly do, but that isn’t necessarily the issue (after all, if you can defend your position with copious amounts of evidence, you’d have to have a good reason for seeking out alternative explanations).

But the very fact that the likes of Orac and PZ are critics of certain modes of thinking suggests that they do in fact seek out that which does not reinforce their “worldview” (whatever that even means), even if it is usually to criticize it. You can’t exactly say that they don’t expose themselves to differing points of view.

Indeed, what else is science if not a form of applied skepticism which necessarily forces you to confront other views?

And if you believe that Orac and PZ differ in any meaningful sense, please explain, because I’m not convinced that you can do it in a logically consistent fashion.

IMO an important aspect of this is that people develop what they consider wholesome world views. When something happens to cause gaps in this world view, they become desperate to find something to fill the gap. That something may be an insight, a philosophy, or assertions of fact or fiction. The differences between creative writing, wild conspiracy theory, and propaganda created to drive or excuse policy are not important.

As an example, a friend has long felt that the US coasts were patrolled by the navy, the skies by the air force. That our military was the best in the world. We had not been attacked. We were safe. He felt safe and felt a sense of pride in his service in the military. 9/11 changed all that. He couldn’t accept that a bunch of nothing-special guys armed with box cutters could bring down the WTC towers, blow a hole in the Pentagon, and threaten the White House. At a gut level he ‘knows’ that the skies are protected and the ‘official story’ can’t be right.

The dust hadn’t settled before conspiracy sites, particularly the always active ones in the Middle East, had speculated that it had to be an inside job. Probably done by Israel to smear the Arabs and justify an invasion.

This reasoning, such as it is, resonated with my friend. An inside job would mean there was still a mighty military force patrolling the border. It wasn’t a failure of the noble American defenses. It was an inside job. A foul betrayal.

This both patched the hole that 9/11 caused in his world view, justifying his outrage, and explained the disaster in terms that preserved his assumptions. In effect he is bending over backwards, performing a lateral arabesque, to avoid having to change his world view.

In the end it is not a fight between having to buy into a massive and intricate conspiracy involving thousands of people scattered around the world and a much simpler explanation involving nineteen guys with box cutters. The choice is between a convoluted and unlikely conspiracy theory and a world view, adopted in childhood, that lends a sense of safety, stability, and pride.

As incongruous and shocking as these wild conspiracy theories, and the fact that some people buy into them, may be, it doesn’t begin to come close to the shock these people would feel if they were forced to admit that their world view and assumptions were wrong.

In psychology this is not unusual. Traumatic events are usually met with denial. With few variations, whether you’re facing a major disaster or a death, the five stages are roughly the same:

Denial
Anger
Bargaining
Depression
Acceptance

Most people work their way through the stages in time. Chronic deniers are usually working their way, ever so slowly, through their particular set of traumas. Truthers can’t deal swiftly with 9/11 and the fact that nineteen guys with hand tools caused this disaster. Autism folks are working on accepting that autism happens to anyone’s kids and we don’t know why. Birthers are working on accepting the fact that a Democrat, a black Democrat, got elected. But it has to be noted that people working through the five stages don’t necessarily make rapid progress. They can get hung up on, and make a lifestyle of, any of the stages.

The lie that Iraq had something to do with 9/11, and the story that they had WMDs, were psychological patches to cover up the simple fact that we invaded a country with no good reason. That the POTUS lied and our military was used as a plaything by a man-child, in part, seeking to level the status between him and his father. That neocon dreams of hegemony and control were more important than the blood and treasure of the nation. That oil industry dreams of publicly funded conquest of massive supplies of oil that would be privatized for money and power were worth the deception and loss of trust. The cost in lives and treasure and trust, still rising every day, is a very large and bitter pill.

Denial is so common that it might be termed a normal way to deal with trauma and avoid facing uncomfortable facts. Most people will, in time, face the facts and accept reality. Others will live in denial indefinitely. A few will make it part of their identity as “skeptics” and feel more comfortable if they can get others to deny reality.

Thanks for the work, Orac, it was well done.
I see this as a study indicating that yes, indeed, people practice the post hoc fallacy often.
There must be a reason that ______________ is true.
It doesn’t really matter if blank is young-earth creationism or UFOs, “it must be true” so let’s find a reason why.

The whole Saddam-caused-9/11 garbage was pushed by Bush, Cheney, Rumsfeld, and the other members of the PNAC platoon that controlled the White House and the Pentagon at the time. (As Rumsfeld said, in his orders to his subordinates to try and find a connection to Saddam in the attack: “Go massive. Sweep it all up. Things related and not.”)

They were looking for a pretext to invade Iraq because PNAC itself was bamboozled by Ahmad Chalabi — a convicted embezzler who had to leave Jordan in the trunk of a car because he’d destroyed the Bank of Petra with his little stunts — who told them that it would be easy-peasy lemon-squeezy to topple Saddam and install a US-friendly and Israel-friendly régime. As everyone outside of the PNAC people knew, this was arrant nonsense, yet the lure of stomping a country into the dirt for ‘righteous’ reasons (a country that just happened to have a shitload of oil) was too strong.

Interestingly, Chalabi has longstanding ties to Iran, which is the one entity besides himself, Al-Qaeda, and various contracting firms to actually benefit from the invasion and occupation of Iraq. Why? Because Iran’s biggest historical enemies are the US, Israel and Iraq. By invading and all but destroying Iraq, we bogged ourselves down so that our ability to react to events is hampered. Israel is also a good deal more vulnerable now than when the pragmatic Saddam was in power. (As explained here, Saddam, like all the other Middle Eastern rulers, hated Al-Qaeda with a passion because they were trying to upset the nice gig he and the other rulers had going as the gas stations for the rest of the world.)

Partially OT: As a student, I was given the opportunity to take a seminar with Leon Festinger himself. He was totally brilliant, hysterically funny, sarcastic, and smoked *way* too much. I didn’t realize it, but he was probably also very ill at the time – he died soon thereafter. I’m glad that his work continues to influence us.

Orac:

I question whether the argument about a supposed connection between Saddam Hussein and 9/11 really supports your thesis. Contrary to what is frequently said — that “Bush lied”, etc. (see, e.g., Phoenix Woman above) — I don’t recall that the Bush administration ever made such a connection, and couldn’t find one when I checked. Rather, a number of prominent Democrats alleged that the Bush administration had made that allegation (after fighting in Iraq became unpopular). For example, Sen. Carl Levin (D-MI), of the Senate Select Committee on Intelligence (SSCI), said Nov. 14, 2005, on CNN’s “American Morning”:

“But before the war, the president was saying that you cannot distinguish between Saddam Hussein and Iraq.

As a matter of fact, he said that so often they tried to connect Saddam Hussein with the attackers on us on 9/11, so often, so frequently and so successfully, even though it was wrong, that the American people overwhelmingly thought, because of the president’s misstatements that, as a matter of fact, Saddam Hussein had participated in the attack on us on 9/11.”

http://transcripts.cnn.com/TRANSCRIPTS/0511/14/ltm.06.html

And, before the invasion of Iraq, Sen. Jay Rockefeller (D-WV), of the Senate Select Committee on Intelligence, said in October 2002:

“…He [Saddam Hussein] is working to develop delivery systems like missiles and unmanned aerial vehicles that could bring these deadly weapons against U.S. forces and U.S. facilities in the Middle East.

And he could make those weapons available to many terrorist groups which have contact with his government…

At the end of the day, we cannot let the security of American citizens rest in the hands of someone whose track record gives us every reason to fear that he is prepared to use the weapons he has against his enemies…

As the attacks of September 11 demonstrated, the immense destructiveness of modern technology means we can no longer afford to wait around for a smoking gun. September 11 demonstrated that the fact that an attack on our homeland has not yet occurred cannot give us any false sense of security that one will not occur in the future. We no longer have that luxury…”

http://justoneminute.typepad.com/main/2005/11/hardly_seems_fa.html

So, unlike the other examples you cite, the source of the belief that there was a direct pre-9/11 connection between Saddam Hussein and Al Qaeda stems more from partisan political statements from Democrats (who were using that as part of their “Bush lied” meme) than from any argument from the Bush 43 administration. So, it seems to be an erroneous belief that backfired on those putting it out. The Democrats were trying to discredit the Bush 43 administration, and, instead of their message being remembered, the accusation was remembered and probably increased support for the administration.

wfjag, I disagree.

September 25, 2002
PRESIDENT BUSH: That’s a — that is an interesting question. I’m trying to think of something humorous to say. (Laughter.) But I can’t when I think about al Qaeda and Saddam Hussein. They’re both risks, they’re both dangerous. The difference, of course, is that al Qaeda likes to hijack governments. Saddam Hussein is a dictator of a government. Al Qaeda hides, Saddam doesn’t, but the danger is, is that they work in concert. The danger is, is that al Qaeda becomes an extension of Saddam’s madness and his hatred and his capacity to extend weapons of mass destruction around the world.

2004 Cheney (interview transcript): http://www.msnbc.msn.com/id/3080244/

Remaining committed to certain beliefs despite evidence to the contrary certainly invites psychological analysis, but one shouldn’t forget the social factors as well. My guess is that parents who get swept up with anti-vaccine beliefs and alternative therapies more readily find themselves part of a “community” (as dispersed as this community is) that seems supportive and energized. It isn’t simply a large number of individuals thinking this all by themselves; it’s the synergy of many people thinking it collectively and collaboratively (and acting on it). In addition, I remember an anthropological essay I read a long time ago that pointed out that many people believe certain things because someone they trust tells them so, even if they have no personal experience or observation to back the claim whatsoever. It would be a mistake to assume that this “person of trust” needs to be a movement leader (e.g., a Jenny McCarthy)… in fact, my guess is that those who find McCarthy’s twaddle convincing have already found someone with the same ideas in their more immediate circle. McCarthy simply confirms them in their convictions.

Very interesting post, and while I do not disagree in any way with the intent of dissecting the holding of invalid beliefs, cognitive dissonance (itself a post hoc interpretation) is not the only means of interpreting the outcome generated by the study. There is a fundamental error in assuming that people reason with themselves, so to speak. Economic theory assumes a rational person, and economic theory is notoriously poor at predicting outcomes. Not all information is attended to equally, and certain beliefs will persist at great strength due to the inertia generated by that strongly held belief. Built upon the context in which a belief develops (for instance, seeking to place blame for a child having autism), when an answer is developed, that belief becomes like a boulder of near infinite mass rolling down a steep incline. Nothing will stay the momentum of this belief. Other pieces of information are irrelevant, as they are not attended to by some. There is no conflict for some persons in this situation, no dissonance; there may even be perfect harmony for such persons. They have their answer and do not evaluate the correctness of that answer.

BA
I’m not sure cognitive dissonance implies that the person is the mythical “rational actor,” and while I agree that simple disinterest in further information and no felt dissonance is probably a perfectly valid explanation in a lot of situations, I’m not sure how it would explain the phenomenon that, I believe, cog dis was initially proposed to explain (I think. I am not a psychologist.): that after having their beliefs (end of world, what have you) clearly disproven by events, people sometimes respond by behaving as if their beliefs have become even further, by going out and recruiting more, for example.

*further = *further confirmed. Go go typing skills.

…I am just spamming the hell out of this post.

I believe (this is purely opinion) that there are two reasons “why” we hold onto irrational beliefs even when confronted with overwhelming evidence indicating that the beliefs are not reality:

(1) our self-centeredness when considering any issue, i.e., “how does X affect me – will it protect me or hurt me.” It is human nature to adopt beliefs that reassure us of our immortality, and if the beliefs also give us neat prescriptions for things to “avoid” to stay safe, it is even more satisfying. E.g., I believe there is an environmental cause of autism, there is a group of people who believe vaccines cause autism, I avoid vaccines, I have a better chance of keeping my kids safe. It is a lot easier than coming to grips with the fact that our lives are basically insignificant and that s*(t happens.

(2) fundamental lack of rigorous mental development/scientific education. Not intelligence – so many intelligent people I know are college educated, unvaccinated, homeopathic pill-popping, creationist individuals with very clean colons…who are “bad at math.”

I wanted to add that the “leaders” of the groups that are interested in convincing others that the pseudoscientific beliefs are true fall into different categories (or combinations of categories). Some I think have personality disorders, some start out with a legitimate scientific theory but lose their objectivity, some are snake oil salesmen/women, preying on people for $.

This research is incredibly valuable, and thank you for writing about it, Orac. I think there’s an aspect of this model that requires further thought, and until I go read the actual paper to find out whether/how it is addressed, I wanted to point it out to others.

The cognitive dissonance theory of conceptual change/social development was a primary model for many years without anyone ever directly investigating its primary assumption: to wit, whether cognitive dissonance actually does cause people discomfort and therefore motivate change. When it finally occurred to someone to investigate that question, it turned out that the level to which individual people experience cognitive dissonance is distributed on a roughly bell-shaped curve much like many other intellectual or personality characteristics: That is, some people are either incapable of ever recognizing that they hold two contradictory ideas at once or feel no discomfort or motivation to change whatsoever when they do recognize it, other people are almost entirely incapable of holding contradictory ideas in the first place and/or are highly motivated to change by even the hint that they do – with most people lying somewhere on a spectrum in between these extremes. (I’d cite this research, but I don’t have the citations close to hand at this computer.)

Knowing this, I wonder both whether and how the researchers took it into account, and how it affects their model.

Damian says, “And if you believe that Orac and PZ differ in any meaningful sense, please explain, because I’m not convinced that you can do it in a logically consistent fashion.”

I know this isn’t aimed at me, but I’d like to respond. I believe Orac and PZ both stick mainly to the evidence. However, where PZ goes wrong is when he starts making religious or philosophical arguments, something Orac usually does not do. PZ will use strawman arguments about Christianity, he cherry picks examples and applies them to the whole, he sometimes makes philosophical assertions that he thinks are decisive when in reality these assertions have been recognized as false, even sophomoric, for centuries by all involved in the debates, and I think he did goal-post moving once (or maybe that one was just clarifying his position(?)).

I’ve even seen him quote from the Bible to bolster his point yet unknowingly has taken the verse completely out of context and got the meaning wrong (skeptics call that quote-mining). He is an example of the Dunning-Kruger Effect.

It doesn’t take a first year philosophy student to pick out some of the mistakes in PZ’s religious/philosophical claims either. All it takes is the usual skeptic critical reasoning skills. We’ve seen enough examples in other areas so why can’t we recognize the same false analogies, stereotypical statements, cherry picking, and other logical fallacies when “one of your own” uses them? He doesn’t need to use them…there’s plenty of valid arguments he can make (and he does make those too)…so why does he keep falling back on what I’d also classify as “low hanging intellectual wins”? Get that zinger in there whatever the cost to reality.

In my case, I see PZ’s errors easily (have a degree in comparative religions, and basic religious philosophy, among others) but I have to wonder in what other categories am I as wrong as he is? What dearly held beliefs or thoughts do I have that simply do not hold up to the evidence yet I continue to justify anyway?

It does make me a bit nervous knowing I have gaping truck-sized holes somewhere in my views. I do hope I’m honest enough with myself though that I will follow the evidence regardless of where it leads and regardless of what views or cherished ideas I need to put aside.

Indeed, the readers on HuffPost who are incredulous that some conservatives still believe the 9/11-Saddam Hussein connection are the same readers who are giving themselves enemas to protect against swine flu…

DLC wrote, “There must be a reason that ______________ is true.”

A little off topic, but related.

In my studies of history and the history of legislative activities (i.e. why legislatures have written the laws they do) I find that for almost every law on the books, every regulation, every court decision, and every executive decree, there is usually a known reason.

That reason may no longer be applicable. That reason may be filled with ad-hoc or post-hoc reasoning. That reason may be the result of cognitive dissonance or inferred justification (a new concept for me, but I like it). That reason may be to benefit an individual who happens to have political power. That reason may be sheer bigotry. But there is a reason.

To approach the problem more generally, when you look at any activity that a human being partakes in, there is a reason. A reason strong enough to motivate them to perform the activity.

The task of skeptics is to uncover and examine those reasons, expose them to the light of day, and question if those reasons really justify the action. In my experience, the reasons often do justify the action, but not always and not to the benefit of everyone.

While I study the legislative acts of the Michigan legislature over the past 150 years or so, I try to figure out why certain laws were established. There was a reason, some reason, which at the time made sense to the legislators. Usually the law itself makes some sort of sense. Occasionally you can find the reasons in legislative records, and occasionally from reading contemporary newspapers. Sometimes there is no available record which enables you to understand why a particular law was passed. But there was a reason at the time, even if the reason is lost today.

The problem is that some, possibly many, of the attitudes and opinions we individually and culturally adopt are founded on ideas and concepts where the reasons for them are clearly superseded by today’s knowledge, based on ad-hoc reasoning, or even basic bigotry. Yet, because it’s our culture, they are adopted without question and defended without thought. (And this is not limited to American culture; this phenomenon is present in every culture, worldwide.)

The primary task of skeptics, from Twain to Vonnegut, has been to expose the reasons for an action to critical inquiry. Sometimes with critical evaluation, sometimes with a horselaugh.

ah how fortunate i am to have grown up knowing the status quo wasn’t reliable…that what “they say” is often baseless, and that just because many people think so doesn’t make it true…
yet my son has had only the first vaccination…at two months of age…after which he suffered violent seizures and has been subsequently diagnosed with autism…
the way i approach life is as an iconoclastic empiricist (sp?)…always aware that the totality of my wisdom is personal and my truths purely subjective…
it might be a better world if everyone could make their heads work this way….
for example, no amount of propaganda can convince me that a group of people halfway round the world are a danger to me, so that i might justify seeking their death…or go to war against them…
but the person who comes to my door with intent to enter and do evil can fully expect a no holds barred battle…
the way in which my mind processes information simply doesn’t allow for superstitions unfounded by personal verification…and i don’t expect my experiences to necessarily extend to anybody else…nor my truths, nor my wisdom…
belief is derived of two words…by life…
mine therefore are defined by what life has shown to be so for my self…
good article by the way…and wonderful comments on the article

Is it helpful to distinguish between the reason why people adopt mistaken beliefs and the reason why they persist in those beliefs in the face of all the evidence?

#29 Moon is a case in point. S/he suggests that questioning the status quo and refusing to follow the herd somehow confer value upon their opinions. I think that this is to confuse being a contrarian with being a sceptic. It is your reason for going against the grain rather than going against the grain per se that is important.

If people take up a position on a proposition based not on reason but on their preconceived notions about the proposition maker you will never convince them that their position is wrong while they continue to believe that the proposition maker is righteous, wicked etc.

Mike Stanton wrote, ” I think that this is to confuse being a contrarian with being a sceptic.”

Which is a very good point. A skeptic lives to expose and evaluate the reasons for an action. If the reasons are sound, the skeptic accepts the action as a reasonable action.

A contrarian doesn’t care about finding out the reasons for someone taking an action. A contrarian is simply, like Professor Wagstaff of Huxley College, against it!

Don’t get too emotionally attached to ideas would seem the logical conclusion 🙂

Or turn science into a religion…the bestest religion evar. 😉

However, where PZ goes wrong is when he starts making religious or philosophical arguments, something Orac usually does not do. PZ will use strawmen arguments about Christianity, he cherry picks examples and applies them to the whole, he sometimes makes philosophical assertions that he thinks are decisive when in reality these assertions have been recognized as false, even sophomoric, for centuries by all involved in the debates, and I think he did goal-post moving once (or maybe that one was just clarifying his position(?)).

It’s hard to argue with you when you don’t provide any examples, to be honest. There’s no doubt that PZ sometimes goes too far, other times he overgeneralizes, and yet more times provides a lazy answer when a more rigorous analysis is required.

My contention is with what has become an almost universal attempt — among his critics, at least — to discredit him by pointing to one or two of his excesses, and then dismissing him without actually representing him in even a semi-fair light. That’s intellectually dishonest, and even though we are all prone to it, including PZ, he can at least point to his many excellent posts as evidence that he understands far more than some give him credit for. He produces some of the most consistently excellent and lucid science posts anywhere in the science blogosphere, and contrary to popular myth, he is as prolific with those kinds of posts as pretty much anyone else at scienceblogs (someone actually tested it). He has also written numerous terrific posts about the history of science and religion, including creationism, as well as some far more fantastic posts about contemporary religion than some are willing to let on.

Where he sometimes trips himself up is when he offers a very quick analysis — probably in between lectures, or other teaching commitments — and doesn’t give it the thought that is required. Those posts are really supposed to be a jumping off point for the commenters, but I freely admit that that shouldn’t be used as an excuse for bad arguments and faulty reasoning. But again, notice how his critics never admit that it is often his regulars who take him to task for those mistakes.

But PZ’s M.O. is fairly transparent — at least where his less involved posts are concerned. He offers an example of the excesses of religion for people to discuss, and leaves it at that. We can certainly debate whether that is conducive to a fair representation of religion, or whether he should spend more time on those posts, making sure that every aspect is rigorously examined beforehand, etc., but I’ve never seen criticism for criticism’s sake — even without a conscious effort to provide balance — as that much of a problem.

Indeed, we see it all of the time in other areas. I will admit that there are dangers to that approach — namely that some people become dogmatic in their approach to religion, influenced by the constant stream of criticism, and without actively seeking out the obvious and readily available exceptions (reasonable Christians, in other words, of which there are certainly many), to provide some perspective — but then, that’s why I, and many others, represent ourselves in the way that we do, hoping to show that it’s not necessary to fall into that trap.

I’ve even seen him quote from the Bible to bolster his point while unknowingly taking the verse completely out of context and getting the meaning wrong (skeptics call that quote-mining). He is an example of the Dunning-Kruger effect.

That may well be true, although I will mention that what is in context and what is out of context is not an exact science, as you well know. Also, it’s just too easy for people to turn around and effectively say that their religion is being misrepresented, when we all know that religion, by its very nature, is among the most plastic beliefs we have ever formed. So not only does the argument supposedly not touch their own religious belief, but the arguer is also somehow at fault for not having read their mind and divined exactly what that particular person actually believes. That’s often nonsense, I’m afraid, and it’s even more often disingenuous.

Apologists have been caught too many times either denying things that others know them to believe (or that they are later found to believe), or temporarily defending a vague form of deism in an effort to escape an unwelcome logical conclusion of their actual belief, knowing full well what they really do believe. Related to that, the entire concept of God itself is completely nebulous (and, I would argue, therefore incoherent) because most people simply won’t state what they believe and then defend it, fearing, at least rhetorically, that they will have to admit that it is unfounded and unsupportable. That makes it all too easy to accuse the atheist of attacking a strawman when in actual fact they’re not. And we wouldn’t have to guess if people who claim they wish to engage in a conversation were actually willing to define their terms clearly and honestly.

And in any case, as Raymond D. Bradley says in the introduction to his paper, “A Moral Argument for Atheism”:

The argument I am about to advance is intended mainly for a non-philosophical audience. So professionally trained philosophers may wonder at the fact that I say little about the God of philosophical tradition and much about the God of pulpit and pew.

For them I offer two brief explanations.

First: there is ample precedent for what I am doing. Socrates, for example, examined the religious beliefs of his contemporaries–especially the belief that we ought to do what the gods command–and showed them to be both ill-founded and conceptually confused. I wish to follow in his footsteps though not to share in his fate. A glass of wine, not of poison, would be my preferred reward.

Thus, like Socrates, I take issue with the God of popular belief, not the God of natural theology. And since God, in the minds of most westerners, is predominantly the God of the Jewish and Christian scriptures, I have little option other than to quote from the Bible freely so as to confront squarely the theistic beliefs that are my target and pre-empt charges of having misunderstood or misquoted my sources.

Second: the fact is that most of the big-name philosophers of religion who publish in academic journals such as Faith and Philosophy are themselves believers in the God of the Bible, not just the God of the philosophers. To do a little name-dropping, I have in mind the likes of William Alston, Peter van Inwagen, and Alvin Plantinga. All of these are, as Plantinga puts it, “people of the Word [who] take Scripture to be a special revelation from God himself”. None is averse to quoting chapter and verse of the Holy Scriptures–the morally palatable ones, anyway–in their publications as well as the pulpit.

William Alston, for example, claims: “a large proportion of the scriptures consists of records of divine-human communications,” and holds that God continues to reveal himself to “sincere Christians” of today in ways ranging from answered prayer to thoughts that just pop into one’s mind. Peter van Inwagen confesses: “I fully accept the teachings of my denomination that ‘the Holy Scriptures of the Old and New Testaments are the revealed Word of God.'” And Alvin Plantinga maintains: “Scripture is inerrant: the Lord makes no mistakes; what he proposes for our belief is what we ought to believe.” These views typify the kind of theism, viz., biblical theism, that I have undertaken to refute.

I agree. There’s nothing wrong with arguing against the God of the average believer. In fact, I would suggest that more people should be doing it, even if some see it as “low-hanging fruit”. If nobody is willing to engage with the God of the vast majority of the populace, is it any wonder that we see something like the widening of the wealth gap, only in this instance it has the effect of widening the intellectual gap?

I read a lot of academic philosophy, including the philosophy of religion, but you know as well as I do that most apologists are not interested in what are often long, drawn-out arguments that make use of technical language. So in that sense, your entire argument misses the mark. I’ve tried to discuss philosophical arguments with numerous apologists, but very few of them are interested. They just don’t believe in that God (until they need to, of course…and round and round we go).

It doesn’t take a first-year philosophy student to pick out some of the mistakes in PZ’s religious and philosophical claims, either. All it takes is the usual skeptical critical-reasoning skills. We’ve seen enough examples in other areas, so why can’t we recognize the same false analogies, stereotypical statements, cherry-picking, and other logical fallacies when “one of your own” uses them? He doesn’t need to use them; there are plenty of valid arguments he can make (and he does make those too). So why does he keep falling back on what I’d also classify as “low-hanging intellectual wins”? Get that zinger in there, whatever the cost to reality.

Again, without examples it’s rather a moot point. I simply can’t know whether I’d agree with you or not. But I’ve already admitted that there is some truth to the general point, and I’ve answered some of it already.

Please excuse the appalling editing. Damn…and…blast. I produced the bulk of the post, went off to do something else, and then came back and added to it, forgetting to proofread it again. Oh, well. It should just about make sense.

I’ve been reading Timothy D. Wilson’s book “Strangers to Ourselves”, in which he proposes that most of our thinking, emotions, attitudes, and behaviours are carried out by an ‘adaptive unconscious’. The adaptive unconscious is where the quick-and-dirty responses to life’s situations occur; it ‘communicates’ with the conscious mind through feelings. Our conscious thoughts are very much the junior partner and have little or no introspective ability for looking into the adaptive unconscious.

The adaptive unconscious is the result of long-term evolutionary pressures, and it has evolved to respond quickly and correctly (enough) to environmental and social events so that the individual survives and passes on their genes. The unconscious is adaptive in the sense that it ‘learns’ and incorporates fresh data into self-narratives (or world views). I believe that most of this incorporation is based not on unconscious deductive reasoning but on unconscious abductive reasoning: the adaptive unconscious takes an effect (the alpha male is angry) and produces a hypothesis to explain it (I’ve stolen his food). This forms a handy extension to the self-narrative (don’t steal the boss’s food), even though the hypothesis was never tested. I did say that the adaptive unconscious is quick and dirty; it only needs to be effective for the most threatening or emotive situations. The unconscious belief that ‘all snakes are deadly’ may not be logically true, but it is a better quick-and-dirty rule than the slower conscious thought ‘let’s take our time and look in a book to see whether this snake is dangerous or not’.

Wilson goes on to explore how the self-narratives built up in the adaptive unconscious may be out of step with our deliberate conscious self-narratives without our even realising it.

So it is quite possible for an individual to unconsciously hold two logically contradictory self-narratives (e.g. ‘I believe in a God who created the world in six days’ and ‘I believe in the scientific method’), and as long as the two self-narratives don’t conflict in the quick-and-dirty adaptive unconscious, there will be no feelings of dissonance passed forward to the conscious brain to worry about.

Similarly, if a belief (‘vaccination harmed my child’) is held strongly in the adaptive unconscious, then conscious contrary evidence is unlikely to carry enough emotive clout to undo the anti-vax belief. People put a lot of unconscious effort into defending their unconscious self-narratives. A challenge to an unconscious self-narrative is a challenge to one’s self-autonomy and generates strong defensive feelings, which the conscious mind then tries to elaborate into a logical argument.

All of the above ties in with the idea of motivated reasoning – it is just that most of the motivation is hidden from our conscious thoughts.

@Orac

It’s unequivocal that autism rates have continued to rise since 2001.

Citation, please. The CDC has been reporting the 1-in-150 number for almost a decade.

It must be for the same reason you cling with all your strength to the government’s lies. Good try, but this strategy of mixing up 9/11 with the moon hoax doesn’t work anymore.

For the life of me, I can’t see how anyone in their sane mind can still believe that the towers went into free fall because of the fire. What about the circular hole in the Pentagon, with not a single mark from the wings? What people want is for those questions to be answered, not a declaration from the president saying the official version should be accepted because it is true. More than half of Americans believe the official version is flawed.

So tell me again: which vaccine laboratory pays you to write this pile of garbage?

http://www.anovaordemmundial.com/

This is the biggest load of crap, pun intended, I have read from you yet, and that’s saying a lot.

