Michael Shermer’s Believing Brain

This looks like it may well be a good year for scientific books on belief. My paperback “The Science of Superstition” is out later this month and Robert Park (author of Voodoo Science) has a new book, “Superstition: Belief in the Age of Science,” out in August. Moreover, one of the high priests of skepticism, Michael Shermer, also has a forthcoming book on belief, tentatively entitled, “The Believing Brain.”

Back in February, Shermer gave a TED talk offering a tantalizing 15-minute glimpse of what will be in his new book. In true Shermer tradition, it was a very entertaining presentation, and I was very pleased that he highlighted the story of the ADE651 bomb detector in Iraq. I was even more delighted to see he referred to my work in his Slant article.

I agree with Shermer’s main “patternicity” idea that we are inclined to ascribe agency everywhere. He also referred to Susan Blackmore’s seminal work on signal-to-noise thresholds, which shows that a lower threshold for detecting signals in noise is strongly associated with a propensity for supernatural belief. In much the same way as I argued in “The Science of Superstition” (formerly known as SuperSense), Shermer supports the idea that the natural human inclination is to believe, and that skepticism and the scientific approach are unnatural. This, of course, is an old idea, and we are both indebted to the great 18th-century Scottish philosopher David Hume, who said over 200 years ago,

“We find human faces in the moon, armies in the clouds; and by a natural propensity, if not corrected by experience and reflection, ascribe malice or good-will to everything, that hurts or pleases us.”

Michael uses the modern language of statistics to explain the human propensity for detecting all manner of patterns, which we have documented so often on this blog, as Type I errors – rejecting the null hypothesis when it is actually true. In other words, saying something is present when in fact it is not (a false positive). This is much better than making a Type II error – failing to detect a real signal that is in fact there (a false negative).
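For readers who like to see this trade-off in action, here is a minimal sketch in Python (with entirely made-up numbers) of a noisy “rustle detector”: lowering the decision threshold trades Type II errors (missed tigers) for Type I errors (fleeing from the wind).

```python
import random

random.seed(1)

def observe(tiger_present):
    # Observed signal strength: noise alone, or noise plus a real signal.
    return random.gauss(1.5 if tiger_present else 0.0, 1.0)

def error_rates(threshold, trials=100_000, p_tiger=0.01):
    false_alarms = misses = tigers = winds = 0
    for _ in range(trials):
        tiger = random.random() < p_tiger
        alarm = observe(tiger) > threshold
        if tiger:
            tigers += 1
            misses += not alarm        # Type II: a real tiger, but no alarm
        else:
            winds += 1
            false_alarms += alarm      # Type I: only the wind, but we flee
    return false_alarms / winds, misses / tigers

for threshold in (0.0, 1.0, 2.0):
    type1, type2 = error_rates(threshold)
    print(f"threshold {threshold}: Type I rate {type1:.2f}, Type II rate {type2:.2f}")
```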

Why are Type II errors a disadvantage, and how could a Type I bias have been selected for? Stewart Guthrie, in his book “Faces in the Clouds,” argues that our intuitive pattern processing biases us towards seeing faces, which leads us to assume that hidden agents surround us. Building on Hume’s observation that “we find human faces in the moon, armies in the clouds,” Guthrie presents the case that our mind is predisposed to see and infer the presence of others, which explains why we are prone to see faces in ambiguous patterns. If you are in the woods and suddenly see what appears to be a face, it is better to assume that it is one rather than ignore it. It could be an assailant out to get you. Why else would they be hiding in the shadows? In this case it is always better to err on the side of caution.

Of course, there is still much further research to be done, such as why there are individual differences and why our bias towards Type I errors increases under certain circumstances. These are some of the questions we are currently researching in our lab, but I look forward to reading Michael’s account, which I know will be eminently entertaining and engaging.


3 responses to “Michael Shermer’s Believing Brain”

  1. While I think Shermer’s “patternicity” argument has a lot of plausibility, and I have recited it myself in fact, there is one specific point in the construction of his argument that has always bugged me:

    Since *the cost of making a Type I error is less than the cost of making a Type II error*, and since there’s no time for careful deliberation between patternicities in the split-second world of predator-prey interactions, *natural selection would have favored those animals most likely to assume that all patterns are real*.

    (emphasis mine)

    I do not think the second emphasized phrase follows from the first. It is not enough that the cost of the error is less… the cost of the error multiplied by the probability of incurring it must be less than the corresponding expected cost of the alternative.

    I suppose one could argue that this is implied in a sophisticated understanding of the word “cost”, but I don’t like how Shermer generally glosses over this point, especially since I think it is one of the biggest potential weaknesses in his account: his description gives the account logical plausibility, but we have no freakin’ idea whether it is also mathematically plausible. It’s quite conceivable that the odds that the rustle in the grass really is a tiger are so low that an organism biased towards Type I errors would be more likely to starve to death from constantly fleeing imaginary predators than to actually evade any tigers. Meanwhile, his Type II-biased cousins generally prosper, with only an occasional one here and there getting eaten, which is more than an acceptable cost from the point of view of natural selection.
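    To put rough numbers on the point (every figure below is invented, purely to show that the base rate can flip the conclusion):

    ```python
    # Invented figures only: with a low enough base rate, "always ignore"
    # can beat "always flee" on expected cost, despite the asymmetric costs.
    p_tiger = 0.001      # chance that a given rustle really is a tiger
    cost_flee = 1.0      # energy wasted fleeing from the wind (Type I cost)
    cost_eaten = 500.0   # fitness cost of ignoring a real tiger (Type II cost)

    expected_cost_always_flee = cost_flee                # paid on every rustle
    expected_cost_always_ignore = p_tiger * cost_eaten   # paid only for real tigers

    print(expected_cost_always_flee)    # 1.0
    print(expected_cost_always_ignore)  # 0.5 -- here the skeptic strategy wins
    ```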

    Without the mathematics to show that a Type I bias really is preferable, Shermer’s account is nothing more than a lot of talk. Intuitively, I happen to think he’s right, but in his talks and articles I do not feel he is sufficiently guarded. Perhaps his book will be more circumspect…

    It occurs to me as I type this that a mathematical model showing a preference for Type I-biased organisms could potentially even explain the heterogeneity that you allude to (“there is still much further research to be done, such as why there are individual differences”). It’s just conceivable that there could be an interaction between patternicity and certain environmental conditions (e.g. prevalence of food, prevalence of predation, etc.) that mirrors the interaction between host and parasite, an interaction we already know to mathematically support heterogeneity in a population.

    If you’ll pardon a “Just So” flight of fancy on my part: We could imagine a population of prey organisms where some individuals are 90% biased towards Type I errors, and others are 70% biased (whatever that would mean, but run with me here for a second). If conditions of predation were static, we’d expect one of the two phenotypes to become dominant. But if the local density of predators has a cyclic fluctuation, then the ratio of the two phenotypes in the prey population could conceivably track that cycle. In years with a high predator population, the 70%-biased individuals tend to get eaten before they can reproduce, favoring the 90%-biased individuals; while in years with a low predator population, the 70%-biased individuals are able to gather more food than their cousins and therefore out-reproduce them.
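    A deterministic toy version of that cycle (all parameters invented, purely to show the shape of the dynamics such a model could produce):

    ```python
    import math

    # Toy replicator dynamics: two prey phenotypes under a predator
    # density that cycles over the years. All numbers are invented.
    freq_90 = 0.5  # starting frequency of the 90%-biased (jumpier) phenotype

    for year in range(20):
        predators = 0.5 + 0.4 * math.sin(year)   # cyclic predator density
        # Invented trade-off: jumpy animals forage less but get eaten less;
        # relaxed animals forage more but get eaten more.
        fitness_90 = 0.90 - 0.2 * predators
        fitness_70 = 1.00 - 0.6 * predators
        mean_fitness = freq_90 * fitness_90 + (1 - freq_90) * fitness_70
        freq_90 *= fitness_90 / mean_fitness      # standard replicator update
        print(f"year {year:2d}: predators {predators:.2f}, 90%-biased {freq_90:.2f}")
    ```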

    It sounds just plausible… IANAEvolutionary Biologist, nor am I a member of any profession such that I would have a prayer of working out the mathematical model required to show that it really is plausible.

    But in any case, I do wish Shermer would, even in his short talks and articles, give at least a sentence to the role that probability plays. He speaks as if there is a 50/50 chance that the rustle is a tiger, but clearly we know that’s not the case. It doesn’t entirely undermine his account, but it does ignore the one avenue which could potentially transform his account from “an interesting story” into “a workable scientific model.”

  2. brucehood

    I guess that Shermer’s response would be that the base-rate likelihood of a tiger rustling the leaves is always going to be much lower than the likelihood of the wind, which is much more common. However, the cost of responding is always going to be less than the cost of not responding – but you are right to question such matters.
    I think “The Selfish Gene” has a very clear section (if I remember correctly) about how environments fluctuate and change as the predominance of a particular gene changes. I always like to point out that increasing the “fit” of a gene to its environment will eventually lead to its demise.

  3. A very important book! And it is definitely a book about a problem:
    why our brains are dominated and our lives are controlled by memes, and we are living in memecracies and making self-destructive decisions.

    Almost all problems have solutions, as I have shown in: http://egooutpeters.blogspot.com/2011/03/my-rules-of-problem-solving.html

    For the Believing Brain, the name of the solution is BISINISENCEPHALIA – de facto rational people. Not an easy solution.
    Peter
