I Knew You Would Say That

New Scientist has pre-empted the publication of nine precognition studies by the eminent social psychologist Daryl Bem, due out in the Journal of Personality and Social Psychology later this year. The paper is drawing such attention because Bem reports nine experiments in which participants’ performance on various cognitive tasks was significantly predicted by events that happened after their responses were made. For example, in one study Bem reverses the classic priming effect. In priming, participants are presented with a positive word such as “beautiful” or a negative word such as “ugly”, and then have to make a speeded response to categorize pictures as beautiful or ugly, for example a picture of a flower or a picture of a wart. When the prime is congruent with the image (“beautiful” followed by a flower), participants respond faster than when the prime is incongruent. Bem reversed the sequence so that participants categorized the picture first and were presented with the prime afterwards. He still found a significant, albeit reduced, effect for the reversed prime. In other words, participants’ behaviour at time 1 was associated with events that had not yet happened at time 2.
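To make the paradigm concrete, here is a minimal sketch of how a priming effect is scored from response times (Python, with made-up numbers; none of this is Bem’s actual code or data):

```python
# Hypothetical trial records: whether the prime matched the picture's
# valence ("congruent") and the response time in milliseconds.
trials = [
    {"congruent": True,  "rt_ms": 520},
    {"congruent": True,  "rt_ms": 540},
    {"congruent": False, "rt_ms": 610},
    {"congruent": False, "rt_ms": 590},
]

def mean_rt(trials, congruent):
    """Average response time over congruent or incongruent trials."""
    rts = [t["rt_ms"] for t in trials if t["congruent"] == congruent]
    return sum(rts) / len(rts)

# Priming effect = incongruent RT minus congruent RT; a positive value
# means congruent primes sped responses up. In Bem's reversed design the
# prime appears *after* the response, so any positive effect is anomalous.
effect = mean_rt(trials, congruent=False) - mean_rt(trials, congruent=True)
print(effect)  # 70.0 with these made-up numbers
```

The same subtraction applies whether the prime precedes or follows the response; only the trial sequence changes.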

Does Bem have an explanation? Not really. What is the mechanism? The usual appeal to quantum theory, under which all bets are off. I know that Richard Wiseman is currently attempting to replicate Bem’s findings, and until then I must reserve judgement. It would be unscientific to dismiss a peer-reviewed article without waiting for confirmation. But something tells me that this is not going to hold up as a reliable finding.

UPDATE: Richard Wiseman thinks that experimenter bias may have crept in. According to the methods section, recalled words that were close to target words were coded as near misses by student experimenters who knew which words were the targets. In other words, the coding was not truly blind.


Filed under Research

12 responses to “I Knew You Would Say That”

  1. It’s an interesting experiment, and while I’m inclined to think some kind of very small effect of this kind is plausible, I remain skeptical. Jacques Benveniste and Rupert Sheldrake claimed persuasive experimental evidence too, and in both cases their work just didn’t stand up to scrutiny.

    In situations like this (as with Benveniste and Sheldrake), where the results are so marginal, the experimental protocols need to be very sound to conclude much. If I were laying money down, I’d bet that we’ll find some flaws in Bem’s process, or that there’s a data-compilation anomaly.

    But hey – it’s science! If there’s anything to it, we can find out.

  2. Arno

    I read the article yesterday and have to say that I was very impressed with both the methodology and the stats: Bem seems to have been exceptionally careful.
    That he has written the details and procedures down so carefully is good as well: other researchers shouldn’t have much of a problem replicating the studies. If they replicate the findings, I can see the implications going one of three ways: 1) social psychology may have to reconsider some of its staple paradigms of the last 25 years (which is good), 2) statisticians and mathematicians will have to reconsider how to make a test truly random (which is good), or 3) we might have to accept the presence of a phenomenon that previously wasn’t well established (which will be unwanted to some, but is at its heart still a good thing). My guess is a mix of options 2 and 1.
    But if the effect isn’t replicated, we very probably have a case of unconscious influence of the tester on the participant (which Bem acknowledges as a possibility), which will lead to attempts to make the studies as close to double-blind as possible.
    Either way, it is good science and a very modest and careful attempt to establish a controversial topic. The only thing I honestly dislike is the invocation of quantum theory as an explanation. Nonetheless, this is a good paper, whatever the consequences in the end turn out to be. But then again, that is just my two cents.

  3. It might be unscientific to completely dismiss it, but Bayes’ theorem tells us we are still on firm scientific footing if we assert that the study is almost certainly not indicative of some heretofore unknown method of cognition. Without an explanatory mechanism, I’d put the odds of demonstrable precognition at worse than a billion to one. Even if we are generous and set the odds at 10,000:1 against, applying Bayes’ theorem to Bem’s results still tells us that it is extraordinarily unlikely that he has discovered actual precognition.

    That said, kudos to him for documenting his methods so closely. As Arno said, that should make it easy for other researchers to attempt to replicate the effect, and also to try and figure out what’s going on.
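The Bayes-theorem arithmetic in the comment above can be sketched numerically. A minimal Python sketch, assuming the 10,000:1 prior odds against psi granted above and a hypothetical Bayes factor of 100 in favour of psi from Bem’s data (an illustrative figure, not one derived from the paper):

```python
# Prior odds in favour of precognition: 1 to 10,000 (as granted above).
prior_odds_for = 1 / 10_000

# Hypothetical strength of Bem's evidence as a Bayes factor (assumption,
# chosen generously for illustration).
bayes_factor = 100

# Odds form of Bayes' theorem: posterior odds = prior odds * Bayes factor.
posterior_odds_for = prior_odds_for * bayes_factor
posterior_prob = posterior_odds_for / (1 + posterior_odds_for)

print(f"posterior odds for psi: {posterior_odds_for:.4f}")  # 0.0100
print(f"posterior probability:  {posterior_prob:.4f}")      # 0.0099
```

Even evidence a hundred times more likely under psi than under chance still leaves the posterior probability around 1%, which is the point being made.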

  4. Two things make my skeptical antennae twitch here. The first is that Bem is already predisposed towards a belief in the paranormal (as evidenced by his support for Ganzfeld experiments claimed to show evidence for psi, despite the lack of independent corroboration of the cited results). While this should not have any material effect on a soundly constructed experiment, we saw this exact problem with Jacques Benveniste and his homeopathy experiments. It turns out that Benveniste’s science was pretty good, but his lab practices and data collation were faulty. Scientists missed this entirely; as we know, it took a magician to see where the science was in error.

    In my opinion, that’s where we’ll find problems with Bem’s experiment. It will be really impressive if the results can be duplicated in a double blind experiment by a third party with no belief in the paranormal.

    The other thing that makes me uneasy (as Arno said above) is the invocation of ‘quantum’ effects. Scientifically this is completely nutty, as there is no evidence at all to suggest that this is a mechanism for Bem’s claimed results; that would take a different set of experiments entirely. ‘Quantum’, when used in conjunction with ‘paranormal’ effects, is nothing more than a buzzword until some kind of causality is demonstrated.

  5. Pingback: Tweets that mention I Knew You Would Say That « -- Topsy.com

  6. I read through the paper, and it seems basically sound. Which leaves three possibilities: publication bias (what about the experiments that didn’t work? There also seems to be a wide range of outcomes assessed and a very small effect size, all of which predispose to finding things that aren’t really there), luck on the part of the investigator, or the possibility that it’s real! I do hope it’s real… but I would need someone else to do the experiments before I accept it. Having said that, we accept dozens of other psychology studies, some with quite revolutionary claims, most of which could be criticised in a similar way (should anyone be motivated to do so).

    • There’s a fourth possibility, Tom: that it seemed basically sound to you, but you missed something 🙂 That’s my guess: the experimental problems are so subtle that most of us are missing them. I read a good description today of a potential problem with a couple of the studies Bem writes about in this paper. It turns out that while the experiment itself was properly blinded, the coding was not, and in some cases there was just enough subjectivity in the coding process to permit the possibility of bias. (It is too soon to say whether this actually caused a bias, but it seems plausible.)

      As I said before, though, kudos to Bem for documenting everything so closely, and providing his software and data for all the world to see. There’s no way that a possible bias that subtle could be detected just from reading the paper. And although without a plausible causal mechanism I will pretty much automatically assume that any evidence of precognition is due to this kind of hidden bias (or publication bias), it’s much more satisfying if you can put your finger on the exact problem.

      Come to think of it, Benveniste’s homeopathy experiments were being sabotaged largely by the coding process too, weren’t they? Or am I misremembering?

      • Correct, James. Benveniste’s experiments weren’t rigorously double-blinded. If I recall correctly, some of Benveniste’s assistants were inadvertently skewing the results towards a positive bias. It is a real problem for scientists who believe passionately in their field, and it doesn’t just happen in paranormal research. It takes a lot of courage to look at results that don’t support your hypothesis and abandon months, or years, of work.

  7. “It would be unscientific to dismiss a peer-reviewed article without waiting for confirmation. But something tells me that this is not going to hold up as a reliable finding.”

    If you’re right, does this prove that you’re psychic?

  8. This manuscript argues that the Bem paper highlights problems with standard methods of data analysis: “Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi” (Wagenmakersetal_subm.pdf).

    • brucehood

      Wow… Yuko. I am simultaneously delighted and somewhat embarrassed that you read my blog. It’s been a long time! Hope things are going well with you; I look forward to catching up.

  9. Pingback: Study proves Psi . . . or, university students are really good at predicting or guessing porn . . . or a little of both | Brian David Phillips Waking Dreams
