This blog post from The Calladus Blog is an example of how "positional" skeptics tend to approach any new information that might challenge their worldview. I found a particularly large number of errors and irrelevancies in this post, and thought it would be useful to analyze them here. So I will be excerpting from the "Nutty Professor" post and providing commentary on what I found problematic at each point.
The Nutty Professor: Dr. Rupert Sheldrake and Telephone Telepathy
We are off to an inauspicious start with the name-calling ad hominem right out of the gate...
"However, his sample was small on both trials -- just 63 people for the controlled telephone experiment and 50 for the email -- and only four subjects were actually filmed in the phone study and five in the email, prompting some skepticism.
Undeterred, Sheldrake -- who believes in the interconnectedness of all minds within a social grouping -- said that he was extending his experiments to see if the phenomenon also worked for mobile phone text messages."
Notice that Dr. Sheldrake didn't address the problem of a too small sample size for his experiment, and instead immediately widened the experiment to include different tests. This is not a sign of good science!
Whoa, there, pardner!
Just because the "yahoo news" article made a claim of "too small sample size" doesn't make it true. This reflects a fundamental misunderstanding of what Sheldrake was studying. If Sheldrake were seeking to demonstrate that everyone has telephone telepathy, then indeed this sample size would be much too small. But Sheldrake did not select a random sample of a population -- he actively sought out individuals who felt that they frequently experienced telephone telepathy. Furthermore, Sheldrake took only the most successful participants from the first phase of the experiments into the more detailed and rigorous videotaped trials. He was seeking out a very specific population of people with the best possibility of showing strong effects for telephone telepathy -- the exact opposite of what Mark of the Calladus blog is suggesting by complaining about sample size. Sheldrake's sample size was deliberately small, and that is not a problem, as long as the effect size and number of trials are sufficiently large, as they certainly are in Sheldrake's experiments.
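The statistical point here can be made concrete. As a rough illustration (the 45%-hit-rate and 25%-chance figures are taken from Sheldrake's published telephone-telepathy results; the specific trial counts below are invented for the example), the significance of a binomial result grows with the square root of the number of trials, regardless of how many subjects contributed them:

```python
import math

def z_score(hits, trials, chance=0.25):
    """Normal-approximation z-statistic for observing `hits` successes
    in `trials` Bernoulli trials when the chance rate is `chance`."""
    expected = trials * chance
    sd = math.sqrt(trials * chance * (1 - chance))
    return (hits - expected) / sd

# A fixed 45% hit rate against a 25% chance rate: significance climbs
# with the number of trials, even if only a few subjects supplied them.
for n in (20, 100, 300):
    hits = round(0.45 * n)
    print(n, round(z_score(hits, n), 1))  # roughly z = 2.1, 4.6, 8.0
```

So a handful of carefully selected subjects, each contributing many trials with a large effect, can produce an overwhelmingly significant result; that is why "number of subjects" is the wrong quantity to fixate on.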
This isn't the first time Dr. Sheldrake has been accused of making unwarranted claims based on improper methodology. CSICOP took Dr. Sheldrake to task and debunked his Psychic Staring Effect experiment - where he claimed to show that people could tell, better than random chance, when someone was staring at them.
The link Mark provides offers a normal "explanation" for Dr. Sheldrake's results. However Dr. Sheldrake's rebuttal is also found linked to the "debunking" article:
Colwell et al.'s second experiment was designed to test their pattern-detection hypothesis by using "structureless" random sequences. Sure enough, this time there was no significant overall positive score, although in two of the three sessions there was a highly significant excess of correct guesses in the looking trials.
At first sight, the overall non-significant result seems to confirm their hypothesis. But Marks and Colwell (2000) omitted to mention the crucial fact that in Experiment Two there was a different starer, David Sladen. Can we take it for granted that changing the starer made no difference?
Such experimenter effects are not symmetrical. The detection of Schlitz's stares by the participants under conditions that excluded sensory cues implies the existence of an unexplained sensitivity to stares. By contrast, the failure to detect Wiseman's stares implies only that Wiseman was an ineffective starer. Perhaps his negative expectations consciously or unconsciously influenced the way he looked at the subjects.
In Colwell et al.'s Experiment Two, the starer, Sladen, as one of the proponents of the pattern-detection hypothesis, was presumably expecting a nonsignificant result. His negative expectations could well have influenced the way in which he stared at the participants. It would be interesting to know if Sadi Schröder, the graduate student who acted as starer in Experiment One, was more open to the possibility that people really can detect when they are being stared at.
Other Relevant Experiments
Marks and Colwell claimed that their pattern-detection hypothesis invalidated the positive results of staring experiments carried out by myself and others. If these experiments had involved pseudo-random sequences and feedback, as required by their hypothesis, their criticism might have been relevant. But this is not how the tests were done, as they would have seen for themselves if they had read my published papers on the subject.
First, in more than 5,000 of my own trials, the randomization was indeed "structureless," and was carried out by each starer before each trial by tossing a coin (Sheldrake 1999, Tables 1 and 2). The same was true of more than 3,000 trials in German and American schools (Sheldrake 1998). Thus the highly significant positive results in these experiments cannot be "an artifact of pseudo randomization."
Second, when I developed the counterbalanced sequences that Marks and Colwell describe as pseudo-random, I changed the experimental design so that feedback was no longer given to the subjects. Since the pattern-detection hypothesis depends on feedback, it cannot account for the fact that in more than 10,000 trials without feedback there were still highly significant positive results (Sheldrake 1999, Tables 3 and 4).
In spite of their prior assumption that an ability to detect unseen staring must be illusory, both Baker (2000) and Colwell et al. (2000) in their first experiments obtained unexpected positive results consistent with such an ability. They attempted to dismiss these findings with question-begging arguments. In their second experiments, which gave the non-significant results they expected, an investigator with negative expectations acted as the starer. This arrangement provided favorable conditions for experimenter effects, already known to occur in staring experiments (Wiseman and Schlitz 1997). Both Baker and Marks and Colwell also failed to mention a large body of published data that went against their conclusions. In short, their claims were misleading and ill-informed.
To summarize: Sheldrake takes the SI "debunking" and completely shreds it with a devastating recitation of the facts. The hypothesis outlined in the debunking article is completely at variance with the actual data and experimental findings, which Marks and Colwell would have noted if they had carefully read Sheldrake's research before attacking it. I suppose that is the difference between a "debunking" and a scientific investigation. The latter is an attempt to discover the truth; the former is simply an attempt to win an argument with no particular regard for reality.
In another paper Sheldrake mentions offering to analyze Marks and Colwell's own data in detail to see if it matches their hypothesis of pseudorandom pattern recognition. Not unexpectedly, Marks and Colwell failed to take him up on that offer.
Dr. Sheldrake also wrote a book called, "Dogs That Know When Their Owners Are Coming Home: And Other Unexplained Powers of Animals" (Amazon link) In this book he attempts to show that dogs can somehow psychically 'tell' when their owners are coming home. The methodology described by Dr. Sheldrake shows that he didn't even attempt to create a 'double blind' experiment, where neither owners, dogs, nor observers knew when the owner was coming home. Instead he allowed the owners of these dogs to record the observations of their pets. He again used a very small sample size.
Again, Mark displays a lack of understanding of this kind of research and the relevant statistics. Small sample size is a canard here. He is also completely wrong about the experimental design, which involved both "observational" components and double-blind, randomized components. The researcher who analyzed the videotape data was blind to the experimental conditions, as was everyone in the house (including the dog). Either Mark hasn't read the studies he is criticizing, which is a cardinal sin in science, or he read them sloppily and is misinformed.
Instead of publishing to peer-reviewed media, Dr. Sheldrake writes popular books and makes claims and announcements pitched to the media.
Like many other scientists, Dr. Sheldrake does write popular books in addition to his admirable record of peer-reviewed publication credits, including multiple articles in Nature, Planta, and the Journal of Consciousness Studies. I'm sure that Mark is not intending to imply that Francis Crick, Stephen Hawking, and John Wheeler are bogus scientists because of their popular science works, or that Nature, Planta, and JCS are bogus journals, right?
His results are based on a small sample size, which is at the very limit of detection of effects.
Again, the relevant statistic here is the number of trials, not "sample size". In the videotaped experiments there were 271 trials, with 122 (45%) correct guesses (p = 10⁻¹²). This is astronomically far from being "at the very limit of detection of effects"!
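For skeptical readers, that p-value is easy to check with an exact binomial tail sum. A minimal sketch, assuming the four-caller design of the telephone experiments (so a 25% chance rate; that chance rate is an inference here, not a figure quoted above):

```python
from math import comb

def binom_tail(k, n, p):
    """Exact one-tailed probability of observing k or more successes
    in n independent trials with per-trial success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 122 hits in 271 trials against an assumed 25% chance rate yields an
# astronomically small p-value (far below 1e-10), consistent with the
# p = 10^-12 figure quoted above.
print(binom_tail(122, 271, 0.25))
```

Whatever one thinks of the experimental design, "at the very limit of detection" is simply not an accurate description of a result this many standard deviations from chance.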
He bases some of his conclusions on anecdotal evidence (for example, allowing a dog's owner to record their observations.)
Any scientist uses anecdotal evidence as a starting point for designing experiments. Sheldrake's experiments absolutely do not rely on "allowing a dog's owner to record their observations", but rather use videotapes, evaluated blindly by a third party, to determine the measured data.
He claims that Quantum Theory can explain psychic phenomena, which is a proposed new law of nature since Quantum Theory describes the subatomic, not macroscopic, universe.
Now Mark is confused. The quantum nature of matter is responsible for many of the properties of the macroscopic world, such as the most basic fact that atoms take up space. You cannot wall off quantum mechanics and say that it is completely irrelevant to macroscopic reality!
He works in some isolation, well outside the mainstream science community.
Sheldrake works with many other researchers, including avowed "skeptics", and has been published in a large variety of journals as I described above.
What I find most interesting about Dr. Sheldrake's supposed skepticism is his refusal to cooperate with noted skeptic James Randi in Randi's Million Dollar Challenge.
Randi is a showman, not a serious researcher. And he has already established a track record of distortions shading into outright lies with regard to Rupert Sheldrake's research:
The January 2000 issue of Dog World magazine included an article on a possible sixth sense in dogs, which discussed some of my research. In this article Randi was quoted as saying that in relation to canine ESP, "We at the JREF [James Randi Educational Foundation] have tested these claims. They fail." No details were given of these tests.
I emailed James Randi to ask for details of this JREF research. He did not reply. He ignored a second request for information too.
I then asked members of the JREF Scientific Advisory Board to help me find out more about this claim. They did indeed help by advising Randi to reply. In an email sent on February 6, 2000 he told me that the tests he referred to were not done at the JREF, but took place "years ago" and were "informal". They involved two dogs belonging to a friend of his that he observed over a two-week period. All records had been lost. He wrote: "I overstated my case for doubting the reality of dog ESP based on the small amount of data I obtained. It was rash and improper of me to do so."
Randi also claimed to have debunked one of my experiments with the dog Jaytee, a part of which was shown on television. Jaytee went to the window to wait for his owner when she set off to come home, but did not do so before she set off. In Dog World, Randi stated: "Viewing the entire tape, we see that the dog responded to every car that drove by, and to every person who walked by." This is simply not true, and Randi now admits that he has never seen the tape.
And Sheldrake is not the first psi researcher who has uncovered a pattern of untruths from James Randi.
Given Randi's history of distortions and lies, why would any serious psi researcher want to work with him on any experiments?