Princeton Parapsychology lab to close.

It's the end of an era (and of a lot of embarrassment for Princeton and for scientists in general) for the Princeton parapsychology lab.
From the NYT on Friday:

Over almost three decades, a small laboratory at Princeton University managed to embarrass university administrators, outrage Nobel laureates, entice the support of philanthropists and make headlines around the world with its efforts to prove that thoughts can alter the course of events.

But at the end of the month, the Princeton Engineering Anomalies Research laboratory, or PEAR, will close, not because of controversy but because, its founder says, it is time.

The laboratory has conducted studies on extrasensory perception and telekinesis from its cramped quarters in the basement of the university's engineering building since 1979. Its equipment is aging, its finances dwindling.

The lab was famous for experiments that essentially went like this:

In one of PEAR's standard experiments, the study participant would sit in front of an electronic box the size of a toaster oven, which flashed a random series of numbers just above and just below 100. Staff members instructed the person to simply "think high" or "think low" and watch the display. After thousands of repetitions -- the equivalent of coin flips -- the researchers looked for differences between the machine's output and random chance.

Analyzing data from such trials, the PEAR team concluded that people could alter the behavior of these machines very slightly, changing about 2 or 3 flips out of 10,000. If the human mind could alter the behavior of such a machine, Dr. Jahn argued, then thought could bring about changes in many other areas of life -- helping to heal disease, for instance, in oneself and others.

Can you think of any plausible explanation for these results?

Here's the NYT article.



To get statistical significance for a claim that you can bias coin flips by a factor of 3/10,000, you need to collect data from around ten million flips. To bias a data set that large, you need merely throw out a couple of apparently anomalous sessions, or admit a tiny intrusion of systematic error. In other words, getting pristine statistics for an effect this small is, in practical terms, impossible.
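The back-of-the-envelope figure above can be checked with the standard normal-approximation sample-size formula for a one-proportion test. This is a minimal sketch; the significance level and power are my own illustrative choices, since the comment only says "around ten million flips":

```python
from math import sqrt

# Illustrative choices (not from the comment): two-sided alpha = 0.05, power = 0.80
z_alpha, z_beta = 1.96, 0.84

p0 = 0.5            # fair-coin null hypothesis
delta = 3 / 10_000  # claimed bias: ~3 extra "high" flips per 10,000

# n = (z_alpha + z_beta)^2 * p0 * (1 - p0) / delta^2
n = ((z_alpha + z_beta) ** 2 * p0 * (1 - p0)) / delta ** 2
print(f"flips needed: {n:,.0f}")  # roughly 22 million
```

So detecting the effect reliably takes on the order of ten to twenty million flips, depending on the power you demand.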

Depends on how they ran their statistics... if they never pooled the results and instead analyzed the data with ~10,000 degrees of freedom, it's not unreasonable to expect this kind of false positive. Even if they did pool, it seems like they'd have to collect a huge amount of data to show any statistical significance for a 3-in-10,000 effect.
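The multiple-comparisons worry can be made concrete with a quick simulation. This is my own illustrative sketch, not the PEAR protocol: test many sessions of fair-coin flips separately at the 5% level, and a steady stream of "significant" sessions appears by chance alone:

```python
import random
from math import sqrt

random.seed(0)  # fixed seed so the run is reproducible

n_sessions = 1000        # many separately analyzed sessions
flips_per_session = 200  # one PEAR-style "trial" is 200 numbers

false_positives = 0
for _ in range(n_sessions):
    heads = sum(random.random() < 0.5 for _ in range(flips_per_session))
    # z-score under the fair-coin null: mean 100, sd = sqrt(200 * 0.25) ~ 7.07
    z = (heads - flips_per_session * 0.5) / sqrt(flips_per_session * 0.25)
    if abs(z) > 1.96:
        false_positives += 1

print(false_positives)  # around 5% of sessions, with no real effect anywhere
```

Analyzing each session in isolation, rather than pooling, guarantees a supply of nominally significant results even from a perfectly unbiased source.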

Their data, in my opinion, seem to support this. They have a random number generator (an actual hardware random number generator, not a simulation) that spits out 200,000 random numbers per second. These numbers are somewhat arbitrarily broken into groups of 200 (one trial, in which the expected count of 'high' outcomes is 100), and a session/block consists of either 50, 100, or 1000 trials.

This review paper shows the results, after 12 years:

To me, the graph on page 6 is a red flag. Essentially it shows that the more observations were pooled, the higher the z-score in their intended direction. I can't speak to the directionality of things (but look at those error bars on the next graph!), but it seems like you could make this effect appear or disappear just by changing how observations are grouped into trials -- for instance, by changing the number of observations per trial while holding the total number of observations fixed.
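The pattern that graph suggests follows directly from the arithmetic of pooling. Assuming a fixed per-flip bias of 3 in 10,000 (my reading of the NYT figure), the z-score of a pooled fair-coin test grows like the square root of the number of flips:

```python
from math import sqrt

# Assumed per-flip bias (my reading of the NYT article's "2 or 3 flips out of 10,000")
delta = 3 / 10_000

def pooled_z(n_flips: int) -> float:
    """z-score of a pooled fair-coin test when every flip carries bias delta."""
    # expected excess heads: n * delta; standard deviation: sqrt(n * 0.25)
    return n_flips * delta / sqrt(n_flips * 0.25)  # equals 2 * delta * sqrt(n)

for n in (10_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"N={n:>11,}  z={pooled_z(n):.2f}")
# z only crosses 1.96 somewhere past ten million flips
```

A constant tiny bias (or a constant tiny systematic error) thus produces exactly the signature in the graph: z-scores that climb steadily as more observations are pooled.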

I'm not sure why "thinking 'low'" translates into some sort of machine language, but "thinking 'cow'" doesn't, so maybe we could invite these guys to a brown bag and ask them?

Ah, scooped by a shorter post ;) And there's a dumb typo in mine anyway.

Interesting article.

"One editor famously told Dr. Jahn that he would consider a paper if you can telepathically communicate it to me."

Made me chuckle. :)

Here are a couple of critical comments by mathematicians, with links to detailed accounts:

Jeffrey Shallit:
The PEAR has rotted

Mark Chu-Carroll:
The End of PEAR

A couple of PEAR's greatest hits, to give you an idea:
An attempt to create a mathematical explanation for how consciousness affects reality. This work uses some of the worst fake math that I've ever seen...
Skewing statistics to show that minds can affect the REG...
Post-Hoc data selection to create desired results...

By Mustafa Mond, FCD (not verified) on 11 Feb 2007 #permalink

Cue all the "didn't they see this coming" jokes.