It's the end of an era (and also a lot of embarrassment for Princeton and scientists in general) for the Princeton Parapsychology lab.

From the NYT on Friday:

Over almost three decades, a small laboratory at Princeton University managed to embarrass university administrators, outrage Nobel laureates, entice the support of philanthropists and make headlines around the world with its efforts to prove that thoughts can alter the course of events.

But at the end of the month, the Princeton Engineering Anomalies Research laboratory, or PEAR, will close, not because of controversy but because, its founder says, it is time.

The laboratory has conducted studies on extrasensory perception and telekinesis from its cramped quarters in the basement of the university's engineering building since 1979. Its equipment is aging, its finances dwindling.

The lab was famous for experiments that essentially went like this:

In one of PEAR's standard experiments, the study participant would sit in front of an electronic box the size of a toaster oven, which flashed a random series of numbers just above and just below 100. Staff members instructed the person to simply "think high" or "think low" and watch the display. After thousands of repetitions -- the equivalent of coin flips -- the researchers looked for differences between the machine's output and random chance.

Analyzing data from such trials, the PEAR team concluded that people could alter the behavior of these machines very slightly, changing about 2 or 3 flips out of 10,000. If the human mind could alter the behavior of such a machine, Dr. Jahn argued, then thought could bring about changes in many other areas of life -- helping to heal disease, for instance, in oneself and others.

Can you think of any plausible explanation for these results?

Here's the NYT article.


To get statistical significance for a claim that you can bias coin flips by 3 in 10,000, you need to collect data from around ten million flips. To bias a data set that large, you need merely throw out a couple of apparently anomalous data sets, or admit a tiny intrusion of some other systematic error. In other words, getting pristine statistics for an effect this small is, in practical terms, impossible.
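As a rough check on that sample-size claim (a back-of-the-envelope sketch using the normal approximation to the binomial; the ~2.5-extra-flips-per-10,000 effect size is taken from the article):

```python
# Flips needed to detect a bias of ~2.5 per 10,000
# (p = 0.50025 vs. 0.5) at two-sided alpha = 0.05.
delta = 2.5 / 10_000      # shift in probability of a "high" outcome
sigma = 0.5               # std dev of a single fair coin flip
z = 1.96                  # critical value for two-sided 5% significance

n = (z * sigma / delta) ** 2
print(f"flips needed for bare significance: {n:,.0f}")  # ~15 million
```

So tens of millions of flips, consistent with the "around ten million" figure above -- and that is for bare significance, before asking for any real statistical power.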

Depends on how they ran their statistics... if they never pooled the results and just analyzed the data with ~10,000 degrees of freedom, it's not unreasonable to expect that they'd get this kind of false-positive. Even if they didn't, it seems like they'd have to collect a huge amount of data to show any statistical significance for a 3 in 10,000 effect.
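To illustrate that false-positive worry (a pure simulation on null data, not a reconstruction of PEAR's actual analysis): run many separate significance tests on fair coin flips and roughly 5% of them will cross the nominal threshold by chance alone.

```python
import random

random.seed(1)
n_tests = 1000        # independent analyses, all on pure-chance data
flips_per_test = 200
z_crit = 1.96         # two-sided 5% threshold

false_positives = 0
for _ in range(n_tests):
    heads = sum(random.getrandbits(1) for _ in range(flips_per_test))
    # z-score of the head count under the fair-coin null
    z = (heads - flips_per_test * 0.5) / (0.5 * flips_per_test ** 0.5)
    if abs(z) > z_crit:
        false_positives += 1

print(f"'significant' results on null data: {false_positives} / {n_tests}")
```

With ~50 expected "hits" per 1000 null tests, an analysis that never pools across tests has plenty of spurious positives to report.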

Their data, in my opinion, seems to support this. They have a random number generator (an actual random number generator, not a simulation) that spits out 200,000 random numbers per second. These numbers are somewhat arbitrarily broken into groups of 200 (one trial, in which the baseline count of 'high' outcomes should be 100), and a session/block consists of either 50, 100, or 1000 trials.
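A minimal sketch of that trial structure (simulated bits from a fair generator, not PEAR's hardware):

```python
import random

random.seed(0)

# 200 random bits per trial; a session of 100 trials.
# Each trial's count of 'high' bits should average 100.
trials = [sum(random.getrandbits(1) for _ in range(200)) for _ in range(100)]
session_mean = sum(trials) / len(trials)
print(f"mean 'high' count per trial: {session_mean:.2f}")  # ~100
```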

This review paper shows the results, after 12 years: http://noosphere.princeton.edu/papers/pear/correlations.12yr.pdf

To me, the graph on page 6 is a red flag. Essentially it shows that the more observations were used, the higher the z-score in the intended direction. I can't speak to the directionality of things (but look at those error bars on the next graph!), but it seems like you could make this effect disappear by increasing the number of observations per trial (i.e. vastly decreasing the number of trials w/o changing total observations), or amplify it by reducing the observations per trial.
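For what it's worth, any fixed per-bit bias -- psychic or plain systematic error -- makes the expected z-score grow like the square root of the number of observations. A sketch of that scaling, assuming the article's ~2.5-in-10,000 figure:

```python
# Expected z-score for a fixed tiny bias as the sample grows:
# z = delta * sqrt(N) / sigma, with sigma = 0.5 per bit.
delta = 2.5 / 10_000   # assumed per-bit bias (from the article)
sigma = 0.5

for n in (10_000, 1_000_000, 100_000_000):
    z = delta * (n ** 0.5) / sigma
    print(f"N = {n:>11,}: expected z = {z:.2f}")
```

An effect invisible at ten thousand observations (z ≈ 0.05) looks like a five-sigma result at a hundred million, which is why a tiny undetected systematic error can masquerade as a striking cumulative effect.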

I'm not sure why "thinking 'low'" translates into some sort of machine language, but "thinking 'cow'" doesn't, so maybe we could invite these guys to a brown bag and ask them?

Ah, scooped by a shorter post ;) And there's a dumb typo in mine anyway.

Interesting article.

One editor famously told Dr. Jahn that he would consider a paper "if you can telepathically communicate it to me."

Made me chuckle. :)

Here's a couple of critical comments by mathematicians with links to detailed accounts:

Jeffrey Shallit:

The PEAR has rotted

Mark Chu-Carroll:

The End of PEAR

Cue all the "didn't they see this coming" jokes.