Quantum Consciousness and the Penrose Fallacy

Over at Neurophilosophy, Mo links to an article by a physicist, posted on the arXiv, that claims to explain visual perception using quantum mechanics:

A theory of the process of perception is presented based on von Neumann's quantum theory of measurement and conscious observation. Conscious events that occur are identified with the quantum mechanical "collapses" of the wave function, as specified by the orthodox quantum theory. The wave function, between such perceptual events, describes a state of potential consciousness which evolves via the Schrödinger equation. When a perceptual event is observed, where a wave-function "collapse" occurs through the process of measurement, it actualizes the corresponding neural correlate of consciousness (NCC) brain state. Using this theory and parameters based on known values of neuronal oscillation frequencies and firing rates, the calculated probability distribution of dominance duration of rival states in binocular rivalry under various conditions is in good agreement with available experimental data. This theory naturally explains recently observed marked increase in dominance duration in binocular rivalry upon periodic interruption of stimulus and yields testable predictions for the distribution of perceptual alternation in time.

This sort of "quantum physics explains consciousness" stuff has a long history, and most of it is gibberish. In particular, there tends to be a good deal of circularity in most arguments invoking von Neumann. The "testable predictions" line is new, though, so I decided to download the paper and take a look.

There's a lot of quantum background in the first couple of sections, but the basic phenomenon being studied is "binocular rivalry": visual illusions-- like the spinning dancer that's been all over the place recently-- that can be seen in one of two states. Manousakis, the paper's author, says that these should be modeled as a quantum superposition of two brain states, and constructs a model that appears to reproduce the distribution of times between "flips" from one state to the other.

So, is he really on to something?

Well, the model he produces does reproduce the general shape of the curves, but it's not clear to me that there's anything especially quantum about it. He engages in a good deal of quantum handwaving to justify the model, but in the end, what he's got is just a system in which there's some probability of seeing either state, and the relative probabilities of the two oscillate (that is, when the probability of seeing one is high, the other is low, and some time later, they switch). He attributes the switching to quantum measurements collapsing the state of the system, and generates a distribution of switching times for various parameters.

The thing is, there's nothing intrinsically quantum about this arrangement-- lots of things oscillate, lots of things are probabilistic, and the mere fact that something is oscillating and probabilistic does not mean that it's quantum. He goes through a bunch of work to derive these probabilities from a unitary matrix, but that's just linear algebra, and again, the use of matrix algebra doesn't make it a quantum system. If it did, then Excel would be quantum, because I can use it to do least-squares curve fitting, and you can describe those in matrix notation.
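Just to illustrate how little quantum machinery this actually requires, here's a cartoon version in Python. To be clear, this is my own toy, not the model from the paper, and the frequency and sampling rate are invented numbers: the probability of seeing state A oscillates classically as cos²(ωt), and at random moments the percept is re-drawn according to that probability.

```python
import math
import random

# Cartoon classical model of perceptual switching (NOT the paper's model):
# the bias of a coin oscillates in time, and the percept is re-drawn from
# that biased coin at random (Poisson-distributed) sampling times.

OMEGA = 2 * math.pi * 0.5   # oscillation frequency (arbitrary units)
SAMPLE_RATE = 20.0          # mean number of sampling events per unit time

def simulate(duration=10000.0):
    """Return dominance durations for state A over the given run time."""
    t, percept, last_flip = 0.0, "A", 0.0
    durations = []
    while t < duration:
        t += random.expovariate(SAMPLE_RATE)  # next sampling event
        p_a = math.cos(OMEGA * t) ** 2        # oscillating classical bias
        new_percept = "A" if random.random() < p_a else "B"
        if new_percept != percept:
            if percept == "A":
                durations.append(t - last_flip)
            percept, last_flip = new_percept, t
    return durations

durations = simulate()
print(f"{len(durations)} dominance periods for A, "
      f"mean duration {sum(durations) / len(durations):.3f}")
```

This produces probabilistic switching with a whole distribution of dominance durations, and there isn't a wavefunction anywhere in it.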

I'm also a little dubious about calling this a "prediction." I mean, yes, the model he uses does generate a distribution of switching times with the right general shape, while a different model apparently does not. But it's really more of a fit than a prediction-- the parameters used in the model do not appear to have come from any other measurement, but rather have been chosen to give the best possible agreement between the model distribution and the distributions measured in psychology experiments. Which is fine as far as it goes, but unless you can compare the parameters (a couple of time scales, one for the oscillation and one for the perception of time in the brain) to similar parameters measured by other means, or use those parameters to generate curves for some other condition, you haven't really proved anything. He applies the model to two different situations (two versions of the same experiment, one involving subjects who were on LSD), but it's essentially a curve fit both times-- none of the parameters are the same, and there's no explanation for the changes.

It's nice that the model reproduces the shape of the distribution, and this may or may not be something new-- I don't know anything about neuroscience, so I don't know if the competing model he dismisses is actually the best anybody else has to offer, or if this is another case of an arrogant physicist reinventing the wheel for another discipline. But really, there's nothing convincingly quantum here. There's a slightly weird invocation of the Quantum Zeno Effect, but I really don't see anything that couldn't be done with classical probability distributions.

The whole "consciousness is quantum" business is a case of what I tend to think of (perhaps unfairly) as the "Penrose fallacy," because I first ran into it in The Emperor's New Mind. The argument always seems to me to boil down to "We don't understand consciousness, and we don't understand quantum measurement, therefore they must be related." And that just doesn't make any sense at all.

If you want me to believe that quantum processes are responsible for the workings of the brain, I need to see something that's intrinsically, well, quantum. Oscillations and probability don't cut it. Some sort of interference effect would. I need a reason to believe that we're dealing with a wavefunction (or, better yet, a density matrix), and not just a probability distribution.
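To make that last distinction concrete, here's a toy calculation (again mine, not anything from the paper): a quantum process adds complex amplitudes before squaring, so two paths can cancel outright, while classical probabilities can only add.

```python
import cmath

# Two paths with complex amplitudes a1 and a2. Quantum mechanics gives
# P = |a1 + a2|^2; a classical mixture gives |a1|^2 + |a2|^2. The
# difference is the interference term 2*Re(a1 * conj(a2)).

a1 = cmath.rect(0.6, 0.0)        # amplitude for path 1
a2 = cmath.rect(0.6, cmath.pi)   # amplitude for path 2, phase-shifted by pi

p_classical = abs(a1) ** 2 + abs(a2) ** 2
p_quantum = abs(a1 + a2) ** 2
interference = 2 * (a1 * a2.conjugate()).real

print(p_classical)                  # 0.72
print(p_quantum)                    # 0.0 -- the two paths cancel
print(p_classical + interference)   # matches p_quantum
```

No assignment of classical probabilities to the two paths can reproduce that zero. That's the kind of signature I'm asking for.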


How about "Free will is Quantum"?

One job of our mind is to amplify very tiny signals and then act on them.

Free will and randomness are both defined in the same way: the unpredictability of an object's action. Is there any difference, to the outside observer, between the two?

We are an amplification device for quantum randomness. A rock is a dampening device. Organized society amplifies free will and a mob dampens it.

Boronx,

The problem with that is that it's not obvious a completely classical world would look any different in that respect, thanks to the sensitive dependence on initial conditions which chaotic systems exhibit. This is a point which Feynman raised forty-odd years ago in the first volume of his Lectures on Physics.

We have already made a few remarks about the indeterminacy of quantum mechanics. That is, that we are unable now to predict what will happen in physics in a given physical circumstance which is arranged as carefully as possible. If we have an atom that is in an excited state and so is going to emit a photon, we cannot say when it will emit the photon. It has a certain amplitude to emit the photon at any time, and we can predict only a probability for emission; we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions on the meaning of freedom and will, and of the idea that the world is uncertain.

Of course we must emphasize that classical physics is also indeterminate, in a sense. It is usually thought that this indeterminacy, that we cannot predict the future, is an important quantum-mechanical thing, and this is said to explain the behavior of the mind, feelings of free will, etc. But if the world were classical -- if the laws of mechanics were classical -- it is not quite obvious that the mind would not feel more or less the same. It is true classically that if we knew the position and the velocity of every particle in the world, or in a box of gas, we could predict exactly what would happen. And therefore the classical world is deterministic. Suppose, however, that we have finite accuracy and do not know exactly where just one atom is, say to one part in a billion. Then as it goes along it hits another atom, and because we did not know the position better than to one part in a billion, we find an even larger error in the position after the collision. And that is amplified, of course, in the next collision, so that if we start with only a tiny error it rapidly magnifies to a very great uncertainty. To give an example: if water falls over a dam, it splashes. If we stand nearby, every now and then a drop will land on our nose. This appears to be completely random, yet such a behavior would be predicted by purely classical laws. The exact position of all the drops depends upon the precise wigglings of the water before it goes over the dam. How? The tiniest irregularities are magnified in falling, so that we get complete randomness. Obviously, we cannot really predict the position of the drops unless we know the motion of the water absolutely exactly.

Speaking more precisely, given an arbitrary accuracy, no matter how precise, one can find a time long enough that we cannot make predictions valid for that long a time. Now the point is that this length of time is not very large. It is not that the time is millions of years if the accuracy is one part in a billion. The time goes, in fact, only logarithmically with the error, and it turns out that in only a very, very tiny time we lose all our information. If the accuracy is taken to be one part in billions and billions and billions -- no matter how many billions we wish, provided we do stop somewhere -- then we can find a time less than the time it took to state the accuracy -- after which we can no longer predict what is going to happen! It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical "deterministic" physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a "completely mechanistic" universe. For already in classical mechanics there was indeterminability from a practical point of view.
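A toy numerical version of Feynman's point, using the chaotic logistic map rather than his water-over-the-dam example (the map and the numbers are my choices, purely for illustration): improving the initial accuracy by a factor of a thousand buys you only a constant number of extra steps of predictability.

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started a
# distance eps apart. Count the steps until they disagree badly; the
# count grows only logarithmically as eps shrinks.

def divergence_time(eps, threshold=0.5):
    x, y, steps = 0.3, 0.3 + eps, 0
    while abs(x - y) < threshold:
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        steps += 1
    return steps

for exponent in (3, 6, 9, 12):  # accuracy of one part in 10^exponent
    print(f"eps = 1e-{exponent}: prediction lost after "
          f"{divergence_time(10 ** -exponent)} steps")
```

Each extra factor of a thousand in accuracy adds only about ten steps before the two trajectories part ways.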

>>How about "Free will is Quantum"?

Free will isn't quantum. Will isn't quantum. Illusions aren't an expression of will. Free will was invented out of whole cloth about 800 years ago.

...I need a reason to believe that we're dealing with a wavefunction (or, better yet, a density matrix), and not just a probability distribution.

If the brain is found experimentally to use/exhibit fundamentally "quantum" processes, it will likely (IMHO) be through evidence of non-locality. Of course, as Blake alluded to with his Feynman quote, non-locality doesn't necessarily imply quantum mechanical processes, but it would be a good start. I believe there are a few people exploring this issue with moderate success (e.g. Thaheld), though obviously these types of experiments need independent reproducibility to be taken more seriously.

At this point, I believe the whole quantum brain question is still open to speculation, and should continue to be explored with an open mind. For instance, it is not productive to simply throw out Tegmark quotes about the so-called impossibility of quantum coherence in brain tissue. He is a good physicist, but his calculations leading to this conclusion are shoddy at best.

After all, we now know that plants take advantage of it, so why wouldn't we?

http://www.lbl.gov/Science-Articles/Archive/PBD-quantum-secrets.html

"Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems"

Nature 446, 782-786 (12 April 2007)

Disclaimer: I don't know any quantum physics, so I can't comment on that part of the article.
What I can say, however, is that it's highly doubtful the author could get his paper published in any respectable vision science outlet (e.g., The Journal of Vision).
There is a whole literature on multistable perception, with probably dozens of models that can produce phase distributions similar to those he tries to model in the article. The author makes no reference to those, nor does he try to compare his model with anything simpler, beyond comparing fits.
Phase distributions in any case don't even begin to characterize bistable processes, which have much richer temporal characteristics that he does not address (for example, transition probabilities that change over time; Mamassian and Goutcher '05).
Right now his proposal:
a) Explains less than current models based on our best understanding of neural and perceptual phenomena
b) Does not explain anything that those models haven't dealt with yet, like the 'pattern completion' discovered by Maloney et al. (PNAS, '05)
c) Is rather hard to instantiate - but maybe that's just my poor understanding of quantum physics. How is "observation" supposed to take place, neurally speaking?
d) Does not address biological function. What is the biological point of such a mechanism?

Multistable perception has been observed in pigeons (Vetter et al., Vision Research, 2000) - do pigeons have quantum consciousness as well? I wonder what von Neumann would make of that.

Simon

Penrose's argument was a little more than that. He argues that humans are capable of perceiving truths that are noncomputable. (He justifies this with his experience doing mathematics.) Then, he says it's been proven that a system based on classical physics is incapable of doing anything noncomputable. Therefore, he claims, some other physics must be involved.

Oddly, he also mentions a proof that quantum computers are limited to the computable as well, so it's not clear why he gets onto this quantum kick... he justifies it by saying we don't have a working quantum gravity theory, so there's unknown physics there.

But the core of his argument has nothing to do with quantum physics...really, it's just that since no known physics can account for the noncomputable, there must be unknown physics involved in operating the human mind, and quantum gravity is the main area of unknown physics that he knows of.

The weakest part of his argument is probably the idea that humans can do the noncomputable. Perhaps we have some kind of program running that gives us what insight we have, and we just aren't aware of the truths we can't see. I may be forgetting part of his argument there, though, it's been a while.

"The argument always seems to me to boil down to "We don't understand consciousness, and we don't understand quantum measurement, therefore they must be related." And that just doesn't make any sense at all."

This in turn can be neatly encapsulated by another lately-faddish discipline, evo-psych! It's a common quirk of human reasoning that when we're confronted by two significant anomalies at once, we look for a common cause. That's why so many Fortean-style "theories" depend on pairing or grouping various "inexplicable incidents", and it's why quantum magic gets drafted for so many other crank theories.

By David Harmon (not verified) on 27 Oct 2007

He argues that humans are capable of perceiving truths that are noncomputable... then, he says it's been proven that a system based on classical physics is incapable of doing anything noncomputable.

And the obvious fallacy in that particular part of Penrose's argument is that our brains don't compute the noncomputable. When we say we "perceive" a truth, in a sense what we really mean is that we have computed it to an approximation-- a fuzzily bounded approximation, at that. Well, hell, a computer could do that.

You can't take results like Turing's halting-problem analysis or Gödel's incompleteness theorem-- which are meant to judge things held to a standard of absolute proof or absolute correctness-- and compare them to the output of the human mind. The human mind is assumed to be fallible and isn't ever expected to operate anywhere near that standard of absolute correctness-- the fallibility of the human mind is the entire reason we created things like formal proof systems and computers in the first place! The only reason Roger Penrose perceives he is able to do things a hypothetical AI can't is that Roger Penrose is surreptitiously holding himself to a lower standard for getting the job done than he holds the AI to.

Alright: I'm a physicist (I've worked in experimental HEP and spent some time in theoretical cond-mat), something of a mathematician, and (these days) a molecular biologist with enough friends doing vision work that I have some idea of what's going on.

1. Collapse of the wave function is something that happens in a theorist's notebook, not in reality. It's a convenient approximation to short circuit some things, but is completely unnecessary to an interpretation of quantum mechanics. See Griffiths, 'Consistent Quantum Theory.'

2. As Simon pointed out, we actually know rather a lot about the behavior of human perception of this kind of optical illusion. The naivety of my fellow physicists about science that isn't particle physics continues to astound me.

3. Heidegger, Husserl, et al. nicely exorcised the "consciousness as disconnected floating sphere" concept in philosophy. Sokolowski wrote a very clear exposition of the results in 'Introduction to Phenomenology.'

So the paper uses an obsolete interpretation of quantum mechanics to do bad science in an active and well-studied field in order to solve an obsolete philosophical problem. Bravo!

One interesting thing about this paper is that the hypothesis is really only half quantum. The quantum two-state system just provides an oscillating probability of favoring one perception, one which goes as the square of cos(ωt). Three of the four parameters fed into the Monte Carlo simulation actually pertain to how often this two-state system is "observed" and "collapsed". These parameters describe a completely classical pulse train — click, click, click, pause, click click click click, etc.

What's more, the classical part is the higher-level one, the one which intrudes on the low-level processing. Crudely speaking, it's like saying there's a quantum two-state system back in the visual cortex, but all the processing up in the prefrontal lobes is purely classical.

I looked into the literature on binocular rivalry (Google Scholar is your friend!), and lo, the "dominance duration", or amount of time a particular interpretation of the ambiguous visual information is favored, follows a gamma distribution. OK, now, the gamma distribution has two parameters: it's the sum of k random variables with exponential distributions, each of mean μ. So, if a set of data points can be fit by a gamma distribution, then that dataset can be summarized by two numbers.
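Here's a quick numerical check of that summary (my own sketch; the values of k and μ are arbitrary): generate gamma-distributed durations as sums of k exponentials, and recover both parameters from the mean and variance alone.

```python
import random
import statistics

# A gamma variate with shape K is the sum of K exponential waiting times
# of mean MU, so mean = K*MU and variance = K*MU**2. Any dataset that a
# gamma distribution fits well is therefore summarized by two numbers.

K, MU = 4, 0.5  # illustrative values only

def gamma_sample():
    return sum(random.expovariate(1.0 / MU) for _ in range(K))

data = [gamma_sample() for _ in range(100000)]

mean = statistics.fmean(data)
var = statistics.variance(data)
print(f"estimated k  = {mean ** 2 / var:.2f}")  # ~ 4
print(f"estimated mu = {var / mean:.2f}")       # ~ 0.5
```

Since the mean and variance pin down (k, μ) completely, a good gamma fit really is just two numbers.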

Manousakis fits a two-parameter curve with a four-parameter hypothesis. The numbers he gets out are, at face value, meaningless (so it's no surprise that they differ so radically between the different curves he fits). Two combinations of his parameters, one dimensionful and the other dimensionless, might have actual significance.

Question: AFAICT, this paper has only appeared on the arXiv (as arXiv:0709.4516). Should this post therefore be tagged with the BPR3 logo?

"We don't understand consciousness, and we don't understand quantum measurement, therefore they must be related."

Can you provide a reference for this quote? Yes, I know you didn't mean it as a direct quote, you meant it to express an attitude, but the thing is, this EXACT quote turns up in a great many of these posts that question the quantum-mind idea, so it must have come from SOMEWHERE, and it makes me wonder how many of the other suppositions are from that original source.

I'm not being critical, just curious-- the way science used to be. I find it curious how this phrase keeps recurring without reference to the place where the quantum-mind research actually says any such thing, and that in itself is a rhetorical device, is it not? I've been out of science for a long, long time, but back in my labcoat days we would debunk other research directly, by pointing out experimental flaws, questioning the selection process, tallying the experimental errors, proposing testable counter-hypotheses. Alas, this is not the case any more; today things are such and such because they simply must be, or cannot be because they cannot, and that leads me to think that anyone interested in my fondly remembered old-skule scientific method should probably go into theology instead.

I don't mind the questioning, even the idle armchair questioning, and if classical rules can give the same results, I'm keenly interested in a sober and detailed proof of that-- not "I say so" proof, but proof in the old classical sense: a set of logically coherent statements, proceeding from axiomatic information, leading to that old familiar symbol with three dots. Some of the older readers will remember that sort of proof; the younger ones probably already think I'm a quantum nut, or some sort of new-age zealot, and that's fine too.

Provided they can back it up with experimental results ;)

I'm very late to the discussion here, but as someone who models binocular rivalry I wanted to add that the Manousakis model indeed reproduces certain experimental results. But all of these are also captured by the more commonly accepted "mutual inhibition" models, which are simple dynamical-systems models combining noise with deterministic processes. Mutual inhibition models are essentially flip-flop circuits put into the context of brain mechanisms (inhibition, noise, and adaptation); a bare-bones sketch follows.
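For readers who haven't seen one, here's a bare-bones mutual-inhibition sketch in Python. The parameter values are my own illustrative choices, not taken from any particular published model: two populations inhibit each other, slow adaptation wears down whichever one is dominant, and noise makes the alternation times irregular.

```python
import math
import random

# Two populations with reciprocal inhibition (BETA), slow adaptation
# (G, TAU_A), and additive noise (SIGMA). Alternations arise from
# adaptation; noise randomizes their timing.

DT, TAU, TAU_A = 1.0, 10.0, 1000.0    # time step and time constants, ms
I_DRIVE, BETA, G, SIGMA = 1.0, 2.0, 1.5, 0.02

def F(x):
    """Threshold-linear firing-rate function."""
    return max(x, 0.0)

def simulate(t_max=600000.0):
    r, a = [0.5, 0.4], [0.0, 0.0]     # firing rates and adaptation levels
    dominant, last_switch, durations = 0, 0.0, []
    t = 0.0
    while t < t_max:
        targets = [F(I_DRIVE - BETA * r[1 - j] - G * a[j]) for j in (0, 1)]
        for j in (0, 1):
            r[j] += (DT / TAU) * (targets[j] - r[j]) \
                    + SIGMA * math.sqrt(DT) * random.gauss(0, 1)
            a[j] += (DT / TAU_A) * (r[j] - a[j])
        # Count a switch only on a clear reversal, to ignore noise flicker.
        if r[1 - dominant] > r[dominant] + 0.05:
            durations.append((t - last_switch) / 1000.0)  # in seconds
            dominant, last_switch = 1 - dominant, t
        t += DT
    return durations

d = simulate()
print(f"{len(d)} alternations, mean dominance {sum(d) / len(d):.2f} s")
```

Nothing quantum anywhere in it.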