On Consciousness

David Barash has a short but interesting post about consciousness. Responding to someone who asked him about the most difficult unsolved problem in science, Barash writes:

I answered without hesitation: How the brain generates awareness, thought, perceptions, emotions, and so forth, what philosophers call "the hard problem of consciousness."

It's a hard one indeed, so hard that despite an immense amount of research attention devoted to neurobiology, and despite great advances in our knowledge, I don't believe we are significantly closer to bridging the gap between that which is physical, anatomical and electro-neurochemical, and what is subjectively experienced by all of us ... or at least by me. (I dunno about you!)

And skipping ahead:

But the hard problem of consciousness is so hard that I can't even imagine what kind of empirical findings would satisfactorily solve it. In fact, I don't even know what kind of discovery would get us to first base, not to mention a home run. Let's say that a particular cerebral nucleus was found, existing only in conscious creatures. Would that solve it? Or maybe a specific molecule, synthesized only in the heat of subjective mental functioning, increasing in quantity in proportion as sensations are increasingly vivid, disappearing with unconsciousness, and present in diminished quantity from human to hippo to herring to hemlock tree. Or maybe a kind of reverberating electrical circuit. I'd be utterly fascinated by any of these findings, or any of an immense number of easily imagined alternatives. But satisfied? Not one bit.

I agree, and would go a bit further. I can't imagine what an explanation for consciousness would even look like. Right now, from the comfort of your armchair, make something up which, if it were true, would make you feel like you really understand how consciousness comes about. I don't even know what to make up!

Barash continues:

I write this as an utter and absolute, dyed-in-the-wool, scientifically oriented, hard-headed, empirically insistent, atheistically committed materialist, altogether certain that matter and energy rule the world, not mystical abracadabra. But I still can't get any purchase on this “hard problem,” the very label being a notable understatement.

I am also a dyed-in-the-wool materialist, and I think we have good reasons for confidence that ultimately consciousness is a purely natural phenomenon. We know, for example, that virtually everything we regard as special to human beings can be obliterated just by damaging the correct portion of the brain, including our emotions and our sense of morality. We know that drug therapies have been very effective in treating mental illness. It certainly appears, based on observing animal behavior, that consciousness is something that comes in degrees. Humans seem to have more of it than apes, who have more of it than dogs and cats. At any rate, the circumstantial evidence is strong that the brain is a purely physical organ.

But I might be moved to reconsider if someone proposed a nonmaterial theory of mind that does any better. For example, I don't see how it's any help at all to posit some sort of ineffable mindstuff that interacts with the physical brain to produce consciousness. Does substance dualism explain anything at all about the brain? Let's stipulate that there really is a distinction between physical stuff and mental stuff. Then tell me more about this mental stuff. If it's not material, then what is it? How does it interact with the physical brain to produce consciousness? How does it get into our heads in the first place? How does the mental stuff in your head know how to be you, while the mental stuff in my head knows how to be me?

I may not understand how physical processes in the brain can produce consciousness, but I don't understand how non-physical processes can do it either.

I know people who go in for vague, New Agey type explanations. They say things like, “Maybe consciousness is just an irreducible part of the universe,” which sounds like the purest gibberish to me. I find it hard even to say those words without putting on my best stoner voice, appending the word “man” to the end of the sentence, and then staring furiously at my hand for ten minutes.

I feel much the same way about free will. If it's an illusion, it's a mighty convincing one. But whether it's real or just an illusion, I don't understand how physical processes in the brain can create it. I'm not letting that bother me though, since no one arguing from a non-physical perspective has, in even the slightest degree, had any more luck.

Everything we know about the brain suggests that it's a purely physical organ. It also seems perfectly obvious that we are conscious and have free will. I conclude that suitably organized matter can create consciousness and free will. That I don't understand precisely how that happens just tells me that our understanding of matter is imperfect.

Don't like my explanation? Think I'm waving my hands? Fine. Give me a better one. I'm all ears.


I fear my commentary on this issue is probably too simplistic, but along similar lines to PZ's argument that all supernatural arguments are incoherent and therefore not even worth consideration, I find this issue similarly worthy of being dismissed without any consideration. As soon as Dualism is found to be wanting, a null hypothesis needs to be recognized: that consciousness is nothing more than an emergent property of brain matter, in the same way that the null hypothesis for digestion is that it is an emergent property of the stomach.

The question of the null hypothesis demands basic assumptions that transcend these arguments of hypothetical possibilities, which become so cluttered with objections to Occam's Razor. The absolute minimal assumption should be that the seemingly blind processes are all there is. Any argument for invisible agency is arguing against there being nothing but blind processes, not so much because we insist that is necessarily reality, but because it is the simplest hypothetical reality that matches our data.

And in that atmosphere, let us please stop pretending that invisible ghosts in our heads are a valid argument that stands by itself. It is incoherent and does not in any way challenge the null hypothesis. If anything, it mocks it by offering such an absurd non sequitur to the problem.

By AbnormalWrench (not verified) on 01 Nov 2011 #permalink

Except that digestion is not an emergent property of the stomach. There is no problem at all in tracing cause and effect in the stomach. If the stomach is emergent then everything is emergent and the word loses any useful meaning.

The problem is in explaining experience. You can trace the cause and effect of data processing in the brain, but there is nothing to trace to the experience of that data processing. If we weren't conscious, how would we ever understand that the phenomenon existed at all?

I'm always frustrated by people who leap to "consciousness" as the "hard problem" and bypass all of the other hard problems in nature. It simply plays into the crypto-dualists' hands. I'm fairly confident that if we had a relatively complete explanation of (e.g.) how a pack of wolves engages in cooperative hunting, including all of the communications, planning and adaptation involved, we'd find that the steps from there to primate consciousness were relatively straightforward.

#2, there is a problem tracing cause and effect from brain matter to consciousness?

The non sequitur to me is demanding that blind processes can't explain it. It is nothing more than a god-of-the-gaps argument.

Dualism has a fascinating attribute: it tries to claim blind processes while also entertaining what I like to call quark-twiddling, god-manipulation at the minimal level that can't be traced by our current observations. This is a goal post that conveniently moves as science advances, just like all god-of-the-gaps arguments.

By AbnormalWrench (not verified) on 01 Nov 2011 #permalink

I think, basically, there's too much to be said on this subject, but if there's one thing that evolution and science have shown us, it is that complexity has no problem arising from simpler states. I don't know why we keep saying we don't know how the brain can create consciousness when we know perfectly well that it's just a word we humans have for trying to sum up a bunch of internal and external influences that create a certain phenomenon in us humans (as far as we know) that we can't all seem to agree on. Given what we *do* know about the brain, about neural networks and the workings of its chemical and electromagnetic nature, it's acceptable to say that consciousness doesn't have to be more than a feedback loop of neural activity, perhaps an evolutionary mistake, but basically an echo-chamber of sensory inputs and outputs.

Even the very concept of "consciousness" and what it is remains a matter of much debate, even when a purely materialistic approach is taken. Now, I understand your take to be that despite all of this, and given that the problem is still in its scientific infancy (which I'd disagree with given the latest years of research, though I don't mean to attack any straw men here :) ), religious or alternative dualistic approaches don't fare any better.

And that I fully agree with. And it doesn't apply just to the problem of consciousness either, but to almost all walks of life, any problem posited that the religious claim to have better answers to, even theological ones (hehe).

The problem isn't that "the problem of consciousness" is hard, it is that people think too highly of this thing we try to explain by calling it "consciousness." It's a subtle difference worth noting.

I must be missing something. I don't see that "consciousness" is even well-defined, let alone a mystery.

By "consciousness" people seem to mean many things, including (but not limited to) having a memory, making decisions, awareness of the outside world, capable of reacting to stimuli, awareness of ourselves, etc.

But we know how computers can store information, and we have some limited understanding of how brains do, too. So memory doesn't seem that mysterious or beyond understanding.

We know that programming languages have "if then" statements, and even a stupid thermostat can make decisions, so that can't be difficult.
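To put that in code: the entire "decision-making" apparatus of a thermostat is a couple of conditionals. A minimal sketch (the setpoint and dead band are invented numbers):

    # A thermostat "decides" with nothing more than a conditional.
    def thermostat(temperature_c, setpoint_c=20.0, band_c=0.5):
        """Return a heater command for the current temperature reading."""
        if temperature_c < setpoint_c - band_c:
            return "heat on"
        if temperature_c > setpoint_c + band_c:
            return "heat off"
        return "hold"  # inside the dead band: leave the heater as it is

    print(thermostat(18.2))  # -> heat on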

We know a lot about how the human visual system works, so awareness of the outside world isn't so far away from being explained, either (see Crick, The Astonishing Hypothesis).

Even bacteria can react to stimuli, and we know a lot about how nerve cells work, so that can't be the really mysterious part.

Self-awareness can be explained as our mental models of the outside world becoming so detailed that they include ourselves, and we know that chimpanzees and dolphins have it, so that doesn't seem out of reach, either.

Consciousness will be explained, but in the end, the explanation will just be elaborations of these ideas: multiple, interacting systems capable of sensing and computation. And what more would you want, precisely?

By Jeffrey Shallit (not verified) on 01 Nov 2011 #permalink

"certain that matter and energy rule the world"
Yes, but that world does not actually exist as directly real, as has even been empirically proven by quantum mechanics. You will never find phenomenal consciousness existing inside a world that does not itself exist. If you think this equates to abracadabra, you will never understand consciousness (or fundamental physics). The problem is hard because you refuse what you need for understanding it.

So what is wrong with the analogy that consciousness is the program that runs on the brain hardware? It only seems weird because the program is somewhat self-modifying to an extent much greater than any software that's run on a computer.

By dexitroboper (not verified) on 01 Nov 2011 #permalink

Let's say that a particular cerebral nucleus was found, existing only in conscious creatures. Would that solve it? Or maybe a specific molecule, synthesized only in the heat of subjective mental functioning, increasing in quantity in proportion as sensations are increasingly vivid, disappearing with unconsciousness, and present in diminished quantity from human to hippo to herring to hemlock tree. Or maybe a kind of reverberating electrical circuit. I'd be utterly fascinated by any of these findings, or any of an immense number of easily imagined alternatives. But satisfied? Not one bit.

Huh. Personally, I'd be totally satisfied with an explanation like that. I mean, it'd have to be embedded in some fairly complex theory, because consciousness is complex. But if neurologists managed to map out the physical correlates of consciousness in the way Barash suggests here, I'd declare the Problem of Consciousness solved.

Now, I might still find consciousness emotionally mystifying. I might still wonder at the fact that this nucleus/molecule/circuit produces conscious experience. But so what? That's not science's problem. I can look up at a mountain and go "Jeez, that's amazing, how can that thing be so big?", but that doesn't mean that science hasn't solved the Hard Problem of Mountains.

By Anton Mates (not verified) on 01 Nov 2011 #permalink

Consciousness doesn't seem like that big a problem. It's pretty clearly common to most birds and mammals at least. It's my guess (and I think that Daniel Dennett was here first) that the sliver of consciousness we experience is our formation of short-term memory, which would explain the lag between response and cognizance.

It's difficult to discuss because we suppose that we are actively, willfully deciding everything that goes on in our heads, which is obviously nonsense: we don't actually have to plan where to plant our feet or remind ourselves to take another breath, or sort through the parts of speech when we compose a sentence.

I've had entire inventions and nearly complete limericks pop into my mind. They were my own work but not the product of my attentive labor. I'll contend that they issued from my consciousness since I wasn't sleeping when they revealed themselves to me. Here's one instance:

That polymath Gauss had the nerve
To sum us all up in a curve
Statistics made clear
And geometry, queer
And that's just a part of his oeuvre.

(Yeah, the final rhyme is cracked, but a limerick is supposed to be at least a little bit funny.)

They say things like, “Maybe consciousness is just an irreducible part of the universe,” which sounds like the purest gibberish to me. I find it hard even to say those words without putting on my best stoner voice, appending the word “man” to the end of the sentence, and then staring furiously at my hand for ten minutes.

I'm a pretty hardcore, committed stoner, and that sounds like purest gibberish to me too.

Everything we know about the brain suggests that it's a purely physical organ. It also seems perfectly obvious that we are conscious and have free will.

Just because something "seems perfectly obvious" doesn't mean it's actually true.

I, too, find the hard problem of consciousness really hard, and I'm baffled at the confidence that people seem to display in our current explanatory capacity. Though to deny that it's brain activity because we don't understand it would be like denying that life is somehow all cells without understanding cell chemistry. That we can't explain it means only that we don't know; if people want to think there's something other than brain activity going on, then they need to show it. Otherwise it's just labelling our ignorance "dualism".

You're going to have to explain what consciousness is first, though, Kel.

It would be like asking how to trap a Heffalump when you don't know what one looks like. Or trying to understand chemistry without knowing what chemicals or atoms were.

It's one hope for AI: if we can get a program complex enough to make an artificial consciousness, then we'll know more about what consciousness is.

Wow, others: First, explaining what anything "is" requires some initial understanding of something to interpret "with", or in terms of. Descriptions can't get off the ground in a vacuum. By consciousness, most of us mean something that literally *can't be defined* in terms of any doing, or ability to do X, etc., but rather the subjective "feel" of nausea, color sensations, etc. that we can imagine absent despite behaving the same as before. (Whatever you think of the zombie argument, it is an intuition pump with which *I* have no trouble imagining the distinction - to those who don't: I am not going to castrate my own appreciation for the sake of the lowest common denominator.)

BTW the zombie argument is contaminated by the alternative twist: what if the universe could be physically the same but we weren't subjectively conscious? But that is different from the cruder question of functionally "behaving the same", because something about the universe, even something we can define instrumentally, could be that which allows the relative nature of the brain activity to be subjectively as we experience it.

Nor would I, falling into the conflation of "seeming" talk that Dennett sets, say that it "seems" we have a special, subjective content that is separable (i.e., due to its qualitative nature) from any conceptual scheme involving behavior and AI. No, it doesn't "seem" that we are conscious. That expression is supposed to refer to the way represented objects can be misrepresented as given in conscious experience: the square "seems" a wavy irregular shape because it is "seen through" a wavy glass window. Well, that is from the retinal image being irregular, which creates the percept which is my actual direct epistemic object. (You are savvy and non-childish enough not to be a naive realist, I hope?) Yes, I've read Denial Dennett's arguments and found them wanting.

Conscious experience is *the* given, that we infer other things from, Witlesstein et al be damned (and they weren't even trying to make a good-faith effort to delve, just out to push an ideological necessity like a free-market Austrian paragon.)

Yet conscious experience does not have to be a separate "something", rather it can be the relative way that certain processes are, "to" the entity having the process. IOW, get rid of the context-free notion that things are "given" or "shown" us as-is, and we can understand that contradictory "nature" need not rule out "numerical identity" of the same process. The character of things is relative, we should already appreciate that from QM and the whole idea that how you measure (broader: encounter) is part of what gives things their apparent properties. So the same thing can have contradictory relative natures: leading some to deny the alternative nature, others to insist a new "thing" must be added; both of them failing to appreciate the matter of relative perspective. Spinoza and others appreciated this long ago, it's sad it keeps on going in a rut.

I think a lot of research into sense perception may bear on the topic. I.e., research into phantom limb effects, OOB experiences (real research, not the kook stuff), and things like that.

From what I've read (and I'm just a layman), your perception of what is "you" is updated in real time using your standard senses. It isn't intrinsic and it doesn't rely purely on some self-referencing brain activity. Fool your eyes into thinking you have a limb when you don't, and you'll feel sensations from that limb. You will begin to think that limb is "you." Fool your brain into thinking it's 6 feet above your head, and you will begin interpreting auditory and visual information as if that's where "you" are.

These tricks can be done in real time, with normal adults, which demonstrates that sense-of-self is not some trait that gets fixed at some point. And it's not necessarily linked to one's body. Rather, consciousness may be more like your sense of balance - you may feel stable when you walk, but your inner ear is constantly working with your muscles to give you that false impression. The continuity of consciousness may be a similar false impression, a result of constant feedback between your senses and your brain (and muscles).

Interesting stuff (at least to me). All of this points to the possibility that consciousness may just be a data processing utility, analogous to the way the brain flips visual images, in that it is merely a tool used by the animal brain to better guide limb movement and other physical interactions with the environment.

I suspect that what needs to be explained is not consciousness itself, but our feeling that consciousness needs an explanation. When we understand that, we may see that there's no "hard" problem to solve.

It may help to stop thinking of consciousness (in the "hard" sense) as a phenomenon. Phenomena are things that exist or occur in the world, things that we can directly or indirectly observe. But we don't observe consciousness. It's consciousness that does (or is) the observing.

As long as we try to think of explaining consciousness in the same way as explaining phenomena, I think we will always be left with the sense that there is something deeply mysterious that needs explanation but seemingly can never have one.

@Jeffrey Shallit,

The "hard" problem is not to explain any of the specific phenomena you listed, but to explain more generally how we can have any subjective experience at all. Why doesn't my brain just get on with the job of processing data without generating a subjective self with subjective experiences? Why is there something unpleasant about my brain responding to a kick in the shin?

The question may be incoherent. But those who feel the need for an answer will take some convincing of that. (And I'd like to be more convinced than I am.)

By Richard Wein (not verified) on 02 Nov 2011 #permalink

Well, I'd argue that the "subjective self" is just a by-product of a model of the world that is sufficiently detailed that it includes ourselves.

By Jeffrey Shallit (not verified) on 02 Nov 2011 #permalink

The "hard problem" is to persuade people that some realities are not publicly accessible. Many science fans begin with the faith that everything real is scientifically explainable, and with the bias that only the scientifically explainable is real. This includes creationists. In a Praise the Lord episode, one of them said, "We believe that the Bible is true, and so is science." Creationists want to evict methodological naturalism from science, so that they can attribute observed effects to unobservable (spiritual) causes. And materialists want to evict the unobservable from reality, so that they can regard naturalistic science as the only valid way of knowing.

A lot of confusion about consciousness can be cleared up by choosing personal pronouns carefully. Science is our collective investigation of phenomena. My experience of myself experiencing is ineluctable, and is utterly private. There is absolutely no reason that I should not regard it as real, though it is not a phenomenon. We cannot access consciousness collectively. Our verbal reports of consciousness are phenomena, as are our measurements of brain activity. They are not my experience itself.

Some commenters have set up the straw man of dualism, and knocked him down. What I'm driving at is a matter of dialectics, not dualism. I know different things in ways that are not entirely reconcilable. It would be as foolish to deny my participation in the creation of reality as to lobotomize myself. Yet I insist, on pragmatic grounds, that creation has no place in scientific explanations.

Jason has complained previously about the notion of "ways of knowing." But he exhibits different ways of knowing in this post. His belief that there is such a thing as consciousness does not begin with (public) empirical evidence. It is grounded in private experience. And he is struggling, in essence, with the "problem" of how to make public what is intrinsically private.

In my university AI class, my classmates and I were required to write a program that used a neural network to recognize handwritten numerals with (IIRC) > 95% accuracy. These digits were all grayscale, and scaled down to an 8x8 pixel square, but nonetheless it performed in the same ballpark as a human reading the same digits.

I have no idea how, precisely, it actually worked, even though I wrote the thing. Sure, I knew there were 64 inputs (one for each pixel) and 10 outputs (one for each possible digit 0-9; the output with the highest signal was taken as the answer), and there were a couple of layers of virtual "neurons" with weighted inputs and outputs in between.

But the whole system was set up to teach itself. It was fed several thousand training examples, which were used to strengthen or weaken the neural connections (which were ultimately just multipliers) so that it classified the next examples more accurately than the ones before. But I have no idea what the strength of each neural connection was; it just worked. There wasn't any particular "identify loops" subsystem, or "number of straight lines" calculator, as you might have if the system was designed in a top-down fashion. These functional units may have actually emerged in the course of training, but I don't know.
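For concreteness, here is a stripped-down sketch of that kind of network in Python (not the actual class code; the hidden layer size is invented and the random weights are stand-ins for the trained ones):

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(64, 32))   # 64 pixel inputs -> hidden "neurons"
    W2 = rng.normal(size=(32, 10))   # hidden -> 10 digit outputs

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def classify(pixels):
        """pixels: 64 grayscale values in [0, 1] for one 8x8 digit image."""
        hidden = sigmoid(pixels @ W1)   # weighted sums, squashed
        output = sigmoid(hidden @ W2)
        return int(np.argmax(output))   # the output with the highest signal

    print(classify(rng.random(64)))  # untrained, so the answer is arbitrary

Even in this toy, the "knowledge" is nothing but the numbers in W1 and W2; inspecting them would tell you about as little as inspecting the real network's trained weights told me.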

Anyway. All this is simply to illustrate that consciousness probably will never be "explained" in the same way that I could never really explain how my neural net was able to classify digits in any meaningful way besides what I just wrote above. I probably could have written some unwieldy but ultimately tractable linear algebra statement for it, but A) that's not a very satisfactory explanation, and B) that's probably impossible for a system like consciousness.

I also hoped to illustrate that reproducing consciousness in a similar fashion to how I and my classmates (in our limited way) reproduced handwriting recognition would probably be the strongest possible evidence against mind-body dualism.

I think a big part of the "hard problem" is linguistic. You can see it in Barash's language:

"Let's say that a particular cerebral nucleus was found, existing only in conscious creatures. Would that solve it? Or maybe a specific molecule, synthesized only in the heat of subjective mental functioning, increasing in quantity in proportion as sensations are increasingly vivid, disappearing with unconsciousness, and present in diminished quantity from human to hippo to herring to hemlock tree."

I think a lot of people are thrown by the notion that we're looking for something called "consciousness" and that because the word "consciousness" is a noun, we must therefore be looking for a thing. Barash is looking for cerebral nuclei and proteins fergoshsakes!

But consciousness isn't a thing, it's a process. Specifically, a thermodynamic process. Thinking of it as some kind of engine that pumps out "awareness, thought, perceptions, emotions, and so forth" is the problem: consciousness is not the source of these elements but rather the process resulting from their interactions.

Just as in the case of thermodynamics, where the search for "caloric" ended with the realization that heat is really a process, the problem will be solved when we stop looking for something that just ain't there and pay more attention to the lower-level mechanisms that consciousness depends upon.

To flesh out my last post, consider the notion that a computer's operating system is a process rather than a thing. And I know the brain/computer analogy is overworked, I certainly don't think consciousness is a Church-Turing computation, I think it is something stranger and more analog. But the comparison of consciousness to an operating system is actually pretty enlightening. Here are the major jobs of an operating system:
-managing inputs
-managing outputs
-managing storage of data over both short and long terms
-organizing and prioritizing the computation of particular routines

Inputs are analogous to sensory data, outputs analogous to muscle impulses (both voluntary and involuntary), and retrieval and storage of data are obviously our long and short term memories. As for organizing and prioritizing the calculation of particular bits of code, this is exactly what our frontal cortex is for, enabling us to organize short memorized routines into long-term chains of activity that can go on for as little as a few minutes or as long as many years. (For a trivial example, writing checks, addressing envelopes, driving to the post office, and mailing the envelopes are each discrete, memorized activities that can be combined into a larger, more complex action that is easily recognized as "paying one's bills.")
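To make the "organizing and prioritizing" job concrete, here is a toy sketch of that scheduling idea (the routines and priority numbers are invented for illustration):

    import heapq

    # A priority queue of memorized routines, as in the bill-paying example.
    tasks = [(2, "write checks"), (3, "address envelopes"),
             (4, "drive to post office"), (5, "mail the envelopes"),
             (1, "take a breath")]
    heapq.heapify(tasks)

    while tasks:
        priority, routine = heapq.heappop(tasks)
        print("running:", routine)  # lowest number runs first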

This doesn't really address the problem of qualia, but I think that one can be fruitfully separated from the hard problem. Qualia are the result of arbitrarily dividing a continuous (abstract) space into discrete units, the way our brains divide the continuous space of possible sounds into discrete phonemes or the continuous space of wavelengths of light into discrete colors -- note that different cultures make these divisions in different ways. Red looks the way it does because it has to look different from blue; the qualia inversion problem suggests that there isn't necessarily anything else to a color other than looking different from other colors.

" The "hard" problem is not to explain any of the specific phenomena you listed, but to explain more generally how we can have any subjective experience at all. Why doesn't my brain just get on with the job of processing data without generating a subjective self with subjective experiences? Why is there something unpleasant about my brain responding to a kick in the shin?"

Yes exactly so. When I detect red light I experience the color red. That experience does not seem important to the utility of being able to react to red light.

For example I could create a computer program hooked to a camera that could process the input from the camera in arbitrarily complex ways. But there is nothing in the data structures or algorithms that imply experience and experience seems to have no use anyway. If experience has no effect then it seems to only make us helpless witnesses to events that we have no control over.

But if experience has no causal effect then how can we talk about it?

For example I could create a computer program hooked to a camera that could process the input from the camera in arbitrarily complex ways. But there is nothing in the data structures or algorithms that imply experience and experience seems to have no use anyway. If experience has no effect then it seems to only make us helpless witnesses to events that we have no control over.

But if experience has no causal effect then how can we talk about it?

Your example is actually its own answer. Experiencing the sensation of seeing the color "red" is actually a learned behavior. Let's use "black" instead, though, because most cultures are pretty consistent about what "red" means. On the other hand, many cultures would describe the daytime sky as "black" because they don't have a separate word for "blue."

So experience DOES have an obvious effect: it determines how our experiences divide into discrete elements. It's your experience with the use of color words like "blue" and "black" that let you differentiate the color of the daytime sky from the color of dark rocks. It's experience with learning the English language that allows you to differentiate between "v" and "w" sounds (Russians, famously, frequently cannot do so, hence all the wodka they drink).

Incidentally, infants below 6 months do not seem to divide the space of what they hear into discrete phonemes -- while English speakers would recognize a whole range of different sounds as a "w" a 4-month-old wouldn't be able to lump them into a category that way. Phoneme recognition -- and hue recognition -- are learned, cultural behaviors.

Very good points, Tom English. (BTW the position I outlined above is called "property dualism." It's been around almost as long as the basic modern debate.) One point: it isn't whether big C is a "phenomenon" or not. It is the "character" of our experience, and our ability (most of us, I would say: everyone not "playing dumb" due to being an ideologue and/or confuse-ee) to imagine it logically separable from "behavior" (not, as I said, the same issue as whether there is a "physical universe" - the real world as it is, not the conceptual models we think it ought to follow). Hence, it is a "hard problem" why, for example, there really are unpleasant experiences worth avoiding, and not just people running around acting like it. Can't you "get" that a certain "sting" or good feelings etc. are not clearly definable as something we can work into "information processing"? Neither should we allow denialism as in "feigning anesthesia" (and wrongly doubled-up seeming talk is just the trickier way to do that.)

Note that "causal effect" is not really the issue, it is "what character" and how widely accessible is that "redness" etc. With PD, we just say that the "redness" as experienced is something other than what you can "get to" by poking around in someone else's brain. And sure we have to learn names for colors etc, so what? Your brain would have to compare the inner signals anyway, to name the colors, so ordinary language philosophers are ironically anti-scientific! Wittgenstein's OLA is fallacious because he denies us the right to make parsimonious choices about whether it is *reasonable* to presume I can trust my private memories, given both my sense they are "true" and the continued correspondence to represented things. If we cannot be "sure", so what - nor can we be sure if the world really existed more than ten minutes ago with all apparent evidence of past (subsuming of course all arguments about conservation etc, which *assume* the contrary and aren't literally "findings" about the past!) - as Bertrand Russell noted.

I do appreciate the very good high-level discussion here, it takes me back to old phil-mind classes. Cheers!

Dan L. That is a nice way of putting it.

The problem of consciousness isn't a scientific problem, it is an artifact of the hyperactive agency detection device that humans have.

To detect anything, an entity must do pattern recognition and compare the pattern observed with the pattern in its pattern recognition device. This includes detecting agency. Humans use their own internal "agent" as the pattern against which they do pattern recognition to recognize agency in another entity.

Comparing a pattern with itself always registers robust identification (in the absence of severe damage). The fidelity of the self-identity identification need not be very high; the only important output is an "I am me" recognition of self-identity.

No human is the same entity over time. A human is always changing, as brain cells die, as new connections and new memories are formed, as the brain exhibits plasticity. A human is not "the same" entity it was a minute ago, a year ago, or a decade ago, but the hyperactive agency detection still returns the same "I am me" recognition of self-identity.

There is no need for the self-identity pattern recognition to have fidelity sufficient to notice differences in entity properties over time. Intellectually we know those differences are there; that we can't detect them shows a flaw in our self-identity pattern recognition. The feelings of continuity of consciousness are an illusion, albeit a persistent one.

@Neil Bates:

Note that "causal effect" is not really the issue, it is "what character" and how widely accessible is that "redness" etc. With PD, we just say that the "redness" as experienced is something other than what you can "get to" by poking around in someone else's brain. And sure we have to learn names for colors etc, so what?

I know I'm going to sound like a broken record, but I really do think this bit demonstrates the problem I'm trying to talk about: the "character" of an experience is not necessarily constant from person to person (my "redness" isn't necessarily the same as your "redness") and in fact there's some good evidence to suggest that subjective experience DOES actually vary widely person to person.

The Himba tribe of Africa have demonstrably different color perception than we do. I prefer using the example of phonemes because the differences in sound perception between different languages are more obvious than color perception -- western cultures have different sets of phonemes but essentially the same color maps so it's easier to get across the idea that to a Russian, "v" and "w" are not distinct sounds than it is to explain that to the Himba, black and blue are not distinct colors.

It's not about learning the words for the colors -- as you say, so what? -- it's about learning to recognize a color as its own thing rather than, say, a particular shade of some other color (one might describe green as a shade of yellow rather than its own color). Or about learning to distinguish a "w" sound from a "v" sound -- it's not just learning that the sounds are represented by different symbols. You actually have to learn that they are different sounds in the first place. Another good example might be the "blue/indigo/violet" distinction. I don't recognize any color as indigo, it's not part of my subjective color map. I can see a difference between blue and violet, but what others might call "indigo" I would simply recognize as different shades of blue and violet.

@daedalus2u:

Interesting thoughts. I hadn't really been thinking about the coherency as part of the problem of self-identification but it's a very useful way of looking at it.

@Neil Bates:

It kind of seems like you're responding to me on the basis of "we have to learn the color words -- so what?" I had a lengthy response with a link to show what I mean instead of explaining it, but it is held in moderation.

The short response is: I'm not talking about learning the color words. I'm talking about learning to recognize colors as discrete experiences. For example, I know the color word "indigo" but I have never seen a patch of color and identified it as "indigo." I recognize "indigo" as a color word but it is not part of my internal color map -- to me, indigo looks like any of various shades of blue and violet. Blue and violet, on the other hand, seem to me to be distinct colors even though I couldn't point at a particular shade as the dividing line between them. A continuous spectrum of possible colors is divided into a discrete collection of color concepts. The specific wavelengths at which the boundaries are drawn vary from culture to culture.

Here's the link at which you can get a clearer idea of what I mean: http://www.youtube.com/watch?v=4b71rT9fU-I
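If it helps, here is a toy sketch of what I mean by a color map (the nanometre cut-offs are rough invented figures, not measurements):

    import bisect

    def make_color_map(boundaries_nm, names):
        """Return a function mapping a wavelength onto a discrete color word."""
        def label(wavelength_nm):
            return names[bisect.bisect(boundaries_nm, wavelength_nm)]
        return label

    with_indigo = make_color_map([430, 450], ["violet", "indigo", "blue"])
    without_indigo = make_color_map([440], ["violet", "blue"])

    # The same light, two different subjective color maps:
    print(with_indigo(445), without_indigo(445))  # -> indigo blue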

Jason, please do not promote my last comment unless you think there's something good in there I didn't cover here.

I can't claim to understand consciousness, but I have a pleasant metaphor. Just as the feeling of heat is how we perceive infrared light, weight is how we perceive gravity, and touch is how we perceive pressure, so consciousness is how we perceive the electro-chemical mush that goes on in our brains.

Philosophical zombies, however, are absurd for the same reason that epiphenomenalism is absurd.

"But if experience has no causal effect then how can we talk about it?"
Exactly.

@Neil Bates & Dan L.,
It seems to me that the 'character' of experiences is amenable to physical explanations. Not that we understand every detail, but it's certainly suggestive. Consider the experience of the color purple: we experience it as being somehow 'like' blue and 'like' red in varying degrees, so that you can pass from red to blue through a spectrum of purples and on around the color wheel. But that isn't a property of light itself. Blue and red are on opposite sides of a spectrum that fades into the invisible wavelengths on both sides. We don't perceive that with vision, because our vision works by activating 3 different color receptors that are sensitive to bands of the spectrum around red, green and blue. 'Purple' is basically the experience of having the red and blue receptors simultaneously stimulated and you can move continuously around the color wheel by gradually turning up one receptor while you turn down another. The nature of our vision (among other things) as an experience is very much determined by the mechanism.
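A rough sketch of that mechanism (the receptor peaks and curve widths below are loose illustrative figures, not measured data):

    import math

    def response(wavelength_nm, peak_nm, width_nm=50.0):
        """Bell-shaped sensitivity of one receptor type to a pure wavelength."""
        return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

    def cones(wavelength_nm):
        """(long, medium, short)-wavelength receptor activations."""
        return (response(wavelength_nm, 565),
                response(wavelength_nm, 535),
                response(wavelength_nm, 445))

    # No single wavelength strongly drives the "red" and "blue" receptors
    # together, which is why purple takes a mixture of lights:
    print(cones(445))                                      # blue receptor dominates
    mix = tuple(a + b for a, b in zip(cones(445), cones(610)))
    print(mix)                                             # red AND blue active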

I suspect we will always have trouble reconciling our 'direct' experience with an explanation because the part of the brain that experiences sensory input is a bit different from the part that experiences abstract concepts. But that's not inherently a problem for physicalism, physicalism explains why we experience a problem in the first place.

Josh, the trouble is: our minds could have "represented" what we see as purple some other way, by switching the correspondence, such as seeing what looks "red" for the middle wavelengths and "green" for the longer ones ("make red look green and green look red"). Nor did the brain "need" to have an actual experience for us in order to have the behavioral control; that is the key issue.

By Neil Bates (not verified) on 02 Nov 2011 #permalink

Just to point out the linguistic problems in the current discussion: http://en.wikipedia.org/wiki/Color_blindness

Again we *know* how the physical brain affects the concept of consciousness. People who are pushing the "incomprehensible" envelope are, I suspect, driven by a faith in it being so rather than the reality of it.

@Josh: "I suspect we will always have trouble reconciling our 'direct' experience with an explanation because the part of the brain that experiences sensory input is a bit different from the part that experiences abstract concepts"

That's only a surface observation. There's too much simultaneous activity that passes our detection under normal circumstances, and even when we can link certain brain activity to certain regions, we can also detect that it's not an absolute border, and that other parts of the brain take over if the original part is too busy, inactive or otherwise damaged. The brain can be very plastic, both in terms of the functional areas as well as what they jointly do.

So, I'm not directly disagreeing with you there, just clarifying that it's not so black and white. However, I must disagree that this compartmentation entails trouble reconciling direct vs. abstract stimuli. Why? Is there any evidence that, just because we humans have two different names for types of stimuli, they are inherently different?

Dan L.@22
" Experiencing the sensation of seeing the color "red" is actually a learned behavior."

That is true but entirely misses the point. My computer program could recreate the algorithm going on in my head. But "color" would only be a number in a data structure and its relationship to an algorithm. If you were to look only at the data structure and algorithm, you might not be able to tell the program was about color at all. It is only the connection to a camera that allows us to see that it is about color. Where, then, does the experience come from?
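To make the point concrete, here is a minimal sketch of such a program (the pixel tuples and labels are invented for illustration):

    def dominant_channel(pixel):
        """pixel: an (r, g, b) tuple of 0-255 intensities off a camera."""
        r, g, b = pixel
        return max((r, "red"), (g, "green"), (b, "blue"))[1]

    frame = [(200, 30, 40), (10, 220, 15)]       # two camera "pixels"
    print([dominant_channel(p) for p in frame])  # -> ['red', 'green']
    # Nothing in this data structure is about color at all until we,
    # looking through the camera, map the numbers back onto it.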

daedalus2u @24
" The problem of consciousness isn't a scientific problem, it is an artifact of the hyperactive agency detection device that humans have. "

From an algorithmic perspective I agree with you. But you are again missing the point. Again, I could create a computer program that attempts to detect agency. But it would not need to make any claim about agency in order to function. You have again explained the objective behavior without explaining subjective experience.

We do not just detect light we experience it as color. That experience is actually very misleading and made it difficult to understand light. In a sense we reify the experience over the thing itself.

We do the same in other domains. When we experience mathematics we become mathematical Platonists. When we experience our choices we experience them as free will. When we experience moral choices we become moralists or moral Platonists who believe that moral laws exist out there somewhere.

Now I'm not saying that mathematical Platonism is true, we have free will or that there are moral laws out there. In fact I'm pretty sure none of these are true.

But we do have experiences, and that is unexpected from an examination of data structure and algorithm. In the end, qualia whisper lies to our souls, making Platonists of us all.

Please read, "The Self-Aware Universe" by, the theoretical nuclear physicist, Amit Goswami.

By Dr Willie Maartens (not verified) on 02 Nov 2011 #permalink

I think of consciousness quite simply as our subjective awareness of ourselves, our surroundings and the relationship between these two entities. I believe that consciousness evolved because it provided an adaptive, integrated model of reality from the individually evolved sensory inputs we are able to receive from outside the body and from the body itself. Such a model is adaptive since it speeds up our evaluation of confirmatory/contradictory evidence when making conscious decisions about the actions we need to take in response to the state of the world and/or our physical needs. It is also an essential tool in making adaptive moral judgments.

In a moving, talking picture, the fact that the dynamic visual image is synchronized with the sound results in consilience between the words heard and the lip movements produced by the actors. This results in an emergent subjective experience that seems to the viewer to be an acceptable model of our usual real-life subjective model of reality. My suggestion is that there is a similar relationship between our sensory inputs, which are, to all intents and purposes, also experienced in a synchronized fashion, and objective reality.

This emergent model of reality that we call "consciousness" also enables us to take "snapshots" of states of the world associated with emotionally charged experiences. These may then be stored in long-term memory and used, unconsciously, to pattern-match to real-time experiences and thus enable very fast, unconsciously mediated and adaptive actions to be generated when similar states of the world are encountered.

But how do we explain our subjective experience of being conscious in terms of its neurological basis? In other words, how do we explain the translation from a pattern of electro-chemical pulses in neural brain tissue to the subjective model of reality that we call our consciousness? I have described consciousness as an integrated model of reality based on our sensory inputs. Since our experience of each of these inputs (sight, for example) is understood by science in terms of specific neuronal activity and the physiology of the sensory organs, the "hard problem" seems to disappear when described in this way. Or does it?

By John Jacob Lyons (not verified) on 03 Nov 2011 #permalink

Anyone remember the trick of shining a white light on a white sheet, putting a coloured light on that sheet, then replacing the bulb with the earlier "white" light, and noticing that for a while the "white" sheet looks like the complementary colour of the coloured light that was shone on it?

Cameras do the same too: when you set the white balance.

Hi Jason:

I think a step toward progress on this problem is to interrogate the assumptions which underlie our idea of the material and the physical. Physical theories and other scientific models are formal, quantitative and, by design, stripped of first-person perspective/bias. Because of this we tend to extrapolate to the idea that the physical world is ontologically non-qualitative and non-experiential. So, of course it's a problem to see how to fit conscious experience into this world.

Since our direct interactions with the world are qualitative and experiential, perhaps the assumption that the material world is devoid of these properties is the stumbling block. Just because our models of the world (understandably) lack these properties doesn't mean the world truly does.

Thanks for the thoughtful post.

Not sure I believe in free will... experiments in neuroscience show that decisions are made before we are consciously aware of making them.
But then I am predetermined to not believe in free will ;)

I think we're going about this like our ancestors who looked for causes in the wrong category. Demons, devils, and other magical explanations were sought for disease, and we still (despite protests) have a supernatural hangover when it comes to consciousness: the very word in fact is meaningless; it has been stretched to cover a universe of notions unrelated to each other, from simply being "awake" to a fuzzy notion of cosmic unity. Bah humbug!

The consciousness we can talk about scientifically results from communication between individuals of our species: LANGUAGE. As infants we learn to be conscious: adults train infants and children to respond in knowable, predictable ways using language of several types, with verbal and behavioral responses that confirm that our transfer of instruction has been successful.

When an individual fails to respond to us, whether unconscious from an injury, a seizure, or what we may call mental illness, we become very anxious, fearful, and instinctively upset - we try to "make" the person return to consciousness (respond normally) using language first: "Are you okay?" "Calm down." "Drop to the ground, now!" We want to know that the 'conscious' person we expect to be there, and can to some extent control, is still there. To survive within a violent species such as ours, this is a vital tool.

We can also see this demonstrated in how we deliberately try to create a co-consciousness in dogs - using a combination of their and our languages to get them to respond as we wish them to: it works to some extent and we "see" that consciousness in their behavior! That is, their behavior can be directed and controlled. Think about it: is it easier to control another person using force or language? How would we control 300 million people using force? Language does it so easily.

Consciousness is not a thing or a place in the brain. It is a result of wiring or training the unfinished child brain to respond in expected ways. Language is how we do it. The success of this process is manifested in expected behavior. We can readily see a failure to create conscious behavior in the utterly misguided ideas prevalent in U.S. K-12 education. Essentially, schools have become a battleground over the archaic beliefs and ideologies that traditionally and narrowly control human behavior rather than being environments in which children learn to be effectively conscious - that is, to respond to and communicate with their environment as learners; to continue on their own the process of using language as a survival tool, which we simply no longer teach.

I would say that other species that rely on communication to direct behavior (can you think of any that don't?) are "conscious" within the style and limits of their communication.

@30:"Again we *know* how the physical brain affects the concept of consciousness. People who are pushing the "incomprehensible" envelope are, I suspect, driven by a faith in it being so rather than the reality of it."
Well ... First, we don't know much about how the "physical brain" affects the concept of C. It is very speculative; people have vague theories and models about what goes on in the brain, but oddly enough, no one has actually built a working model program to show it can be just like a person, nor proven those ideas reflect what that mess in there really does.

Re "incomprehensible": no, not driven by "faith" at all, but by constant, brass-tacks encounter, our experience of things of all sorts. It's just "qualitative", there is no "seeming to be" that I would accept as a neutral background or that is "believed about" by inference. Ironically, "faith" is about things like thinking the universe must basically be simple, that it follows parsimony, that it ought to "make sense" of the sort model builders like (hence the drive to evade true quantum uncertainty and the also incomprehensible "collapse of the wave function" by building flawed castles in the sky of many-worlds (unobservable!), buttressed by fallacious arguments about decoherence (check name-link blog or Google for "quantum measurement paradox.")

Our reply is, rather it is you who have a "faith" in what things "ought to be like" (ie, the procrustean program of "legislating reality" instead of discovering it) rather than accepting the epistemic ground for being of a curious nature that is none of our business to complain about, only to be candid about.

Bomoore, it is not a matter of looking for curious "causes" but rather of "characterization." You reference language, which is a behavioral and "processing" issue - the "hard problem" is why we really have feeling experience. If you can't "get" the difference between that and just acting out in *any conceivable way* - *none* of which is conceptually equivalent to the former - then we literally can't communicate our debate. You are lost behind a "pons asinorum" between us (sorry ....)

You might be tempted to say, I should "explain" or "define" what I mean by such distinctions, but that of course just gets to how we get definitions "off the ground" - there has to be something to define with. Are there two categories, the words/phrases that need defining, and the ones that are "obviously known to start with" that don't need to be, that the former are defined with? Hardly.

As for PLA-type complaints about learning private language, sorry, but our brains *have to do* just what Wittgenstein says we as persons can't: compare signals inside themselves, remember them, etc. How does he think brains work, anyway? The difference with neurology is the "relative aspect": the difference that makes the same process real nausea for me, not for you.

By Neil Bates (not verified) on 03 Nov 2011 #permalink

@37: We can also see this demonstrated in how we deliberately try to create a co-consciousness in dogs

I don't think your language focus is right, for three reasons. The first is that we don't create language in dogs, we co-opt it. Wild dogs, wolves etc. utilize all sorts of audio and other cues to communicate. So according to your theory, they should be conscious. Yet they are not.

The second reason is that many, many animals have something like language, albeit not like what humans do. We are unique in producing a wide range of complex calls. But we are not at all unique in our ability to understand a wide range of calls. Your average savannah-dwelling herbivore or omnivore doesn't just understand its own species' calls; it understands the warning calls of most of the other species it associates with, too. Its vocabulary of understood words is measured in the tens or hundreds, not the singles (admittedly, a lot of those words are going to have the same referent, e.g. a gazelle may know five or ten different species' call signs for "lion"). It appears that the ability to communicate (or at least comprehend others' communications) occurs in a much wider range of animals than consciousness.

The third reason is that many non-communicative humans appear to have consciousness, intent, will, etc... Autistic individuals are merely one example.

Neil Bates: WHAT? You sound cut off from the reality of animal existence. The problem (making this all more mysterious than it is) lies in not seeing ourselves as on a continuum with all life, which is governed by physics, not metaphysics. We are not that special! We just think we are.

I am a geologist - big picture, structure, coherent connections across the vast scale of time, evolution, change. Life is a behavior of matter. Everything we experience arises from the physical universe. There is no gooey, squishy cosmic supernatural stuff that makes us "human" unless you choose to see it that way.

@daedalus2u:

Sorry I missed your post before. I never saw the connection between agency detection and self-identification before. Great thoughts!

@ppnl:

That is true but entirely misses the point. My computer program could recreate the algorithm going on in my head. But "color" would only be a number in a data structure and its relationship to an algorithm. If you were to look only at the data structure and algorithm, you may not be able to tell the program was about color at all. It is only the connection to a camera that allows us to see that it is about color. Where, then, does the experience come from?

1. I don't believe there are any algorithms going on in your head. There are no registers, no central processor, no serial buses in your head. I think it is an analog process. So you might be able to emulate the same process on a computer -- it would take a supercomputer with an amazing amount of memory, and it would still take forever. But that's a model, not what's actually going on.
2. Given (1), you can't represent a color as a number, you have to represent it as a signal.

You'd have a point if the brain was a digital system, but I'm almost certain it's not.
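To be fair, here is a toy sketch (in Python, purely illustrative -- nobody's actual model) of the point I take ppnl to be making: inside a program, "color" is just a number in a data structure, and nothing in the code itself says what the number is about.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    channel_value: int  # 0-255; could be a wavelength bin, a sound level, anything

def react(p: Percept) -> str:
    # The logic only compares numbers; any "aboutness" would come from the
    # camera wired to channel_value, not from the data structure itself.
    return "approach" if p.channel_value > 128 else "ignore"

print(react(Percept(channel_value=200)))  # -> approach
```

My objection, again, is that a signal-based analog process wouldn't look anything like this in the first place.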

@josh:

I think we're very much on the same page, but stressing different parts of the process. I think you're right that the three-color-receptor system is the root cause of most of the properties of color vision. Essentially, the system creates an abstract signal space (representing signals from the retina, perhaps) where color data is represented as a continuous 2-dimensional surface.

I'm emphasizing the next step in which that continuous 2-dimensional surface gets chopped into discrete chunks. The reason for this emphasis is that the size and shape of the chunks are fairly arbitrary and mismatches in color maps lead to demonstrably different subjective impressions of the world. And I'm trying to argue this, ultimately, to suggest that the ability to experience certain phenomena requires prior experience with those phenomena. Another example might be wine tasting: you have to taste a lot of wines before you can start naming vintages, but it's completely within the realm of human possibility to do so. A veteran wine taster and a college undergrad sampling the same vintage have different subjective experiences of the wine.
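For concreteness, a minimal sketch of that chopping step, with invented category boundaries (illustrative only, not real linguistic data): the same continuous hue gets different discrete labels under different color maps.

```python
def categorize(hue_deg, boundaries):
    """Return the color word whose band contains this hue (0-360 degrees)."""
    for upper, word in boundaries:
        if hue_deg < upper:
            return word
    return boundaries[-1][1]

# Two hypothetical vocabularies chopping the same continuous hue axis
# into different discrete chunks (boundary values are made up).
vocab_a = [(60, "red"), (180, "green"), (300, "blue"), (360, "red")]
vocab_b = [(90, "warm"), (270, "cool"), (360, "warm")]  # a coarser map

for hue in (30, 120, 210):
    print(hue, categorize(hue, vocab_a), categorize(hue, vocab_b))
# Same physical stimulus, different discrete "experience vocabulary".
```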

@Neil Bates:

Josh, the trouble is: our minds could have "represented" what we see as purple some other way, by switching the correspondence, such as seeing what looks "red" for the middle wavelengths and "green" for the longer ones ("make red look green and green look red"). Nor did the brain "need" us to have an actual experience in order to have the behavioral control; that is the key issue.

For me, this is begging the question. I've been making a positive, empirically-informed case that experiences are indeed causal -- that the brain does "need" us to have an actual experience, not so much for the control part as for the learning-to-control part. Babies aren't born able to crawl; they need to wiggle around to learn to use their bodies first.

Also, qualia inversion doesn't help your case in the least. I think qualia inversion is actually even likely, but it will never be possible to confirm or deny that. But any form of qualia inversion would be structurally isomorphic (neglecting complications like color blindness and poor vision) to other qualia schemas.

Umm, maybe my last comment was too long? None of the modded comments get promoted; I don't understand how modding works on this blog.

Short versions:

daedalus2u, I had never considered agency detection and self identification/continuity of experience as related. Great thoughts!

ppnl, you would have a point if consciousness was a digital process, but it's almost certainly not. There are no algorithms being run inside your head. I suspect it is some kind of analog signal process; color is a signal, not a number.

Neil Bates, I think your post @29 begs the question. I've been making a positive, empirically-based case that experience is causal -- that it DOES matter to your mind whether you experience something. Not so much for the control itself (which we know can be autonomic) as for the degree of fine-grained learning to control. Babies aren't born able to crawl; they need to wiggle their bodies around for a while to figure out how to control them.

Here's a question that will help narrow down the problem of consciousness: Do you remember a time when you weren't conscious? Of course! That blank time between birth and the acquisition of language; "you" as a concept didn't exist. In fact, your identity was created by your parents and society. For most humans who have ever existed, the idea of individuality would be incomprehensible. This can all be traced back to human babies being born premature and utterly helpless, except for the magical ability (or not) to command giant all-powerful beings to feed, comfort and protect us. We NEED to control our environment, or we die.
This overwhelming need and the magical perception of how reality works persist: all-powerful parents become the gods, and we command them with all sorts of devices. The supernatural (by definition) is OUTSIDE physical reality; it's a figment of childhood helplessness.

I feel much the same way about free will. If it's an illusion, it's a mighty convincing one. But whether it's real or just an illusion, I don't understand how physical processes in the brain can create it... It also seems perfectly obvious that we are conscious and have free will.

Definitions are so important, and you didn't offer yours. It seems perfectly obvious to me that, to a naturalist, contracausal free will is a total non-starter.

By Reginald Selkirk (not verified) on 03 Nov 2011 #permalink

The illusion thing goes like this: You're standing in front of a door that is only as tall and wide as your body. A sign on the door says "Secrets of the Universe." You say - "Hey that's for me. Let me in." The door opens a crack; you get all excited - pack several suitcases, grab your favorite book of philosophy or religion, put on a hat filled with cherished illusions and try to get through the door. You can't. You push it, kick it, have a tantrum. The door vanishes.

@ppnl:

To flesh out my objection a little more, think about the process of learning color words. You aren't born with an algorithmic process in your head that parses certain frequencies of light into color words. You learn words for colors through subjective experiences with the words and colors. The experience part is part of the learning process; it is by no means unnecessary.

In other words, you have to construct your own color-recognition algorithm over the course of your primary language acquisition process -- you're not born with such a thing (as I've tried to show by demonstrating that color concepts are culturally dependent). You can only construct this "algorithm" through experience with colors, color words, and color concepts -- e.g. a parent playing the color game with a baby, "What color is the ball? The ball is red. Rrrreeeedddd." (The exaggerated phonemes in the mother/child language game are remarkably consistent from culture to culture.)

So your "digital computer" metaphor breaks down here because we're not starting with the algorithms and data structures we need, we must create them as we go along. The algorithm creation process requires what we might call awareness or conscious experience.

Dan L.:

" To flesh out my objection a little more, think about the process of learning color words. You aren't born with an algorithmic process in your head that parses certain frequencies of light into color words. You learn words for colors through subjective experiences with the words and colors. The experience part is part of the learning process, it is by no means unnecessary."

Again I feel that you are missing the point. For example, how do I make a computer "experience" color so that it may develop this algorithmic process that we do not have at birth?

We may not have the algorithmic process to parse certain frequencies of light into color words. But we do have the algorithmic process to develop that algorithmic process. And again that process need not make any claim to experience.

I have really really never understood people who claim that consciousness results from language. I can only assume that anyone who is able to look at a suffering animal and claim that it isn't experiencing pain has never had a pet. And they probably shouldn't.

Language increases the complexity of our experience of consciousness but it seems entirely irrelevant to the naked fact of experience.

Dang! I was hoping someone would finish the illusion story. The worst illusion comes next!

So there you are, with your baggage, and the "Secrets of the Universe" door has vanished. You assume you've passed through the door - Why not? You have a slew of degrees, you've written lots of papers, and everyone tells you how smart you are. So there you are, assuming that whatever you come up with HAS TO BE a secret of the universe. A cruel illusion.

Bomoore, "real feelings" of consciousness (remember, specifically distinguished from "thought processes" as logical structure) are the very essence of "animal life" and aren't about making *humans* special - ironically, the higher-level thought process like us arguing about it, is what makes us special! So you miss the point, also regarding "supernatural." I tried to explain that multiple-aspect perspective says that the same thing offers different character or properties depending on how it is encountered, defined as not being a *numerically* different "something else." Therefore it is not supernatural, it just means that what the natural is like depends on how it is grabbed onto. You don't have to agree with me, but not even really appreciating my viewpoint is like listening to teabaggers complain about communism.

Dan L, I'm not sure what your point is, but it seems tangential to mine and not contradicting the sort of points I've made.

Reginald: but your view of causality is spoiled by our finding the non-classical nature of the universe in its quantum aspect. Whether that affects the brain or not (and with electron wave functions in electrically active material being spread around, it might), it disables the clear conceptual objection you make in principle.

I just wanted to apologize to everyone whose comments got caught in moderation. I try to check at least once a day, but some days I simply forget to do so. Frankly, I still haven't figured out all the rules that get a comment sent to moderation. Some of my own comments have ended up in moderation.

@Bomoore - Your spiel on consciousness and language makes a whole heap of sense - marvelously put.

@Neil Bates (#37) : "Well ... First, we don't know much about how the "physical brain" affects the concept of C."

1. You're missing the point that "consciousness" isn't well defined. 2. We know *very* well that if I slowly eat out your brain with a sharp spoon, little by little you as a person, and the consciousness that goes along with it, wither and perish. That's called "knowing how the physical brain affects it."

"no one has actually bulit a working model program"

You mean like a computer program? Why does our inability to re-create the complexity of the brain on a computer have any bearing on whether consciousness is physical or not?

Neil: "[people who think understanding consciousness is too hard] are not driven by "faith" at all, but by constant, brass-tacks encounter -- our experience of things of all sorts."

Like what? What experiences are you referring to that make you so sure that consciousness can't be explained by simple, physically bound means? (Oh please, oh please, say something like 'tunnel-like light just before dying' ...)

Neil: "Ironically, "faith" is about things like thinking the universe must basically be simple, that it follows parsimony, that it ought to "make sense" of the sort model builders like (hence the drive to evade true quantum uncertainty and the also incomprehensible "collapse of the wave function" by building flawed castles in the sky of many-worlds (unobservable!), buttressed by fallacious arguments about decoherence (check name-link blog or Google for "quantum measurement paradox.")"

I can't speak for most of these straw men you've built, but anybody with a basic grasp of astrophysics, physics and chemistry understands very well that the universe is built from very simple parts, and that you can create complexity from simpler parts without the need to invoke mystery or call upon the supernatural.

Neil: "Our reply is, rather it is you who have a "faith" in what things "ought to be like" (ie, the procrustean program of "legislating reality" instead of discovering it)"

You mean, "there is no proof of the super-natural, therefore the 'problem' of consciousness ought to have a naturalistic answer"? That sort of thing? Is that my faith?

Neil: "rather than accepting the epistemic ground for being of a curious nature that is none of our business to complain about, only to be candid about."

Epistemic ground? Being of a curious nature? None of our business? Complain? Candid? Sorry, you're not making any sense; you're dipping your toe into the shallow end of the Chopra Ocean. Pardon me for not wanting to jump in.

If you take consciousness to be a feature of being, or existence, in the same way weight is a feature of mass, you pretty much resolve the hard problem. Our experience of being aware relies on this feature, according to Vedanta, but I doubt it will ever be detected by anything other than another aware being. Thus, materialism gets to stick around, and yet folks will have to acknowledge that there is an essential mystery to life that will never be resolved.

"The problem will be solved when we stop looking for something that just ain't there and pay more attention to the lower-level mechanisms that consciousness depends upon." - Dan.

I agree, Dan.

"@Bomoore - Your spiel on consciousness and language makes a whole heap of sense - marvelously put." Dan.

Thank-you Dan.

Please take a look at my comment at 37. It was back-inserted by the moderator well after the conversation had moved on, so it was easily missed. I think it might be a new angle on the 'hard problem', and I would like the idea to be subjected to discussion/criticism in this forum. Come on you guys; don't hold back!

By John Jacob Lyons (not verified) on 03 Nov 2011 #permalink

But in the meanwhile, I can't help appreciating Ambrose Bierce's reformulation: cogito cogito ergo cogito sum -- "I think I think, therefore I think I am," which Bierce noted might actually be as close to truth as philosophers, at least, have ever gotten.

I was going to say that the closest I can come is that I'm aware of my awareness.
An internal dialogue akin to language between individuals.

That's what I understand consciousness to be.
Yeah, well, whatever, I don't even think I understand how hard this question is. It is hard to imagine how hard it is. Like trying to picture infinity.

The perfect fit door, bo moore, that's very good.

By tushcloots (not verified) on 04 Nov 2011 #permalink

Geology is a great example of how archaic baggage (a static earth), once dumped overboard, can release a tidal wave of innovation in science. The view of earth as a dynamic system that is susceptible to distant events in the universe, as well as one that is sensitive to human activity at the smallest scale, is akin to the Copernican Revolution!

The awareness of earth as a dynamic system has provided the missing engine that drives evolution, and has focused the search for the origin of life on physical processes - and opened the search for the generation and evolution of life on other planets. It's a great time to be in science - and I don't think we quite understand what a revolutionary step this is!

Is it obvious that we have free will? I've never believed that I had it. Well, not since I first understood the phrase, anyway. I may sometimes be unable to articulate the reasons why I make a particular decision, but I never believe that there are no such reasons.

Which presumably means that you wouldn't recognise the inside of my mind even if you could see it, and I wouldn't recognise the inside of yours. Are we both conscious?

By Ian Kemmish (not verified) on 04 Nov 2011 #permalink

@ppnl:

Again I feel that you are missing the point.

I could accuse you of the same thing, but that wouldn't help either of us understand the other. I'm pretty sure I do understand what you're saying and I'm trying to point out a specific problem with it.

For example, how do I make a computer "experience" color so that it may develop this algorithmic process that we do not have at birth?

You CAN'T make a computer experience color. Computers are not the sorts of things that have experiences. But then, brains are not computers.

We may not have the algorithmic process to parse certain frequencies of light into color words. But we do have the algorithmic process to develop that algorithmic process. And again that process need not make any claim to experience.

I seriously doubt those processes are algorithmic. There is no floating point unit or registers or L1 cache in your brain. There is no reason to suspect your brain is running algorithms. There are a lot of reasons to suspect otherwise.

I have really really never understood people who claim that consciousness results from language. I can only assume that anyone who is able to look at a suffering animal and claim that it isn't experiencing pain has never had a pet. And they probably shouldn't.

Pretty sure I never said that.

Language increases the complexity of our experience of consciousness but it seems entirely irrelevant to the naked fact of experience.

This is exactly the assertion I am arguing against. The increase in complexity due to language is, I think, the reason that our internal, subjective worlds are so complex. I think it's the interplay of language with what you call the naked fact of experience (pretty good phrase, BTW) that is what many people think of as consciousness.

@bomoore:

That was Tom saying your spiel was good, although for the record I agree; I think you and I have pretty similar perspectives on this.

"You CAN'T make a computer experience color."

We can't make a human do so either.

We can make the human "experience" something that isn't there: the blind spot.

We can make light-sensitive elements send a signal to a data collection device, where knowledge of the colour sensitivity of each element can be transformed into a colour picture element. For humans, that would be "use the cones" (IIRC); for computers, "use a Bayer mask on a CCD".
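Roughly, in toy Python form (a gross simplification of real demosaicing, with made-up sensor readings):

```python
# Each photosite measures one channel through a Bayer mask (RGGB tile),
# and a full colour pixel is later reconstructed from neighbouring readings.

BAYER = [["R", "G"],
         ["G", "B"]]  # the 2x2 RGGB tile repeated across the sensor

def channel_at(row, col):
    """Which colour filter sits over this photosite."""
    return BAYER[row % 2][col % 2]

def demosaic_2x2(raw):
    """Rebuild one RGB pixel from a 2x2 raw tile (R, G, G, B readings)."""
    r = raw[0][0]
    g = (raw[0][1] + raw[1][0]) / 2  # average the two green photosites
    b = raw[1][1]
    return (r, g, b)

print(channel_at(0, 0), channel_at(1, 1))    # R B
print(demosaic_2x2([[180, 90], [110, 60]]))  # (180, 100.0, 60)
```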

@Wow:

Actually, I'm pretty sure that if you show a color swatch to someone you make that person experience color. But I'm not sure I understand what you're trying to get at or how it contradicts the point I was trying to make.

I was always fond of this: "If the human brain were simple enough for us to understand, we would be too simple to understand it."

We can approximate, with every measure available, and become incrementally closer to the truth. That's what I do every day. It's my job. Life is sweet. Science rules.

I agree with Barash that the pursuit of questions of consciousness has little utility beyond knowing, and this is of limited use to someone who performs purely clinical-minded research.

I also agree with Rosenhouse, clearly, in that we are not as far from making sense of some sort of overarching theory of mind, rooted in the biological function of the brain, as Barash seems to say we are.

When it comes down to whether or not the data collected is sound, that's where there is the greatest debate. Methods in neuroscience have extreme variance: they can be very precise, or they can be very weak estimations. While different combinations of these constructs allow for many different models of function to be built and verified, not every model wins. It's also important to keep in mind that neuroscience is a discipline still in its youth. There are some brilliant minds working at every level to define a construct that most accurately describes their slice of the pie. In time we will get there. Fresh minds join the ranks yearly.

Ultimately every one of these constructs will prove useful: in as many ways as there are to understand how the brain creates what we experience, from atom to cell to organ to body and mind, there are as many ways for the system to go wrong. Knowledge here is power.

By neuroscientist (not verified) on 04 Nov 2011 #permalink

@Neil Bates:

Dan L, I'm not sure what your point is, but it seems tangential to mine and not contradicting the sort of points I've made.

No, you said:

Nor did the brain "need" to have an actual experience for us, to have the behavioral control, that is the key issue.

That's not at all tangential to what I'm saying. I'm arguing the inverse of this proposition, that the brain does need to have an actual experience for the sorts of complex behavioral controls exhibited by human beings. Simply asserting that it's not necessary is begging the question.

That, and a fair amount of what I've been saying was prompted by your "we have to learn color words -- so what?" comment from earlier. So no, I don't think it's tangential to what you're arguing at all.

Some misconceptions floating around. First, in reply to Alex: you still greatly misunderstand my points, even after much repetition. I can accept that my literary indulgence contributed to my not always being clear; however, I have said several times that I am referring to the much-debated issue of understanding how the stark feel of our *everyday* experiences - not special ones like near-death - can be fit into the concept of "signals running around in the brain." The sort of "third-way" answer to that (neither simple monism nor dualism) is a relative or dual-aspect conception. That does not mean "supernatural"; it means that the same natural process is relative in character or traits to how it is accessed, etc. I bring in quantum mechanics because that is a known analogy, in the sense of outcomes and the apparent "character" of things being relative to how we measure them. Chopra, etc. is the straw man, not things I've said.

Wow: you are talking about having a way to discriminate between wavelengths, no big deal for "machines", but that is not the issue. The issue is the way colors *look to us*, that funky beautiful quality, and not just "being able to know the difference." Some people just can't "get" this; if you don't, well, OK, but that doesn't make it a different issue.

Dan L, it is highly debatable whether the brain needs the "actual experience" to do whatever. I'm not just asserting or begging the question any more, AFAICT, than someone saying it does need to; I have my reasons, and so do the thinkers I agree with. Remember, we're still talking about whether experience needs to have that subjective aspect "to us," which kind of chips away at the idea that we "need it" to behave in a given way.

Anyone, check out what Tom English had to say; it is elegant and frames the issue rather well. It seems to support something akin to the property dualism I've promoted here.

By Neil Bates (not verified) on 04 Nov 2011 #permalink

Is it possible for us to agree on a one-sentence definition/ explanation of 'consciousness'? My attempt is:-

"The adaptive, subjective model of reality that emerges from the integration of concurrent information from the senses and cognition."

By John Jacob Lyons (not verified) on 05 Nov 2011 #permalink

No. Your definition contains a direct reference to that which needs to be explained. A definition is of little use unless it shows me a way to build the thing being defined.

I do not know what being "subjective" entails physically. I can build an algorithmic adaptive model of reality. But it has no use for the "subjective". I have very good empirical reason to believe that my algorithmic adaptive model will function every bit as well as any subjective adaptive model. So what is the cause or effect of the subjective?

Anyway I hope this shows up. My last comment is still in moderation after several days.

Thanks for your response ppnl.

Yes, 'subjective' could make it tautological. Rewrite as:
"For any particular organism ---" and delete the word 'subjective'. However, the word 'organism' implies something biotic, and so we bump into the problem of distinguishing the biotic from the abiotic. Solving this one will, as far as I am aware, have to wait for a while.

Sorry ppnl but I can't produce a definition that would show you how to build a consciousness; that's a lot to ask!!

By John Jacob Lyons (not verified) on 06 Nov 2011 #permalink

Dan L, it is highly debatable whether the brain needs the "actual experience" to do whatever.

Yes, that is what I'm saying.

I'm not just asserting or begging the question any more, AFAICT, than someone saying it does need to; I have my reasons, and so do the thinkers I agree with.

Simply asserting what is "highly debatable" is "begging the question" by definition. I am not simply saying "yes you need the experience," I am making evidence-based arguments for it. That is why I would not describe what I am doing as "begging the question."

Remember, we're still talking about whether experience needs to have that subjective aspect "to us," which kind of chips away at the idea that we "need it" to behave in a given way.

Not even sure what you're trying to say here.

I too am a materialist/determinist/atheistic/hard science person, and I don't have any theories better than those already presented here in this thread.

What I do have is the relatively unusual experience of having an 8cm tumor removed from the left frontal lobe of my brain. I detail my experiences fully in my blog (click on my name if you're at all interested), but for those of you who aren't willing to stroke my ego quite THAT much, here's a short version:

I remember going under for the surgery, and I remember waking up nine hours later. My first thoughts were of surprise at how sharp my mind felt; I knew where I was, why I was there, who I was with... mind you that before the surgery they didn't know if I'd ever even be able to speak again.

It took me some time to take inventory of my mind and pull myself together, and I mean that in a more literal sense than you might imagine. I felt as though I had been shattered. Most of the pieces of me were still there, but they were looser, and less restricted. Almost liquid. Like I had to put myself back together again, and there was no guarantee - or requirement - everything would fit back together as before.

I felt like my consciousness was as much a function of my brain as moving my hands, or walking. The pathways that used to play into my personality had been damaged, so my brain found new pathways. It remembered who I was and how my sense of self was put together, and found ways to preserve that, however imperfectly.

In the end, I have no idea if I am who I used to be, before the surgery. That cancer, and all the darkness it brought me, came from my own body and cells. It was part of me, and now it's gone. My conclusion, as completely uninformative and unsatisfying as it may be, is that I am now who I am now, and I can work with that.

It's still a very strange sensation, trying to understand how to mourn the loss of oneself.

Jason,
I've got a comment held up in moderation. I wouldn't bother over it except that the last time I was getting involved in a discussion here I lost some good-sized posts into the ether. Can you tell if I'm doing something to trip the spam-catcher?

@71 Dan L:

That's not at all tangential to what I'm saying. I'm arguing the inverse of this proposition, that the brain does need to have an actual experience for the sorts of complex behavioral controls exhibited by human beings. Simply asserting that it's not necessary is begging the question.

That, and a fair amount of what I've been saying was prompted by your "we have to learn color words -- so what?" comment from earlier. So no, I don't think it's tangential to what you're arguing at all.

Dan, my mom was a very good artist and I grew up in a household with a lot of color. When I see "ochre" I experience ochre. It's between orange and yellow, and I see it and name it as different from either. There are a lot of other colors that I name and experience that apparently many others do not. I realize this is a first-person report, but perhaps it bears on the distinction you are trying to make.

@74
" Sorry ppnl but I can't produce a definition that would show you how to build a consciousness; that's a lot to ask!!"

Unless you can say how I can in principle build one, you may as well be defining magical terms. We could just as well be talking about balrogs and unicorns.

@ ppnl

"Unless you can say how I can in principle build one ---"

Because you say "I", I firstly assume that you mean "create" rather than "evolve". On this assumption, your challenge is impossible because consciousness cannot be created; I suggest that in certain circumstances it will evolve over evolutionary time. The same comments would apply to a challenge to (in principle) provide instructions for building the human eye.

If you do indeed allow the evolutionary process in your challenge, see Post 37 for a start. Senses evolved. I am suggesting that as synchronous senses became available, consciousness was an adaptive model of reality that emerged.

What do I mean by 'emerged'? For anyone unfamiliar with this philosophical/ scientific use of the word, please type 'emergence' into Wikipedia.

By John Jacob Lyons (not verified) on 07 Nov 2011 #permalink

Many books have been written on this subject and it has been much discussed both here and elsewhere. It has even been referred to as "the most difficult problem in science".

Here is a challenge of my own -- I contend that my own succinct definition/explanation of consciousness is the best one we have at present. Can anyone suggest a better one? Just to remind you, it is that;

"For any particular organism, consciousness is the adaptive model of reality that emerges from the integration of concurrent information from the senses and cognition."

By John Jacob Lyons (not verified) on 08 Nov 2011 #permalink

When someone asks "how" the brain could possibly "generate consciousness", I like to ask how silicon could possibly generate Windows. Sure you can explain that Windows is an emergent process and there is no "particle" of Windows to be found in a computer, you can even explain how every single part works, but where does the computer stop being the computer and start being Windows? If I dissect the computer, where will I find Windows?

If you see how ridiculous that sounds, you will see how ridiculous I find "the hard problem".
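If you want that point in runnable form, here's a toy demo (Python; illustrative only, obviously not a model of mind): a Game of Life "glider" is a perfectly real, moving pattern, yet no single cell "is" the glider -- just as no transistor "is" Windows.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; live is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):  # a glider re-forms, shifted one cell diagonally, every 4 steps
    glider = step(glider)
print(sorted(glider))  # the same shape as the start, translated by (1, 1)
```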

I think the only reason why it's brought up is to give religion something to answer.

Which it doesn't do, but hey, we're only allowed to admit science doesn't answer it...

"When someone asks "how" the brain could possibly "generate consciousness", I like to ask how silicon could possibly generate Windows."

This is an interesting analogy, and it almost works, MSD. The difference is that Windows needs human action to provide its facility to that human. It provides no facility at all to the computer that contains it. On the other hand, the human brain provides the facility of consciousness directly to the relevant human.

By John Jacob Lyons (not verified) on 09 Nov 2011 #permalink

"When someone asks "how" the brain could possibly "generate consciousness", I like to ask how silicon could possibly generate Windows."

One of the most interesting classes I've taken was a basic applied electronics class. Day 1: "this is a wire. It carries charge." Day last: "build a digital stopwatch."

It didn't give me any insights into consciousness, but it answered your question. :)

Windows, or any OS, actually does provide facility to the computer that contains it. The BIOS gets instructions on what to do FROM the OS. Further, there are numerous programs that run entirely without the need for human input and can be easily set up to run from the moment the computer starts up.

I'm really not sure how that's relevant though, unless I misunderstand your use of "facility".

Eric, yep, it's perfectly explainable, and no one would even think to consider a program as mysteriously outside the hardware that operates it; yet somehow awareness needs that? I consider "awareness" no different from any other program.

"The BIOS gets instructions on what to do FROM the OS ---"

Consciousness is an adaptive 'facility' of the brain of an organism in the sense that it increases the expected frequency of occurrence of the particular pattern of gene-variants carried by that organism in future generations. It does this by increasing the probability of survival to maturity/ expected fecundity.

I respectfully suggest that this is not analogous to an OS simply passing information to a BIOS, MSD.

By John Jacob Lyons (not verified) on 10 Nov 2011 #permalink

Where, inside a watch, does "time" exist?

If you mean one evolved and one is designed, then you're right, the analogy breaks down utterly there (and in many other ways of comparing a brain to a computer); however, I don't see how that's a problem for the specific thing I was comparing with the analogy.

Look up David Chalmers, presently at the Australian National University, and widely considered to be the cutting-edge thinker on the philosophy of consciousness.

In a nutshell, his "interactionist" theory of mind says that consciousness arises when certain types of physical systems, notably brains, interact with information. He gives information an ontological status, and he uses the Platonic term "qualia" to refer to subjective sensations as having real existence.

For the neural mechanisms, see Penrose & Hameroff. Their "orchestrated objective-reduction" theory (which has spawned more than 20 falsifiable hypotheses that are in the process of being tested) is concerned with neural computation at the level of proteins in subcellular structures in the neurons. What they are saying is that neural computation is vastly more complex than has been envisioned so far. Thus one of the reasons we can't effectively model a whole mind yet is that we are radically underestimating the quantity of interconnected parts that are needed.

For free will, see _Order in spontaneous behavior_, Maye et al., PLoS ONE: evidence suggestive of free will in the turning-in-flight behavior of fruit flies. This also tends to be supportive of Penrose & Hameroff, in that fruit flies have among the simplest brains that are routinely studied by science, so the degree of information processing needed to generate apparently freely-chosen behavior must necessarily be occurring at a deeper level in the system.

The idea that consciousness is an insoluble problem is obscurantist; it ultimately relegates mind to a supernatural phenomenon, and thereby leads back to primitive mind/body dualism. The assumption that consciousness is a wholly natural phenomenon, whether or not bounded by the individual brain, entails that the problem is soluble, even if solving it takes longer than our lifetimes.

"In a nutshell, his "interactionist" theory of mind says that consciousness arises when certain types of physical systems, notably brains, interact with information."

Dear g724,

This does not appear to throw much light on the nature of consciousness. May I offer my own, 'in a nutshell', explanation of consciousness, given earlier in this thread but repeated here for convenience:-

"For any particular organism, consciousness is the adaptive, model of reality that emerges from the integration of concurrent information from the senses and cognition."

By John Jacob Lyons (not verified) on 15 Nov 2011 #permalink

Have you read "Self Comes to Mind by Damasio? Being a hard-nosed materialist I like to go way back to atoms and molecules so I also liked "Wetware: A Computer in Every Living Cell" by Dennis Bray. If you think Dawkins is up to dare on evolution try "Life Ascending: The Ten Great Inventions of Evolution" by Nick Lane, one chapter is on consciousness. I am waiting for Pat Churchland to put her wonderful relativism in print.

Hi Jason,

You wrote;

"Don't like my explanation? Think I'm waving my hands? Fine. Give me a better one. I'm all ears."

Looking back over this thread, I notice that you haven't commented on my explanation at Post 37. Please take a look. What do you think?

By John Jacob Lyons (not verified) on 18 Nov 2011 #permalink

Damasio has an answer: consciousness starts arising when some parts of the brain sense some other parts, while the latter sense the body and its interactions with the environment.

I think that this very question being an issue is quite the issue in itself. Modern science is too thirsty to explain everything. I'm satisfied with knowing that the discovered functions of my brain, both current and future, result in my experience of consciousness. Can it really be explained through scientific theory? I'm not denying the importance of science, but I don't believe it should be over-estimated either. There is a line where science is no longer helpful in understanding. I agree with your theories on free will, but I don't think the study needs to go much farther.
I'd also like to urge you not to use such offensive language in an argument such as this. Comparing one's views to those of a "stoner" is a very disrespectful and ineffective way to make a counterargument. The quote you provided sounds like someone who believes in a universal harmony and simply accepts the world as it is - quite the opposite of science. Therefore, I don't think it's a necessary argument.

"Modern science is too thirsty to explain everything."

Yeah, how DARE anyone try to answer questions.

Hang on... If you say "God did it", then THAT'S answering questions.

I guess religion is too thirsty to explain everything, eh?

> Can it really be explained through scientific theory?

Can anything else explain it?

No.

> Comparing one's views to those of a "stoner" is a very disrespectful and ineffective way to make a counterargument

Nope. It's a comparison.

If I compared thee to a summer's day, isn't that disrespectful too?

Whining about how it's disrespectful is an ineffective counterargument.

The quote provided sounded like something a stoner would say.

Nobody called them a stoner.