Why Aren’t We Zombies? (The Science of Consciousness)

Tonight I attended a panel discussion at the New York Academy of Sciences titled “The Thinking Ape: The Enigma of Human Consciousness.” The panel was presented by the NYAS, The Nour Foundation, and To the Best of Our Knowledge, a nationally syndicated program on Wisconsin Public Radio.

The dream team: David Chalmers, the well-known philosopher of mind; Daniel Kahneman, the well-known cognitive psychologist and Nobel Laureate; Laurie Santos, a bright young psychologist at Yale who studies primate cognition; Nicholas Schiff, a neurologist who studies disorders of consciousness; and the moderator, Steve Paulson, the host of To the Best of Our Knowledge.

I jotted down a few notes. Here they are, with comments. (Everything is paraphrased, unless in quotes.)

Chalmers: Roughly, science progresses along a spectrum. Physics explains chemistry, which explains biology, which explains psychology, which explains sociology. Consciousness is a problem for science because it doesn’t fit in that spectrum.

Santos: Fifty years ago there would have been behaviorists up here saying the human mind is a black box and impossible to understand. Now we understand the mind much better. Maybe in another 50 years a new tool will help us understand consciousness and we’ll view today’s pessimism as foolish.

Kahneman: I’m in the minority in that I’ve never been very interested in explaining consciousness. It’s hard to be interested in a question if you don’t even know the structure of a possible answer.

Chalmers: Right now all we can do is match brain states to behavior and to reported experience. Currently consciousness studies is a science of correlation, not explanation. [To me, this was the comment of the night. I also think consciousness studies will always be a science of correlation. Later there was an interesting digression on whether all science, even physics, was a science of correlation, and whether having enough correlations counts as an explanation.]

Kahneman: We’ll have robots that appear conscious before we can explain consciousness.
Chalmers: I’ll be convinced a computer is conscious when it says it’s having trouble explaining why it has subjective states.
Kahneman: I don’t think being a philosopher is an indicator of consciousness. [Ha ha. He was teasing (I think), but it’s a valid point. Aside from the obvious fact that a computer can easily be programmed to automatically spit out “Why am I conscious?,” an unconscious entity can raise genuine questions about internal anomalies that give rise to such outputs as “Why am I conscious?”]
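
To make that first point concrete, here is a trivial sketch of my own (not anything from the panel) of a program that emits the question with nothing we would want to call consciousness behind it:

    # A program that "asks" the question, with nothing behind the words:
    # no introspection, no internal anomaly, just a hard-coded string.
    def philosophize() -> str:
        return "Why am I conscious?"

    if __name__ == "__main__":
        print(philosophize())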

Kahneman: Emotions are important in attributions of consciousness. [I’m not sure if by emotions he meant the recognition of emotions in the subject, or the observer’s emotional response to the subject’s behavior. Either way, I agree. An intelligent but cold AI would not necessarily be seen as conscious, and targets such as babies that trigger empathy trigger anthropomorphism.]

Schiff: Right now there’s a lot of work on minimally conscious states. Certain reactions, such as tracking movement with the eyes, don’t necessarily imply consciousness, but we can infer that they might, given that they correlate with later recovery.

Santos: In work on animal cognition, people are starting to think that it’s not just cognitive abilities such as language that separate humans from other animals, but it’s our motivation to communicate. [See, for instance, this chapter by the anthropologist Michael Tomasello: PDF.] [Santos also made the funny remark that if a bird could carry on a conversation, she doubts it would have anything interesting to say—meaning that it’s more than just language that sets us apart. My thought was that sustaining human-level language capabilities would probably require a great deal of brain complexity, and thus general intelligence, so the bird probably would be worth chatting up after all. But there are debates about the modularity of cognition.]

Chalmers: I’m sympathetic to panpsychism—the perspective that consciousness, or “proto-consciousness,” is an irreducible part of the universe, like matter, and that it is present not just in the brain but everywhere. [I recommend the science writer Jim Holt’s essay “Mind of a Rock.”]

Schiff: The hypothesis that science cannot explain consciousness is a reasonable one, but not an interesting one. It doesn’t give us anything to do. I prefer to bracket it off and carry on like it’s false.

Kahneman: Regarding things that can be conscious, “I see no reason it has to be made of meat.” [I.e., why it must be a brain. Here they discuss whether consciousness is best explained in terms of information processing rather than physical processes. I think the approach with the most promise in this realm is Giulio Tononi’s Integrated Information Theory. But information theories can’t explain why consciousness emerges from unconscious matter to begin with—what Chalmers has called “the hard problem of consciousness.”]
Chalmers: Danny, if one by one we replaced your neurons with functional equivalents made of silicon, would you be conscious? [I’m wary of thought experiments in which human consciousness is run on machinery other than the human brain, because I think such propositions miss something. I don’t believe two identical states of consciousness require two identical physical substrates in principle, but I suspect that they do in fact. A computer simulating all of the functionality of a single neuron, in order to approach its efficiency, would have to become smaller and smaller and more and more like a neuron until it, in fact, became a neuron. The same might go for reproducing the functionality of the brain as a whole.]

Audience member: What are your definitions of consciousness?
Chalmers: The best we can do is to say, as Thomas Nagel does, that a being is conscious if there is something it is like to be that being. We can talk about what it is like to be me, or a bat, but not this water bottle.
Santos: If the philosopher can’t define it, I’m not going to try.
Kahneman: We can’t define consciousness, only our intuitions about it.

Audience member: What’s the purpose of defining consciousness?
Schiff: One practical use is in treating brain-damaged patients. When do we give up on them? The science of who is and who is not conscious is becoming more uncertain, making such decisions harder.

Audience member: Is it possible to have an objective science of subjectivity?
Chalmers: Yes, you can state facts about feelings. Also, first-hand experience is important to the science of consciousness.
Kahneman: I don’t think anyone is protesting that. [True, but I might add an asterisk and make the point that, technically, it’s third-party observations of people reporting first-hand experiences that are important. The first-hand experiences themselves are inherently unsuitable as scientific data; by definition they can be experienced by only one person, and scientific observations need to be reproducible.]

Kahneman: The best response to paradoxes is to simply walk away. [Don’t recall what this was in response to, but solid advice.]

Audience member: We’ve been hearing a lot about scientists. Which writers have offered the most insight into consciousness?
Chalmers: “Proust is a master phenomenologist.”
Schiff: In Helen Keller’s book Teacher, she said she did not have a self before she learned language; she was a “phantom.”
Paulson: William James is my hero.
[A couple panelists mention Oliver Sacks.]

Audience member: What is the purpose of consciousness?
Chalmers: [Still paraphrasing:] No one knows. We might as well be zombies.

Surprisingly, the phrase “free will” was never uttered, although I saw it on the horizon when the topic of the function of consciousness came up. For the record, I think consciousness has no function; it’s a side-effect or “epiphenomenon” of neural processes. We might as well be zombies indeed. (For research on the function of psychological processes that happen to give rise to consciousness, see Roy Baumeister, although in his language he ascribes this functionality to consciousness itself: PDF.)

Also, Paulson asked whether self-awareness is possible without consciousness. This one is easy. If by self-awareness one means self-consciousness, the answer is No. You can’t be conscious of your consciousness if you have no consciousness to be conscious of. But if by self-awareness one means simply a capacity for internal feedback, the answer is Yes. Any machine that monitors its own functioning could be said to be self-aware, without being conscious (but see: panpsychism). A harder question is whether one can have consciousness without self-consciousness. Can you have a subjective state without being able to reflect on it? I’ll let you think on that one. In the meantime, Beware of the Unicorn.
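
To illustrate that weaker, feedback sense of self-awareness, here is a toy sketch of my own (the class name SelfMonitor and its counters are invented for the example): a system that inspects and reports on its own state is “self-aware” in that thin sense, with no implication of subjective experience.

    import time

    class SelfMonitor:
        """A machine that monitors its own functioning: internal feedback, nothing more."""

        def __init__(self):
            self.start_time = time.time()
            self.operations = 0
            self.errors = 0

        def do_work(self, should_fail=False):
            # Ordinary functioning, with an occasional failure to notice.
            self.operations += 1
            if should_fail:
                self.errors += 1

        def self_report(self):
            # The system describes its own state: "self-awareness" as feedback.
            uptime = time.time() - self.start_time
            return (f"I have been running for {uptime:.2f} s, "
                    f"performed {self.operations} operations, "
                    f"and logged {self.errors} errors.")

    if __name__ == "__main__":
        monitor = SelfMonitor()
        monitor.do_work()
        monitor.do_work(should_fail=True)
        print(monitor.self_report())

Whether loops like this, however elaborate, ever add up to subjective experience is of course the very question the panel left open.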

UPDATE: Video of the event is now online.

15 thoughts on “Why Aren’t We Zombies? (The Science of Consciousness)”



  1. First off, I read your book, learned a bunch, enjoyed it, and reviewed it (as Penman). I hope I did not mischaracterize it too badly and would be interested in any views on that.
    The articles on the blog are interesting.
    In regard to this particular article, I was wondering if and how the work outlined in Consciousness and the Brain by Stanislas Dehaene, which came out after the above discussion, had changed your views on consciousness.

    • Hi, Bart.
      Thanks very much for your review. I think you nailed it.
      I read Dehaene’s book (and actually reviewed it here: http://wapo.st/NKisdz). I learned a lot about neuroscience but not much about the hard problem of consciousness. He writes only a paragraph or two about the problem, and only to dismiss it.

  2. I thoroughly enjoyed both the substance and style of the informative reviews; thank you.
    My feeling is that Dehaene may be correct that consciousness evolved because it offered a “thickening” of the present moment and allowed an attended thought to be extended over time, which would facilitate the beginnings of planning, with obvious evolutionary advantages: the beginning of the ability to have a “remembered present.”
    Also, I think that later symbol use and sensory information could be compared to binocular vision, in a way, in that the information from one source, when compared to the other, gave an added dimension to consciousness. I find it interesting that according to a number of researchers, most animals do not seem to have “episodic” memory and we do, and wonder exactly how much symbol use has facilitated human development of that type of memory. It seems to me symbol use would help a great deal in the “storage and retrieval” of such memories.
    Of course they say that poor scholars always have an axe to grind and symbol use may be mine. But the sparks do fly!
    Thanks again.

    • Hi Bart,
      There are many reasons the *neural correlates* of consciousness may have evolved—episodic memory, symbol use, etc. Those are computational functions we can attribute to the wetware of the brain. But the evolution of those abilities does not explain why they should be accompanied by consciousness.

  3. Matthew,
    It seems to me that you are speaking of an unbridgeable gap between language and experience. If so, as involved as I am in symbol use and study, and as important as I think it is, I agree with that, and I think that gap is one of the primary tensions in life.
    Many things are truly unspeakable: you cannot get there from here.
    That is one of the reasons I love surfing and hiking mountain trails.
    I thank you again for your time and the discussion.

  4. Panpsychism is the key, but in the most elementary way.
    Each and every interacting atom displays its awareness of this interaction.
    Take for instance the encounter of an atom with a photon.
    If the atom did not sense this photon, nothing would happen.
    What actually happens is that either the photon is absorbed by the atom or the atom changes its momentum. When the photon is absorbed, the atom first becomes excited; it does not simply emit a photon immediately to relax again, but does so only after a certain time that cannot be predicted in any way.
    It cannot be denied that an atom is aware of its excitation, and that it does not want to be excited, because sooner or later it relaxes by emitting a photon, and it does this in the most individual way.
    Of course this behaviour can be explained by quantum field theory, but this makes no difference.
    The bottom line is that an atom can observe, feel and act in a logical and individual way.
    It is of course a big leap from this elementary display of consciousness to the human level, but the result, apparently, is that an ordered structure of atoms can not only construct an image of itself in its environment but is also able to recognize itself in this image, and to recognize this recognition.
    This is only possible if the image is sufficiently detailed, and therefore we need a complex brain.
    I expect it is unavoidable that machines will become self-aware once they reach the required complexity.
    Is consciousness explained in this way? Being an atomic structure myself, I say yes, but the problem with individuality is that I can only speak for myself.

    • “…The bottom line is that an atom can observe, feel and act in a logical and individual way….”

      I don’t think this holds, as tempting as it is (for me) to find it true (since it would answer so much). I.e., if every mechanical reaction implies awareness, does that mean that our consciousness is made up only of physical causes and effects?

      I went down this path a little way, just imagining whether animals were “conscious”, or insects, or microbes, but thought there was indeed a difference somewhere along the spectrum. But even if it does not appear until the smallest prokaryote or virus, then certainly there is a difference at the point where something is no longer considered biological?

      • If nobody can pinpoint the boundary between biological and non-biological systems, I think there is no real boundary at all; it’s just a meaningless and distracting distinction.
        There are other examples: life and death, material and immaterial, body and soul, two points in spacetime within Heisenberg’s uncertainty interval, deterministic and random reality, God or no God, natural or supernatural; you get the point. In general, all kinds of dichotomies that are never observed don’t need to be part of reality. You cannot miss something when you ignore something that is never observed, directly or indirectly.

      • [if every mechanical reaction implies awareness, does that mean that our consciousness is made up only of physical causes and effects?]
        If you don’t accept nonphysical explanations, there is no alternative. Of course you cannot compare an atom with a complicated machine, let alone with an experiencing human being. If you accept panpsychism, a great deal still remains to be explained. But remember that, for example, the process that converts visible electromagnetic radiation into action potentials in our brain is well understood, and this process enables a neuron to experience this radiation indirectly; the same holds for higher-level neurons connected to this neuron, so there seems to be an unbroken chain between the observation of simple physical phenomena at the lowest level and the rich high-level experience that we know, built up from almost uncountably many low-level experiences. I said that we are nothing but a bunch of atoms; in a certain way we are also nothing but a bunch of neurons with a wide-band connection to reality.
        In my opinion, the fact that we call this a biological or a physical process adds nothing; it is nothing more than a process.

  5. I think Incomplete Nature by Terrence Deacon is the best work to study about the possible way that life (and eventually consciousness) arose, although parts of it are tough sledding. And for explanations of consciousness, reading Stanislas Dehaene and Antonio Damasio has given me the deepest understanding. Damasio divides consciousness into two types, core and autobiographical. Core consciousness would be that immediate “state of awareness between self and surroundings.” Autobiographical consciousness is the kind of extended consciousness we humans usually experience, with all of our memories of the past and plans for the future. Each type conferred an evolutionary advantage. Dehaene has captured the signatures of brain activity during consciousness of objects, in the lab. Damasio has outlined a tentative explanation concerning qualia that is the first that really makes sense to me. They are two top neuroscientists whose accounts actually complement each other somewhat.

    • [Any machine that monitors its own functioning could be said to be self-aware, without being conscious.]
      The only thing missing is that it is observing its own monitoring. A smart machine can conclude that it is itself doing this. Has everybody forgotten Douglas Hofstadter’s strange loop?
      I don’t see any need for panpsychism. The hard problem is created by assuming that we have a body while denying that we are our body.
      A machine that possesses our complete set of functioning sensory organs in a copy of a human body possesses a complete set of human feelings, so it is not a zombie at all. It is a human being.
