INTEGRAL WORLD: EXPLORING THEORIES OF EVERYTHING
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
David Christopher Lane, Ph.D.
Professor of Philosophy, Mt. San Antonio College
Lecturer in Religious Studies, California State University, Long Beach
Author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).
Are Brain States Equal to Mental States?
A reflection on the Hard Problem of Consciousness
Before I begin my reflection on Zombie consciousness and the hard problem, I want to congratulate Andy Smith on his magnificent essay, “What Good is Consciousness, Anyway?”. He lays out his points very clearly and has quite properly placed the hard problem of consciousness in an evolutionary context. Smith's essay deserves close study, since it is one of the most cogent presentations I have ever read on this subject, which is usually mired in technical jargon and obscurity. My response is more of a reflection than a rebuttal of what Andy has written, since there are many angles to the study of consciousness and the jury is still out on where all of this will end up. I just know for myself that Smith has got me thinking anew on this subject, and for that I give him my deepest thanks.
In an earlier essay, Zombie Consciousness 101: Alan Turing, the Virtual Simulator Hypothesis, and Recalibrating the Hard Problem, I attempted to turn the issue of mindless automatons on its head (with a bad pun included) by pointing out that, “This [very issue] begs a much larger question: how do we really know that the Zombie is merely a mindless automaton? The obvious answer is that we don't.” In other words, while it is certainly true that we can project consciousness on all sorts of objects that may lack it (think of Disneyland's Great Moments with Mr. Lincoln or the various animatronics on the Pirates of the Caribbean ride), the reverse is also the case where we can believe someone is a zombie or in a vegetative state when, in point of fact, they are not. As I explained at length in the previous essay,
“For instance, medical doctors have only recently come to realize that some coma patients who they mistakenly believed were in a completely vegetative, non-aware state were in fact quite conscious but were unable to communicate such outwardly. The case of Rom Houben, as reported by the Daily Mail in the U.K., is a chilling case in point.”
So, while I may know what I experience personally (the qualia of this or that), I simply don't know the contents of your mind (the other). Because of this I conjure up all sorts of models about why such and such a person behaves the way they do, but I can never know with certainty whether such modeling on my part is accurate.
Thus, the real issue of Zombie consciousness is what is known in philosophy as the “problem of other minds.”
Michael Lacewing at the University of London illustrates the intrinsic difficulty of the Zombie hypothesis by underlining its inconceivability in a handout that he prepared for his class to tackle the problem.
As philosopher Saul Kripke famously argued, “If A is identical to B, then A and B are the same thing. It's not possible for A to be B and for it not to be B. If A is B, then A is B in every possible world.”
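Kripke's point is often reconstructed as a short derivation in quantified modal logic. The rendering below is a standard textbook sketch, not a quotation from Kripke: the necessity of self-identity plus Leibniz's Law together yield the necessity of identity.

```latex
% Necessity of identity (Barcan/Kripke), sketched in quantified modal logic
\begin{align*}
(1)\quad & \forall x\, \Box(x = x)
  && \text{necessity of self-identity} \\
(2)\quad & a = b \rightarrow \bigl(\Box(a = a) \rightarrow \Box(a = b)\bigr)
  && \text{Leibniz's Law, with } \varphi(x) := \Box(a = x) \\
(3)\quad & a = b \rightarrow \Box(a = b)
  && \text{from (1) and (2)}
\end{align*}
```

Read plainly: if A really is B, there is no possible world in which A fails to be B, which is exactly the lever used against the Zombie below.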
Along these contentious lines, one can argue that if a supposed Zombie has all the neural correlates that we as humans have (the same exact brain structure, for instance) then it would by that very definition and functionality have consciousness. This, of course, dovetails with Giulio Tononi's Integrated Information Theory (IIT), where, given the complexity of certain material substrates, one can determine the relative level of Phi: “Given any such system, the theory predicts whether that system is conscious, to what degree it is conscious, and what particular experience it is having (see Central identity). According to IIT, a system's consciousness is determined by its causal properties and is therefore an intrinsic, fundamental property of any physical system.”
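Tononi's Phi is, at bottom, a quantitative claim, and the flavor of the calculation can be conveyed with a toy model. The sketch below is an illustrative simplification of my own, not IIT's actual formalism (which in practice is computed with tools such as the PyPhi library): it scores a two-node binary network by how much predictive information is lost when the system is cut into its parts.

```python
import itertools
import math

def mutual_information(joint):
    """Mutual information in bits from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def toy_phi(transition):
    """Toy integrated-information score for a 2-node binary system.

    `transition` maps a past state (a, b) to the next state (a', b').
    We compare how much the whole system's past predicts its future
    versus how much each node's past predicts its own future in isolation.
    """
    states = list(itertools.product([0, 1], repeat=2))
    p = 1.0 / len(states)  # uniform prior over past states
    whole, part_a, part_b = {}, {}, {}
    for s in states:
        t = transition(s)
        whole[(s, t)] = whole.get((s, t), 0.0) + p
        part_a[(s[0], t[0])] = part_a.get((s[0], t[0]), 0.0) + p
        part_b[(s[1], t[1])] = part_b.get((s[1], t[1]), 0.0) + p
    return mutual_information(whole) - (mutual_information(part_a)
                                        + mutual_information(part_b))

# A "swap" network: each node's next state depends entirely on the other node.
print(toy_phi(lambda s: (s[1], s[0])))   # → 2.0 (cutting the system loses everything)
# Two self-copying nodes: the cut loses nothing.
print(toy_phi(lambda s: (s[0], s[1])))   # → 0.0
```

The swap network, in which each node's future is carried entirely by the other node, loses all of its predictive information when partitioned, whereas two self-copying nodes lose none; in IIT's terms, only the first system is integrated.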
While the Zombie idea seems to illuminate the hard problem of consciousness, I suspect that what it actually does is beg the question by allowing us to imagine something that is (in terms of our own consciousness) unimaginable. This is akin to that classic philosophical query: “If a tree falls in the forest and there is no one to hear it, does it make a sound?”
On the surface this may seem to be a reasonable, even important question, but on deeper inspection it turns out to be a fake conundrum, since it arises from a semantic confusion similar to “Hey Dave, have you stopped beating your wife? Yes or no?” The only way to accurately respond to such an either/or question is to usurp the question itself and explain that it is ill-framed and as such presumes far too much in its grammatical construction.
Therefore, I think Andy Smith is assuming far too much when he argues the following,
“The short answer is that a zombie does not feel guilt, remorse, etc., but it has the same underlying brain activity that, according to materialism, results in the same behavior. Ask yourself, for example, how is the feeling of guilt going to affect what you do? Are you going to apologize to someone? Are you going to change some behavior which you regret, so that you act differently in the future? Are you just going to mope around for a while? Whatever it is, a zombie can do the same thing.”
The idea that the Zombie is identical in every way to me physically means (along the lines of Kripke's logical syllogism) that he is me and thus would have self-reflective awareness and wouldn't be a mindless automaton.
Put differently, to imagine that a Zombie has the same functional neural correlates of a human being with self-reflective awareness is to actually define such as a human being and not as a Zombie.
This means that the hard problem of consciousness is directly related to our difficulty in understanding how particular physical forms of complexity lead to certain forms of awareness and how they mirror each other.
As Patricia Churchland might put it, “brain states are mental states.”
If this is true, we should, given the recent work in IIT, be able to predict whether specific complex physical systems have consciousness or not.
In this context, there are indeed very good reasons why evolution would have favored complex physical systems of a certain kind, because those very systems would by such complexity house sophisticated forms of self-reflective awareness, which have the ability to in-source varying actions before out-sourcing them in a real world exterior to themselves.
Therefore, natural selection doesn't have to adjudicate the “interior” forms of awareness, since the exterior complexity that houses such is sufficient to be passed on. Why? Because that particular brain by its very structure produces the virtual simulation advantage and therefore is a favored trait that has emerged among higher primates.
Thus, the hard problem of consciousness is at bottom a physical problem that needs to be resolved, since the physical and the mental are not mutually exclusive.
The physicalist response to the Zombie hypothesis is summarized by Lacewing when he writes,
“The fact that we can conceive of zombies doesn't show that zombies are metaphysically possible. If phenomenal properties just are certain physical and/or functional properties, then it isn't possible for zombies to exist. Given the physical properties we have, if physicalism is true, it just isn't possible for a being with the same physical properties not to have consciousness as well.”
Already progress has been made on IIT. A recent paper published in Frontiers in Human Neuroscience, “Estimating the Integrated Information Measure Phi from High-Density Electroencephalography during States of Consciousness in Humans,” reports on efforts to estimate Phi empirically across different states of consciousness.
Essentially, the physicalist argument boils down to this: “Phenomenal properties cannot differ independently of physical properties.” This, if true, disqualifies the Zombie objection.
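This slogan can be made precise as a supervenience thesis. One standard formalization, Jaegwon Kim's “strong supervenience,” is offered here as a gloss on the physicalist claim rather than as anything Lacewing states: phenomenal properties (the family P) supervene on physical properties (the family Q) just in case

```latex
% Strong supervenience of phenomenal properties P on physical properties Q
\Box\,\forall x\,\forall F \in \mathcal{P}\;
  \Bigl[\, F(x) \;\rightarrow\;
    \exists G \in \mathcal{Q}\;
      \bigl(\, G(x) \;\wedge\; \Box\,\forall y\,( G(y) \rightarrow F(y) ) \,\bigr)
  \Bigr]
```

Read plainly: necessarily, whenever something has a phenomenal property, it has some physical property that necessitates it, so no two physical duplicates can differ phenomenally. A zombie, being a physical duplicate that differs phenomenally, is precisely the kind of case this schema rules out.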
However, we are still at a very rudimentary stage in understanding consciousness, so I think it would be highly premature on my part to declare that phenomenal properties simply are physical properties and that this settles the argument.
Rather, Andy Smith is opening the door to another possibility that we should take seriously, even if it contravenes our own pet biases. The Zombie hypothesis presupposes that there could in fact be physical systems that are identical to our own but which do not have self-reflective awareness such as we possess. They are, to crib an oft-used cliché, “soulless automatons.”
If this is true, why then would evolution have favored those with self-reflective awareness and not mindless bots if both can exhibit the exact same behavior?
As Andy Smith explains,
“Since only our behavior, not how we experience it, can be subject to selection, consciousness adds nothing that could be the basis for natural selection to act on. The short answer is that a zombie does not feel guilt, remorse, etc., but it has the same underlying brain activity that, according to materialism, results in the same behavior. Ask yourself, for example, how is the feeling of guilt going to affect what you do? Are you going to apologize to someone? Are you going to change some behavior which you regret, so that you act differently in the future? Are you just going to mope around for a while? Whatever it is, a zombie can do the same thing.”
The objection here, of course, is that natural selection is not a force, per se, but rather a description of which organisms survive in any competitive ecological niche and thus pass on the favorable traits that allowed them to transmit their genetic code. So, it is not that evolution is “acting” on anything, per se, but rather that we are describing after-the-fact results and then imputing why such and such organisms may have survived nature's sieving process. If sufficiently complex neuronal systems must have attendant subjective forms of awareness intertwined with them, then evolution would indeed favor such organisms without having to “know” their internal states. Hence, our “experience” and our “consciousness”, in this context, do indeed “add” to our survival and would have been selected for (not against), even if nature were “blind” to such interior states of being.
Therefore, this notion would upend Andy Smith's conclusion when he writes, “So to me, the conclusion that consciousness, in the hard sense, has no survival value is obvious.”
Quite the opposite, if we accept on some level that mental states are brain states then there is enormous value to our sense of qualia and this, no doubt, would have been favored in our evolutionary history since being able to virtually simulate varying possibilities (with the attendant emotional feedback loops, like guilt, desire, love, remorse, etc.) gives us a tremendous advantage over creatures with less imaginative variability.
I suspect that we will better confront the Zombie notion of consciousness when we have completed our reverse engineering of the human brain and have mapped out (functionally) the complexity necessary to generate human-like awareness in computational simulations. If we do succeed in the latter, we will confront a fundamental ethical dilemma, because if a robot acts in every way like a human being, do we then have to confer on it the same rights and freedoms that we possess? If consciousness is indeed substrate-neutral and dependent only on sufficiently complex networking, then it will revolutionize not only our understanding of consciousness but what it means to be human. We will undergo a radical transformation even greater than what was achieved in the Copernican, Newtonian, Darwinian, and Einsteinian revolutions.
In conclusion, I don't want to suggest that I have somehow figured out the knotty issue of consciousness. Far from it. Rather, it is only to point out that there are several avenues of thought on this subject, and I think it is wise to try on as many hats as possible to see their various strengths and weaknesses. It may well be that we will end up agreeing with Colin McGinn and the Mysterian school that proclaims, “Consciousness is a mystery that human intelligence will never unravel.” Or we could criticize the mind-is-brain idea as simply unintelligible or unimaginable, as Sam Harris does when he argues,
“Likewise, the idea that consciousness is identical to (or emerged from) unconscious physical events is, I would argue, impossible to properly conceive, which is to say that we can think we are thinking it, but we are mistaken. We can say the right words, of course: “consciousness emerges from unconscious information processing.” We can also say “Some squares are as round as circles” and “2 plus 2 equals 7.” But are we really thinking these things all the way through? I don't think so. Consciousness, the sheer fact that this universe is illuminated by sentience, is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true. Consciousness may very well be the lawful product of unconscious information processing. But I don't know what that sentence means, and I don't think anyone else does either.”
I had likened our difficulty in studying consciousness to a Möbius strip in a previous essay [“Inside Outside”], where I opined,
“Perhaps the Möbius strip is a useful, albeit limited, metaphor here to invoke since its unusual properties on first sight boggle the mind: “a surface with only one side and only one boundary component. It has the mathematical property of being non-orientable.” Analogously, the difficulty we have with studying consciousness is precisely that we cannot communicate what it is without losing the very quality that makes it such. Imagine that you wanted to convey what it was like to dream to someone who never dreamt and only had access to a waking state. How could you communicate dreaming without dreaming itself? Harris suggests strongly that you cannot and that any attempt to circumvent the obvious is simply unintelligible. How can one convey something that is non-orientable to that which is only orientable, if one has to forego the very thing that would communicate it? Isn't the breakdown of consciousness (which is, strategically speaking, the reductionistic paradigm of Crick, Churchland and others) precisely the problem, since it breaks apart the very thing that must be experienced as a gestalt? Consciousness isn't a thing to be described among other things, since it is the context, not the content, of what is experienced. The Möbius strip is, by definition, an endless surface and any attempt to reorient it (by definition again) transforms it into that which it is not. Isn't consciousness akin to this very definitional paradox? My very attempt to correlate it or analogize it or reduce it or explain it simply upends a genuine understanding of what it is to experience it. More precisely, consciousness cannot be exported as a piece of content, since it is a whole context in which such appearances arise. My attempts to reduce that holism to parts ipso facto means that whatever follows will be irretrievably “lost” in translation.”
I concluded the same essay with this lament,
“In the study of consciousness it appears we may have to confront an epistemological complementarity where any objective study (via third-person analysis) of qualia must by necessity lose in translation a fundamental feature of the very phenomenon under inspection. Conversely, any purely subjective endeavor to explore consciousness must by its very act forego any attempt to maximally objectify what is experienced, lest the experience itself be lost in attempting to exteriorize that which is de facto interior. A broken-down melody is, to quote one distinguished musician, no longer a melody. Similarly, a broken-down consciousness is no longer itself. . . .”
Let me conclude by saying that Andy Smith has provided us with much fuel for thought and I think what he writes necessitates taking a multi-layered view of consciousness, one which seriously considers the hard problem as he has presented it. It may well be that we will have to tackle the problem of consciousness with tools yet unimagined. My own hunch, though, is that those very tools won't be properly understood or utilized until we have completed our physical understanding of the brain and its housing. Ironically, by focusing on the physics of consciousness we may eventually be opening possibilities beyond such limits.