Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
Andrew P. Smith, who has a background in molecular biology, neuroscience and pharmacology, is the author of the e-books Worlds within Worlds and the novel Noosphere II, both available online. He has recently self-published "The Dimensions of Experience: A Natural History of Consciousness" (Xlibris, 2008).


What Good is Consciousness, Anyway?

A Brief Reply to David Lane and Steve Taylor

Andy Smith

Consciousness in the hard sense is far more subtle than it's frequently given credit for being.

In my previous article, Panning Panisms, I claimed that consciousness in the hard sense, as experience, has no known survival value. A zombie, defined as a being whose physical makeup is identical to that of an ordinary human being but who experiences no consciousness, would behave identically to a conscious human. Since only our behavior, not how we experience it, can be subject to selection, consciousness adds nothing that could serve as a basis for natural selection to act on.

David Lane commented:

wouldn't the sense of qualia (the hard problem?) make us respond differently than a zombie, since we (and not the walking dead) would feel such things as guilt, remorse, happiness, love, and the like and because of those are [our] "intentional" stances would be different.

Before responding to this, I want to point out that by taking this position, Lane is actually offering some support to Steve Taylor. One of Taylor's major points is that there is no way in the framework of materialism that mind can interact with body. While I noted that this isn't true—mind as a material phenomenon can certainly interact with other material processes—he is correct that materialists have no answer to the question of how consciousness in the hard sense can interact with material processes.

But by claiming that consciousness in this sense can affect our behavior, in ways that can be acted upon by natural selection, David Lane is making precisely that claim. In this respect, he is actually closer to Steve Taylor's view than I am—or at least closer to accepting evidence that Taylor uses to support his view. Taylor also accepts that this interaction of consciousness with material processes occurs. Where the two differ is that Taylor explains it through his universal consciousness, whereas Lane believes that the interaction may ultimately be explained without challenging materialism. But both have committed themselves to a position where this issue has to be addressed.


Having said that, what is my response to David Lane's comment? The short answer is that a zombie does not feel guilt, remorse, etc., but it has the same underlying brain activity that, according to materialism, results in the same behavior. Ask yourself, for example, how is the feeling of guilt going to affect what you do? Are you going to apologize to someone? Are you going to change some behavior which you regret, so that you act differently in the future? Are you just going to mope around for a while? Whatever it is, a zombie can do the same thing.

In fact, Lane himself gives the game away when he adds, “I fully concede that the OTHER person (the "other" mind) could indeed be a zombie and fake it and I would respond (via Turing) as if it were a real person.” See the contradiction? First, Lane implies that someone who is conscious would behave in some way differently from an unconscious zombie. But then he implies that a zombie would behave in a way that is indistinguishable from someone who is conscious. If consciousness affects how we behave, how can we not tell when the behavior of someone else is unconscious? All we have to do is look for that alleged difference in intentional stance.

Again, I can use David Lane's own words here. As I noted in my article, we had a discussion several years ago about this (see my article at this site, "On Religions and Scientific Agendas"). At that time, he provided this rationale for why consciousness has survival benefits:

Evolutionary biology has given us a quite reasonable explanation for why we experience things the way we do. Consciousness confers an evolutionary advantage over those who don't have it by allowing self-reflective humans (and other mammals) the ability to virtually simulate their environment before having to act instinctually upon it. This confers a tremendous benefit for any number of reasons, not the least of which is our ability to “plan” before we act. As Donald Griffin points out in Animal Minds: “Conscious thinking may well be a core function of central nervous systems. For conscious animals enjoy the advantage of being able to think about alternative actions and select behavior they believe will get them what they want or help them avoid what they dislike or fear.”

The part of this passage I agree with is that some new adaptation can only be selected for if it involves a change in behavior. An organism that performed these virtual simulations in its mind, but did not act on them in any way, would not get any benefit from them. In this case, these simulations—and the ability to perform them—could not be subject to natural selection. So clearly both Lane and Griffin accept that virtual simulations only provide a benefit to the extent that they result in some form of behavior.

The part of this passage I disagree with is the assumption that these simulations require consciousness. They don't, and it seems to me that should be obvious to any materialist. A materialist believes that any simulations must be associated with specific neural activity, from which it follows that the neural activity alone is all that is required. A zombie in theory could perform these simulations just as well as a conscious organism. If one does not agree with this, one must have a view of consciousness as something non-material, where these simulations are performed, which then somehow acts on the neural activity to change it. While Steve Taylor could conceivably accept a position like this, a materialist, as Lane claims to be, can't.

This passage also seems, again, to contradict David Lane's own views. A zombie, according to his reasoning in the above passage, would not be able to perform such mental simulations, and therefore would not be able to consider alternative actions, and plan some behavior before engaging in it. But as I noted earlier, Lane has also said, or offered up the possibility, that he would not be able to tell a zombie from a conscious person. It seems to me that if a zombie is behaving without planning its behavior at all, it would be very easy for us to conclude that it was in fact a zombie. It would certainly be very easy to design tests to take advantage of this flaw. I think even the Turing test could make the distinction.

Thinking the Unthinkable

So to me, the conclusion that consciousness, in the hard sense, has no survival value is obvious. This does not seem to be so obvious, though, to David Lane, and apparently to many scientists and philosophers. Why not? It's a fact that conscious neural processes, as they actually exist in ourselves and probably in other animals, are associated with properties—such as Lane's mental simulations, or more generally speaking, flexibility in behavior—that unconscious processes don't have. That being the case, it seems natural to assume that they have these properties precisely because they are conscious. V.S. Ramachandran (1997), with whom I believe David's wife Andrea has worked, has made this point especially well.

But the fact that conscious neural processes have properties not seen in unconscious processes doesn't prove that consciousness is required for these properties. It could be that consciousness evolved with such processes for other reasons. This is where zombie arguments enter the picture. We try to imagine conscious processes, in the neural or material sense, which in fact are not conscious. But these arguments are highly contentious. They depend on the assumption that zombies in this sense are possible, or what philosophers usually call conceivable. I can't possibly do this literature justice here, but I will briefly provide my take on it.

I begin by observing that materialists, by definition, believe that every form of consciousness is associated with some form of brain activity, what David Chalmers calls a “neural correlate of consciousness.” Materialists may differ in how they believe consciousness and neural activity are related, but in this discussion, I will consider only strict or what Wilber (1995) calls gross reductionists (Richard Dawkins (1986) calls them precipice reductionists; Daniel Dennett (1995), greedy reductionists), who argue that consciousness is identical to the brain activity. I do this for the sake of brevity, and because as far as I can see, any conclusion that applies to strict reductionism also applies to less strict forms, what Wilber calls subtle reductionism (Dawkins: hierarchical reductionism; Dennett: good reductionism). Reductionists of this latter kind believe that consciousness in some manner emerges from the brain activity. For example, one might propose that there is some kind of energy field resulting from brain activity, and consciousness is associated with this field. As long as this field is considered physical, this kind of view is quite consistent with materialism.

From the point of view of strict reductionism, a zombie is a being which possesses only neural activity, manifesting no consciousness. Here's where it gets tricky. Some—not all—philosophers might argue that if consciousness is identical to neural activity, then there could be no neural activity—of the same kind as that manifesting consciousness—without consciousness. Therefore, the zombie is what philosophers call inconceivable, and can't be used as an argument.

I don't find this response very convincing, though. In the first place, it provides no evidence for the survival value of consciousness. It's simply an argument against appealing to zombies to conclude that consciousness has no survival value; it says only that zombies are not relevant to the question.

More than that, though, the notion that neural activity would not suffice to produce the behavior is not consistent with materialism. If we assume that many animals are not conscious in the hard sense—and all, or nearly all, materialists do assume this—then we know that some kinds of neural activity are the entire basis for selection. Materialists would not argue, for example, that natural selection has produced the honeybee dance that signals the location of a nectar source by acting on the conscious experience of the bees. They view it as acting solely on the behavior of the bees, as observable by a third party, which is determined entirely by neural activity (perhaps in conjunction with other material processes).

Why would it be any different for us? If neural activity is sufficient in some cases, why wouldn't it be sufficient in all cases? What does conscious experience add? By definition, the experience itself is not something that has any observable effect or correlate in the environment, so how could it add anything that natural selection could operate on? Our feeling of guilt, for example, is not observable by a third party in any manner except through our behavior, and this behavior—by definition, according to a materialist holding the strict reductionist view—is determined entirely by brain processes (again, other material processes may be involved).

The Truth Hurts

This is the best argument I can make in a short article. It may be that the seemingly logical impossibility of understanding how consciousness can be related to material processes also makes it impossible for us to reason logically about how these material processes could exist in the absence of consciousness. It seems that at a certain point in these arguments, all of us fall back on the very premises that we are trying to prove.

In closing, though, maybe a simple example will help. Suppose you have a toothache. It's very painful. Because of this pain that you feel, you engage in certain forms of behavior. Perhaps you complain to others about the toothache. You might take some kind of medicine in hopes of alleviating it. If the pain persists, you visit a dentist.

The assumption many people seem to make is that a zombie, which does not experience the pain of a toothache, would not act in this fashion. But this view results from misunderstanding the relationship of consciousness to behavior, as a materialist understands it. Why do you feel pain? Because of activity in certain areas of the brain. A zombie would also have this activity, and while it would not experience pain, this activity would cause it to act exactly as if it did. It would complain to others, because from a materialist's point of view, the complaining results not from the experience of pain, but from neural connections between the pain center and other areas where the activity of the pain center is expressed. When a zombie said, “My tooth hurts,” what it would mean or be expressing is that there was over-stimulation of its brain's pain center. It would be reporting on this.

Similarly, while a zombie would not seek relief from pain that it didn't experience, it would act in ways that reduced the activity in the pain center. Why would it want to reduce this activity if it weren't painful? Because—again, this is the materialist view—we have evolved in such a way that when the pain center is over-stimulated, other brain areas are activated that result in behavior which can reduce this stimulation. If such neural activity didn't exist, then we conscious organisms would have no way of alleviating the pain that we do feel.

Consciousness in the hard sense is far more subtle than it's frequently given credit for being. From a strict scientific or third person view, it doesn't exist. It isn't there, in the world that we all share. That's why David Lane could say that he wouldn't be able to tell if another person were a zombie. And it's also why there really is no known evolutionary reason for why we experience it.

A combination of flaws

This understanding of consciousness has implications for many other issues. I want to conclude this article by discussing one of these other issues, because Steve Taylor raised it in the course of my commenting on his latest article, Consciousness and Complexity: A Defence of Panspiritism. This is the so-called combination problem, which is thought to be a major difficulty for panpsychism. According to this problem, one apparently can't combine several or more individual consciousnesses into a greater consciousness. This being the case, how could the very simple consciousness that panpsychism proposes is associated with simple forms of matter, like subatomic particles, result in the very complex form of consciousness that we exhibit?

William James posed this problem by noting that if one brought together a group of individuals, their individual consciousnesses (“feelings”) would not sum or combine into anything greater:

Take a hundred of them [feelings], shuffle them and pack them as close together as you can (whatever that may mean); still each remains the same feeling it always was, shut in its own skin, windowless, ignorant of what the other feelings are and mean.

There are numerous major misunderstandings underlying the combination problem. The first is the assumption that if there is a higher or greater consciousness resulting from the combination of the individual consciousnesses, this should be readily apparent to the individuals, or to some independent observer. Why should it be? Higher consciousness is, in fact, by definition not accessible to a lower consciousness. Imagine if the individual neurons in our brains were conscious, which doesn't seem entirely implausible. Does it follow that by combining with many other neurons, each individual neuron should experience the consciousness of the whole organism or human being? Of course not.

This same fallacy underlies the Chinese Nation argument, which philosophers sometimes cite as evidence that human beings can't combine to create a higher form of consciousness. The claim is that a very large population of individuals, such as is found in China, should combine to form a higher consciousness, yet they don't. The assumption, again, is that if such a consciousness did emerge, it would be readily apparent to individuals. This doesn't follow at all. On the contrary, the individuals should be mostly unaware of it. That so many philosophers would think otherwise frankly reveals how little experience most of them have with higher or altered states of consciousness.

The second misunderstanding is the assumption that if combination does occur, it should take place simply and instantaneously. Steve Taylor furnished me with a link to an article on the combination problem (Goff 2011) in which the inability of consciousness to combine was contrasted with the ability of lego blocks to form some larger structure, such as a tower. This is an extremely poor example of combination, because the tower is not much different from the sum of the individual blocks. It's larger and has a different shape, but other than that, it's obviously nothing but a collection of blocks, what Wilber has referred to as a heap.

Combination in evolution is a far more subtle and complex process than simply assembling lego blocks, and takes place only over very long periods of time. During this time, the individual units—what Wilber refers to as holons—develop new ways of interacting with each other, and these interactions result in new properties, both in the individuals themselves, and in the higher-order holon that emerges from their combination. For example, cells associated to form organisms. This did not happen by a simple process of cells clumping together like lego blocks. The cells developed new forms of communication, which allowed them to associate with each other in ways that had not been possible before. The resulting organism was not simply larger and of a different shape, but had all kinds of properties that the individual cells lacked. For example, the organism is independent of any of its individual cells. This is not true for a lego assembly.

The same is true of individual humans associating into societies, and this leads to the third major misunderstanding of the combination problem. James assumed that individual humans had not already combined. He didn't seem to appreciate that there already is a higher, social consciousness, resulting from tens of thousands of years of evolution. He naively believed that because human beings are physically separate, with their brains housed in individual skulls, that their consciousness could not have previously been affected by interactions with each other.

Of course they can be, and are. A prime example is language. A single, asocial human being could never develop language, because it depends on shared meanings. As the postmodernists teach us, the definitions of words come from context, and context only emerges from interactions of individuals with each other. The vast web of human language is just one example of a higher, shared context that was created by many individual consciousnesses interacting with each other. James was a brilliant psychologist, but in his view of the combination problem, he was like a fish, apparently completely unaware of the water surrounding him.

Finally, there is a fourth major misunderstanding underpinning the combination problem, and this is the one that's relevant to our discussion of the relationship of consciousness to material processes. As I pointed out earlier, very few people seem to appreciate how subtle consciousness in the hard sense is. Those who accept the combination problem illustrate this perfectly. They assume that human consciousness, in the hard sense, is highly complex, leading them to wonder how it could possibly result from the association of simpler forms of consciousness.

When someone says that human consciousness is complex, however, he isn't referring to consciousness in the hard sense. He's referring to consciousness in the soft sense. What is an example of the complexity of consciousness? Our ability to perform mental simulations, to plan for the future? To think abstractly? To remember events in the past? Mathematics? Logic? Whatever one cares to name as an example, it's a functional aspect of consciousness, which means it can be explained, in principle, in terms of material processes. In other words, it's the complexity of these material processes that needs to be explained, and materialism has already achieved this to a very large extent. Our understanding is not complete, but very few scientists doubt that this problem, unlike the hard problem, is tractable.

It is, at the very least, an open question whether our conscious experience is equally complex. We experience complexity that is created by material processes, but does this consciousness itself have to be complex to experience this? Consider the experience we have of being in a room—the four walls, floor and ceiling, enclosing a three dimensional space containing three-dimensional objects, each with its own unique shape, texture, color, and so on. There is no question that highly complex processes in the brain are necessary to represent all the information contained in this scene. The question is, does our experience have to be equally complex to grasp it? Those who cite the combination problem assume that the answer is yes, but I don't believe this is clear at all.

We do know that the vast majority of the processing in our brains occurs unconsciously. What we experience consciously is only the tip of the iceberg. It's a relatively simple presentation of a very complex phenomenon. As anyone who stores videos on a computer knows, they contain a vast amount of information, but even a child can experience this information easily as it's presented to consciousness. Most of our language processing occurs beneath our awareness—to the extent that we are mostly unaware of the rules that govern language—yet we all learn at a young age to speak and comprehend quickly and virtually perfectly.

Taking all these points together, I don't believe the combination problem is the obstacle to the panpsychist view that many assume it is. This is an important point to make, because panpsychism, whatever its other flaws, does avoid the central problem that bedevils both materialism, as defended by David Lane in this discussion, and Steve Taylor's panspiritism: how do material processes interact with consciousness?

REFERENCES

Dawkins, R (1986) Sociobiology: The New Storm in a Teacup. In Science and Beyond. Edited by S. Rose and L. Appignanesi, 61-78. Oxford: Blackwell

Dennett, D (1995) Darwin's Dangerous Idea. London: Penguin

Ramachandran, VS (1997) Three laws of qualia: what neurology tells us about the biological functions of consciousness. J Consciousness Studies 4, 429-57

Wilber, K (1995) Sex, Ecology, Spirituality. Boston: Shambhala





