Andrea Diem-Lane is a tenured Professor of Philosophy at Mt. San Antonio College, where she has been teaching since 1991. Professor Diem has published several scholarly books and articles, including The Gnostic Mystery and When Gods Decay. She is married to Dr. David Lane, with whom she has two children, Shaun-Michael and Kelly-Joseph.


The Rise of
the Machine Brain

A Review of The Cambridge
Handbook of Artificial Intelligence

Andrea Diem-Lane

With all of this said, in a hundred years' time, when we may be in a full-blown “AI summer,” the world may seem unrecognizable to us.

While computer technology has no doubt advanced dramatically, the question remains whether we will one day have full-blown artificial intelligence. The idea that we could actually create a being that may be conscious and far surpass our own intelligence is astounding, but the ramifications of such a creation, whether it proves to be our saving grace or our demise, are still unknown to us. Many argue that since an AI explosion may be in the near future, and could so radically transform our lives, we should be diligently addressing all topics related to AI.

The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, does just that. The text explores many AI topics from a variety of angles, including computer science and cognitive science perspectives as well as philosophical and ethical ones. Though the editors make a point that this “guide's” main focus is comprehending the latest in AI research and its theoretical issues, many of the essays are written by philosophers of mind who raise important philosophical questions: Are machines conscious, and what do we mean by that? Can they be programmed to be ethical? And what does the future hold in light of AI?

The text is broken up into four main sections: foundations (an examination of AI's accomplishments, research areas, and philosophical roots); architectures (an overview of good old-fashioned AI or GOFAI, connectionism and neural networks, embedded cognition, etc.); dimensions (an investigation of different dimensions of intelligence, from learning to emotion to consciousness); and, lastly, extensions (highlighting the worlds of robotics, artificial life, and the ethical aspects of AI). The 15 articles that make up these sections are “stand-alone pieces” with often overlapping material. For instance, John Searle's famous Chinese Room argument is discussed in several of the articles.

The Cambridge Handbook of Artificial Intelligence

In the opening chapter, Stan Franklin walks the reader through the key moments in AI history, including Alan Turing's significant contributions and the offering of the $100,000 Loebner Prize to the first program able to pass the Turing Test (an award still unclaimed, though I suspect it will be claimed in the very near future). We learn here that the term “AI” was coined shortly after Turing, in the 1950s, by John McCarthy, who worked at Dartmouth at the time and later at MIT and then Stanford. The author explores many fascinating events in AI history, including the moment machine learning was born with Arthur Samuel's checkers-playing program, Deep Blue's defeat of chess champion Garry Kasparov, and Watson's victory over champions on the game show Jeopardy!. In poetic form, the writer declares that in the second decade of the 21st century AI is emerging from an AI winter (with some progress being made) into an AI spring and maybe even an AI summer (with major accomplishments on their way).

Franklin's article is followed by a couple of essays that explore the philosophical aspects of AI, asking the profound questions of what it is to have a mind and whether computers will ever possess one. The authors review noted philosophical positions within the field, including Daniel Dennett's strong AI position that consciousness is substrate neutral, so that computers modeled after the human brain can have awareness. This is challenged by Hubert Dreyfus' critique that AI will not match human intelligence: unlike a computer's processing, human thinking is not algorithmic or rule-oriented (a similar argument was made by Roger Penrose). Dreyfus asserts:

“Our ability to understand the world and other people is a non-declarative type of know-how skill that is not amenable to GOFAI propositional codification. It is inarticulate, pre-conceptual, and has an indispensable phenomenological dimension that cannot be captured by any rule based system.”

Another objection to strong AI comes from John Searle. According to his weak AI position, there are no qualia for the computer as there are for the human mind. Computers, he says, simulate conscious understanding but do not have it. To demonstrate this, Searle devised the Chinese Room thought experiment. Imagine you are outside a closed room, trying to communicate in Chinese with whoever is inside. You slide questions written in Chinese under the door, and the occupant slides the correct answers back to you. From your perspective you may think you are communicating with a real Chinese speaker who understands the language, but in truth, according to this thought experiment, you are communicating with something that is simply manipulating syntactic symbols without understanding their meaning or semantic content. In a similar vein, another thought experiment mentioned in the reading is Ned Block's “China brain.” As a criticism of machine functionalism, it asks us to imagine the entire population of China acting like a human mind for an hour. If functionalism is correct, and the human mind were simulated perfectly, with individual citizens acting as neurons connected in just the right way, then a conscious mind should emerge. But, as Block points out, this seems absurd.
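Searle's point can be made concrete with a toy program. The minimal sketch below (in Python, with hypothetical question-answer pairs standing in for the rule book) “answers” Chinese questions by pure symbol lookup. Nothing in it represents understanding, which is precisely what Searle argues: correct output does not entail comprehension.

```python
# A minimal sketch of Searle's Chinese Room: a program that "answers"
# Chinese questions via pure symbol lookup, with no grasp of meaning.
# The question/answer pairs are hypothetical stand-ins for the rule book.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
    "你会说中文吗？": "会。",          # "Can you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    # Pure syntactic manipulation: match the input symbols, emit the
    # paired output symbols. No semantics is represented anywhere.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你会说中文吗？"))  # prints "会。" with zero comprehension
```

From outside, the answers look fluent; inside, there is only lookup. Whether scaling such a system up could ever amount to understanding is exactly the dispute between Searle and his functionalist critics.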

Despite the philosophical objections to the claim that strong AI awaits us, other articles investigate evidence for it and the remarkable strides made in AI research. One such area is machine learning, perhaps one of “the most rapidly growing areas of computer science.” Another is the development of artificial agents that may be able to experience and express emotions. Though we face the “other minds problem” (not being able to know the internal states of another, or whether there are internal states at all), we encounter the same problem with other humans. If the “right kind of architecture” for emotion is implemented, as it is in the human brain, then there is no reason to argue that conscious emotions cannot be experienced in AI. If one takes a substrate-independent view, then simulating the physics of the brain well enough can allow for consciousness and emotional states.

In one of the chapters in the last section of the book, promising lines of research in robotics are investigated. For instance, interfacing technology with neural tissue may allow prosthetics to be controlled by the brain. Perhaps even more fascinating is the possibility of creating hybrid machines, or robots with “biological matter in their control systems.” Whether digital electronics are put into our brains or some of our organic matter is placed into the robot, future robots, it is predicted, will become as commonplace as the computers on our desks.

Furthermore, artificial life research is elucidated in its three branches: hard artificial life (autonomous robots); wet artificial life (test-tube work, creating artificial cells from living and non-living materials); and soft artificial life (computer simulations or digital constructions). Whether we can create life synthetically, and whether software systems can be “alive,” raises profound philosophical concerns. Creating such life forms places us in “uncharted ethical terrain.”

One of the most enlightening articles in the text dealing with ethics is Nick Bostrom and Eliezer Yudkowsky's “The Ethics of Artificial Intelligence.” The authors evaluate important ethical issues, such as preventing machines from harming humans and the world at large, and the moral status of machines themselves. For AI to be built to act safely and ethically, it needs “general” intelligence, not merely task-specific competence. An Artificial General Intelligence (AGI) would be able to “foresee the consequences of millions of different actions across domains.” Furthermore, to determine whether a machine should have moral status, we need to ask whether it has both sentience (qualia) and sapience (personhood). If these are established, conceivably one should not discriminate against it morally. According to the Principle of Ontogeny Non-Discrimination, if two beings have the “same functionality and the same consciousness experience, and differ only in how they came into existence, then they have the same moral status.” We already apply this reasoning to humans conceived through in vitro fertilization, and presumably would to human clones, so we should reasonably bring it into play when creating artificial cognitive systems.

In this same article there is a fascinating section on whole brain emulation, or “uploading” the human brain onto a digital computer. To do this, one may have to destroy the original organic matter, dissecting it into thin slices in order to capture all of its neuronal connections. Once a precise three-dimensional map of the brain is achieved, it can be uploaded to a powerful computer, and the new digital brain may be conscious just as the old one was. This new brain could either live in a virtual reality or be connected to a robotic body and live in the external world.
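To make the idea of the “map” concrete, here is a deliberately tiny sketch: a hypothetical scanned connectome represented as a weighted graph, stepped forward as a crude discrete-time simulation. The three-neuron structure and the weights are invented for illustration; an actual emulation would involve tens of billions of neurons and vastly richer biological dynamics.

```python
# A toy illustration of the emulation idea: the "scan" yields a graph of
# neurons and synaptic weights, which a computer then steps forward in time.
# All structure and numbers here are hypothetical.

# neuron -> list of (target neuron, synaptic weight)
connectome = {
    "n1": [("n2", 0.9), ("n3", 0.4)],
    "n2": [("n3", 0.7)],
    "n3": [("n1", 0.2)],
}

def step(activity: dict) -> dict:
    # Each neuron's next activity is the weighted sum of its inputs,
    # clamped to [0, 1] as a stand-in for a firing-rate nonlinearity.
    nxt = {n: 0.0 for n in connectome}
    for src, edges in connectome.items():
        for dst, weight in edges:
            nxt[dst] += activity[src] * weight
    return {n: min(1.0, max(0.0, a)) for n, a in nxt.items()}

activity = {"n1": 1.0, "n2": 0.0, "n3": 0.0}
for _ in range(3):
    activity = step(activity)
print(activity)  # the "uploaded" dynamics, running on silicon
```

Whether running such a map, at whatever fidelity, would carry consciousness along with it is exactly the substrate-independence question the book leaves open.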

In this discussion, the writers raise the issue that artificial minds may experience subjective time very differently than we do. Four years of objective time to us might be experienced by a “fast AI” as a millennium of subjective time. The Principle of Subjective Rate of Time is introduced here: “In cases where the duration of an experience is of basic normative significance, it is the experience's subjective duration that counts.” In this context, if a fast AI and a human were both in pain, it might be morally relevant to alleviate the AI's pain before helping the human. Or, in another case, if an upload committed a crime and were sentenced to two years in prison, we would have to ask whether this should be two objective years (experienced subjectively as a very, very long time at the fast AI's rate) or two subjective years (for us, just a couple of days).
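The arithmetic behind these cases is simple once a speed-up factor is fixed. The snippet below assumes a hypothetical factor of 250, which is what would turn four objective years into roughly a millennium of experience; the factor itself is illustrative, not from the book.

```python
# A toy calculation of the review's examples, assuming a hypothetical
# speed-up factor for a "fast AI" (the factor 250 is illustrative only).

SPEEDUP = 250  # subjective seconds experienced per objective second

def subjective_years(objective_years: float, speedup: float = SPEEDUP) -> float:
    # Subjective duration = objective duration x rate of experience.
    return objective_years * speedup

def objective_days(subjective_yrs: float, speedup: float = SPEEDUP) -> float:
    # Objective days that must pass for the AI to experience the given
    # number of subjective years.
    return subjective_yrs / speedup * 365.25

print(subjective_years(4))  # 1000.0 -- four objective years feel like a millennium
print(objective_days(2))    # ~2.9   -- a "two subjective year" sentence passes in days
```

The moral puzzle is precisely that both ways of counting the two-year sentence seem defensible, yet they differ by a factor of the speed-up squared in lived experience.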

David Chalmers

Many of these same philosophical and ethical concerns are also addressed in Sam Harris' podcast episode “The Light of the Mind,” in which Harris interviews philosopher David Chalmers. In the last 40 minutes of the almost two-hour podcast, the conversation turns to AI. Chalmers did his Ph.D. at an AI lab at Indiana University, so AI research has been a particular interest of his. While he has spent a great deal of his career as a philosopher on the hard problem of consciousness (subjective, first-person states of awareness), more recently he has focused his energy on whether AI can experience such internal states. Known for taking a non-reductionist view, Chalmers argues that consciousness cannot be explained by standard physical processes. He claims that something needs to be added to the picture, such as new properties or an enriched physics along panpsychist lines. Because of this view, some label him an “innocent dualist.” Nonetheless, he entertains the possibility that AI can be conscious much as we are.

Ultimately, Chalmers argues, there are two ways AI research can go. In one scenario, we become the superintelligent creatures ourselves, by enhancing and augmenting our brains or by uploading ourselves (via reverse engineering). Once we have cracked the neural code, we could replace our brain with durable computer parts, and perhaps acquire a new artificial body as well. Wisely, Chalmers suggests a gradual upgrade, one neuron replaced by one silicon chip at a time, with the patient staying awake during the procedure to ensure consciousness remains intact along the way. If the “lights start to go out” during the procedure, it can be stopped before it is too late. Supposing it is successful, the question remains whether this will be “me” or simply a copy or twin of me. When we ask the silicon beings if they are the original, they may of course try to convince us they are, but we would really have no way of knowing whether this is the case.

If we want to live virtually, with no “meat body,” we can reverse engineer our brains and upload them onto computers. Yet an obvious ethical question arises out of this: if you delete an uploaded self, are you committing murder? Chalmers contends that if there is consciousness (or what appears to be consciousness), then this being is in our “moral circle of concern.” On a personal note, at the end of the interview Chalmers expresses his hope that in his lifetime he will be able to upload himself. With this, immortality might possibly be achieved.

In another storyline, according to Chalmers, the superintelligent beings are not like us: a new superintelligent AI may destructively take over the world, gobbling up our resources and eliminating anything that stands in its way, as Nick Bostrom also speculates in Superintelligence. Since AIs may have the ability to mass reproduce (making copies of themselves), and each successor system may be more intelligent than the prior one, these smarter minds may pose great risks to our overall well-being (though it is also possible that they will offer great benefits). Sam Harris interjects that, with their far superior intelligence, they may trample us as we do an anthill.

With all of this said, in a hundred years' time, when we may be in a full-blown “AI summer,” the world may seem unrecognizable to us. Superintelligent conscious machines, humans with silicon-chip brains and robotic bodies or uploaded human consciousness, and possibly even teletransportation, may be the norm. Since the tide of AI is rising, now is the time to focus our efforts on setting up ground rules and ethical standards, helping to create a friendly future before it is too late.





