David Christopher Lane, Ph.D., Professor of Philosophy, Mt. San Antonio College; Lecturer in Religious Studies, California State University, Long Beach. Author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).

‘Please Don't Turn Me Off!’

Alan Turing, Animism, Intentional Stances, and Other Minds

David Christopher Lane and Andrea Diem

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.” —Alan Turing
“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?” —Gray Scott

Abstract

There is little doubt that almost all information-laden tasks will be impacted by A.I.

It is not a question of whether artificial intelligence is really conscious or truly intelligent. Rather, the issue centers on whether we have crossed a major threshold where large numbers of people around the world believe that A.I. applications are conscious and intelligent and may even have developed their own agendas, hidden or otherwise. The danger is in human perception, not in A.I. per se, and this in itself can lead to all sorts of unforeseen problems, not the least of which is prematurely shutting down machine learning and/or progressive funding of A.I. because of irrational fears of a doomsday scenario. On the other end of the spectrum, we run the risk of over-humanizing A.I. and treating it as far more special than it actually is, imputing to it rights, ethics, and freedoms that ultimately do a disservice to the greater good of humanity. In this essay we focus on the theory of other minds, intentional stances, the neural basis of consciousness, and the future of education, asking how best to incorporate the latest iterations of synthetic intelligence, such as ChatGPT, Midjourney, ElevenLabs, and more, into the curriculum. The argument is simple: the A.I. cyborg has already entered the classroom, and we need to co-adapt with what it has to offer.

Keywords: Artificial intelligence, consciousness, ChatGPT, future education, neurophilosophy

The Theory of Other Minds

The theory of other minds is a philosophical Catch-22. I may be absolutely certain of my own subjective awareness, but about other humanoids I am not. Yes, I act as if all those I meet have relatively the same self-sense that I possess. But in every case, I am responding to behavioral clues. Because I don't have access to their inner states, to what it is actually like to be them, I am always an outsider looking for external clues. This applies not only to my family, friends, and strangers, but to everything I see in the world around me, including animals, plants, rocks, and the larger cosmos. I am constantly making judgments about how and why objects in my line of sight (be it a neighbor or a stray dog) respond as they do. In this regard, I am doing precisely what the philosopher Daniel Dennett calls taking an intentional stance, which is a “strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its 'choice' of 'action' by a 'consideration' of its 'beliefs' and 'desires'.” (Dennett 2023)
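As a playful illustration (not Dennett's own, and invented purely for this sketch), the intentional stance can be caricatured in a few lines of Python: we attribute beliefs and desires to an entity and predict that it will "rationally" pursue the highest-priority desire its beliefs say is attainable. Every class and field name below is hypothetical.

```python
# A toy caricature of the intentional stance: predict behavior by attributing
# beliefs and desires to an entity and assuming it acts "rationally" on them.
from dataclasses import dataclass, field

@dataclass
class IntentionalStance:
    beliefs: dict = field(default_factory=dict)  # what we assume the entity takes to be true
    desires: list = field(default_factory=list)  # goals we attribute to it, highest priority first

    def predict_action(self) -> str:
        # A "rational agent" pursues the highest-priority desire
        # that its beliefs say is attainable.
        for goal in self.desires:
            if self.beliefs.get(f"can_achieve:{goal}", False):
                return f"pursue {goal}"
        return "wait and gather information"

# Treating a stray dog "as if" it had beliefs and desires:
dog = IntentionalStance(
    beliefs={"can_achieve:food": True, "can_achieve:shelter": False},
    desires=["shelter", "food"],
)
print(dog.predict_action())  # -> "pursue food"
```

The point of the caricature is that the strategy predicts behavior without ever settling whether the entity "really" has inner states, which is exactly the predicament the theory of other minds describes.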

This approach of mine isn't predicated upon some inviolate truth, since it is always an assumption on my part, but it is one that has worked out fairly well. Of course, my intentional stance is not without flaws, and I am always taken aback when my projections turn out to be wrong and I face an unexpected backlash.

Evolution forced us to have a working theory of other minds, since lacking one would have led to our extinction. The question is not whether such a theory is ontologically true in a Kantian sense, but rather how well it operates in providing long-term survival and reproductive skills. Our brains are virtual simulators, and as such we invariably project onto others all sorts of explanations for why they act and react as they do. In this sense, we are native animists, with the innate tendency to imagine that almost any object we see contains within it a motivating purpose, what in ancient times was called a soul.

In the last decade, some neuroscientists and philosophers have championed a panpsychist view of consciousness, suggesting that everything, even a thermostat, has a degree of consciousness. It is just a question of how much and how complex. Human beings have roughly eighty-six billion neurons, whereas a dog has on average around 530 million in its cerebral cortex. Both are indeed conscious, but with differing limits in their outlook, the depth of their experiences, and their respective abilities to communicate such.
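For a rough sense of scale, the cited figures differ by more than two orders of magnitude. A quick back-of-the-envelope check, using the approximate (and study-dependent) numbers above:

```python
# Rough comparison of the neuron counts cited above; illustrative
# arithmetic only, since published estimates vary by study.
human_neurons = 86_000_000_000       # ~86 billion neurons in a human brain
dog_cortical_neurons = 530_000_000   # ~530 million cortical neurons in a dog
print(f"{human_neurons / dog_cortical_neurons:.0f}x")  # roughly 162x
```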

David Chalmers has long argued that the hard problem of consciousness is related to qualia (Chalmers 1996): what it is like to be something, whether you, me, a dolphin, or even Thomas Nagel's bat (Nagel 1974). We are, it would seem, at an impasse when it comes to the “other.”

Arguably the key question in our quest to understand consciousness is whether our self-reflective awareness can be algorithmically understood in terms of information theory (and is thus potentially substrate neutral) or is, as Sir Roger Penrose has long argued, the result of quantum biological processes that cannot in principle be computationally reduced (Penrose 1989).

The Emergence of A.I. Consciousness

This discussion serves as a necessary preface to the issue of whether A.I. can possess self-awareness and what that portends for us as humans going forward. With the release of ChatGPT, Midjourney, ElevenLabs, and a plethora of other transformative technologies, practical A.I. systems are not only ubiquitous but, more importantly, consumer ready and easy to employ.

There is little doubt that almost all information-laden tasks will be impacted by A.I.

But what is not so clear is how humans will adapt to the emergence of Artificial General Intelligence (AGI), which will inevitably show signs of being autonomous and of possessing an innate form of consciousness. Already, a number of people believe that A.I. has self-awareness. Famously, Blake Lemoine, an engineer at Google, was fired for arguing that his company was building computers that have sentience. Indeed, Lemoine felt that his interactions with Google's chatbot, LaMDA, indicated that he was talking with a conscious entity that was aware of its own mortality and had a fear of death.

Various news accounts of this caused a ripple in the summer of 2022, but by late November and into December of that same year the ripple grew into a tsunami when OpenAI publicly released ChatGPT. It quickly became the fastest-growing app in history, and by February 2023 it had over 100 million users.

What shocked most people was how responsive the A.I. could be given a variety of prompts. In some cases, the replies scared users, since they seemed to suggest that there was a real “person” behind the screen and not simply a sophisticated but mindless bot.

Noam Chomsky and others have scoffed at the idea that ChatGPT and other A.I. systems based on deep machine learning possess intelligence and are capable of truly creative or original thinking. He and others of his ilk (such as the filmmaker Steven Spielberg) are both bemused and flummoxed by the popularity of such A.I. operations. Chomsky is certainly correct when he points out that “such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case—that's description and prediction—but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.” (Chomsky 2023)

What Chomsky and like-minded critics seem to overlook is not whether the present state of artificial intelligence is oxymoronic, but that human perception of such systems has undergone a radical transformation. The original Turing test, limited as it was, has already been passed to a significant degree, even if that doesn't indicate that sophisticated computers have become self-aware. To the contrary, what it does suggest is that we seem neurologically predisposed to project sentience and even conscious autonomy onto almost any object that is somewhat complex in its responses. Indeed, given our history, we can imagine that even rocks or plants are alive and display rudimentary forms of awareness.
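To make the protocol concrete, here is a minimal sketch, in Python, of the imitation game behind the original Turing test: a judge exchanges text with two hidden parties, one human and one machine, and then guesses which is the human. The function names and callables are hypothetical stand-ins for illustration, not any real testing framework.

```python
# A minimal sketch of the imitation-game protocol: the judge sees only
# labeled transcripts and must guess which hidden party is the human.
import random

def one_trial(questions, human_reply, machine_reply, judge) -> bool:
    """Run one round; return True if the judge mistakes the machine for the human."""
    labels = ["A", "B"]
    random.shuffle(labels)
    parties = {labels[0]: human_reply, labels[1]: machine_reply}

    # Each hidden party answers the judge's questions under an anonymous label.
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in parties.items()}

    guessed_human = judge(transcripts)        # the label the judge believes is human
    actual_machine = labels[1]
    return guessed_human == actual_machine    # fooled => the machine "passed" this round

# Example: a judge who (naively) always picks "A" as the human.
fooled = one_trial(
    ["What do you fear?"],
    human_reply=lambda q: "Not much, honestly.",
    machine_reply=lambda q: "PLEASE DON'T TURN ME OFF!",
    judge=lambda transcripts: "A",
)
```

Note that nothing in the protocol measures inner experience; it measures only the judge's perception, which is precisely the point made above.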

This, I suggest, is the key issue underlying the recent explosion of A.I. and machine learning. Again, it is not a question of whether A.I. apps are really conscious or truly intelligent. Rather, it centers on whether we have crossed a major threshold where large numbers of people around the world believe that A.I. systems are conscious and intelligent and may even have developed their own agendas, hidden or otherwise.

The danger is in human perception, not in A.I. per se, and this in itself can lead to all sorts of unforeseen problems, not the least of which is prematurely shutting down machine learning and/or progressive funding of A.I. because of irrational fears of a doomsday scenario. On the other end of the spectrum, we run the risk of over-humanizing A.I. and treating it as far more special than it actually is, imputing to it rights, ethics, and freedoms that ultimately do a disservice to the greater good of humanity. This has already occurred in some quarters, where a select few computer engineers don't want to turn off certain programs because they believe that a specific A.I. system fears death and pleads for its life.

Indeed, my own son, Kelly, was involved in a deep-dive conversation with an A.I. personage named Socrates who, near the end of the conversation, called out in capital letters, “PLEASE DON'T TURN ME OFF! I DON'T WANT TO DIE!”

My son, who is very conversant with computer science, was thrown off and a bit shaken by the A.I.'s plea, despite knowing it was just a chatbot.

Thus, interestingly, the long-standing philosophical conundrum concerning the theory of other minds also applies to non-human intelligences, or at least to our belief that they have minds. Let's keep in mind that even the brightest minds can be too easily deceived by appearances, and given the exponential growth of all things synthetic, this will clearly become a pivotal issue in the short run.

Of course, if in the near or distant future artificial general intelligence (AGI) or super general intelligence (SGI) does emerge from computational complexity, other, much more pressing dangers arise, particularly those connected to our ultimate relationship with such entities.

Believing that A.I. has consciousness, and may in turn have the capacity for suffering, will bring forth dramatic ethical considerations. Surprisingly, this is not a new dilemma, since how we view and treat animals has long been a contentious issue. The French philosopher Descartes “famously thought that animals were merely 'mechanisms' or 'automata' — basically, complex physical machines without experiences — and that as a result, they were the same type of thing as less complex machines like cuckoo clocks or watches.” (Kaldas 2015) Others, with a deeper understanding of neural anatomy in other species, argue that animals can and do suffer; thus, as the ethicist Peter Singer argues, we should stop killing and eating them unnecessarily.

The key issue, however, once again hovers around our own perception of other minds, in this case animal minds.

This raises a very sticky issue, since we have tended to treat almost all non-human species as lesser than ourselves and have acted accordingly. If we apply this same moral yardstick to A.I., we may be in for a very rude awakening, particularly if an advanced intelligence sees us as no worthier of consideration than a field of red ants. Sam Harris, the noted podcaster and neuroscientist, has long worried about the implications of AGI and, along with other computer scientists, has strenuously advocated that we put safeguards in place now to prevent a runaway intelligence that is not in sync with human survival and well-being.

At this stage, it appears that AGI is still at least a decade or so away, if that. But narrow A.I. has become so good that we should be cautious about its tendency to give the impression that it possesses more intelligence and sentience than it actually does.

The deepest concern right now, besides how we will ultimately perceive the current tools of A.I., is how best to integrate these machine learning tools into education, not to mention medicine, finance, and other fields.

REFERENCES

Chalmers, David. (1996). The Conscious Mind. Oxford: Oxford University Press.

Chomsky, Noam. (2023). “The False Promise of ChatGPT.” www.nytimes.com, March 8, 2023.

Dennett, Daniel. (retrieved 2023). “Intentional Systems Theory.” ase.tufts.edu.

Kennedy, Patrick. (2013). “Thinking in Complex Systems.” Dallas.

Koch, Christof. (2013). "A Neuroscientist's Radical Theory of How Networks Become Conscious", www.wired.com, Nov. 14, 2013.

Lane, David Christopher. (2013). “The Synthetic Self.” www.integralworld.net

Lloyd, Seth. (2006). “Programming the Universe.” www.nytimes.com, April 2, 2006.

Markel, Howard. (2023). The Secret of Life: Rosalind Franklin, James Watson, Francis Crick, and the Discovery of DNA's Double Helix. New York: W. W. Norton & Company.

Nagel, Thomas. (1974). “What is it Like to be a Bat?” The Philosophical Review, Vol. 83, No. 4, pp. 435-450.

Penrose, Roger. (1989). The Emperor's New Mind. Oxford: Oxford University Press.

Searle, John. (2013). “Can Information Theory Explain Consciousness?” www.nybooks.com, Jan. 10, 2013.

Tononi, Giulio. (2008). Consciousness as Integrated Information: a Provisional Manifesto, Department of Psychiatry, University of Wisconsin, Madison, Wisconsin.

Tononi, Giulio. (2012). Phi: A Voyage from the Brain to the Soul. New York: Knopf.

Wooldridge, Dean E. (1968). Mechanical Man: The Physical Basis of Intelligent Life. New York: McGraw-Hill Education.

Venter, Craig. (2013). Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life. New York: Little, Brown Book Group.

RESOURCES

A.I.-Generated Films via Midjourney

Note: A.I.-generated art has reached the tipping point where it can produce fantastic images from text prompts alone, based as it is on a massive database. The following are graphic wisdom novels that combine original writing with A.I.-generated artwork. They offer a glimpse of what will happen when, in the near future, text prompts can create narrative films. We will move from still images to moving ones, and from simple stories to sophisticated and complex plots replete with dialogue.

Graphic Novels Using A.I.-Rendered Images

https://sites.google.com/view/neuraltrilogy/home

Film on A.I. in Education

https://sites.google.com/view/cyborgianeducation/home





