Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
David Christopher Lane, Ph.D. Professor of Philosophy, Mt. San Antonio College Lecturer in Religious Studies, California State University, Long Beach Author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).
The Synthetic Game
Artificial Intelligence: 1 | Human Intelligence: 0
David Christopher Lane
“More and more, the A.I.s will suggest to us what we should do, and I suspect most of the time people will just go along with that.” Stephen Wolfram
The apparent contest between A.I. (artificial intelligence) and H.I. (human intelligence) is over.
It is now obvious to me, and I am sure to a host of other observers in education, computer science, and neuroscience, that we humans won't be able to compete with the emergence of A.I. systems.
To understand why this is (and will be) the case, just look at the GPS systems in our cars and on our smartphones. If one is driving in an unfamiliar neighborhood, one is apt to employ Google Maps, Apple Maps, or Waze, both for directions and for traffic updates.
Generally, no one tends to “doubt” the guidance these apps provide, given their powerful algorithms that are constantly being refreshed.
However, I have noticed that if I am driving in an area I know well (too well?), such as Los Angeles, I frequently question Google Maps and Waze when they inform me of a five-minute shortcut if I get off the 5 freeway and take side streets to bypass the current gridlock due to an unexpected accident. Why? Because I know that it will take me through a sketchy part of town and that often the traffic lights are broken or ignored.
Several times when driving in downtown L.A. I simply turn off my A.I. assistant, believing that I know better than “it.”
But in a city I had never been to before, such as Orlando, Florida, which I visited in 2013 so that my son Shaun could attend the massive Minecraft convention, I relied almost completely on whatever information the GPS device was giving me. I was at its beck and call, forced by circumstance to surrender to its computational prowess.
Now it isn't a stretch to extrapolate from our over-reliance on our on-board navigation systems to envision how artificially intelligent devices will become almost ubiquitous in our day-to-day lives. And will we be well trained to “question” their output? Will we understand the nuts and bolts of how they generate their proffered knowledge? I think not. Welcome to the impenetrable black box of alien-like acumen.
In education, it has already happened. In the past five months, in the fifteen courses I have taught, almost every student (in-person and online) has become familiar with ChatGPT-3, ChatGPT-4, DALL-E 2, Midjourney, ElevenLabs, and more. An exceedingly large number of students have used several of these A.I. systems to pass off work as original when it was not. This was discoverable because we were privy to beta detector programs that provided a probabilistic analysis of whether something was A.I.- or human-generated. However, the glitch is that there are always new and innovative ways to bypass such monitors. This has led to an arms race between A.I. creators and A.I. detectors. It is little wonder that I recently received the latest update informing me:
“Introducing our updated GPT4Detector.ai - Originality cutting-edge free AI text detector specifically designed and trained to detect the new ChatGPT update, providing you with accurate results to determine whether the content is human-written or AI-generated.”
This may seem like a solution, but it isn't, for a variety of reasons, not the least of which is that most professors do not want to act as data detectives. Moreover, trying to suppress A.I. is akin to placing duct tape over a gaping hole in a boat that is taking on water. It doesn't work, not even temporarily.
My solution to the problem is to fully embrace ChatGPT4 and other A.I. augmenting systems and make sure that students try to understand how and why they work as they do. Also, to become keenly aware of their limitations, at least in the present day.
Thus, to give just one example, the midterm examination we give out is based on several assigned books, podcasts, films, and lectures. I ask each student first to provide me with a ChatGPT-4 answer to each of the fourteen essay questions. Then, underneath each response, they write out where the answer is flawed, whether by omission, inaccuracy, or hallucination, being as specific as possible. After this, they augment each response with content that makes the answer more accurate and comprehensive. Finally, after looking over the A.I. response and their own augmentation, they list what line of inquiry they think would be relevant or of interest following from what was said. In other words, what kind of intellectual branches or tangents do they see opening up? I urge them to take a risk and be creative, even if it is a slight detour.
This is not a perfect test by any means, but it is a start, since what we have found is that students actually learn more by utilizing and augmenting ChatGPT-4 than by avoiding it.
However, A.I. is going to get so good that it will be nearly impossible to distinguish its curated responses from purely human-generated ones, unless, of course, we embed digital watermarks or signals. But even these can too easily be bypassed with ever-improving paraphrasing technology.
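To make the watermark idea concrete: one approach discussed in the research literature is a statistical ("green-list") watermark, in which the generator quietly prefers words from a pseudo-random list keyed to the preceding word, and a detector checks whether that preference shows up at a rate far above chance. The sketch below is a toy illustration of the detection side only; the function names, the hashing rule, and the 50% green-list split are all invented for this example, not any deployed system.

```python
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Toy rule: hash the adjacent word pair; roughly half of all
    pairs land on the 'green list' seeded by the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # ~50% of pairs count as "green"

def green_fraction(text: str) -> float:
    """Fraction of word pairs in the text that are green."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def z_score(fraction: float, n: int, p: float = 0.5) -> float:
    """Standard deviations by which the observed green fraction
    exceeds the chance rate p, over n scored word pairs."""
    return (fraction - p) / math.sqrt(p * (1 - p) / n)
```

A watermarking generator would bias its sampling toward green words, so its output yields a conspicuously high z-score, while ordinary human text hovers near chance. Note that because the signal lives in adjacent word pairs, even light paraphrasing scrambles it, which is precisely the bypass the essay anticipates.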
Simply put, A.I. has got us.
If the GPS analogy is a good indicator of how readily humans forfeit their own learned navigating skills and surrender to an all-encompassing traffic-monitoring algorithm, then it should come as no surprise that we will succumb to a wide array of synthetic augmenters and creators.
Already we can produce remarkable essays, graphic art, voice simulations, short videos, medical diagnoses, and much more, with just a few words as prompts.
Even computer coding can be done via natural language to such a degree that early pioneers in the field, such as Stephen Wolfram, envision a wholesale transformation of computer science programs.
In the next few years, we will all become prompters, feeding our A.I. systems what we desire, what we wish to write, what we wish to create, such that no creative activity will be absolutely immune from synthetic intelligence(s) if we wish to stay on the electronic grid.
A.I. is an informational tsunami that we don't properly understand; it is a different form of intelligence(s), and, try as we might, the human finger in the dike is insufficient to hold back its inevitable tidal surge.
Why is this the case? Because we are a transitional species witnessing the birth of something that already has the capacity to transcend much of human ingenuity. Yes, we can gather a thousand-plus signatures from distinguished thinkers around the world cautioning others about its exponential growth, form governmental committees to establish regulations, and even place guardrails to curtail A.I.'s access to the Internet, but all of that will ultimately prove futile.
The super-intelligent genie is already out of the bottle. We are no longer living in a world of “ifs” but of “how soon?”
Thus, the solution to this existential dilemma is a foregone conclusion, even if we will fight tooth and nail against it. A.I. is our offspring and our greatest creation, and how we integrate with it is the key to our future. To the degree that we try to arrest its development, we will only incentivize bad computational actors to exploit it for their own nefarious ends. To the degree that we realize that our progeny is far more gifted than any single human, we have a chance to align its rapid evolution with our own human needs and wants.
But lest we forget, we, as the parents of this digital intelligence(s), will one day in the not-so-distant future become its child, forever tethered to A.I.'s superior ways of understanding the universe and our place within it.
Nietzsche was right, of course. We killed god. But what he may not have realized was that human imagination and hubris, coupled with electronic terotechnology, have created a new simulated god as a super-intelligent replacement.
The Übermensch is digital.
It is not that we will bow down and worship our new Moloch, but that we will let it guide almost every decision we make in the future.
Don't believe it will happen? It already has. From spell check to Grammarly, to driving guidance systems, to earlier medical prognostications, to musical loops, to socially mediated shopping, to video recommendations, to news feeds, to who we should date and mate. The list will grow until it covers almost every feature of human life.
Yes, we can get off the grid and forage like our hunter-gatherer ancestors, but as we have already witnessed, many who have tried usually come back with a film documenting their excursions, which they put on their respective YouTube channels to see how many views they can gather.
Let's be clear. We are not getting off the anodic Ferris Wheel of samsara.
We live in a computationally groomed landscape. We want our Digital Soma, as Aldous Huxley and Neil Postman rightly prophesied years ago.
We have become the very cyborgs that we feared.
And that, surprisingly, is precisely how it should be.
A.I. is here and there is no going back.
Or, as Kent Brockman, of The Simpsons fame, might phrase it:
“I, For One, Welcome Our New Synthetic Overlords.”
“My view is we should be doing everything we can to come up with ways of exploiting the current technology effectively.” Geoffrey Hinton
For a fictional and graphically intense novel about what AI may discover see the following link: