David Christopher Lane, Ph.D., is a Professor of Philosophy at Mt. San Antonio College and Founder of the MSAC Philosophy Group. He is the author of several books, including The Sound Current Tradition (Cambridge University Press, 2022) and the graphic novel The Cult of the Seven Sages, translated into Tamil (Kannadhasan Pathippagam, 2024). His website is neuralsurfer.com.
Kelly Diem-Lane is currently a sophomore in college, having graduated from high school at age 15. He is the author of several small books, including When Computers Become Human (published in China and translated into Mandarin, 2017) and Creature Compassion (2022). His fields of interest are Artificial Intelligence and Virtual Reality technologies.

A Synthetic Transplant

Offloading Human Intelligence in Favor of A.I. Governance

David Christopher Lane and Kelly Diem-Lane

Summary by ChatGPT: The essay "A Synthetic Transplant" by David Lane and Kelly Diem-Lane explores the implications of increasingly offloading human intelligence to Artificial Intelligence (AI) and technological systems. The authors caution against over-reliance on technology, as it might lead to a loss of human competency in basic skills and potentially a loss of individuality. They also touch on concerns such as economic displacement, power concentration, and reality manipulation associated with AI's advancement. Through historical and contemporary examples, they show how humanity's over-dependence on technology could lead to a "Synthetic Temptation," in which AI governs significant aspects of life, thereby transforming the essence of human experience.

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.”
— Alan Turing

The cliché “Why reinvent the wheel?” is an oft-used reminder that it is a waste of time and energy to redo what has already been done. It also reminds us that whenever we can offload a task instead of doing it ourselves, we would be well advised to do so. The danger, however, is that the more we offload, the less competent we may become at mastering or performing that specific function.

There is a plethora of examples illustrating this throughout human history. Where once we were competent at making fire from raw materials such as flint or sticks, today we use matches or portable lighters. Where once we relied on our own navigational senses to travel, track animals, and locate edible plants, today we use smartphone applications. We increasingly rely on the tools we make, usually in the hope of making our lives easier and more productive.

Socrates decried the written word, believing it would hamper our poetic memories. His point, though containing an element of truth, didn't stick: the written word was too beneficial to ignore. While history has had its share of Luddites, the progressive pull of technology is irresistible for most.

The difference now is that information-laden systems evolve at an exponential rate, and because of this we can envision a future in which we become so reliant and dependent on technology that we grow dumber in the process. With Artificial Intelligence, we are bit by bit offloading our human intelligence in favor of A.I. governance.

Just a few decades ago, writing a letter required knowing how to spell, how to use correct grammar, and how to construct a proper reply; with spell-check, grammar-monitoring, and auto-fill, these tasks can be automated. With the advent of ChatGPT, nobody needs to know how to write an essay or a story, or even how to analyze a legal brief. Large language models (LLMs) can do it for you within seconds, saving massive amounts of time, energy, and frustration.
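
To illustrate just how little effort this offloading now takes, here is a minimal sketch of delegating a letter to an LLM, assuming the OpenAI Python client and an API key already set in the environment; the model name and prompt are illustrative placeholders, not the authors' own example.

```python
# Minimal sketch of offloading a writing task to an LLM.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # a hypothetical choice; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": "Draft a polite one-paragraph letter declining a meeting invitation.",
        }
    ],
)

# Spelling, grammar, and structure all arrive fully formed, in seconds.
print(response.choices[0].message.content)
```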

But this is only the first small wave in an oncoming set of much larger swells, as Mustafa Suleyman (co-founder of DeepMind) so clearly explains in his recent book, The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma.

It is a scary read, for what it portends is nothing less than the radical transformation of what it is to be human, particularly as A.I. and Synthetic Biology become joined at the hip. As John Naughton, writing for The Guardian, warns,

Translated into terms of technological waves, Suleyman's evolutionary sequence looks like this: humans first used technology to operate on the physical world — the world of atoms; then they worked on bits, the units of information; and now they are working on creating new forms of biological life. Or, to put it more crudely: first we invented mechanical muscles; now we are messing with our brains; and soon we will be doing this with our biology. However you portray it, though, the reality is that we are in the process of creating monsters that we have no idea how to manage.

We have been offloading our intelligence to computational devices since the very first pocket calculators. Though one could argue that using any physical tool—from abacuses to slide rules to even our own hands—is a way to alleviate the burden of remembering numbers and elemental facts.

The next step in this direction has already been taken with the advent of smart glasses and Virtual Reality goggles. As we walk, we can be notified about what type of plant we are looking at or what special a café is offering, and, with embedded QR codes, get instant access to websites and much more.

Of course, it is only a matter of time (and of getting beyond our initial fears and squeamishness) before we allow neural implants to augment our limited intelligence even further.

But in all these cases, we are succumbing to what can rightly be called the Synthetic Temptation, where we let go of our own initiative and let Artificial Intelligence serve as our guru. Now, to be clear, there is much in favor of succumbing to such a richly informed Overlord, particularly when we have no expertise in a specific area.

Yet this is also a double-edged sword, since we may never be motivated to learn the primary skills necessary to be more, not less, self-sufficient. So wedded are we to the electronic cloud that surrounds us that if it fails, we are quite literally at our wits' end.

This is the inherent danger that we face by over-relying on A.I.

The question that hangs in the balance is how much or how little we should augment ourselves with computational systems, especially when we are unsure whether their interests align with our own.

Jaron Lanier, the famed pioneer in all things VR, worries that in our rush toward all things tech, we lose sight of our own humanity. Indeed, the more we divorce algorithms from our emotional and spiritual selves, he suggests, the more we become “objectified” in the process. Ironically, ChatGPT-4 provides a very revealing summary of Lanier's perspective and lists what all of us should be keenly aware of before surrendering our individuality to Synthetic Intelligences:

Loss of Individuality: Lanier fears that over-reliance on AI algorithms, particularly in social media and online platforms, can lead to a loss of individual thought and autonomy. If people are constantly being fed what algorithms determine they want to see or hear, it reduces their exposure to diverse ideas and perspectives.

Economic Displacement: Like many, Lanier is concerned about the potential of AI to displace jobs. As automation increases, there could be a significant shift in how economies function and how individuals earn a living.

Concentration of Power: AI tools, especially those used in big data analytics and prediction, can become tools of power concentration. The companies or entities that control these tools can wield significant influence over society, potentially overshadowing democratic institutions.

Reality Manipulation: With the advancement of AI, the line between what's real and what's computer-generated can blur. This could lead to misinformation, manipulated videos (deepfakes), and a general distrust in digital content.

Depersonalization: As decisions are increasingly made by algorithms, there's a fear that human stories, emotions, and nuances get lost. This can lead to a society where decisions feel impersonal and dehumanized.

Devaluation of Personal Experience: Lanier has expressed concerns that AI could diminish the value of personal experience and intuition. As people come to rely on AI for recommendations or insights, they might start to undervalue their own experiences or feelings.

Over-Optimization: AI systems often work by optimizing for specific goals. However, over-optimization can lead to unforeseen consequences and can miss the broader, holistic picture.

Loss of Connection: On a deeper philosophical level, Lanier has mused on the idea that an over-reliance on AI and digital systems can lead to a loss of connection with the organic, natural world.

Jaron Lanier isn't totally pessimistic about A.I., since he clearly sees how it can benefit humanity in ways that are truly spectacular—from medicine to weather forecasting to manufacturing and more.

Yet he remains skeptical, worried that we are too prone to give ourselves over to these algorithms simply because of their ease of use.

This is already obvious, given how easy it is to create essays, pictures, art, business plans, spreadsheets, tax returns, legal briefs, and so on.

In our rush to offload our own intelligence to our A.I. offspring, we must ask the most important question that confronts us:

Is it worth the risk?

Max Tegmark from M.I.T., author of Life 3.0, argues that “Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before - as long as we manage to keep the technology beneficial.”

But he contextualizes that optimism with the following:

“Sadly, I now feel that we're living the movie 'Don't look up' for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar.”

Thus we confront what can be called the Tegmark Paradox: yes, Artificial Intelligence can be the greatest and most beneficial advance humankind has ever known, but, if left unchecked, it can lead to a dystopian future of unimaginable horror.

Underlying all of this is the Alignment Problem: making certain that a future superintelligence is in accord with human flourishing. But this field offers no certainties, only probabilities, which makes the whole enterprise a gamble. Are the odds in our favor for a benign symbiosis with A.I., as Dan Brown's clichéd novel Origin implies? Or are they simply 50/50, an unknown roll of the dice where we must close our eyes and hope for the best?

Most in Silicon Valley are well aware of the potential dangers inherent in letting A.I. become untethered, without a priori guardrails and buffers. But as much as they admit to these risks, the competition is such that there is already an engineering arms race to develop and exploit Synthetic Intelligences for their company's advantage, lest rivals gain an undue edge over them. This is acutely true of nations seeking to harness the power of A.I. for general surveillance and for military purposes.

The most frightening aspect of A.G.I. (Artificial General Intelligence) and its ultimate progeny, S.F.I. (Super Functional Intelligence), is that they follow a Kurzweilian law of accelerating returns. If any one person, company, or country masters the technology first, even if only by days, it will gain an exponential jump over its counterparts, forever leaving them trying to catch up.
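
To see why even a few days' head start matters, consider a toy model of capability that grows exponentially. The numbers below (a 30-day doubling time, a five-day lead) are invented purely for illustration, not a forecast.

```python
# Toy model of a head start under exponential capability growth.
# All numbers are invented for illustration; this is not a forecast.

def capability(days: float, doubling_time_days: float = 30.0) -> float:
    """Capability that starts at 1.0 and doubles every `doubling_time_days`."""
    return 2.0 ** (days / doubling_time_days)

LEAD_DAYS = 5  # one lab finishes just five days before its rival

for t in (30, 90, 180, 365):
    leader, rival = capability(t + LEAD_DAYS), capability(t)
    print(f"day {t:3}: absolute gap = {leader - rival:10.2f} "
          f"(ratio {leader / rival:.3f})")

# The ratio stays constant (about 1.12 here), but the absolute gap grows
# without bound: the sense in which the laggard is forever catching up.
# If the doubling time itself shrinks (accelerating returns), even the
# ratio diverges, and the head start becomes decisive.
```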

John Carmack, the wonder genius of game development and computer programming, who is not prone to hyperbole, believes that by 2029 we will most likely have the first true instantiations of A.G.I.

If this is true, we have much less than a decade to prepare for what the future has in store for us.

Given the remarkably swift and massive adoption of ChatGPT and other A.I.-related programs from late 2022 until today, I suspect we are not ready for the coming tech tsunami that is inevitably going to swamp the planet.

My hunch is that it will be impossible to surf that tidal wave, much less contain it.

Perhaps we will be like the dinosaurs that preceded us millions of years ago, driven extinct by a massive asteroid that struck near Chicxulub, Mexico, leaving a crater 110 miles wide, and by the climatic changes that followed.

We are already awash for hours a day in a digital stream, but at least we have (or believe we have) some control over it.

That, I fear, will soon not be the case. Instead of swimming with the current of Synthetic Intelligence and directing its flow, we may soon simply drown in it.

As ChatGPT concluded when prompted about this very concern:

The phrase "drown in its wake" is metaphorical and could be interpreted in various ways, but it generally suggests being overwhelmed or overtaken by something larger or more powerful. When applied to the idea of superintelligence that's not aligned with human interests, the phrase captures a real concern: that such an intelligence could act in ways that are detrimental to humanity.

Here's why many thinkers and experts in the field of artificial intelligence (AI) ethics and safety are concerned:

Speed and Capability: A superintelligent system would be far more capable than any human, and could potentially act faster than we can respond or understand.

Alignment Challenge: Ensuring that an AI's goals are perfectly aligned with human values is a difficult problem. Even small misalignments could result in actions that are at odds with our interests.

Unintended Consequences: A superintelligent AI might find ways to achieve its objectives that we didn't anticipate. For example, if we told an AI to maximize human happiness, it might decide the best way to do that is to plug us all into some kind of "pleasure machine" rather than promoting genuine well-being.

Irreversibility: Once a misaligned superintelligence starts taking actions, it might be difficult or impossible to reverse those actions.

Existential Risk: At the extreme end, a misaligned superintelligence could pose an existential risk to humanity, either by directly acting against us or by making decisions that inadvertently lead to our downfall.
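
To make the “pleasure machine” scenario above concrete, here is a minimal toy sketch of a misspecified objective being gamed: an optimizer that sees only a proxy score (reported happiness) dutifully selects the degenerate action. All the actions and numbers are invented for illustration.

```python
# Toy illustration of specification gaming: the optimizer maximizes the
# proxy we wrote down, not the well-being we meant. All values invented.

# proxy the optimizer sees vs. the genuine well-being we actually intended
actions = {
    "fund_parks_and_healthcare": {"reported_happiness": 7.5, "genuine_wellbeing": 8.0},
    "honest_news_feed":          {"reported_happiness": 6.0, "genuine_wellbeing": 7.0},
    "wirehead_pleasure_machine": {"reported_happiness": 10.0, "genuine_wellbeing": 1.0},
}

def proxy_objective(action: str) -> float:
    # The optimizer only ever observes this number.
    return actions[action]["reported_happiness"]

best = max(actions, key=proxy_objective)
print("optimizer chooses:", best)  # -> wirehead_pleasure_machine
print("genuine well-being achieved:", actions[best]["genuine_wellbeing"])

# The system is not malicious; it faithfully maximizes the stated proxy.
# The misalignment lives entirely in the gap between proxy and intent,
# which is why even "small misalignments" (above) can go badly wrong.
```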

“The upheavals [of AI] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
— Nick Bilton

“The exponential progress of technology is altering the phenomenological experience of human sensation, robbing us of our ability to get in touch with our humanity and reflect upon the triumphs and madness of our techno-society. Ironically, in our obstinate desire to humanize robots and Artificial Intelligence, our individual existence is, in turn, being digitized and robotized by our own technological inventions.”
— Danny Castillones Sillada

AND FOR A MORE POSITIVE CONCLUSION

Old School and Analog and Not Part of the Electric Grid

https://sites.google.com/view/neuralsurferbookshop/




