An independent forum for a critical discussion of the integral philosophy of Ken Wilber

David Christopher Lane, Ph.D., Professor of Philosophy, Mt. San Antonio College; Lecturer in Religious Studies, California State University, Long Beach. Author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).


The Scaffolding of Reality

When the User Interface Degenerates
and What it Reveals about the Universe

Brandon Gillett, Vikraant Chowdhry,
and David Christopher Lane


The other day in my Honors Introduction to Philosophy course at Mt. San Antonio College, we were discussing Professor Donald Hoffman's theory of consciousness, where he argues that “Evolution has shaped us with perceptions that allow us to survive. But part of that involves hiding from us the stuff we don't need to know. And that's pretty much all of reality, whatever reality might be.” In his most recent book, The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, Hoffman suggests that

“because there is no mathematical theory explaining the pattern of neural activity that creates consciousness, it may mean we are making a false assumption. . . . Perceptions [therefore] are a user interface, but not necessarily reality.”

Using the metaphor of the desktop computer, Hoffman goes on to explain that our awareness may be likened to the User Interface (UI) on the display screen, where all we really see and interact with is on the surface. What makes the computer function—the software programming and the hardware design—remains essentially hidden from view. Likewise, in our day-to-day world we don't glimpse the nuts and bolts of how our neurons fire or how our biochemistry modulates our experiences from one moment to the next. We are like those prisoners in Plato's famous Allegory of the Cave who mistake shadows on the wall for reality. Yet, in our case, the subterranean cavern is our own body and brain, and we are not privy to the underlying causes that generate our particular cerebral symphony. Yes, we may hear the music that our minds generate, but the varying instruments within our cranium that create such wondrous melodies are hidden from our direct introspection.

While Hoffman's theory is grounded in the latest neuroscience research, he has extended his thesis far beyond what most scientists will accept as plausible. Indeed, it is so revolutionary that it upends everything we have taken to be true about physics, chemistry, and the universe at large. As Hoffman explained fourteen years ago[1]:


“I believe that consciousness and its contents are all that exists. Spacetime, matter and fields never were the fundamental denizens of the universe but have always been, from their beginning, among the humbler contents of consciousness, dependent on it for their very being.

The world of our daily experience—the world of tables, chairs, stars and people, with their attendant shapes, smells, feels and sounds—is a species-specific user interface to a realm far more complex, a realm whose essential character is conscious. It is unlikely that the contents of our interface in any way resemble that realm. Indeed the usefulness of an interface requires, in general, that they do not. For the point of an interface, such as the windows interface on a computer, is simplification and ease of use. We click icons because this is quicker and less prone to error than editing megabytes of software or toggling voltages in circuits. Evolutionary pressures dictate that our species-specific interface, this world of our daily experience, should itself be a radical simplification, selected not for the exhaustive depiction of truth but for the mutable pragmatics of survival.

If this is right, if consciousness is fundamental, then we should not be surprised that, despite centuries of effort by the most brilliant of minds, there is as yet no physicalist theory of consciousness, no theory that explains how mindless matter or energy or fields could be, or cause, conscious experience. There are, of course, many proposals for where to find such a theory—perhaps in information, complexity, neurobiology, neural darwinism, discriminative mechanisms, quantum effects, or functional organization. But no proposal remotely approaches the minimal standards for a scientific theory: quantitative precision and novel prediction. If matter is but one of the humbler products of consciousness, then we should expect that consciousness itself cannot be theoretically derived from matter. The mind-body problem will be to physicalist ontology what black-body radiation was to classical mechanics: first a goad to its heroic defense, later the provenance of its final supersession.”

While I find Hoffman's theory quite pregnant with all sorts of interesting possibilities, and appreciate him going so far out on a shaky limb to promote his consciousness-first supposition (Shirley MacLaine notwithstanding), I still have troubling doubts about his idea of conscious agents. Nevertheless, in our class discussions, one of my brightest students, Brandon Gillett, who is quite conversant with computational devices and their limitations, pointed out that if Hoffman's theory were correct (even if only partially), there should be some telltale signs that our consciousness is a mask which betrays its real origination.

Gillett elaborated on his notion a few days later in a letter to me on this subject,

“Hi Professor,

One of the most popular metaphors for describing consciousness is to compare it with the user interface of a computer. The idea is that consciousness, like a graphical interface, is an illusion designed to mask the complexity of the underlying system. Just as how the icons on your desktop serve as a useful shorthand for simplifying an incomprehensible string of ones and zeros, our conscious experiences are a sort of shorthand for masking the immense convolutions of the human brain.

It is a useful illusion and nothing more. But if we're going to take this analogy seriously, it raises an interesting possibility. Consider what happens when a computer interface breaks down. Perhaps there is a bug in the code or the system has been infected by a virus. In any case, the interface is forced to contend with a situation which it was never built to handle. What happens in cases like this? Well, typically the interface reveals glimpses of lower order operations. Something that more closely resembles the computer's inner workings. Usually it will display snippets from the command line. Sometimes it begins writing out error codes or long strings of uninterpreted data. Essentially, the computer reveals aspects of itself which are more consistent with the true nature of its internal mechanisms.

Perhaps consciousness is subject to the same principle. Let's examine some of the situations under which a human brain finds itself confronting inordinate circumstances, situations which evolution has historically had little to no reason for preparing our minds to face. Psychoactive drugs, for example, introduce the brain to a chemical environment which is highly atypical. Under such circumstances, the brain begins to produce incoherent streams of sensory information. Much like dreams, these hallucinations can unearth repressed psychological patterns deep from within the subconscious. The experiences presented often take the form of archetypical images. Many of these metaphorical forms are universally reproduced among different individuals in a hallucinatory state. Is it possible that these archetypes are a glimpse into more fundamental processes of the conscious system?

There are other atypical circumstances capable of producing similar experiences as well: prolonged periods of isolation, mental illness, sensory deprivation, and any conditions which upset the equilibrium of the human brain-mind complex. Each, in their own unique way, can produce chimeric effects. These unusual phenomena may represent a breakdown of the conscious interface. Much like the interface of a computer, when consciousness breaks down it reveals its internal wiring to the user.”
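Gillett's analogy, in which a polished interface leaks lower-level detail only when it breaks, can be sketched as a toy program. This is a minimal illustration of the principle, not any real system; the class names, addresses, and fake memory contents are all invented for the example:

```python
# A toy "user interface" that hides a machine-level substrate behind
# friendly labels, and leaks raw internals only when it breaks down.

class MachineSubstrate:
    """Low-level state the interface normally hides from the user."""
    def __init__(self):
        # Raw internal representation: opaque byte strings keyed by address.
        self.memory = {0x10: b"\x89\xf3\x07\x1c", 0x20: b"\x00\xff\xa2\x51"}

    def read(self, address):
        return self.memory[address]  # raises KeyError on a bad address


class FriendlyInterface:
    """Presents tidy icons; shows raw internals only on failure."""
    ICONS = {0x10: "document.txt", 0x20: "photo.jpg"}

    def __init__(self, substrate):
        self.substrate = substrate

    def open(self, address):
        try:
            self.substrate.read(address)             # touch the substrate
            return f"Opening {self.ICONS[address]}"  # the polished surface
        except KeyError:
            # Breakdown: the surface cracks and lower-order detail shows
            # through, like a stack trace or hex dump on a crashed desktop.
            dump = {hex(a): v.hex() for a, v in self.substrate.memory.items()}
            return f"FAULT at {hex(address)}; raw memory: {dump}"


ui = FriendlyInterface(MachineSubstrate())
print(ui.open(0x10))  # normal operation: only the icon is visible
print(ui.open(0x30))  # failure: internal mechanisms leak to the user
```

In normal use the caller only ever sees icon names; the byte-level representation surfaces exclusively through the failure path, which is the structural point of Gillett's analogy.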

Just after Gillett presented his views in our in-class Hoffman roundtable discussion, another particularly intelligent student, Vikraant Chowdhry, raised the intriguing issue of how “to figure out what the bare amount of information is needed to create a believable reality.” Chowdhry's point was to look at the actual mechanics behind how awareness “renders certainty.” As he too elaborated in an email to me,

“For example, human visual perception is heavily dominated by using contextual clues in order to piece together reality. We use shadows, past knowledge and patterns in order to build our visual field. This can be observed when optical illusions produce images that are not exactly like the real picture. So, would it be able to use imperfect processing in order to trick our brains into believing a reality without rendering the whole picture? Could one use shadows and other content and context laden clues in order to build a perceived full image without having to render every pixel? Indeed, sophisticated eye tracking could be used to drastically reduce the amount of processing that needed to build a reality. Since humans only see a very small portion of their whole visual field in high definition, can you just display the small area the eye is focused on the Virtual Reality screen while blurring out the rest of the screen. Dr. Hoffman talks about how reality is only what we can see at the current time. In other words, we wouldn't need to render anything that is behind a person (or away from his or her field of view), but only what would be directly accessible to them at the specific moment in time and space.”

At this juncture in class, I pointed out that John Carmack and Michael Abrash, both pioneers in VR gaming and currently Chief Technology Officer and Chief Scientist, respectively, at Oculus/Facebook, suggested that the greatest pathway forward in making virtual reality Matrix-like is precisely to make eye-tracking resolutions super clear. As David Heaney elaborates,

“VR headsets can take advantage of this by only rendering where you're directly looking in high resolution. Everything else can be rendered at a significantly lower resolution. This is called foveated rendering, and is what will allow for significantly higher resolution displays. This, in turn, should enable significantly wider field of view. Foveated rendering relies on eye tracking. In fact, that eye tracking needs to be essentially perfect. Otherwise, there would be distracting delays in detail when looking around. Not all foveated rendering solutions are created equal. The better the eye tracking, the more gains can be found in rendering efficiencies.”
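The foveated rendering Heaney describes can be sketched as a toy computation: shade at full cost only within a small radius of the gaze point, and cheaply everywhere else. This is an illustrative sketch of the idea, not Oculus's implementation; the function and parameter names are invented:

```python
# A minimal sketch of foveated rendering on a toy "frame": full detail is
# computed only inside a disc around the gaze point; the periphery is
# rendered at a coarse level.

def render_foveated(width, height, gaze_x, gaze_y, fovea_radius):
    """Return a 2D grid of detail levels: 'H' (high) near the gaze,
    'l' (low) in the periphery."""
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            # Compare squared distances to avoid a sqrt per pixel.
            if (x - gaze_x) ** 2 + (y - gaze_y) ** 2 <= fovea_radius ** 2:
                row.append("H")   # expensive, full-resolution shading
            else:
                row.append("l")   # cheap, low-resolution shading
        frame.append(row)
    return frame

def high_detail_fraction(frame):
    cells = [c for row in frame for c in row]
    return cells.count("H") / len(cells)

frame = render_foveated(width=40, height=24, gaze_x=20, gaze_y=12, fovea_radius=5)
# Only a small disc around the gaze is shaded at full cost; the rest of
# the frame is cheap, which is where the rendering savings come from.
print(f"high-detail share: {high_detail_fraction(frame):.0%}")
```

The savings scale with how small the fovea is relative to the frame, which is why Heaney stresses that eye tracking must be near-perfect: if the gaze estimate lags, the high-detail disc sits in the wrong place and the viewer notices the coarse periphery.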

Since the most sophisticated virtual reality headset known to exist is our own brain, Hoffman's theory, if true, can perhaps best be tested in those situations where our filtering mechanism breaks down, such as in patients suffering from epilepsy, schizophrenia, dementia, and other neural degenerations.

For example, back in 2004 I was given a relatively new drug to lessen pain after I had a root canal. It was called Vioxx, which is “a COX-2 selective nonsteroidal anti-inflammatory drug (NSAID) that is related to the nonselective NSAIDs, such as ibuprofen and naproxen.” At the time it was heralded by my dentist (who also happened to have an M.D.) as a new and benign pain reliever. What he didn't know and what later reports would reveal was that Vioxx was one very dangerous drug indeed. I had just taken my second dose when I was with my oldest son Shaun, who was only four at the time, and we were out in our electric Duffy boat near our home in Huntington Harbour, when all of a sudden I was struck with an acute attack of vertigo, something I had never experienced before. As we approached our dock, I literally couldn't stand up as the world was spinning every which way. I managed to crawl up the dock with my son on my back as I slowly made my way up the plankway. I thought I was having a stroke, since I hadn't yet connected it to the pill I had taken earlier. I was rushed by ambulance to the hospital and given meclizine hydrochloride, which helps stabilize one's sense of balance.

During this experience, I had the keen realization that my perception of reality was extraordinarily fragile and was wholly dependent upon subtle structures within my inner ear that I was wholly oblivious to. Tiny molecules gone awry and I was plunged into a spinning kaleidoscope which felt endless.

Thinking back on this incident, it puts Gillett's point and Hoffman's philosophical conjecture into sharper relief. Are vertigo and other cognitive breakdowns portals into better understanding the tentative nature of our day-to-day “user interface” reality? Do such bodily malfunctions provide a glimpse into how nature conceals much more than it reveals, since parsing information is elemental in helping us to survive?

In the 1999 movie, The Matrix, déjà vu experiences are attributed to a “glitch” where something is changed in the computational coding and thus the digital rendering temporarily malfunctions. Whenever we witness such defects on our smartphones or on our television sets, it reveals something about the inner workings of these devices. Ironically, we learn much more than we initially realize about the underlying programming and manufacturing of these technical systems precisely when they fail rather than when they work well.

Hoffman's bold theory, which still awaits further testing, may be likened to that pivotal scene in The Matrix where Morpheus offers Neo two kinds of pills,

“This is your last chance. After this, there is no turning back. You take the blue pill, the story ends [and] you wake up in your bed and believe whatever you want to believe. You take the red pill [and] you stay in Wonderland and I'll show you how deep the rabbit-hole goes. . . . Remember, all I'm offering is the truth, nothing more.”

The metaphorical pill in our case (or, similarly, the pulling back of the curtain in the movie The Wizard of Oz) is to see reality as a scaffolding project, which depends on a nested series of illusions, each of which is designed to trick the user into believing that the present interface is the sum total of truth. Let's be clear: what Hoffman is proposing is a scientifically laden version of Alice in Wonderland, where the doors of perception are thrown wide open in ways that even Aldous Huxley hadn't imagined.

Make no mistake, Hoffman's conclusion, though not verified, is radical to the core:

What We Believe but Can't Prove

“But if we assume that consciousness is fundamental then the mind-body problem transforms from an attempt to bootstrap consciousness from matter into an attempt to bootstrap matter from consciousness. The latter bootstrap is, in principle, elementary: Matter, spacetime and physical objects are among the contents of consciousness.

The rules by which, for instance, human vision constructs colors, shapes, depths, motions, textures and objects, rules now emerging from psychophysical and computational studies in the cognitive sciences, can be read as a description, partial but mathematically precise, of this bootstrap. What we lose in this process are physical objects that exist independent of any observer. There is no sun or moon unless a conscious mind perceives them, for both are constructs of consciousness, icons in a species-specific user interface. To some this seems a patent absurdity, a reductio of the position, readily contradicted by experience and our best science. But our best science, our theory of the quantum, gives no such assurance. And experience once led us to believe the earth flat and the stars near. Perhaps, in due time, mind-independent objects will go the way of flat earth.

This view obviates no method or result of science, but integrates and reinterprets them in its framework. Consider, for instance, the quest for neural correlates of consciousness (NCC). This holy grail of physicalism can, and should, proceed unabated if consciousness is fundamental, for it constitutes a central investigation of our user interface. To the physicalist, an NCC is, potentially, a causal source of consciousness. If, however, consciousness is fundamental, then an NCC is a feature of our interface correlated with, but never causally responsible for, alterations of consciousness. Damage the brain, destroy the NCC, and consciousness is, no doubt, impaired. Yet neither the brain nor the NCC causes consciousness. Instead consciousness constructs the brain and the NCC. This is no mystery. Drag a file's icon to the trash and the file is, no doubt, destroyed. Yet neither the icon nor the trash, each a mere pattern of pixels on a screen, causes its destruction. The icon is a simplification, a graphical correlate of the file's contents (GCC), intended to hide, not to instantiate, the complex web of causal relations.”
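Hoffman's icon-and-trash example can be mimicked in a short sketch, in which the on-screen icon is only a correlate of the file while a hidden backend does the causal work of destruction. All names here are invented for illustration:

```python
# The desktop icon as a correlate, not a cause: dragging it to the trash
# coincides with the file's destruction, but the causal event happens in
# a backend layer the interface never displays.

class Backend:
    """Hidden layer that actually stores and destroys file contents."""
    def __init__(self):
        self.files = {"report.txt": b"quarterly numbers"}

    def destroy(self, name):
        del self.files[name]   # the real causal event


class Desktop:
    """Surface layer: icons only, no file contents at all."""
    def __init__(self, backend):
        self.backend = backend
        self.icons = set(backend.files)   # icons merely mirror the backend

    def drag_to_trash(self, name):
        self.icons.discard(name)      # the icon vanishes (the correlate)...
        self.backend.destroy(name)    # ...while the backend does the work


backend = Backend()
desktop = Desktop(backend)
desktop.drag_to_trash("report.txt")
print("report.txt" in backend.files)   # the file is gone
print("report.txt" in desktop.icons)   # and so is its icon
```

Icon removal and file destruction are perfectly correlated from the surface view, yet the pixels never touch the data, which is the structural shape of Hoffman's claim that an NCC correlates with, but does not cause, consciousness.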



[1] Donald Hoffman, contribution to John Brockman (ed.), What We Believe But Can't Prove: Science in the Age of Uncertainty, Harper Perennial, 2006.
