An independent forum for a critical discussion of the integral philosophy of Ken Wilber


Andrew P. Smith, who has a background in molecular biology, neuroscience and pharmacology, is the author of the e-books Worlds within Worlds and the novel Noosphere II, both of which are available online. He has recently self-published "The Dimensions of Experience: A Natural History of Consciousness" (Xlibris, 2008).



Another Response to DeGracia

Andrew P. Smith

I look on this newer work as clarifying the limits of science, rather than adding ones where before none existed.

I thank Donald DeGracia for his response. It's surprising but refreshing to find someone on this forum with an extensive familiarity with topics such as biological networks and information theory. Let me clarify at the outset that my reply to his original essay was for the most part not meant to be critical. But I want to say a little more about random variation and natural selection.

The Huang passage that DeGracia quotes is what I was referring to when I pointed out that Huang himself noted that constraints can be selected. DeGracia, commenting on that passage, says

Here, Huang is clearly suggesting a lesser role for variation by selection and a greater role for intrinsic constraints in the sculpting of new species over time.

Both Huang and DeGracia use “variation by selection” to mean the notion that an initial variation starts a chain of events in which a slight reproductive advantage is gradually amplified over time by further advantageous variations. I have no problem with this. My point was that an intrinsic constraint can also be selected. That is evident in Huang's statement that such a constraint, a particular network, for example:

may simply have been exploited—because they happen to be advantageous—rather than created by natural selection.

There are two key words here, “advantageous” and “created”. By “created”, Huang and DeGracia again mean a gradual process of accumulation of variations. But selection can also act on features that are not created by random variation, and this is what I meant when I said that variation and selection can act independently. Just because a particular network arises because of certain constraints does not guarantee that it will survive. It's more likely to survive, as Huang notes, if it's advantageous. The word “advantageous” here is basically synonymous with “favored by selection”.

Again, the scale-free networks in the mammalian brain can serve as an example. We really don't know how much of the brain's structure resulted from random variation and how much from constraints, but even if, for the sake of argument, we assumed these networks were entirely the result of constraints, they still might not have survived if they had not provided some advantage over other possible structures. If one particular type of structure were the only one possible, then as long as it wasn't detrimental to the organism, perhaps that would be sufficient. But if there were other possibilities, and one of those provided some advantage that scale-free structure did not, then scale-free structure might not have survived. The fact is that scale-free structure does have certain advantages, so regardless of how large a role constraints may have played in its emergence, once it arose it could be subjected to selection. And I think it will be extremely difficult to determine how large a role selection played.
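For readers unfamiliar with the term, a "scale-free" structure can be made concrete with a simple growth rule: new nodes link to existing nodes in proportion to how connected those nodes already are (preferential attachment, in the spirit of the Barabasi-Albert model). The sketch below is illustrative only; the function names are my own, and this is a toy model of network growth, not of the brain.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=42):
    """Grow a network in which each new node attaches m links to
    existing nodes, chosen with probability proportional to their
    current degree (Barabasi-Albert-style preferential attachment)."""
    rng = random.Random(seed)
    # Start with a small fully connected core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Each node appears in this list once per unit of degree, so
    # uniform sampling from it is degree-proportional sampling.
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = preferential_attachment(2000)
degree = Counter(node for edge in edges for node in edge)
# Most nodes end up with only a few links, while a handful of hubs
# accumulate many: the signature of a scale-free degree distribution.
print("max degree:", max(degree.values()), "min degree:", min(degree.values()))
```

The point of the rule is that connectivity advantages compound: a node that happens to be well connected early attracts links ever faster, producing the hub-dominated structure discussed above.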

In fact, a constraint may frequently play the role of a large mutation, a sort of quantum leap in variation. If it confers an advantage, it may open the way to further fine-tuning by small random variations, as Huang and DeGracia note, but it may also make possible further constraints, ones that could not have arisen if the original constraint had not survived. This is very likely the case in many networks, as Barabasi has shown that they can have a hierarchical, almost holographic structure, in which sub-networks associate to form larger networks with the same organization as each sub-network.
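The self-similar organization Barabasi describes can be sketched with a recursive construction: start from a small, densely connected module, then repeatedly replicate the whole current network and wire the replicas back to the original hub, so each scale repeats the organization of the scale below it. This is a simplified variant of the Ravasz-Barabasi hierarchical model; the function name and parameters are my own.

```python
import itertools

def hierarchical_network(levels, module_size=4):
    """Build a simplified hierarchical (self-similar) network:
    at each level, module_size - 1 replicas of the current network
    are created, and every non-hub node of each replica is linked
    to the hub (node 0) of the original."""
    # Level 0: one fully connected module; node 0 serves as its hub.
    nodes = list(range(module_size))
    edges = set(itertools.combinations(nodes, 2))
    for _ in range(levels):
        n = len(nodes)
        new_edges = set(edges)
        for copy in range(1, module_size):
            offset = copy * n
            # Replicate the whole current network at a new offset.
            new_edges |= {(a + offset, b + offset) for a, b in edges}
            # Attach the replica's non-hub nodes back to hub 0.
            for node in range(offset + 1, offset + n):
                new_edges.add((0, node))
        nodes = list(range(module_size * n))
        edges = new_edges
    return nodes, edges

nodes, edges = hierarchical_network(levels=2)
print(len(nodes))  # 64 nodes: the 4-node module replicated twice over
```

Each replication step reuses the structure already in place, which is the sense in which an earlier constraint, once it survives, makes later ones possible.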

With regard to complexity, I have not anywhere gone into a detailed treatment of it, which, as DeGracia's even cursory discussion shows, is a very… well, complex subject. I'm humbled by the fact that even very sophisticated theorists like Seth Lloyd have trouble coming up with a single definition, and if they can't, I certainly can't. But for the sake of the general argument that “complexity increases during evolution”, I don't think a detailed definition is important. As I noted in “Does Evolution have a Direction?”, in the worst case one could argue that my definition of complexity is inaccurate, and I would simply reply that whatever we want to call what I have been calling complexity, it is increasing during evolution. In other words, there is an evolutionary trend for something significant and identifiable to increase.

I mostly agree with DeGracia about randomness in the universe and the limits of science. But I stand by my original statement that scientific notions are largely validated by making successful predictions. For all its brilliance, the work of theorists like Chaitin and Wolfram has not, I believe, resulted in any practical applications, and in the end, these applications are what drive science.

This goes back to the original context, the articles of David Lane that I guess started all this. If we ask what it is that most distinguishes science from other approaches to knowledge, I think we always come back to this predictability and application. Wilber has argued that mystics comprise an elite, and their insights require special training to understand and verify, but that the same is true of science. Yet science is also verified, for non-scientists, through its practical applications. Most people may not understand quantum physics, but they do appreciate that the myriad forms of technology based on it work. It's these everyday applications, predicated on successful predictions, that keep science from falling entirely into a “trust us, we know what we're doing” type of endeavor. It's in this very important sense that I say science is verified by making successful predictions.

In any case, prediction is not an all-or-none game. The fact that we can't predict the weather, or human behavior, with complete accuracy, and very likely never will be able to, doesn't mean that we can't predict events in these areas to some extent. Science has in fact long been aware that it's playing a game of probabilities, and we commonly couch our predictions, as well as our conclusions, with a disclaimer: significant at the 1% level, or whatever. In this light, I look on this newer work as clarifying the limits of science, rather than adding ones where before none existed.
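For readers unfamiliar with what "at the 1% level" means in practice, here is a toy illustration (the scenario and function name are my own): the probability that a result at least this extreme would arise by chance alone, computed against a simple coin-flip null hypothesis.

```python
import math

def binomial_p_value(successes, trials, p=0.5):
    """One-sided p-value: the probability of seeing at least this many
    successes if only chance (success probability p) were at work."""
    return sum(
        math.comb(trials, k) * p**k * (1 - p)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Suppose a theory gets 65 of 100 predictions right where guessing
# would get about 50. The chance of doing that well by luck alone:
chance = binomial_p_value(65, 100)
print(chance < 0.01)  # significant at the 1% level
```

The prediction is probabilistic through and through, yet it is still a testable prediction, which is the sense of "verified by successful predictions" intended above.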
