Notes on Teilhard's Omega Point: Part 3, On Complexity, Consciousness, Thermostats and Panpsychism

Think of the following things:

A chair
A computer
A weed
A bacterium
A worm
A butterfly
A mouse
A dog
A chimpanzee
A human

Now think about taking a hammer to each item on the list, starting with the chair. At what point in the list do you start feeling morally queasy about randomly inflicting violence on each? I start to feel it at butterfly but it really cuts at dog. I'd be morally outraged if someone just randomly, for no good reason, killed a dog.

Generally speaking, our moral outrage correlates with the degree of consciousness we find in each entity. The greater the capacity for pain--physically or psychically--the greater the moral status.

Interestingly, this increase in consciousness (and concurrent moral standing) seems to be intimately related to the complexity of the entity. A worm is more complex than a bacterium and a mouse is more complex than a worm. On it goes to the most complex creature, man. Man is the most complex entity we know of in the universe and, interestingly, man also has the greatest range of consciousness. No one feels pain quite the way mankind does. In sum, there seems to be a rough correlation between material complexity and the range of consciousness.

In The Phenomenon of Man, Teilhard calls this complexity/consciousness correlation the Law of Complexity and Consciousness. As he describes it (p. 60):

"Whatever instance we may think of, we may be sure that every time a richer and better organized structure will correspond to the more developed consciousness."

"The degree of concentration of a consciousness varies in inverse ratio to the simplicity of the material compound lined by it. Or again: a consciousness is that much more perfected according as it lines a richer and better organized material edifice."


In short, as material complexity increases so does consciousness.

The lingering question here is how are we to define "complexity"? And, is something "complex" by default going to be conscious? Or must there be a certain kind of complexity before we see consciousness arise?

David Chalmers in his influential book The Conscious Mind suggests that the "complexity" associated with consciousness is involved in information processing. Chalmers' view is very abstract and sophisticated and I cannot do it justice here. In fact, a summary on my part is likely to be simplistic to the point of distortion. Please consult his book for a fuller treatment (particularly Chapter 8).

Risking much in summarizing, Chalmers' basic argument is that we should replace a "material" view of the cosmos with an "informational" view, taking information as primary over matter. On this view, one increasingly common in scientific circles, the laws of the universe are software that "computes" the next step in the "program." The universe moves forward in time as one large computation. The current state of the universe--T1--is taken as input by the "program" (the physical laws of the universe), which "computes" the universe at T2.
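The T1-to-T2 picture above can be sketched in a few lines of code. This is only a toy illustration, not anything from Chalmers: the "universe" is a row of cells and the "law" is an elementary cellular automaton rule (Rule 110 is an arbitrary stand-in), but the structure is the same--a fixed program takes the state at one moment and computes the next.

```python
# Toy "universe as computation": a fixed program (the laws) maps the
# state at T1 to the state at T2. Rule 110 here is illustrative only.

RULE = 110  # encodes the update law as an 8-bit lookup table


def step(state):
    """Apply the 'laws of physics' once: state at T1 -> state at T2."""
    n = len(state)
    return tuple(
        (RULE >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )


# Start with a single "on" cell and run the computation forward in time.
universe = tuple(1 if i == 15 else 0 for i in range(31))
for t in range(5):
    print("T%d:" % t, "".join("#" if c else "." for c in universe))
    universe = step(universe)
```

Nothing hangs on the particular rule chosen; the point is only that "the laws" are a function applied over and over to the whole state.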

Chalmers speculates that when we see consciousness arise we see it associated with some kind of information processing. A dog or a bacterium takes in information from its surroundings, manipulates it, and produces some kind of "output" (a behavior or an internal change of state, like a memory). Interestingly, whenever we see these "biological computers", these "organic difference engines," we also find consciousness. And, as Teilhard noted, the more sophisticated the computer, the more information it can process, the greater the consciousness. Thus, in the human brain we find both the most complicated computer known to man as well as the most exquisite form of consciousness. In short, given Chalmers' work, Teilhard's Law of Complexity and Consciousness can be refined as The Law of Information-Processing Complexity and Consciousness.

This Law has some interesting consequences. To quote Chalmers (pp. 294-295):

"...we can think about what might happen to experience as we move down the scale of complexity. We start with the familiar cases of humans, in which very complex information-processing gives rise to our familiar complex experiences. Moving to less complex systems, there does not seem much reason to doubt that dogs are conscious, or even that mice are...

Moving down through the scale through lizards and fish to slugs, similar considerations apply. There does not seem to be much reason to suppose that phenomenology should wink out while a reasonably complex perceptual psychology persists. If it does, then either there is a radical discontinuity from complex experiences to none at all, or somewhere along the line phenomenology begins to fall out of synchrony with perception, so that for a while, there is a relatively rich perceptual manifold accompanied by a much more impoverished phenomenal manifold. The first hypothesis seems unlikely, and the second suggests that the intermediate systems would have inner lives strangely dissociated from their cognitive capacities. The alternative is surely at least as plausible. Presumably it is much less interesting to be a fish than to be a human, with a simpler phenomenology corresponding to its simpler psychology, but it seems reasonable enough that there is something there.

As we move along the scale from fish and slugs through simple neural networks all the way to thermostats, where should consciousness wink out? The phenomenology of fish and slugs will likely not be primitive but relatively complex, reflecting the various distinctions they can make. Before phenomenology winks out altogether, we presumably will get some sort of maximally simple phenomenology. It seems to me that the most natural place for this to occur is in a system with a correspondingly simple 'perceptual psychology,' such as a thermostat. The thermostat seems to realize the sort of information processing in a fish or a slug stripped down to its simplest form, so perhaps it might also have the corresponding sort of phenomenology in its most stripped-down form. It makes one relevant distinction on which action depends; to me, at least, it does not seem unreasonable that there might be associated distinctions in experience."


This may seem to be a crazy conclusion to reach, that a thermostat can feel, but I'm with Chalmers on this one. I think it is clear that consciousness degrades as we move down through more primitive forms of information processing. Either this degradation occurs discontinuously (it "winks out") or smoothly. If the former, one must at least give some reason why consciousness should "wink out" as well as specify the rough location along the information-processing continuum where this is believed to occur. That seems to be a tall order, to me at least. Or one can simply take the most parsimonious explanation: Consciousness degrades continuously from maximally complex (humans) to maximally simple (thermostats).
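It is worth seeing just how little information processing Chalmers' thermostat involves. The sketch below is my own illustration, not his (the names SETPOINT and thermostat are invented): the device's entire "perceptual psychology" is a single relevant distinction, too cold versus not, on which action depends.

```python
# A thermostat as a maximally simple information processor: one
# distinction in, one action out. All names here are illustrative.

SETPOINT = 20.0  # target temperature in degrees Celsius


def thermostat(temperature):
    """The single distinction the system makes, and the action it drives."""
    return "heat_on" if temperature < SETPOINT else "heat_off"


# The device's whole repertoire of "states":
print(thermostat(18.5))  # prints "heat_on"
print(thermostat(22.0))  # prints "heat_off"
```

On Chalmers' speculation, this one-bit distinction would be accompanied by a maximally stripped-down distinction in experience, the simplest phenomenology there could be.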

If we accept Chalmers' argument then a form of panpsychism results. As Chalmers says (pp. 297-299), "If there is experience associated with thermostats, there is probably experience everywhere: wherever there is a causal interaction, there is information, and wherever there is information, there is experience. One can find information states in a rock--when it expands and contracts, for example--or even in the different states of an electron. So if the unrestricted double-aspect principle is correct [the information-processing/consciousness correlation--RB], there will be experience associated with a rock or an electron...this view has a lot in common with what is often known as panpsychism--the view that everything has a mind...I hope to have said enough to show that we ought to take the possibility of some sort of panpsychism seriously: there seem to be no knockdown arguments against the view, and there are various positive reasons why one might want to embrace it."

Teilhard, for one, did embrace a form of panpsychism. For Teilhard, as we saw with Spinoza and Chalmers, there is an intimate correlation between matter and consciousness. That is, where matter exists consciousness (however simple or degraded) must also exist. Phrased another way, consciousness is latent in matter. And, interestingly, this can't be considered a controversial or "crazy" point. It's fairly obvious that consciousness is latent, in some lawful way, in the evolution of matter. Just look at a snail. Or glance in a mirror. That's conscious matter staring back at you.

As Teilhard said (p. 73), "pre-life has already emerged in the atom."

Next Post: Conclusion

This entry was posted by Richard Beck.