Sunday, May 18, 2014

On Human Intelligence and Natural Laws


What is the relation between human intelligence and natural laws? A very basic observation is that human brains have the ability to use code, to classify things as similar and dissimilar, and so on; perhaps they also come up with meta-mathematical arguments for problems which don't yield to algorithmic solutions, like showing undecidability, a possibility likely related to Penrose's notions. Of course, human brains are subject to external constraints: a) physical, such as energy conservation and material limitations, and b) historical, namely biological evolution. But those constraints are akin to constraints on "hardware". Then where does the "software" part of the brain come from? The brain has the (empirically speaking) novel ability to use code to process information in a way that has nothing directly to do with how physical information (actual material-energetic physical entities like rocks, electrons, etc.) is processed in Nature according to natural laws. This is obvious, but several arguments bolster this already evident statement: we can be arbitrary in our choice of code; we can parse and process the paradoxical instances contained in the practice of human language; and we can categorize things (dogs are different from cats) with classification as an end in itself, in a sense quite different from how an electron can "categorize" protons as different from neutrons based on its physical interactions. In short, the notion of purpose is self-evident to us in terms of its use, while it has no real place in what is commonly understood to be natural phenomena. Nature just evolves according to natural laws; it is a giant computer processing physical information. In terms of computing, the human brain can process maps between objects, and maps between maps, etc., in a multi-layered way which is instrumental to intelligent behaviour. We should keep in mind that 1) a big part of the human brain is definitely a probabilistic engine that assigns likelihoods to "beliefs" and bases its worldview on them, and 2) this is a continual process of change: the likelihoods of beliefs are updated according to data acquired by the sense faculties. This part is crucial to our survival.
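That belief-updating picture can be sketched in the standard Bayesian formulation. A minimal toy, where the binary belief, the coin-flip framing, and the likelihood numbers are all illustrative assumptions rather than anything specific to the brain:

```python
# Toy Bayesian belief updating: a likelihood is assigned to a belief,
# then revised as new "sense data" arrives.

def update(prior: float, p_data_if_true: float, p_data_if_false: float) -> float:
    """Return P(belief | data) for a binary belief via Bayes' rule."""
    evidence = p_data_if_true * prior + p_data_if_false * (1.0 - prior)
    return p_data_if_true * prior / evidence

belief = 0.5  # start undecided about "this coin is biased towards heads"
for flip in "HHTHH":  # incoming observations
    if flip == "H":
        belief = update(belief, 0.8, 0.5)  # P(H|biased)=0.8, P(H|fair)=0.5
    else:
        belief = update(belief, 0.2, 0.5)  # P(T|biased)=0.2, P(T|fair)=0.5
    print(f"after {flip}: P(biased) = {belief:.3f}")
```

Each observation nudges the likelihood of the belief up or down; the worldview is just the current set of such likelihoods.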

If human intelligence could syllogistically follow from natural law, that would mean we could start with natural law as axioms and arrive at human intelligence as a theorem. The human brain computes; that much we can agree on. What about the standard digital computer? Of course, it is governed by natural laws. But its behaviour is not solely governed by natural law: there is the software, which is programmed by a programmer. That can be thought of as the initial condition on which natural laws act. Now, it is known where the programs constituting the software of the digital computer come from. What about the same question for the human brain? Does the human brain more or less compute like a digital computer, albeit with algorithms that might be differently sophisticated from those commonly used in digital computers, but still reducible to a Turing machine? Let it be noted here that there is a thread of computing research on neural nets that is inspired by the human brain; furthermore, there is research on probabilistic models of computation and the associated errors and bounds on them, as compared to deterministic computation.
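As one concrete instance of that probabilistic-computation thread, here is a toy Monte Carlo estimate of pi, where the answer is only probably close and the error bound shrinks with the sample count (the sample sizes are arbitrary choices for illustration):

```python
# Probabilistic computation: estimate pi by sampling random points in the
# unit square and counting those inside the quarter circle. Unlike a
# deterministic quadrature, the result is random, with standard error
# shrinking like 1/sqrt(n).
import random

def estimate_pi(n: int) -> float:
    inside = sum(
        1
        for _ in range(n)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))  # estimates converge towards 3.14159...
```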

Let's provisionally say that biological evolution selected for certain structures that could mimic the actions of useful programs, like pattern recognition, image processing, etc., depending on what gave humans an advantage for survival in early history. It is reasonable to accept that these program-mimicking structures were complex enough, since they had to process complex information in the environment. Now, the novel abilities of the human brain would have followed from the interaction between these complex structures existing in the skull. But how they followed seems not to be a reductionist question: there is no step-by-step way of going from the natural laws of electrons, etc. to this level of complexity. This is the main paradigm of complexity science. It is somewhat analogous to the myriad phases of matter that appear when matter becomes complex enough, a wide variety that comes from the same underlying natural laws that govern the "simple" entities forming the complex system.

Basically, the paper being written upon and the ink doing the writing can never understand what is being written. This is likely a category mistake, since "understand" does not apply to paper and ink, but let's continue with it. To elaborate: we could use any combination of sounds, gestures, etc. to communicate that there is danger, or that a dog is similar to other dogs. Natural law explains all those combinations. The historical accident of settling on one particular choice can also be said to arise from natural law. Thus, natural law explains the biology/chemistry/physics of using sounds, gestures, etc. for communication, and also the particular dynamical evolution that led to the particular historical choice. The real question that remains to be explained, for me, is how the human brain evolved the faculty for abstraction, and an appreciation (biological or otherwise) that using code, and classification, significantly enhances communication and the achievement of desired ends. That seems to require a leap in computing. To go back to the digital computer: the corresponding leap was in fact taken by the programmer, and by the computing scientists before him or her. How do we explain our faculty for algebra and logic? How do we think of propositions, and the relations between them, in the abstract? Yet in terms of experience, these faculties come to us either naturally or with some amount of training. At some point in history, human brains would have made this leap. How to explain that historical event is the key question for me.

Perhaps a guiding light in this direction would be the study of the development of algebra within the field of mathematics; looking at what Newton and Leibniz did for calculus, I can say that their achievement is one of those moments where a proto-thought came into existence after substantial effort and then became formalized, to be copied by the rest of the species. My interest lies in these moments from a phenomenological and computing perspective. How does the brain, as a computer, accomplish such moments or periods? It could be a random walk in idea space, guided by previous experience and training, until the proto-thought is reached and is recognized for its potential, again by previous experience and training. Human language is another thing that should be understood from a computing perspective (linguistics has made a lot of progress on the phenomenology of human language, which sets guiding principles). Hark also to Penrose's idea of human brains having access to Plato's ideal world, which exists independently of our natural world.
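The random-walk picture can be made concrete as a toy search: a walker perturbs a candidate "idea" and keeps the change when a trained evaluation recognizes it as at least as promising. The bit of code below is purely illustrative; the string encoding of ideas, the target, and the scoring function are all assumptions standing in for experience and training:

```python
# A toy "random walk in idea space": randomly mutate a candidate idea,
# keeping mutations that prior experience (the score function) rates as
# no worse, until the proto-thought is reached.
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "abstraction"  # hypothetical proto-thought to be reached

def score(idea: str) -> int:
    """Stand-in for experience/training: positions matching TARGET."""
    return sum(a == b for a, b in zip(idea, TARGET))

idea = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(idea) < len(TARGET):
    i = random.randrange(len(TARGET))
    candidate = idea[:i] + random.choice(ALPHABET) + idea[i + 1:]
    if score(candidate) >= score(idea):  # recognition of potential
        idea = candidate
print(idea)  # the walk eventually lands on the proto-thought
```

Guided random search of this kind reaches the target vastly faster than blind enumeration, which is the point of the "guided by previous experience" clause.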

To clarify, this is not a hardware question for me; we can comparatively easily imagine how the hardware side of this aspect of brain evolution would have taken place. It is a software/algorithmic question. We can also see that it is not a vacuous question, because there are many species that cannot perform even the rudiments of abstract thought, while humans can perform levels of abstraction that cannot be considered low compared to those species. This qualitative difference in brains remains to be explained and is a valid scientific question. Can chimpanzees, who can count if taught, do algebra in the sense of x standing for something yet unknown?

Does this imply that there is some ingredient of human intelligence that cannot be accounted for by natural laws and concomitant phenomena?

Or does it imply a Wolfram-esque idea, or a Conway's Game of Life-like scenario, in which we are complex entities doing "intelligent" things, where "intelligent" just stands for complex ways of processing information? On that view, the only really meaningful (in the colloquial sense of the term!) thing to do is to quantify, and perhaps classify, complexity.

The latter is probably the better point of view to start from.
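To make the Game of Life picture concrete, here is a minimal implementation of its update rule, in which complex, open-ended behaviour emerges from two trivially simple "natural laws" (the glider seed and the number of ticks are arbitrary choices for illustration):

```python
# Conway's Game of Life: each cell lives or dies based only on its 8
# neighbours (birth on exactly 3 live neighbours, survival on 2 or 3),
# yet the global dynamics are endlessly complex.
from collections import Counter

def step(live: set) -> set:
    """One tick of Life over a sparse set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a five-cell pattern that translates itself across the grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted one cell diagonally
```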

WRONG Any computation would require some finite amount of information processing and thus a finite amount of energy. Now, we agree that there is a physical limit on how the human body does what it does (including the process of starting with natural law as axioms and arriving at human intelligence as a theorem). Suppose A0 holds, where A0 is "Natural law implies human intelligence". Then we could show, using sufficiently simple logical steps, that A1 is true, where A1 is "Humans have the intelligence to show that natural law implies human intelligence", i.e. "Humans have the intelligence to show A0". That needs an expenditure of some amount of energy. If we could show A1, then we should also be able to show A2, where A2 is "Humans have the intelligence to show that they are intelligent enough to show that natural law implies human intelligence", i.e. "Humans have the intelligence to show A1". This requires an additional amount of energy. Thus we should be able to show A(n+1), where A(n+1) is "Humans have the intelligence to show An". To show An for all n would require too much energy. WRONG: because, first, we can stop with A0, and second, the number of humans doing the showing can grow without bound too, with no loss of principle.
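The energy claim in that (marked-wrong) argument amounts to a simple divergence. A formal sketch, under the argument's own assumption that each demonstration of An costs at least some fixed positive energy:

```latex
% If showing A_n costs energy E_n \ge \epsilon > 0, then showing the
% whole tower A_0, A_1, A_2, \ldots costs
E_{\text{total}} = \sum_{n=0}^{\infty} E_n \;\ge\; \sum_{n=0}^{\infty} \epsilon = \infty .
% The refutation noted above: one can stop at A_0, and the work can be
% spread over arbitrarily many humans, so no fixed per-agent bound applies.
```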
