SYNAPTIC DARWINISM
This is the "Synaptic Darwinism" website of Paul Adams and Kingsley Cox, who work at the Kalypso Mind/Brain Center, and in the Department of Neurobiology at Stony Brook University.

Outline continued 1

Most people rightly take the view that the brain is very complicated and best left to the experts, rather like cosmology, hedge funds and string theory, although they hope to be allowed the occasional sneak glimpse. It is true that the brain is very complex – and in a way our approach faces this issue head-on, by asking what allows things to get so complicated. But fortunately the core of our answer is very simple – simple enough to be followed by most smart laypeople. Indeed, in some ways the more you know about neuroscience, the less easily you will understand our idea.

Our idea is REALLY simple! Neuroscientists think, unsurprisingly, that the key to understanding, intelligence and the mind is learning, and we now know that learning is achieved by adjusting the strengths of the individual connections between nerve cells, the synapses. We go one tiny, but crucial, step further: we focus on the degree to which individual synapses can be independently adjusted. The brain has colossal numbers of synapses (perhaps 1 quadrillion), and they can only be useful if each one can be individually regulated. A useful analogy is a computer – the very computer you are using to read this. It has between a billion and a trillion individual memory sites, and each one of those sites can be read or written without affecting the information stored at the other sites. It only takes 1 bit to go astray for the computer to crash. Indeed, the reason why computers have become so pervasive is that the technology behind ultrareliable and dense storage has advanced so fast, with the cost halving every year or so.
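To see why independent addressability matters so much, here is a toy illustration (a Python sketch of our own devising, not a model of any real memory chip or synapse): store one letter across eight memory sites, then let a single site go astray.

    # A tiny "memory" of 16 independently addressable sites.
    memory = [0] * 16

    # Writing one site leaves every other site untouched - that independence
    # is what makes the memory useful.
    for i, bit in enumerate(format(ord('A'), '08b')):  # store 'A' = 01000001
        memory[i] = int(bit)
    memory[15] = 1                   # writing site 15 does not disturb sites 0-7

    # Now let a single site go astray...
    memory[2] ^= 1                   # one flipped bit: 01000001 -> 01100001
    value = int(''.join(map(str, memory[:8])), 2)
    print(chr(value))                # prints 'a' - the stored letter has changed

One flipped site out of sixteen, and the stored information is silently different; scale that up to a billion or a quadrillion sites and the need for per-site reliability is clear.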

It is at this point in our argument that smart laypeople and neuroscientists part company. Laypeople can see that each synapse might count (perhaps literally), and they assume that neuroscientists have confirmed this, and have even discovered its machinery. Neuroscientists, on the other hand, instinctively think of the brain as unreliable, even sloppy, like everything else in biology. As long as synapses work pretty well, the fact that there are large numbers of them provides "redundancy" – the sort of safety net that committees, armies, teams, etc. provide. There is some truth here, but only some; we think this view is itself sloppy, and that it is actually preventing neuroscience from making much of a dent in the mind-brain problem.

It is extremely helpful at this point to refer to one other situation in biology where extreme (perhaps miraculous) accuracy is essential – the copying of DNA. Our genome (and each egg and sperm) contains about a billion bytes of information – enough to fill a CD. This has to be copied almost exactly in order for reproduction to occur (indeed, that is what the word "reproduction" means). Occasional minor errors can be tolerated, but even 1 bad mistake can be lethal. This idea can be made more exact as follows. If a genome has 1 billion base-pairs (the building blocks of DNA), then each base-pair must be copied with an accuracy exceeding 99.9999999%. No sloppiness allowed! Of course, the effects of inaccuracy do not show up immediately, because only 1 individual member of a large population will die or fail to reproduce, and the population itself is almost unaffected. This is the famous biological "redundancy". Mistakes occur at random, each affecting a different part of the genome, and while no individual mistake kills more than one offspring (and usually not even that), mistakes nevertheless accumulate. This idea is so crucial to our argument, and, although simple and certain, so counterintuitive, that we now detail it further.
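To make the 99.9999999% figure concrete, here is a back-of-the-envelope sketch (in Python, using the round numbers from the paragraph above rather than measured biological rates):

    # Chance that an n-base-pair genome is copied with NO mistakes, if each
    # base-pair is miscopied independently with probability e (illustrative
    # numbers only; real replication and repair are more complicated).
    n = 10**9                            # base-pairs, the text's round figure
    for e in (1e-8, 1e-9, 1e-10):        # per-base-pair error rates
        print(f"error rate {e:.0e}: P(perfect copy) = {(1 - e)**n:.5f}")

At the 99.9999999% accuracy quoted above (an error rate of 1 in a billion), only about a third of copies are perfect; one decimal place worse, and essentially none are.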

Suppose that the error rate for copying a single unit of DNA is e. The probability that no mistakes will be made in copying the entire genome, of length n, is therefore (1 − e)^n. (Sorry for this math: it simply says that the probability that your car has no defects is the probability that any single part is sound, raised to the power of the number of parts: it's why our cars spend increasing amounts of time in the garage!). For small e this is approximately exp(−ne), so an appreciable fraction of copies are perfect only when ne is less than about 1. But if the population is roughly constant, as it almost always is in biology (and in the brain), this means that for a good genome to survive indefinitely in the population (the minimum requirement for evolution to occur), we must have the relation n < 1/e. In words, the amount of stored information must be less than the reciprocal of the error rate: on average, less than one mistake per genome per copying. The apparent advantage of a large population is illusory! In this argument the population size plays virtually no role (though the bigger the population, the more accurate the 1/e rule). The reason is simple: mistakes accumulate from generation to generation! The fact that the mistakes are being made in different parts of the genome makes no difference – mistakes are mistakes!
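The n < 1/e threshold can be checked with a toy mutation-plus-selection iteration (our own sketch, with invented illustrative numbers, not a model from the literature): let x be the fraction of the population carrying the good genome, give that genome a fitness advantage s over mutants, ignore back-mutation, and dilute the good class each generation by the probability Q = (1 − e)^n of a perfect copy. With s chosen near 2.7, the good genome persists only when ne < 1 – exactly the rule above.

    # Toy mutation-selection balance (illustrative assumptions: mutants have
    # fitness 1, the good genome has fitness s, back-mutation is ignored).
    def good_fraction(n, e, s=2.7, generations=10_000):
        Q = (1 - e)**n                    # P(genome copied without error)
        x = 1.0                           # start with an all-good population
        for _ in range(generations):
            mean_fitness = s * x + (1 - x)
            x = Q * s * x / mean_fitness  # select, then copy (imperfectly)
        return x

    e = 1e-9                              # per-base-pair error rate
    for n in (10**8, 10**9, 10**10):      # genome lengths around n = 1/e
        print(f"n*e = {n * e:g}: surviving good fraction = {good_fraction(n, e):.3f}")

Run it and the good genome settles at a healthy fraction for ne = 0.1, but is driven to zero once ne reaches 1 and beyond – and the population size never enters the calculation at all.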

continued.....
