Whilst I was re-reading the threads on BioLogos about Signature in the Cell, I chanced upon an exchange between Rich and Dennis Venema (from this time last year) on a thread about whale evolution. Rich had linked to a video by Richard Sternberg. In it, Sternberg suggested that the large number of big hurdles evolution needed to overcome, in a short time, to turn a terrestrial mammal into a whale seemed mathematically implausible given known rates of mutation. Rich was bemoaning the fact that, although no detailed genetic transitions have ever been proposed even hypothetically for such an evolutionary process, Neodarwinists remain supremely, even belligerently, confident that random mutation and natural selection undoubtedly explain them. I can’t disagree with Rich’s analysis.
Dennis Venema then suggested that one might attempt such a detailed study only if the entire genome of both precursor and descendant were known, and the differences small. He considered that this might be modelled by the chimp and human, whose genomes have been sequenced and which famously differ by only 2%. Although the exact results of all these changes are not yet known, he said, comparing the sequences as a whole showed that there were no changes that could not easily be explained by the accepted processes of evolution. You can see the changes, and you can see how small they are. An unstated implication of this, of course, is that evolution is actually quite easy – tweak 2% of an ape genome and you can produce a bipedal, intelligent and self-aware being capable of writing about the human genome.
Dennis’s final post, in reply to a query about genetic switches, is telling: “Yes, the evidence seems to point mostly to regulatory changes. The general point is that small changes to an already complex system can have large effects – especially if the changes affect regulatory genes that act early in development.” This, of course, is standard EvoDevo stuff and helps explain what these small genetic modifications do. But does it adequately explain how it comes about?
In the good old days, when the cell was considered a bag of protoplasm, evolution was thought to be targeted solely at new proteins. 2% of altered genes meant 2% of altered proteins which, in some ill-defined way, meant man rather than ape. It reminds me of Victorian patent medicines, where you might add a bit of bromide to calm things down, a bit of squill or a touch of arsenic. Depending on the balance, you used it for gout or the vapours – or you could add a bit of everything and cure the lot. On this model, the relationship between DNA changes and phenotype changes is arithmetical: produce 2% of successful mutations and (crudely speaking) you have modified 2% of our 25,000 genes.
As soon as you introduce the EvoDevo concept of switches, though, you’re in a different ball game. The level of complexity, and therefore the margin for error, escalates. Suppose you have just two levels of control – say switches and exons. If a switch controls n genes, then deleting or altering that switch changes not 1 gene but n. Add another level of control – a switch acting on n controllers of n genes each – and the number of changes to get right (or the number of things to go wrong) is n². The complexity multiplies at each level: it grows geometrically, not arithmetically. Since it seems that the number of levels of control in the genome is extremely large, and that one “switch” (perhaps “patch” might be a better descriptor) can affect maybe hundreds of other sites on the genome, an apparently simple mutation of the genome is actually a very much less probable event than it would be in the 1 gene = 1 protein scenario. One would therefore expect a correspondingly multiplied amount of time for each successful mutation to become fixed. Five million years for 2% of the genome begins to look a rather short time.
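The multiplication can be made concrete with a toy calculation (this sketch is mine, not from the original exchange; the choice of n = 10 targets per switch is purely illustrative):

```python
# Toy model of the hierarchy described above: if each regulatory
# "switch" controls n downstream elements, a change at the top of a
# k-level control hierarchy touches n**k sites, so the count of
# things that must go right grows geometrically with depth.

def affected_sites(n: int, levels: int) -> int:
    """Downstream sites touched by one change at the top of the hierarchy."""
    return n ** levels

for levels in range(1, 5):
    print(levels, affected_sites(10, levels))
# prints: 1 10 / 2 100 / 3 1000 / 4 10000
```

With even a modest branching factor of 10, four levels of control already put ten thousand sites downstream of a single top-level change – which is the point of the n² argument above, carried one step further.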
Dennis Venema’s proposal that the DNA sequence itself is sufficient to vindicate RM & NS therefore needs to be questioned: the plausibility of a given mutation depends entirely on the complexity of the mechanisms that sequence is controlling, and can vary by many orders of magnitude.
The sound card I use for recording has the ability to patch flexibly between 10 different audio components, each with between 2 and 42 channels. The possible operations are very simple, and “evolvable”: create a link between any two points, or detach either end and re-attach it somewhere else. Not much new information needed there. Except that (as I discovered to my cost when the help file proved too technical for me) the possible wrong connections vastly outnumber those that will actually do anything. The chances of getting to a “selectable” result by chance are, effectively, nil. Whether it’s easy or hard to code new functional protein, it’s definitely much harder to fit such a protein into a multi-layer control system.
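The arithmetic behind that patch-bay analogy is easy to sketch (the per-component channel counts below are my own assumption – the post only says 10 components with between 2 and 42 channels each):

```python
from math import comb

# Hypothetical channel counts for the 10 components; only the range
# (2 to 42 channels per component) comes from the text.
channels = [2, 4, 8, 8, 10, 16, 24, 32, 40, 42]

points = sum(channels)            # total patchable points on the "bay"
possible_links = comb(points, 2)  # any point can be wired to any other

print(points, possible_links)
# prints: 186 17205
```

Over seventeen thousand possible single links – and a working routing needs a particular chain of them, so the fraction of random configurations that do anything useful is tiny, which is the analogy’s point.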
The other unstated problem lies in Dennis’s agreement that what exists in the chimp is “an already complex system”. Not only does chance tinker with it at its peril, but it is necessary to account for how the system became that complex in the first place. If controlling my software at random is impossible, how much harder would it be to design it randomly?
Simple changes can only produce large effects because the information necessary for those effects is already in the system. Pressing the button marked “automatic launch sequence start” adds only 1 bit of information to the system – but that 1 bit is not what makes it possible to get to the moon. Evolutionists, as Rich points out, have not been able to prove the actual viability of any detailed inter-species transition. As far as I can see from the exchange on BioLogos, they haven’t even begun to consider the maths implicit in the modern understanding of cell biology.