The Demise of the Random?

I’ve just come across this overview paper by James Shapiro. If the evidence is as he presents it, this really does seem to me a potentially fruitful 21st century view of evolution.

The weak point of Neodarwinism has always been its reliance on random mutation as the ultimate source of variation. Indeed, for many decades after it was first suggested, mutation was downplayed while a better mechanism was sought – that’s because all the experiments with mutation showed a zero rate of positive return (here is an amusing illustration of those results). Mutation seems to have won the Neodarwinian day mainly by default – and perhaps by the big popular push it later got from “The Selfish Gene”.

Genetics has now shown that there is, indeed, a drift caused by random mutation – or at least by random copying errors. Most are small and deleterious – genetic diseases and so on. Some are cosmetic (such as Y chromosome changes – great for dating Y-chromosome Adam, but of little biological importance). A few are useful in context, like the sickle-cell trait, which in single dose protects against malaria. But it remains questionable how far that can account for the big changes between species, like the step from H. erectus to H. sapiens.
It is also an essentially gradualist mechanism: small, individually useless mutations accumulate over time, somehow without wrecking the whole organism. Some mitigation of that problem comes from the discovery of the large amount of built-in redundancy in the genome. Mutation could work on large areas of non-functional code which, by a final mutation, become functional and so susceptible to selection pressures. But of course that still means you’ve got to build a working gene, or set of genes, at random. It’s the equivalent of having to use the mutation generator to create another complete, valid sentence before anybody gets to click the “select” button. Can that happen?
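As a back-of-envelope illustration of that sentence analogy (my own toy sketch, not anything from Shapiro’s paper – the alphabet and the target phrase are invented assumptions), here is what blind search is up against when selection can only act on a finished product:

```python
import random
import string

# Toy model: mutation scribbles freely on "non-functional" text, but
# selection only fires once a complete valid sentence has appeared.
ALPHABET = string.ascii_lowercase + " "  # 26 letters plus space = 27 symbols

def random_string(length, rng):
    """A stretch of 'junk' text that mutation is free to rearrange."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def chance_of_valid_sentence(target):
    """Probability that an unguided random string equals the target."""
    return (1 / len(ALPHABET)) ** len(target)

rng = random.Random(42)
target = "cat sat"  # a mere 7-character 'working gene'

print("a typical unselected draft:", random_string(len(target), rng))
# 27**7 = 10,460,353,203 – about one chance in ten billion per draft,
# and real genes are vastly longer than seven characters.
print(f"odds of hitting the target blind: 1 in {round(1 / chance_of_valid_sentence(target)):,}")
```

The point of the sketch is only the arithmetic: without intermediate selection, the odds fall off exponentially with the length of the “sentence” to be built.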
Also, there’s now a lot of evidence that big steps in evolution can happen quite rapidly – “punctuated equilibria” and so on – which mutation, in my opinion, can’t explain well. If evolution were truly gradualist, it ought to be rare to find two fossil examples of the same species. In fact one more often finds many, with little or no change, and then a sudden transition to a new form.
Turning now to Shapiro: he describes a more sophisticated genetic code which, when environmental stress occurs, actually generates new permutations of gene patterns as part of its function, feeds the results into the germ line, and so produces a greatly increased number of variant offspring for natural selection to work on. He lists “genetic” stresses that have actually been observed, including mating between partly dissimilar individuals – which, in evolutionary terms, represents the difficulty of finding a suitable mate after population loss from, say, a natural disaster.

As others have, he points out that the genetic code (including protein and cellular interaction with DNA) is hierarchical, like computer code. There are control genes, switch genes for eyes, clusters of genes for whole organs and so on. So a physiological mechanism like Shapiro’s wouldn’t chop up the DNA randomly, as radiation does, but in an organised way, mixing the working gene clusters into new combinations. It might be the equivalent of deliberately moving half the documents in your “chemistry” folder into your “evolution” folder – increasing the chances of integrating ideas in a useful way you hadn’t thought of before. But you’d only want to do that if the old ideas weren’t working anyway.
Such a mechanism would actually generate a lot of working organisms rather than the monsters random mutation mostly produces. Most would be less successful in the new environment than the original organism, but some would be more successful and – hey presto – a new species in just a few generations, or even one.
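The contrast can be made concrete with another toy sketch of my own (again, nothing here is from Shapiro’s paper – the module names and counts are invented): compare corrupting random letters with reshuffling whole working “gene clusters”.

```python
import random

# Pretend a genome is a list of working modules ("gene clusters").
MODULES = ["eye", "limb", "gut", "wing", "fin"]

def point_mutate(genome, rng):
    """Radiation-style variation: corrupt one letter of one module."""
    g = genome[:]
    i = rng.randrange(len(g))
    word = list(g[i])
    word[rng.randrange(len(word))] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    g[i] = "".join(word)
    return g

def recombine(genome, rng):
    """Shapiro-style variation: reshuffle whole working modules."""
    g = genome[:]
    rng.shuffle(g)
    return g

def is_well_formed(genome):
    """Every unit is still a recognisable working module."""
    return all(m in MODULES for m in genome)

rng = random.Random(1)
trials = 1000
pm_ok = sum(is_well_formed(point_mutate(MODULES, rng)) for _ in range(trials)) / trials
rc_ok = sum(is_well_formed(recombine(MODULES, rng)) for _ in range(trials)) / trials
print(f"still built from working parts after point mutation: {pm_ok:.0%}")
print(f"still built from working parts after recombination:  {rc_ok:.0%}")
```

Reshuffling can never break a module, so every recombined variant is made of working parts (whether the new combination suits the new environment is then selection’s job); a random letter change almost always wrecks the module it lands in.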
That’s neat, and consistent with the big picture. If true, it is the future. I’m not well enough up on the genetics to know whether the evidence he presents is “persuasive” or “laughable”. One reason for not liking it would be this: if such a mechanism exists, how did it evolve? It’s easy to mimic the anthropic principle and say “well, it wouldn’t exist unless it had evolved”, but that’s circular. At the very least the hypothesis seems to me to undermine the “selfish gene” concept of evolution: the gene is deliberately ripping itself apart so that the organism – a different organism, at that – can survive. But then The Selfish Gene was, in my view, a metaphysically loaded picture even of the Neodarwinian synthesis.
There’s obviously survival value in such mechanisms – but I have to wonder whether evolution could truly produce them. After all, they’re not needed until the meteor strikes or the ice forms, so why would they not disappear in the interim? Also, if they turn out to be necessary for significant genetic change, then they were necessary for their own development (just as DNA is necessary to produce the cells that make DNA). You’d have to fall back on random mutation as the only mechanism for evolving the very complex system that evolved to compensate for mutation’s inability to evolve complex systems!
It’s back to the origin of life mystery – which might be one reason for the theory not to become popular: better a mechanism that’s comprehensible but inadequate than one that brings unanswerable questions back to the heart of evolutionary theory.

About Jon Garvey

Training in medicine (which was my career), social psychology and theology. Interests in most things, but especially the science-faith interface. The rest of my time, though, is spent writing, playing and recording music.
This entry was posted in Creation, Science.

3 Responses to The Demise of the Random?

  1. Not sure why you question the evolvability of greater variance under stressful conditions. Stress occurs all the time, such as when your tidal pool dries out. No need to wait for an asteroid.

  2. Jon Garvey says:

    Hi David

    Very fair point. I was beguiled by mention of mass-extinctions.
