I want to expand a little on why I have conceptual problems with standard Neodarwinian evolution as a more-or-less complete explanation for the origin of species, touching again on optimization, which I dealt with recently in the context of formal causation.
For those new to the site, or the forgetful, let me reiterate that these problems are not about the godlessness of evolutionary accounts or the impossibility of “natural” causes being sufficient for the creation of life. Classical Christianity, with its strong doctrine of special providence, has no problem coping with complete chains of efficient causes (though it may, on other grounds, prefer God to act directly in some cases – to Aquinas, for example, causes beyond nature were an important way of showing that God creates freely and not by necessity).
No, the issues are about whether the current ToE is secure enough to be a good account. At least in part, the freedom to question it comes from not being bound, by a prior commitment to naturalism, to natural selection as the only possible option. So if you’re sitting comfortably, then I’ll begin.
Remember that Darwin’s theory was intended to explain not variation alone, but adaptation. What that means is that his starting observation, from decades of field study, was the superb fit of species to their roles. An earlier explanation for this universal observation, which he challenged, was special creation by an infinitely wise creator (he did little to contest, and even accommodated, alternative explanations for adaptation such as Lamarckism). The analogy he chose for his own account was selective breeding by skilled breeders. And, finally, his proposed explanation was natural selection acting on constant, limitless variation in effectively infinite time, simulating such intentional breeding.
The stock-in-trade of field naturalists is still the same superb adaptation that impressed Darwin. Overall fitness cannot easily be quantified: it is a relative attribute, and so there is always the possibility that a fitter organism for any given environment may turn up in the next generation. That’s why optimization is such an interesting phenomenon (see the links I put here and another new example here). In optimization, function can be measured against the fixed standard of the limits of physical law, or, as in the last example, against the best that human intelligent design can achieve. Of course, a sub-optimal design might actually be fittest in any given situation (speed of reproduction, for example, might outweigh engineering perfection as a priority), but it is hard to claim that theoretically maximal performance does not represent fitness.
However, the trajectory of evolutionary theory (in keeping with the demise of C S Lewis’s Myth of Progress!) is that evolution is a tinkerer or a bodger, doing just enough to ensure survival, but no more. “Bodging” and “optimization” don’t self-evidently fit together, especially when some of the work on the limitations of natural selection is examined. I now want to glance at some of the research I’ve stumbled across over the last year or two in that regard.
First, let me comment on the limitations of recombinational breeding. Although some ultra-conservative population geneticists maintain that evolution doesn’t require mutation at all, even in Darwin’s time his comparison of evolution to livestock breeding was severely criticised, because the constraints were well known to every breeder. I mentioned that in a medical context on the optimization thread, but it’s a commonplace even in the popular press that the more extremely one selects cattle or dogs, the less fit overall they become. Although TEs will sometimes cite the morphological differences between chihuahuas and great danes as evidence for evolution, the fact remains that they are all mere varieties of the subspecies Canis lupus familiaris: after 10,000 years (or possibly as much as 100,000) of selective breeding they remain inter-fertile with wolves, and are more prone to genetic disorders. We have long passed the stage when “Just give us a few more centuries” is a sufficient answer to these problems.
Mutation entered evolutionary thinking (if you don’t count Darwin’s “sports”) through Hermann Muller’s 1927 discovery of X-ray mutagenesis, and his hypothesis that beneficial mutations might have evolutionary significance. It became steadily more entrenched as the basis for long-term evolution in the contemporaneous Modern Synthesis (“mutations replenish the gene pool”), and was given a theoretical treatment in population genetics. Mutation would overcome the livestock breeders’ objection that evolution could not produce new features. Meanwhile, plant breeders were doing it for real, with very different outcomes.
As shown in this paper by Wolf-Ekkehard Lönnig, lead scientist and group leader of plant mutation research at the Max Planck Institute (1992-2008), some forty years of plant mutation breeding found that only about 1 mutation in 25,000 proved of some benefit. A realistic equivalent figure for animals would be 1 beneficial mutation in 100,000-400,000.
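Before going on, it is worth putting that raw figure together with what standard population genetics says about fixation. Here is a back-of-envelope sketch of my own (not from Lönnig’s paper): the 1% selective advantage is purely an assumed illustration, and the “2s” fixation probability is the textbook Haldane approximation for a new beneficial mutation in a large population.

```python
# Back-of-envelope sketch (my own illustrative assumptions, not Lonnig's data):
# combine the ~1-in-25,000 beneficial-mutation figure with Haldane's classical
# approximation that a new beneficial mutation of advantage s in a large
# population survives drift and fixes with probability of about 2s.

beneficial_fraction = 1 / 25_000   # plant mutation-breeding figure quoted above
s = 0.01                           # assumed 1% selective advantage (hypothetical)
p_fix = 2 * s                      # Haldane (1927) approximation

# Chance that any given new mutation is beneficial AND eventually fixes:
p_useful = beneficial_fraction * p_fix
print(f"P(beneficial and fixed) ~ {p_useful:.1e}")          # roughly 8 in 10 million
print(f"Mutations expected per fixed beneficial change: {1 / p_useful:,.0f}")
```

On those (generous) assumptions, well over a million new mutations are needed for every beneficial change that actually sticks.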
Remember that these figures refer to optimal conditions, with deliberate selective breeding from the mutated organisms. In the wild, as the sketch above suggests, most beneficial mutations will be lost before fixation on standard population-genetics models. And of course, in many cases speciation in animals is believed to happen in relatively small and isolated populations. But that’s not all:
“The larger the mutant collections are, the more difficult it is to extend them by new mutation types. Mutants preferentially arise that already exist.” In other words, the number of mutants with new phenotypes asymptotically approaches a saturation line in persistently large mutation experiments.
So 40 years of lavishly-funded mutation experiments were more than enough to show not only that there were in reality (as opposed to in population genetic modelling) few possible beneficial mutations, but that most of them had been found during that short time.
It may also be pointed out in this connection that – as far as the author is aware – neither plant breeders nor geneticists have ever reported the origin of any new species, or just any new stable races or ecotypes either surviving better or at least as well in the wild in comparison with the wild-type, in which the mutation(s) have been induced (Lönnig 1993, 2001, 2002a, 2006; Lönnig and Becker 2004).
A new paper by Ard Louis’s team adds significantly to this. I’ve said in the past that population genetics is limited as a model for macroevolution by its simplifications and assumptions, and this work addresses one such simplification by modelling the known fact that some phenotypes arise by mutation far less frequently than others. Positively, as the abstract puts it, the paper shows that this bias can “steer populations to local optima”. More pessimistically, though, he concludes:
We explicitly showed how phenotypes with a high local frequency can fix at the expense of locally rare phenotypes, even if the latter have much higher fitness. Taken together, these arguments suggest that the vast majority of possible phenotypes may never be found, and thus never fix, even though they may globally be the most fit: Evolutionary search is deeply non-ergodic. When Hugo de Vries was advocating for the importance of mutations in evolution, he famously said “Natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest”. Here we argue that the fittest may never arrive. Instead evolutionary dynamics can be dominated by the “arrival of the frequent”.
Louis is a Christian and an ardent defender of a purely Darwinian type of theistic evolution. But his press release is candid in suggesting that significant rethinking of evolutionary assumptions may be necessary in the light of this work. On the one hand, it seems evolution need not search the whole space of possibilities. On the other, it becomes less plausible for evolution to explore new functions at all since natural selection is a strong stabilizing force for existing (and not even the best) configurations.
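To see the force of that, here is a toy, origin-fixation style caricature of my own (it is not the model in Louis’s paper): phenotype A is produced by mutation a thousand times more often than phenotype B, but B is five times fitter, and we ask which one arises and fixes first. Every number below is an assumption for illustration only.

```python
import random

# Toy "origin-fixation" caricature of the arrival-of-the-frequent effect
# (my own sketch, not the model in Louis's paper). Phenotype A arises by
# mutation 1,000 times more often than phenotype B, but B is five times
# fitter. Which one arises and fixes first?

N = 1_000                  # assumed population size
mu_A, mu_B = 1e-4, 1e-7    # assumed per-individual arrival rates: A is "frequent"
s_A, s_B = 0.01, 0.05      # assumed selective advantages: B is "fitter"

def first_to_fix(rng):
    while True:                                    # one iteration = one generation
        for label, mu, s in (("A", mu_A, s_A), ("B", mu_B, s_B)):
            arises = rng.random() < N * mu         # a mutant of this type appears (approx.)
            if arises and rng.random() < 2 * s:    # ...and survives drift (Haldane ~2s)
                return label

rng = random.Random(1)
runs = 2_000
wins = [first_to_fix(rng) for _ in range(runs)]
print("frequent but less fit (A) wins:", wins.count("A") / runs)
print("rare but fitter (B) wins:     ", wins.count("B") / runs)
```

On these figures the frequent-but-inferior phenotype wins the race the overwhelming majority of the time: frequency of arrival, not fitness, calls the tune.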
Fitness doesn’t have to be 100%, of course, even if one could measure it. But does “frequent and faulty” fit either with the superb adaptation so often seen in nature, which inspired Darwin, or with the many examples of optimization now known? I have a problem squaring the circle.
Natural selection has theoretical numerical limits, too. Susumu Ohno’s 1972 paper, in which he coined the unfortunate term “Junk DNA”, was actually based on the theoretical conclusion that at most some 30,000 genes could be subject to selection at any one time without the mechanism being swamped by the vastly more common deleterious mutations, extinction being the eventual result.
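A common reconstruction of that genetic-load reasoning goes like this (the per-gene rate below is an assumption of mine for illustration, not Ohno’s exact figure): if each functional gene suffers deleterious mutation at roughly one in 100,000 per generation, the genomic deleterious rate grows in proportion to gene number, and so does the reproductive surplus a population needs merely to purge the damage.

```python
import math

# Rough reconstruction of the genetic-load reasoning behind Ohno's limit
# (the per-gene rate is an assumption of mine, not Ohno's exact figure).
# At mutation-selection balance the classical Haldane/Muller result gives
# mean fitness ~ exp(-U), where U is the genomic deleterious mutation rate:
# U = (genes under selection) x (deleterious rate per gene).

per_gene_rate = 1e-5     # assumed deleterious mutations per gene per generation

for n_genes in (10_000, 30_000, 100_000, 300_000):
    U = n_genes * per_gene_rate
    load = 1 - math.exp(-U)     # fraction of offspring lost to selection each generation
    excess = math.exp(U)        # reproductive surplus needed just to stand still
    print(f"{n_genes:>7,} genes: U = {U:.2f}, load = {load:.0%}, "
          f"needs ~{excess:.1f}x surplus offspring")
```

Somewhere around the tens of thousands of genes the cost is bearable; an order of magnitude more and selection must discard most of every generation simply to hold its ground.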
Until now, that limit has agreed quite well with the roughly 20,000 protein-coding genes thought to be present in the human genome, and even (just about) with the additional number of definitely functional genes gradually added via ENCODE and other research. But quite apart from the possible functionality of far more of the genome, which ENCODE predicts from the level of transcription, Ohno was unaware of the sheer extent of alternative splicing, gene overlap, and other kinds of multiple coding.
This is especially important given the emerging importance of genes as control switches rather than simply as protein blueprints. However robust such complex systems are, it is inevitable that each gene represents a far greater number of phenotypic variables that are under, at the least, purifying selection. And of course, the real point at issue, long term, is adaptive selection to account for the swallow’s wings working out of the box, the hummingbird’s wings outperforming drones, or an insect’s wings exactly mimicking the leaves it rests on.
If, as now seems to be the case, each gene is involved on average in the coding of half a dozen proteins, then the fitness of half a dozen separate phenotypic features must be affected by any one rare beneficial mutation. How often would a mutation be beneficial or neutral for all six traits at once, given that around 70% of mutations overall are frankly deleterious?
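A quick piece of arithmetic makes the point (my own illustration, treating the six traits, unrealistically simply, as independent, and re-using the 70% figure above):

```python
# Illustrative arithmetic only: treat each of the six traits touched by a
# pleiotropic gene as independently having a 70% chance of being harmed by
# any given mutation (independence is a simplifying assumption of mine).

p_deleterious_per_trait = 0.70
n_traits = 6

p_harmless_for_all = (1 - p_deleterious_per_trait) ** n_traits
print(f"P(non-deleterious for all {n_traits} traits) ~ {p_harmless_for_all:.4f}")
# ~0.0007: fewer than 1 mutation in 1,300 avoids harming at least one trait,
# before we even ask whether it actually benefits any of them.
```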
Ohno’s work belongs to the same line of reasoning as Kimura’s neutral theory: the saturation of selection alone meant that the vast majority of genetic changes must be unselected. Current near-neutral theory still provokes heated argument, but it now seems to be preferred over strict adaptationism by a majority. As Eugene Koonin writes in his “state of the union” paper:
According to the neutral theory, a substantial majority of the mutations that are fixed in the course of evolution are selectively neutral so that fixation occurs via random drift…
In the light of the plant mutation results, for “substantial” one should really read “vast”.
Of course, the neutral theory should not be taken to mean that selection is unimportant for evolution. What the theory actually maintains is that the dominant mode of selection is not the Darwinian positive selection of adaptive mutations, but stabilizing, or purifying selection that eliminates deleterious mutations while allowing fixation of neutral mutations by drift.
In other words, Darwin’s adaptive selection is a mere bit-part player in evolutionary change. Furthermore, other research predicts that, of all beneficial mutations, the slightly beneficial slip below the notice of adaptive selection, while the highly beneficial sweep through at the expense of other traits – if, for example, only white polar bears ever survive, all lesser beneficial traits become invisible to selection. Back to Koonin on neutral theory, though:
Subsequent studies refined the theory and made it more realistic in that, to be fixed, a mutation needs not to be literally neutral but only needs to exert a deleterious effect that is small enough to escape efficient elimination by purifying selection—the modern ‘nearly neutral’ theory. Which mutations are ‘seen’ by purifying selection as deleterious critically depends on the effective population size: in small populations, drift can fix even mutations with a significant deleterious effect.
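For anyone who wants the standard quantitative form of that point, the sketch below is my own (the formula is Kimura’s textbook diffusion result; the population sizes and the selection coefficient are assumptions for illustration). It shows how a mildly deleterious mutation fixes almost as readily as a neutral one in a small population, but is efficiently eliminated in a large one.

```python
import math

# Kimura's diffusion formula for the fixation probability of a new mutation
# with selection coefficient s in a diploid population of effective size N.
# (Textbook result; the sizes and s below are assumed for illustration.)

def p_fix(s, N):
    if abs(s) < 1e-12:
        return 1 / (2 * N)                      # neutral limit: pure drift
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

for N in (100, 10_000, 100_000):
    neutral = p_fix(0.0, N)
    mildly_bad = p_fix(-0.001, N)               # slightly deleterious, s = -0.1%
    print(f"Ne = {N:>7,}: P(fix | neutral) = {neutral:.1e}, "
          f"P(fix | s=-0.001) = {mildly_bad:.1e}, "
          f"'effectively neutral' below |s| ~ {1 / (2 * N):.0e}")
```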
As I’ve already pointed out, in higher animals speciation can generally only take place in relatively small populations, because most species exist only in small populations, especially if speciation is allopatric – remember those estimates of an effective population of 10,000 or so throughout the dawn of human history. Did most of the changes leading to rational humanity, then, come from random drift? Or did they arise as Stephen Jay Gould’s spandrels? It seems to make little difference, because the queue for adaptive selection capacity remains just as long once they have arisen, if we want to claim that adaptive selection is the final common path of evolution. And if it is not, then our only possible designer-substitute has gone: the watchmaker is not blind, but dead.
Koonin’s paper also speaks of gene and genome duplication “sounding the death knell” for Darwinian gradualism. The fossil record has always cast doubt on it, as Darwin acknowledged, and Gould’s punctuated equilibria work emphasises that, observationally, macroevolution in most cases occurs below the resolution of palaeontology – involving perhaps a thousand generations of those few thousand individuals. This paper provides confirmatory genetic evidence that most speciation events do not occur by gradual accumulation of change, or even by the kind of “accelerated gradualism” usually invoked (with a suitable flourish of the hand) to account for punc eek:
78% of the trees fit the simplest model in which new species emerge from single events, each rare but individually sufficient to cause speciation. This model predicts a constant rate of speciation, and provides a new interpretation of the Red Queen: the metaphor of species losing a race against a deteriorating environment is replaced by a view linking speciation to rare stochastic events that cause reproductive isolation.
So we have a small population, in a short time window (or even a single stochastic event, on this evidence), achieving speciation of the magnitude of, say, the change from H. erectus to H. sapiens. That’s either a lot of adaptive selection for a mechanism of limited capacity, or an extremely fortuitous set of near-neutral changes getting fixed non-selectively. This paper suggests that, in some cases at least, it may be the latter… once they find room in the selection queue.
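For what the quoted “single events” versus gradual accumulation amounts to statistically, here is a small sketch of my own (a toy, not the paper’s actual analysis): if speciation waits on one rare event, the waiting times between speciations are exponentially distributed; if it requires many smaller changes to pile up, the waiting times are sums of many such intervals, and very short branches become vanishingly rare.

```python
import random
import statistics

# Toy illustration (mine, not the paper's method) of the statistical difference
# between "speciation by one rare event" and "speciation by gradual accumulation".
# Single cause: waiting times between speciations are exponential. Accumulation
# of ten smaller steps: waiting times are sums of ten shorter exponentials,
# which makes very short branches far rarer even though the mean is the same.

rng = random.Random(0)
n = 50_000

single_event = [rng.expovariate(1.0) for _ in range(n)]
accumulated = [sum(rng.expovariate(10.0) for _ in range(10)) for _ in range(n)]

for name, waits in (("single rare event", single_event),
                    ("ten accumulated steps", accumulated)):
    short = sum(w < 0.2 for w in waits) / n      # fraction of very short waits
    print(f"{name:>22}: mean wait = {statistics.mean(waits):.2f}, "
          f"P(wait < 0.2) = {short:.3%}")
```

The quoted result is that most real phylogenies look like the first pattern, not the second.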
But do moderately deleterious, neutrally-fixed mutations really achieve “sophisticated designs” of scorpion burrows? OK, all these things no doubt fit together somehow, but whether they do so by orthodox Neodarwinian mechanisms I have my doubts. On the one hand the bar of adaptive perfection has, at least in many cases, been raised by the study of optimization. At the same time, the capabilities and capacities of selection have been chipped away bit by bit, and the populations and times available have shrunk since Darwin’s easy assumption of infinite variation, ample geological time and limitless natural selection. If there isn’t an issue of plausibility there, I’m not sure why not, apart from wishful thinking.
Just one more citation along these lines. Everything we’ve looked at so far depends above all on DNA. And according to this paper the code itself is close to optimal. OK, 1 in a million codes may be better, so it’s not (on present knowledge) fully optimized. The writers say that the biases in the code suggest it’s been subject to selection … so not to neutral drift, then?
This is not an example of evolution doing just enough to get by. It must be one example of adaptive selection not being swamped by the other demands of the metabolism of the first DNA lifeforms, but proceeding smoothly to optimization. But tell me, just how does a genetic code mutate or vary when it’s what’s encoding all the other genes in the first place?
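For the technically curious, claims of the “only 1 in a million codes do better” kind are usually established by something like the following procedure, sketched here in my own toy form: a crude hydropathy-based error cost, and random reshuffles of which amino acid each synonymous codon block encodes. It is an illustration of the general method, not the cited paper’s own metric, data or result (the original studies typically used Woese’s polar requirement and weighted mutation types).

```python
import itertools
import random

# Sketch of the usual "how good is the genetic code?" test (Freeland/Hurst-style),
# written from scratch as an illustration. Score a code by how much a single-
# nucleotide change tends to alter amino-acid hydropathy (Kyte-Doolittle values),
# then compare the standard code against random codes that keep its block
# structure but shuffle which amino acid each block encodes.

BASES = "TCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
# Standard code (NCBI table 1), codons in TCAG order; '*' = stop.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STANDARD = dict(zip(CODONS, AA))

HYDROPATHY = {  # Kyte-Doolittle hydropathy scale
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
    "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
    "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def neighbours(codon):
    # All codons reachable by a single-nucleotide change.
    for i, b in enumerate(codon):
        for nb in BASES:
            if nb != b:
                yield codon[:i] + nb + codon[i + 1:]

def cost(code):
    # Mean squared hydropathy change over all single-nucleotide substitutions
    # between sense codons: lower = better buffered against mutation.
    diffs = [
        (HYDROPATHY[code[c]] - HYDROPATHY[code[n]]) ** 2
        for c in CODONS if code[c] != "*"
        for n in neighbours(c) if code[n] != "*"
    ]
    return sum(diffs) / len(diffs)

def random_code(rng):
    # Keep the standard code's synonymous blocks and stop codons, but shuffle
    # which amino acid each block encodes.
    blocks = {}
    for codon, aa in STANDARD.items():
        blocks.setdefault(aa, []).append(codon)
    aas = [aa for aa in blocks if aa != "*"]
    shuffled = rng.sample(aas, len(aas))
    code = {c: "*" for c in blocks.get("*", [])}
    for old, new in zip(aas, shuffled):
        for codon in blocks[old]:
            code[codon] = new
    return code

rng = random.Random(0)
std = cost(STANDARD)
trials = 5_000
better = sum(cost(random_code(rng)) < std for _ in range(trials))
print(f"standard-code cost: {std:.2f}")
print(f"random codes scoring better (lower cost): {better} / {trials}")
```

The cited paper’s “one in a million” is a claim of exactly that form: score the real code, score a large sample of alternatives, and count how many do better.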