Old views on biology tested empirically

With biology nowadays so focused on evolutionary theory (“nothing in biology makes sense except in the light of evolution” – Dobzhansky), it’s easy to forget that the predictions of older theories about the living world can still be tested against the wealth of modern data. Sometimes they do surprisingly well; sometimes they don’t.

For example, since the time of Plato and Aristotle, a strong theme in what came to be known as biology was “essentialism.” This held that there was something about the form of a creature – or perhaps a small group of similar creatures – that was irreducible, and defined the kind.

A typical example is the dog, which would be considered to have an essential “dogness” across its many varieties. A cat, on the other hand, has a completely different essence, which would mean, amongst other things, that cats do not interbreed with dogs. Of course, experience shows that, physical impediments aside, any kind of dog will interbreed with any other. But the theory would predict that, barring some kind of magical transformation or, perhaps, a Frankensteinian intervention, there would be strict boundaries around these essential natural kinds.

For most of history this was a relatively casual and self-evident assumption. It began to be formalised by classifiers like Linnaeus, and it should be noted that the fundamental biological boundary-marker then was not the species, but the genus – a term coined in this sense around 1600 and, of course, carrying the etymological sense of origins, race, stock, and so on. In the absence of greater knowledge, the definitions of such terms were inexact, at least as regards their correspondence to the assumed essences, but one has the feeling that genera were considered to be the normal extent of a particular essential kind.

By the eighteenth century, the idea of deliberately “improving the stock” by selective breeding of both plants and animals had become a serious obsession with husbandmen. And true to the theory of essences, it was universally found that producing extreme varieties always came up against the impassable barrier of the essential nature of the creature. Even later exceptions amongst plants, such as fertile hybrids and polyploidy, tended to stretch, rather than break, these limits.

And so we find even now that intensive breeding of dogs for size means that Great Danes have many genetic problems and short lives (four years, I’m told, is average). Intensively bred cows produce huge milk yields, but always at the expense of multiple other problems making them entirely dependent on the farmer and the vet. It’s a commonplace that domestic animals which become feral tend either to die or to revert quickly towards type.

An interesting modern finding is that attempts to correct genetic errors by gene modification lead to auto-immune responses against the implanted gene which, one would have thought, ought by now to be recognised as part of the animal itself. But something in the creature recognises what is itself, even when that self is dysfunctional.

It was this universal finding that made many biologists reject Darwin’s theory at first, and in particular criticise his central analogy with selective breeding: the limits of breeding had already been known for a good while. One hundred and sixty years on, concrete evidence for “transformism” is actually still thin on the ground. The fossil record shows overwhelming discontinuity, and the few genuine examples of transformation sequences known in Darwin’s time have been whittled down by more careful research.

At the other end of the scale, laboratories have also failed to dissolve the “essence” barrier. Lenski’s long-running experiments on E. coli, spanning tens of thousands of generations, have produced only adapted E. coli, despite the enthusiasm of BioLogos’s Dennis Venema a few years ago that speciation was on the horizon for this work.

In summary, although we have no more idea of the nature of essential forms than Aristotle did, the empirical evidence for essentialism remains strong. Not so much for the newcomer, transformism.


But what of the existence of multitudes of similar forms, and of their changes over time? They may not provide direct evidence of transformism, but surely they support it?

Well, not as much as you’d think. Another key element of pre-evolutionary biology, certainly since the early modern period if not before, was the central philosophical Principle of Plenitude. This followed logically from the universally accepted mediaeval theories of philosophical theology, and particularly from the “Great Chain of Being” that was believed to stretch from God down to the humblest forms, mankind being somewhere in the middle.

From this grew the conviction that the infinite and omnipotent God would surely have created every possible form – even that it would be unjust and improper for such a God not to create a possible being. From this one could predict that every possible form was out there to be found. It is important to realise that it was the Principle of Plenitude that inspired the early classifiers like Linnaeus, not any sense of genealogical relationship. The orderly classification of animals, plants, and minerals was a means to make predictions about what missing links ought to be out there to find.

Therefore the presence of similar separately-created species or genera was entirely to be expected. What was problematic was the absence of the full panoply of forms. Whilst the world remained largely unexplored, this was simply a problem of incomplete sampling (remember Darwin’s similar expectation that the fuller exploration of the fossil record would uncover his predicted intermediates). But as early modern sea-faring and science began to produce the suspicion that not all possible creatures do exist on earth, two discoveries came to the rescue in a way that is seldom appreciated today.

The first was the Copernican revolution, whose most interesting outcome, as far as many were concerned, was the realisation that stars are in fact other suns, about which might be orbiting a myriad of habitable planets like ours. Here, then, was a near-infinite stage for the display of God’s creative activity, and even seventeenth century Puritans like Richard Baxter rejoiced in the possibilities. Interestingly the biggest gaps to be plugged by these putative worlds were those between men and angels, more than between the animal and vegetable (and mineral) species. The heavenly position of these worlds was entirely appropriate to this bias.

The second big support for plenitude came from the discovery of deep time through the new science of geology in the eighteenth century. This was of course accompanied by a more rigorous study of the fossils found in the rocks, which revealed a multitude of forms unlike those alive today.

I’m not sure how much hold the Principle of Plenitude still had on biological minds by then, but my point is that deep time and the extensive fossil record are corroborative of its predictions. Certainly by the time of Darwin, the favoured “theory of creation” had for decades been the Gap Theory, which in its basics acknowledged a series of creations before that described in Genesis, each with its own flora and fauna.

We still know nothing about life-forms on other worlds, but as far as deep time is concerned, it has actually failed to confirm the Principle of Plenitude. Although hundreds of millions of fossils have been collected and classified by palaeontologists – quite apart from the possibly billions picked up by ordinary people on sea-shores and in quarries – the count of species is still not much higher than the 250,000 estimated by Donald Prothero a couple of decades ago. Compare that with the 1.7 million living species that have been described, and the possibly millions more that have not.

All over the world, the discovery of new fossil species is following a “collector’s curve,” which enables one to estimate that there are not substantially more to find. And so the old Principle of Plenitude has not been proven, because there remain so many gaps between the species; yet neither has it been strictly disconfirmed. What the fossil record does show, like the living world, is discrete forms and few, if any, transitional sequences.
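The logic of a “collector’s curve” can be sketched with a toy simulation (all figures here are assumed for illustration, not real palaeontological data): when sampling at random from a finite pool of kinds, the rate of new discoveries falls away, and a flattening curve is what signals that few kinds remain unfound.

```python
import random

# Hypothetical simulation of a "collector's curve": fossils are drawn at
# random from a finite pool of species, and the cumulative count of
# distinct species found flattens out as collecting effort grows.
# TRUE_SPECIES and SAMPLES are assumed round numbers, not real data.
random.seed(1)

TRUE_SPECIES = 1000   # assumed true number of species in the pool
SAMPLES = 5000        # total collecting effort (fossils picked up)

seen = set()
curve = []            # cumulative species count after each find
for _ in range(SAMPLES):
    seen.add(random.randrange(TRUE_SPECIES))
    curve.append(len(seen))

# Most species turn up early; late effort adds almost nothing new.
first_half_gain = curve[SAMPLES // 2 - 1]
second_half_gain = curve[-1] - first_half_gain
print(f"first half of effort found {first_half_gain} species, "
      f"second half added only {second_half_gain} more")
```

When real discovery rates flatten in the same way, richness estimators such as Chao1 use the tail of the curve to bound how many species remain unseen.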

It might still prove to be true, though, if only we were able to visit every planet in the universe. Unfortunately we cannot, and so it remains only a theoretical possibility – but unlike modern theories such as the multiverse, it is one entirely consistent with existing knowledge.

The same, however, cannot be said for Darwinian gradualism. This, like the Principle of Plenitude, also predicts an infinite number of gradations of form that have not, in any degree, been found in the living world or in earth’s fossil record. On average, each of those 250,000 or so extinct and distinct species is represented by hundreds of fossils. If gradualism were true, one would predict many millions of species, with each represented by very few fossils. Unlike the Principle of Plenitude, though, Darwinian gradualism cannot escape by travelling to hypothetical extra-terrestrial worlds. It is a theory of life on earth produced mindlessly, not of life across the universe created on some divine principle.
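The back-of-envelope arithmetic behind this contrast can be made explicit. The figures below are assumed round numbers chosen only to match the orders of magnitude quoted above, not measured data:

```python
# Round, assumed figures for the argument above -- not measured data.
fossils = 100_000_000            # "hundreds of millions" of collected fossils
observed_species = 250_000       # Prothero's estimate of distinct fossil species
gradualist_species = 5_000_000   # hypothetical count if every gradation fossilised

# Same total fossil haul, divided over the two competing species counts:
print(fossils // observed_species)    # 400 fossils per species: "hundreds"
print(fossils // gradualist_species)  # 20 fossils per species: "very few"
```

The point is simply that the same total haul of fossils cannot yield both hundreds of specimens per species and millions of finely graded species at once.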

And so fossil evidence both fails to prove, and actively disconfirms, Darwinian theory, which places the older Principle at a distinct evidential advantage. The latter is still quite likely to be wrong, if the assumption that God must create all that he is able to create is mistaken. But I’ve been repeatedly told by scientists in the past that one must accept the current best theory until a better one emerges, so with regard to the number of species found on earth, Plenitude still rules over Darwinian gradualism, OK.

So we reach the paradoxical conclusion that Darwinian evolution (in particular its reliance on transformism and gradualism) deals less well with the evidence than either of the two main strands of pre-evolutionary “natural philosophy” – the theory of Substantial Forms, and the Principle of Plenitude. As ever in science, though, further research may well provide new evidence that changes things.

Does my conclusion disturb you? If so, the reasons may have more to do with philosophy than scientific evidence. If you’re an Aristotelian, you may be pleased. If you’re an Epicurean you won’t be, because as far as beliefs excluding final causality go, Darwinian evolution is really the only game in town, and its disconfirmation rather scuttles the whole fleet of naturalism.

But maybe you’re neither, and have no commitment either to gradualist evolution, or to substantial forms, or to the Principle of Plenitude. That’s fine – in that case you’ll be willing to be scientifically agnostic on life’s undemonstrated origins, and have your opinions overtly guided by the metaphysics of your worldview. That, at least, is the honest way to do it.


About Jon Garvey

Training in medicine (which was my career), social psychology and theology. Interests in most things, but especially the science-faith interface. The rest of my time, though, is spent writing, playing and recording music.

1 Response to Old views on biology tested empirically

  1. Levi says:

    We may also speculate on the role that beliefs play (or the “filters of consciousness”) in moving a society from one reasonably defensible hypothesis to another, less reasonably defensible one, and how those beliefs arise and are cultivated in the first place, such as to effectuate cultural, social, political and scientific revolutions. Such speculation is connected to the previous post on pseudoscience.

    My suspicion is that humans are culturally predisposed, and influenced, to fall for a kind of narrow and binary “pragmatism” of mind, holding that, if a theory appears to “work,” it is sufficient to accept it as true and negate all previous or competing theories, irrespective of their merits. The “workability” of a theory, though, is hardly ever disinterested science, nor are its “results” intended to be defensible 50 years after the day the Nobel Prize is awarded. “Work” is a term of causality which may aptly apply to the action of natural phenomena in time and space. But in human affairs, or merely by virtue of the presence of the human observer, as your previous article indicated, causal events are imperfectly perceived, or we, by our own short-comings, are perfectly deceived. Thus a theory that “works” may simply be one that gives a society the kind of theory that *it needs* at that time and place, either because it reflects the *desires* of that society, or because it *exculpates* it. In this sense, it may be a kind of priestcraft.

    This is because, it seems to me, competing theories are as much attempts to discern the true nature of things as reflections of unseen cultural battles being waged by foreign conceptions upon orthodox ones. This is the natural interaction of men and their ideas, especially in circumstances of a plurality of paradigms in one society (Christian, Jewish, Muslim, Chinese etc.). In the end, once the theory has won over the elites, society falls in line behind it. The truth is cast out, censored and slandered as passé quackery, and its proponents, ancient and contemporary, are mocked or shunned.

    There are an infinite number of ways to err, and only one way that is true, to paraphrase Aristotle, a man whose influence suddenly entered eclipse coincidentally around the time that princes were liberating property from the ‘tyrannous’ ownership of the Church, and the commercial men and usurers were gaining ascendancy over the ‘absolutist’ kings.
