Support for the suggestion in my last post, that we are likely to be missing significant biological truths by not recognising Aristotelian formal and final causation, comes from a philosophical direction in a recent article by the Thomist analytic philosopher Ed Feser.
Feser addresses fellow-philosopher John Searle’s claim that the widespread use of computational concepts like algorithms, information, software and so on in natural science (and especially biology) is invalid because computation is necessarily an observer-relative matter, not a feature of the real world.
To most, this might seem an unlikely claim. This article, as just one small example, explores the practicalities of using DNA for general computational work, which surely depends on hijacking its existing ability to compute “genetic algorithms” in order to code living organisms.
Feser, though, presents strong arguments that, given the mechanical conception of nature espoused by modern science, “Searle’s critique of computationalism is unanswerable.” However, it may be rescued (i.e., the common-sense impression that such computation is a real feature of nature may prevail) if, and only if, Aristotelian concepts of form and finality are re-admitted to the discussion.
Early in the argument, he mentions the great, though frequently unrecognised, conceptual problem which has arisen over the “laws of nature” once the divine lawgiver of the early-modern scientists was removed from the picture:
But if we abandon both the Aristotelian apparatus of immanent formal and final causes and the early modern conception of God as artificer, it seems we are left with neither an intrinsic nor an extrinsic source of the order in the world, and thus with no source of order at all.
The importance of this is that there can be no particular significance in any sequence of efficient causes:
That is, of course, exactly what we find in Hume, for whom all events are inherently “loose and separate.”
This seems to be illuminated by the realisation that scientific accounts of causation, in effect, dispense with the reality of the entities that produce the effects. David Chalmers, seeking to show that information processing is at the root of reality, demonstrates this truth:
Following Bertrand Russell, Chalmers notes that physics does not tell us the intrinsic nature of the fundamental entities it posits: “Physics tells us nothing about what mass is, or what charge is: it simply tells us the range of different values that these features can take on, and it tells us their effects on other features.” Having mass or charge, like carrying syntactic information, is simply a matter of being in one of several states in a space of different possible states that might generate various outcomes at the end of causal pathways leading from those states. Now, if the fundamental entities of physics are essentially characterized in terms of their effects, and if to be information in the syntactical sense is just to have certain characteristic effects, then what physics gives us (Chalmers proposes) is essentially an informational conception of its fundamental entities.
In other words, there is no “aboutness” within nature: informational state A simply gives rise algorithmically to informational state B, neither of those states being significant of anything particular. Nothing new ever happens:
A key property of computations is that you will not get more information out of them than went into them. As [John] Mayfield puts it: “Algorithmic information shares with Shannon information the property that it cannot be created during a deterministic computation. The information content of the output can be less than that of the input, but not greater. Thus, algorithmic information conforms with our intuitive notion that information cannot be created out of thin air.”
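Mayfield’s claim, that deterministic processing can lose but never create information, can be sketched in the Shannon sense: the entropy of the output of a fixed function is never greater than the entropy of its input. The parity function below is my own illustration, not an example from the article:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of a sequence."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A uniformly distributed 3-bit input: 8 equally likely states.
inputs = list(range(8))

# A deterministic "computation" -- here a lossy parity map, which
# collapses the 8 input states into just 2 output states.
outputs = [x % 2 for x in inputs]

h_in, h_out = entropy(inputs), entropy(outputs)
print(h_in, h_out)     # prints: 3.0 1.0
assert h_out <= h_in   # deterministic processing never increases entropy
```

Any other deterministic map would do as well: at best (a reversible function) the output entropy equals the input entropy; it can never exceed it.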
Alex Rosenberg is a philosopher of biology and an eliminative materialist, but the latter is a tough discipline, for he recognises:
Molecular biology is . . . riddled with intentional expressions: we attribute properties such as being a messenger (“second messenger”) or a recognition site; we ascribe proofreading and editing capabilities; and we say that enzymes can discriminate among substrates. . . . Even more tellingly . . . molecular developmental biology describes cells as having “positional information,” meaning that they know where they are relative to other cells and gradients. The naturalness of the intentional idiom in molecular biology presents a problem. All these expressions and ascriptions involve the representation, in one thing, of the way things are in another thing. . . . The naturalness of this idiom in molecular biology is so compelling that merely writing it off as a metaphor seems implausible. Be that as it may, when it comes to information in the genome, the claim manifestly cannot be merely metaphorical, not, at any rate, if the special role of the gene is to turn on its information content. But to have a real informational role, the genome must have intentional states.
If intentionality has no place in science, one must mentally regard all such processes as “really” being just the meaningless manipulation of information in the strict Shannon sense of bit-processing.
And so, as I briefly paraphrase Feser’s statement of John Searle’s argument, a computer likewise is simply performing operations on digital bits: it only computes because of the meaning that we humans attach to the bits put in, to the output, and to the processes our program performs on them. And a “natural computer,” such as a genome which man did not even design or program, can even less be said to be computing anything.
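Searle’s point, that bits only “mean” anything under an interpretation supplied by an observer, is easy to see at the machine level: one and the same physical bit pattern reads as text, as an integer, or as a floating-point number depending entirely on the encoding we choose to apply. A minimal sketch of my own, not taken from Feser:

```python
import struct

# One fixed 32-bit pattern: 0x41424344.
bits = b'\x41\x42\x43\x44'

# Three observer-relative readings of the very same bits:
as_text  = bits.decode('ascii')             # the string 'ABCD'
as_int   = int.from_bytes(bits, 'big')      # the integer 1094861636
as_float = struct.unpack('>f', bits)[0]     # a float of roughly 12.14

print(as_text, as_int, as_float)
```

Nothing in the bit pattern itself selects one of these readings over the others; the “aboutness” lies wholly in the convention the interpreter brings to it.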
That might seem to be unproblematic for a materialist who, in any case, claims that biology is undirected and purposeless. But does that actually hold up? After some discussion of a well-known philosophical construct called “quaddition”, Feser asks us to consider a completely purposeless embryological development program:
For instance, we can imagine what we might call a “quembryo” program that, when the genome runs it, produces the same results that the embryo program does except that the embryo does not develop eyes. Now, consider a human embryo that never develops eyes. Should we say that the genome that built this embryo was running what Rosenberg would call the embryo program but that there was a malfunction in the system? Or should we say instead that the genome was actually running the “quembryo” program and that there was no malfunction at all and things were going perfectly smoothly?
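Feser’s “quembryo” trades on Kripke’s “quaddition”: a function that agrees with ordinary addition on every case so far examined, yet deviates beyond some threshold (57 in Kripke’s original example). The range of “observed” cases below is my own illustration:

```python
# Ordinary addition.
def plus(x, y):
    return x + y

# Kripke's "quaddition": identical to addition below the threshold, then deviates.
def quus(x, y):
    if x < 57 and y < 57:
        return x + y
    return 5

# Suppose every case actually computed so far lies below the threshold.
observed = [(a, b) for a in range(10) for b in range(10)]
assert all(plus(a, b) == quus(a, b) for a, b in observed)

# Only an as-yet-uncomputed case would distinguish the two functions:
print(plus(60, 60), quus(60, 60))   # prints: 120 5
```

The finite behaviour alone cannot settle whether the system is “really” running plus or quus; just as, in Feser’s example, nothing in the bare sequence of molecular events settles whether the genome is running the embryo program or the “quembryo” program.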
Now, someone might reply that an error producing a blind human could, conceivably, have adaptive advantages (perhaps his parents are troglodytes who do not need their eyes). Such errors, it is claimed, are how evolution progresses. But actually, it would be improper to talk about “errors” at all, and there would be no explanation for the fact that all kinds of factors in embryology appear to be coordinated with the very aim of producing a human being conforming to some specification – one with eyes, barring the failure of one or more of the controls.
In brief, Feser concludes that Searle is absolutely right to deny that computation, such as “DNA algorithms,” can be said to be more than an artificial construct imposed on nature by human observers. And that entails that whatever science we do in unravelling computational patterns of cause and effect would be no more than the weaving of fictional tales around the disconnected and meaningless processing of bits.
Not many of us, however, have the ascetic capacity of an Alex Rosenberg to eliminate all significance from natural events – and neither does Rosenberg, in practice, for his eliminative claims include human thought too, making the meaning of whatever he thinks and writes on eliminativism, or anything else, illusory as well. In other words, it is self-contradictory.
The way out of this rat-maze is simple, as Feser says: simply give up the attempt to exclude teleology from nature, and especially from living things. He is quick to point out that what he means here is the internal teleology by which organisms have their own aims and goals based on their substantial forms. As always, Feser rejects the Paleyan (or Cartesian) concept of a living creature as a mechanical artifact composed of accidentally connected molecular parts. Instead, like the “target morphology” in the scientific research mentioned in my previous post, the goals of creatures are a global feature of what they are. Remember from my last post how the final form of an animal appears to be “known” both to the organism as a whole and to its single cells.
Thus we are able to identify things like normality and error. Real computation can be allowed to exist in the natural world. Our human embryology can legitimately be viewed in terms of a programmed process with the goal of reproducing after our kind. And hence the congenitally blind baby can justly be viewed as the result of a failure in that program, and so be seen as fully human (not a new type of “Quuman being”) and, perhaps, deserving of remedial treatment, where possible, to correct what is defective.
This acceptance of form and natural teleology (as Thomas Aquinas pointed out) does not preclude the providence of God over even the errors. There are unsighted people who regard their blindness as an advantage – but if so it must be seen as a special and unusual kind of advantage, for in the land of the blind, human survival would be unlikely. Teleology, then, should be seen as a necessary feature of the natural world, not as an indicator that God would make everything turn out perfectly. The mystery of suffering, and of providence, still remains:
As he went along, he saw a man blind from birth. His disciples asked him, “Rabbi, who sinned, this man or his parents, that he was born blind?” “Neither this man nor his parents sinned,” said Jesus, “but this happened so that the works of God might be displayed in him.”
It is simply more natural and rational to include genuine formal and final causation in our study of nature, whether that be simply in the activities of daily life, or in working out the hugely complex computational features of the genome. However, as Feser points out (and Thomas Aquinas before him), teleology does have inevitable theological connotations. It is as difficult to account for teleology apart from God as it is to account for evidence of intentionality without teleology.
In the end, the desire to exclude God seems the only reason for the intellectual contortions required in eliminating teleology, and the far more common intellectual fudges in assuming it in practice whilst denying it in theory.