I’ve commented before on “reading serendipity” – how things one happens to read consecutively bring together disparate ideas one would not have associated otherwise. In this case it started with a C S Lewis essay to which I was pointed by a quotation in an article. The essay in question is Bluspels and Flalansferes, which, like the excellent book Studies in Words, arises from Lewis’s professional life as a philologist.
The basic case is that certain positivist types in Lewis’s day had criticised those who use “rhetorically metaphorical” (ie normal) language for scientific matters as blind guides. “It is impossible,” one of them had written, “thus to handle a scientific matter in metaphorical terms”. But Lewis points out that all language is irretrievably metaphorical. All that “rigorous” scientific language does is to replace one set of metaphors with another. He cites Owen Barfield:
On the contrary, [Barfield] maintained, “those who profess to eschew figurative expressions are really confining themselves to one very old kind of figure” — “they are absolutely rigid under the spell of those verbal ghosts of the physical sciences, which today make up practically the whole meaning-system of so many European minds”.
Lewis goes on to describe how metaphors arise, first in the primitive state of language, and then in explanations of new things. He shows, in fact, that it is often impossible to explain something to others, or even to ourselves, without constructing metaphors relating the new idea to something familiar.
He goes on to discuss what happens when people forget they are using metaphors – in particular the situation in which people think they are talking about something when they are in fact just using an empty metaphor, now drained of meaning, as if it still meant something.
For example, to the primitive founders of language, to use the word “breath” for “spirit” (eg Hebrew “ruach” or Greek “pneuma” or even Latin “spiritus” – think of “spirit of turpentine”) may have been completely, or almost, literal. We now use the word in a “supernatural” sense without even thinking of its derivation – but do we have any better understanding of it than they did? Perhaps we even have less, in that at least their concept was rooted deeply in real, physical life, whilst ours is a matter of nebulous speculation, or even disbelief.
In fact Lewis uses the word “soul” rather than “spirit” in the essay, and its Latin equivalent “anima” – which is another word for wind or breath anyway. And then he goes on to speak of how the wise and scientific have replaced such primitive ideas as “souls” with “literal truth”:
If we turn to those who are most anxious to tell us about the soul—I mean the psychologists—we shall find that the word anima has simply been replaced by complexes, repressions, censors, engrams, and the like. In other words the breath has been exchanged for tyings-up, shovings-back, Roman magistrates, and scratchings.
If we inquire what has replaced the metaphorical bright sky [the etymology of Latin “deus”] of primitive theology, we shall only get a perfect substance, that is, a completely made lying-under, or—which is very much better, but equally metaphorical—a universal Father, or perhaps (in English) a loafcarver, in Latin a householder, in Romance a person older than. The point need not be laboured. It is abundantly clear that the freedom from a given metaphor which we admittedly enjoy in some cases is often only a freedom to choose between that metaphor and others.
Lewis’s surprising conclusion – speaking as a self-confessed rationalist, as he explains – is that although reason is the arbiter of truth, the only way to approach meaning is through metaphor:
Those who have prided themselves on being literal, and who have endeavoured to speak plainly, with no mystical tomfoolery, about the highest abstractions, will be found to be among the least significant of writers: I doubt if we shall find more than a beggarly five per cent of meaning in the pages of some celebrated ‘tough-minded’ thinkers, and how the account of Kant or Spinoza stands, none knows but heaven. But open your Plato, and you will find yourself among the great creators of metaphor, and therefore among the masters of meaning. If we turn to Theology—or rather to the literature of religion—the result will be more surprising still; for unless our whole argument is wrong, we shall have to admit that a man who says heaven and thinks of the visible sky is pretty sure to mean more than a man who tells us that heaven is a state of mind. It may indeed be otherwise; the second man may be a mystic who is remembering and pointing to an actual and concrete experience of his own. But it is long, long odds. Bunyan and Dante stand where they did; the scale of Bishop Butler (and of better men than he) flies up and kicks the beam.
I find that fascinating – especially the idea that “sky” gives a better concept of heaven than something “deeper” – when one considers how much, 80 years after Lewis wrote, the tough-minded, no-nonsense “science is literal truth” approach is insisted upon even in science-faith discussion. One such example is the contrast drawn between the supposed inappropriateness of metaphorical concepts like divine design and “solid science”, such as the “fact of gravity”. Well, that leads to my next reading, which came through the letter box as I read Lewis’s essay, in the form of the Cambridge University Alumni magazine, CAM.
Some of the articles in this termly magazine are really interesting, like the one Simon Conway Morris did a while ago on unexplained emergence in evolution – and of course the even better one I co-authored on the University Folk Club. The one I turned to in this edition was by Peter Taylor Whiffen, on the implications of the discovery of gravitational waves this year. It’s a pretty good overview of the state of the science, and I think you can read it here by scrolling to pages 38-39 and clicking “full-screen”.
But in fact the relevant quote for my purpose here is this, following on the bit about how the new discoveries may open the way to understanding gravity:
Although it was the first force to be described mathematically (by Isaac Newton in 1687), we still do not know how it really works – the best modern description is the general theory of relativity. We know what it does, but not what it is.
Well, that’s true. Thinking along Lewis’s line, the word “gravity” itself just means “heaviness”, which gets us no further than Aristotle’s “heaviness” (baros), except to translate it into Latin. Aristotle uses it as a quality of the natures of, well, heavy things, which directs them towards what is lower. Newtonian physics describes it as a “force”, which being interpreted is a “strength” – which is just to describe heaviness by a less precise word, except that one might look at a black hole and say, “The Force is strong with this one,” with about the same comprehension as when Darth Vader says it about Luke Skywalker. Whereas “heaviness” was what Aristotle’s natures did naturally, gravity as a “force” is what they are forced to do by an invisible… something.
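For the record, the mathematical description that the CAM piece alludes to is Newton’s inverse-square law – a tidy statement of what gravity does to masses which, as the article says, tells us nothing at all about what gravity is:

```latex
% Newton's law of universal gravitation (Principia, 1687):
% the attractive force F between masses m_1 and m_2 a distance r apart,
% G being the gravitational constant.
F = G\,\frac{m_1\, m_2}{r^{2}}
```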
Well, in the nineteenth century, with Faraday’s terminology, gravity could become a “field” – perhaps one with waving corn forming ripples in the breeze – until the luminiferous ether in which such waves might travel was disproven, leaving it as an invisible, intangible field in a vacuum, influencing us at a distance just as, in Newton’s time, the stars were superstitiously thought to do astrologically.
Now, of course, as the article says, we have general relativity, which tells us that gravity may be explained as a “deformation of the space-time continuum”. And, apart from providing an improved model for calculations, that is about as superior to Aristotle’s “heaviness”, for our understanding, as “state of mind” is to “visible sky” as a description of heaven. Do you have any mental concept of a “continuum” (from Latin, a “hanging together”), or how space and time might hang together in such a way, or how such a hanging-together of whatever space and time are could actually be bent?
Now I don’t regard this proliferation of metaphors as a problem in itself, and certainly not as a failure of science. It would certainly be a failure of science’s claim to look intently behind daily experience and find the literal truth about reality – if that were its claim. Replacing “heaviness” with “deformation of continuum” is just replacing an instinctive metaphor with an unimaginable one. But that’s not a criticism, because none of us knows what gravity is, in itself – any more than we know what “matter” or “energy” are – and we probably never will this side of glory, if then.
All that science has described is a set of less intuitive metaphors with predictive potential (especially when they are mathematical metaphors, for those differential equations are indeed metaphors). And that isn’t a problem either, until (as is, sadly, close to universal in our time) science is treated not as a way of making metaphors that model reality in a useful way, but as the definitive description of reality itself. And that description is absurdly believed to be hard and dependable, as opposed to all those vague things that can only be described in metaphorical terms, like souls, or heaven, or God, or design. Yet oddly, the only thing we know by experiencing it directly is “mind”. And that, according to the positivists, is an illusion.
But as Lewis showed back in the 1930s, not only may metaphors assist our hugely limited understanding: they are actually, through the power of human imagination, the only kind of understanding we will ever have even of the perishable world around us, quite apart from things of eternal import. It’s all similarly incomplete knowledge, but it’s all good, until you forget that it’s all also human imagery.
What do you know?
Jon, do I get credit for sending you that quote? =)
There is more to probe here than you know. Read up on “idols of the theatre” in Novum Organum, and its relation to the Platonic cave. One of the great challenges of science is keeping firmly in mind that our way of describing the world is not the world itself.
DNA is “like” language, computer code, etc. Still, it is also very much “unlike” these things. Gravity is “like” a field, but also very “unlike” one too. Reasoning about entities using their description alone always leads us astray, but this is one of the idols to which we are particularly subject as creatures of language.
You certainly get credit for the Lewis citation, Joshua: I didn’t mention your name simply to prevent spilling the beans on unpublished stuff – and I did use the substance of his essay, rather than copy the quote you use.
On your substantive point, I’ll say a bit since it refers to my next post on DNA. You’ll know from my writing on Polanyi etc that I take very seriously the idea that our perception of the universe, even through science, is never the universe itself. I think, though, we must be aware of the distinctions between a “metaphor”, an “analogy” and an “instance”.
To speak of the “arm of the Lord” is a metaphor, because the Bible tells us the Lord has no form, and therefore no arms: it is a metaphor for “strength”. Likewise one could poetically refer to a punching human arm as a “steam-hammer” – another metaphor. That was the subject of the OP – all language is metaphorical. In that sense, gravity was never seriously considered to be anything like a field of wheat – but that lack of reference can be easily forgotten, and the “field” concept reified as if it had a concrete existence.
But to speak of God’s “strength” is itself an analogy for human strength, just as God’s “mind” is analogous in its function, but in nothing else. Analogies overlap with metaphors, but are supposed to have a significant correspondence with what they describe. So analogies (like models, which is what they are) always break down at some point – whereas metaphors don’t, because they were only ever aids to understanding meaning. Thus to call the insect compound eye analogous to the human eye is not metaphorical, but does warn us not to expect exact correspondence.
The third category is instance. An arm may be said to be an instance of a machine, speaking of the strict mechanical definition thereof. One may, and physiologists and anatomists do, think of the arm as a system of levers, and calculate their mechanical advantages and so on exactly as if they were metal rods and weights. Such calculations are essential in the design of, for example, prosthetic limbs, which presuppose by their very existence the truth of the correspondence. The fact that arms are more than levers doesn’t alter that – they are living organs, means of defence, manipulators and much else besides, but they are true mechanical machines as well.
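To make that concrete, here is the sort of toy calculation I mean – the figures are invented for illustration, not taken from any anatomy text – treating the forearm as nothing but a lever pivoting at the elbow:

```python
# Toy statics sketch: the forearm treated purely as a class-3 lever.
# Distances and load are invented for illustration only.
load = 100.0        # newtons held in the hand
load_arm = 0.35     # metres from the elbow joint to the hand
effort_arm = 0.04   # metres from the elbow joint to the biceps insertion

# Balance of moments about the elbow: effort * effort_arm = load * load_arm
biceps_force = load * load_arm / effort_arm
mechanical_advantage = effort_arm / load_arm   # < 1: force traded for speed and reach

print(round(biceps_force, 1))          # 875.0 N of muscle pull to hold 100 N
print(round(mechanical_advantage, 3))  # 0.114
```

Nothing in that sum cares whether the lever is bone and muscle or metal rod and cable – which is precisely the point.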
Now Yockey – and many others – argues in detail why DNA is to be regarded as an instance of a true code, and specifically not an analogy, still less a metaphor. I can’t post pics in comments, but Yockey’s point-by-point correlation between Shannon information and DNA transcription is in the well-known figure from his book here.
Perry Marshall, as a computer engineer, further describes the correspondences between the levels of semantic information in the form of both human language and computer codes, and the multi-level encoding of DNA (largely discovered since Yockey first developed his ideas).
Clearly DNA is as different from computer code as a human arm is from a school mechanics experiment, but nothing I have read (even by you!) has convinced me that, given the definitions of “codes” in the information theory literature, DNA is not an instance of one, rather than an analogy for one.
And the analogy of “metaphor” for it breaks down as soon as it begins to obey the laws of information entropy, and so on. Eugene Koonin, I think, is aware of that more than most, since it is the problem that has exercised him most over many years, as he seeks to account for the code naturalistically and can’t.
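For anyone unsure what is meant by that, here is a minimal sketch – my own illustration, not Yockey’s figure – of the quantity in question, the per-symbol Shannon entropy of a stretch of sequence:

```python
# Minimal sketch: per-symbol Shannon entropy (in bits) of a base sequence.
# The sequence below is made up purely for illustration.
from collections import Counter
from math import log2

def shannon_entropy(sequence: str) -> float:
    """H = -sum(p_i * log2(p_i)) over the symbol frequencies p_i."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * log2(n / total) for n in counts.values())

seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"   # hypothetical fragment
print(round(shannon_entropy(seq), 3))  # never more than 2 bits/symbol for 4 bases
```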
Thanks for the conscientiousness.
Though, keep in mind that everything follows the laws of information entropy. Everything. So is everything a code?
Joshua
There’s a very long thread on Marshall’s site where such objections are dealt with (repeatedly). He seems from the URL mainly to be addressing atheists!
His summary:
* Code is defined as the rules of communication between an encoder (a “writer” or “speaker”) and a decoder (a “reader” or “listener”) using agreed-upon symbols.
* DNA’s definition as a literal code (and not a figurative one) is nearly universal in the entire body of biological literature since the 1960s.
* DNA code has much in common with human language and computer languages.
* DNA transcription is an encoding / decoding mechanism isomorphic with Claude Shannon’s 1948 model: the sequence of base pairs is encoded into messenger RNA, which is decoded into proteins (a toy sketch of this chain follows below).
* Information theory terms and ideas applied to DNA are not metaphorical, but in fact quite literal in every way. In other words, the information theory argument for design is not based on analogy at all. It is direct application of mathematics to DNA, which by definition is a code.
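To put some flesh on that fourth point, here is a toy sketch of the encode/decode chain being claimed – my own illustration, with a deliberately truncated codon table, not Yockey’s or Marshall’s actual figures:

```python
# Toy sketch of the claimed Shannon-style chain: DNA -> mRNA -> protein.
# The codon table is deliberately partial (real cells use all 64 codons),
# and the example sequence is invented for illustration.
CODON_TABLE = {
    "AUG": "Met", "GCC": "Ala", "UUU": "Phe", "UGG": "Trp", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(coding_strand: str) -> str:
    """'Encoding' step, simplified: the coding strand re-expressed in RNA bases (T -> U)."""
    return coding_strand.replace("T", "U")

def translate(mrna: str) -> list:
    """'Decoding' step: read the message three bases (one codon) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

dna = "ATGGCCTTTTGA"                   # hypothetical coding-strand fragment
print(translate(transcribe(dna)))      # ['Met', 'Ala', 'Phe']
```

Whether that chain amounts to an instance of a code or only an analogy for one is, of course, exactly what is in dispute.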
In fact he’s offering megabucks for any example of a natural code, defined on the relevant page, so if everything is a code, one should be able to win the prize with a snowflake or a tornado.
But here’s Garvey’s original contribution: a code can be hijacked by humans to convey a message – any message – it wasn’t originally carrying: intercept a German Enigma message, and you could retransmit it with your mother’s birthday greeting. Get clever with Chinese ideograms and you could use them as English letters for a coded message.
DNA has been used in this way non-genetically: I believe Craig Venter puts some signature or copyright information in his artificial genomes. Others have seriously suggested DNA as the ultra-compact computer memory medium of the future. In theory it could be done, because the system itself is already (like Enigma and Chinese) semiotic.
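By way of illustration only – a made-up two-bits-per-base scheme, nothing to do with Venter’s actual watermarks – the “hijacking” can be sketched in a few lines:

```python
# Toy illustration of repurposing DNA as a storage medium: two bits per base.
# This mapping is invented for the example; it is not Venter's watermark scheme.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(message: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

greeting = "Happy birthday, Mum"       # hypothetical hijacked message
strand = encode(greeting)
print(strand)
print(decode(strand) == greeting)      # True: the message round-trips
```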
But how would one, in that way, send a message on a snowflake, or a tornado, or some other example of natural complexity, whether simply ordered (like the snowflake) or complexly disorganised (like the tornado)?
As far as DNA being usable to store things like messages goes: sure, that’s true, but who cares?
The same is true of a paper and pencil, sand (which microprocessors are made from), and just about anything we can manipulate. If we can manipulate it, and it stays put, and we can read it, it can be used to store information.
Particularly important, the way DNA is used to send messages between humans is totally different from how DNA is used in living systems. All that these examples show is that humans can repurpose things (including DNA, wood, and sand) for the storage of information.
No surprise. It does not tell us anything about whether DNA is used as a code in living systems. Other than as a very weak analogy, it is not a code.
I know Perry very well. He is a nice guy and well intentioned. He is wrong here on DNA and RNA. Not really sure where even to start. We’ve talked at length about it, and I am not even sure he disagrees with me when all is said and done. That challenge is from a long time ago, and his views have shifted (it seems).
I only read Marshall’s book last week, and it’s not him you have to refute, but the sources behind him, of whom Yockey is the most immediate (whom I read years ago). But there are ramifications going throughout the literature, including (as I note in my new post) the fact that Ernst Mayr’s definition of “teleonomy” is based on coded information.
You also need to persuade Sy Garte, who still has authorial status here and who, the last I heard, considers DNA a true code.
It’s not important whether DNA codes and human codes differ – what counts is whether DNA fits the definitions that bring it within information theory – and one needs to show exactly at which point Yockey is (objectively) wrong on that.
I don’t know Marshall (unlike Sy), but if he’s changed his view, it didn’t get into the paperback edition of his book, which came out only this year. He’d also be a bit dishonest to keep his prize for a natural code open, if he no longer believes DNA is one.