# The bell curves! The bell curves!

After the last post it occurs to me that change ringing is quite a clear illustration of the combinatorial problem in evolution. This is, essentially, that the number of combinations involved in any form of genetic evolution based on random variation is so vast that it quickly outstrips the search resources of the universe.

So to ring a complete extent on six bells is quite a nice exercise: 6! = 720 changes, taking around 36 minutes on the assumption that each change lasts three seconds and none is repeated. Add two more bells to make the octave, and suddenly it’s a marathon of some 33 hours – rather gruelling for all concerned. As I said in the previous post, a full extent on twelve bells (the largest peal one usually encounters) has never been rung, because on the same basis it would take over 45 years. Add a thirteenth bell and it’s nearly 600 years. And if St Peter’s in Rome, or somewhere, decided that only a two-octave peal of fifteen bells would match its prestige and sacredness, then were they foolish enough to try to ring the changes they’d still be at it some 124,000 years later.
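The arithmetic is easy to check in a few lines of Python. The three-second change, and fifteen bells for a two-octave diatonic peal, are the assumptions above:

```python
import math

# Time to ring a full extent (every permutation exactly once) on n
# bells, at the post's assumed rate of one change every three seconds.
SECONDS_PER_CHANGE = 3
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def extent_seconds(n_bells: int) -> float:
    """All n! changes at three seconds each."""
    return math.factorial(n_bells) * SECONDS_PER_CHANGE

for n in (6, 8, 12, 13, 15):
    t = extent_seconds(n)
    if t < 2 * 24 * 3600:
        print(f"{n:2d} bells: {t / 3600:.1f} hours")
    else:
        print(f"{n:2d} bells: {t / SECONDS_PER_YEAR:,.1f} years")
```

Each extra bell multiplies the total by the new number of bells, which is why the figures run away so quickly.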

It’s that kind of consideration that leads to questions about the feasibility of evolutionary changes requiring multiple random mutations: selection doesn’t prevent the possibilities having to be reached in the first place, and the search time is exponential. But that is only a problem if one says that mutations are caused “by chance”. And as I’ve said on previous occasions, that’s meaningless, because “chance” doesn’t cause anything.

This view is endorsed by statistician William Briggs, whose blog discusses such things at length. Take the basic coin toss as an example. The cause of a result is not “chance”, but the series of physical manipulations of a coin starting from identical initial conditions and the reading off of the result, which in total might be labelled A. If A produces heads once, it will always do so. The variation comes because in practice, A is never repeated, but becomes a series of slightly different operations A1, A2, A3, … An. The design of the whole system allows only two outcomes, heads or tails, and we select that system because the outcomes are in practice unpredictable and roughly equal. So, Briggs asks:
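The point can be illustrated with a toy model. The physics here is deliberately crude (a coin launched heads-up with speed v and spin ω, caught at launch height, with the parity of completed half-turns deciding the face), and all the numbers are invented for illustration:

```python
import math
import random

G = 9.81  # gravity, m/s^2

def toss(v: float, omega: float) -> str:
    """Deterministic toy coin toss: launched heads-up with upward speed
    v (m/s) and spin rate omega (rad/s), caught at launch height.
    The face shown depends on the parity of completed half-turns."""
    flight_time = 2 * v / G
    half_turns = int(omega * flight_time / math.pi)
    return "heads" if half_turns % 2 == 0 else "tails"

# Operation A, repeated exactly: the same face every time.
assert all(toss(2.5, 120.0) == toss(2.5, 120.0) for _ in range(100))

# Operations A1, A2, ..., An: tiny, uncontrolled variations in the
# launch give a roughly 50/50 mix, though every single toss is caused.
random.seed(0)
results = [toss(2.5 + random.gauss(0, 0.2), 120.0 + random.gauss(0, 10.0))
           for _ in range(10_000)]
print(results.count("heads") / len(results))  # close to 0.5
```

Nothing in the model is uncaused; the near 50/50 split comes entirely from our not controlling (or knowing) the launch conditions precisely.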

What is the real definition of chance? Since probability (also not real) is just a measure of uncertainty, chance is a synonym for unknown cause. Try it in the [following] sentences … and you’ll see that they usually become tautological or nonsensical…

“How can you tell whether this deviation was caused by ~~chance~~ something unknown?” Well, it’s when we don’t know what the cause was. “The difference between two groups is statistically significant if it can not be explained by ~~chance~~ a cause we don’t know about.” Say that three times fast. “A difference among samples that is due to ~~chance~~ an unknown cause is called sampling error.” This one works, except for the unfortunate word error, which implies a mistake has been made.

Briggs adds a postscript to his article:

Update In comments some expressed a habit of saying “by chance”. Break it. To say a thing comes to be (known) by something is to say it is either caused by it, or is part of the cause of it. Webster by: “With, as means, way, process, etc.; through means of; with aid of; through; through the act or agency of” Chance.

If pressed for a cause, admit ignorance up to the point of probability and then say, “But my uncertainty in the thing is expressed by this & such probability.”

Example 1: Something caused the coin to land heads; I don’t know enough to say what; but given the evidence we discussed, the probability of a heads is 1/2.

Example 2: Something caused the observed minor differences between patient groups; I don’t know what; but given the evidence of the study, the probability of a difference is only 1/2.

So to speak of “random variations acted on by natural selection” is to say no more than “variations by causes unknown acted on by natural selection.” That is a theory, but a very partial one (the first half being an admission of the lack of a theory), even before one dilutes the effects of selection by drift or other mechanisms. And certainly it’s a theory without the metaphysical baggage of undirectedness that’s been loaded on to it since Huxley and before. “Once unknown mechanisms create new variations in life-forms, those that survive best reproduce more.” Yeah, right. That’s cool.

But what if we are more careful in our phrasing of the theory? “Variations random with respect to fitness acted on by natural selection.” You’ll perhaps see the problem here: if “chance” simply means “of unknown cause” (which is all it can properly mean), then “variations of unknown cause with respect to fitness”, if it means anything at all, means that you can’t say anything about teleology with respect to fitness: if you could, you would know the causes (which wouldn’t include fitness) and they wouldn’t be random any more.

Briggs gives us an out, of course, to keep the theory of “random variation” vaguely scientific: “But my uncertainty in the [causes of variation] is expressed by this & such probability.” A shame, then, that nobody attempts to give the probabilities except the Wistar Conference mathematicians and ID people like William Dembski. It’s enough to drive you mad.

## About Jon Garvey

Training in medicine (which was my career), social psychology and theology. Interests in most things, but especially the science-faith interface. The rest of my time, though, is spent writing, playing and recording music.

### 17 Responses to The bell curves! The bell curves!

1. Lou Jost says:

1. There are truly random events, according to standard quantum mechanics.
2. In chaotic systems like coin tosses, minute variations in initial conditions can cause large changes in the outcome. The Heisenberg uncertainty principle places fundamental limits on our ability to specify exact initial conditions, so in a chaotic system, the outcomes do have a fundamentally random component as well.
3. “So to speak of “random variations acted on by natural selection” is to say no more than “variations by causes unknown acted on by natural selection.””
That’s an oversimplification. We do know something about how mutations come about (and also how some mutations are repaired). Many (though not all) of the things that cause mutations are also fundamentally random in the quantum mechanical sense. UV light and x-rays are good examples.

Some think quantum-mechanical randomness is due to our ignorance of underlying mechanisms. There are strong arguments against this view in the physics literature. The arguments are much stronger now than they were when Einstein et al wrote their famous EPR paper arguing for the incompleteness of quantum mechanics (“God does not play dice…”).

• Lou Jost says:

Jon, I see your source Briggs is one of those who think QM uncertainties are due to our ignorance. He does not elaborate.

• Jon Garvey says:

Lou, I must disagree on these points.

(1) Quantum mechanics has not shown there are truly random events – it is only the conclusion of those interpreters of QM who (contra Bohm, for example) assume that ignorance of deeper levels of reality entails their absence. The question, as they say, remains open as quantum theory itself has not settled it, which is all Briggs argues. His point (if I remember) is that quantum non-locality requires the whole universe to be excluded as a potential cause of quantum events before that can be settled, unless there were some theoretical proof, which there isn’t.

The fact that there are arguments in the literature for true randomness does not (yet) make it so. Even if it did, biologists would have to demonstrate conclusively that all mutations are solely dependent on such events to claim lack of causation rather than ignorance of causation. That has not, AFAIK, even been attempted.

My other two points (below) argue that quantum indeterminacy is in any case of minor or no relevance to the issues in my post: (a) coin tossing and (b) genetic variation.

(2) It would seem intrinsically unlikely that quantum uncertainty is sufficient to affect a macro-event like a coin flip, so Heisenberg is irrelevant here. In fact, at least a couple of pieces of research suggest that the chaotic element in coin tossing is pretty coarse, and so potentially avoidable: see here and here. The second paper demonstrates the building of a machine that can reliably produce heads, which confirms that the randomness of manual tosses is only epistemic. QED.

(3) Ionising radiation may affect individual nucleotide bases according to quantum uncertainty, making damage to them just as random as those quantum events may or may not be [see (1)], but it remains to be seen if this affects things at the macro-level.

You’ll remember (having read it) the work cited by J Shapiro in his book showing that the actual damage caused by ionising radiation is a prime example of a cellular “natural genetic engineering” stress response, having specific and repeatable features. It is therefore an unwarranted extrapolation to say that any resulting mutations are the result of individual quantum events: it is the whole cell that responds to a total radiation dosage.

In any case, ND theory doesn’t suggest that some mutations may be random, but insists that all mutations are random. The fact that a man may be struck by lightning does not support a theory that all death is random.

• Lou Jost says:

“In any case, ND theory doesn’t suggest that some mutations may be random, but insists that all mutations are random.” Nope – in ND, “random” means “random with respect to whether the particular outcome will be beneficial”. The fact that an evil man was struck by lightning doesn’t mean that lightning is not random with respect to the ethical behavior of its victims. ND randomness doesn’t have to be due to QM, that was not my argument; I argued only that some mutations are QM random. Experiments by Muller in the 1920s or 1930s showed that many radiation-induced mutations are single, localized quantum-mechanical events.

But the main point of my comment was that there are strong arguments that QM randomness is fundamental and not due to ignorance. I’ll go through those arguments here, once and for all, but it will take me some time to gather good accessible non-technical links. It is an issue that keeps coming up, so it’s worth laying out the case for fundamental randomness. We aren’t talking about logical proof, of course, since this is an empirical question, but we can show that Bohmian hidden variables would lead to some serious paradoxes.

• Lou Jost says:

I’m still downloading the coin tossing papers–I am in a place with slow internet. If they show what you say, then the coin tossing is not an instance of truly chaotic behavior.

• Lou Jost says:

Got the Diaconis et al paper you cited. It comes to the opposite conclusion from the one you claim. “For tossed coins, the classical assumptions of independence with probability 1/2 are pretty solid.”
Also, that paper’s calculations just assume the coin is completely classical, hence begging the question of the effect of QM.

The second paper finished downloading while I was writing the above. As in the first paper, it is a theoretical analysis which assumes classical physics. It therefore does not address the question of the role of QM in the motion. Its conclusion is that classical coin tossing is not chaotic, for some initial conditions.

You said about the second paper “The second paper demonstrates the building of a machine that can reliably produce heads, which confirms that the randomness of manual tosses is only epistemic. QED.” I could find no mention of such a machine or any empirical tests of that kind in the paper. It is all done in a computer using classical mechanics. As far as I can tell (and I admit I haven’t read every word), the illustrated machine was used only to establish empirically some of the parameters needed to run the computer simulations.

A more relevant article is an old one I read during graduate school, which showed that billiard balls exhibit QM indeterminate behavior:

American Journal of Physics, February 1967, Volume 35, Issue 2, p. 102
“How Determinate is the ‘Billiard Ball Universe’?”
D. J. Raymond, Department of Physics, Stanford University, Stanford, California

Abstract: It is generally supposed that distinctly quantum-mechanical effects do not show up on the macroscopic level. However, it is shown here that the uncertainty principle imposes a drastic limit on the predictability of the detailed motion of a collection of colliding hard spheres, or “billiard balls.”


• Jon Garvey says:

Lou

The second paper linked is the Diaconis paper. I can’t be responsible for which your computer downloads first (maybe it depends on quantum randomness?). The Diaconis paper’s introduction describes the author’s own coin-tossing machine and, assuming he’s as honest in that description as one expects in a scientific paper (and what would be the point of falsifying a result that simply demonstrates standard physics?), reports that it can be set up to give the same result 100% of the time. Ergo, initial conditions are not affected by the uncertainty principle, even within the parameters of a simple mechanical contrivance.

There are lectures on YouTube in which he also uses this result as the classical physical foundation on which to study the maths and physics.

The conclusion that coin tosses are physically determined is therefore empirically demonstrated, and in the paper, just as I said. The rest of the paper examines the unexpected slight bias in human coin tosses.

The first paper (apart from anything else) introduces a significant chaotic element if the coin bounces on the ground. But that is only epistemic uncertainty. It would be brave to continue to argue for the role of quantum uncertainty in coin tosses in the light of Diaconis’ introduction, but that’s the only point at issue between us in that regard.

• Lou Jost says:

Jon, you’re right, sorry. Yes, a coin precisely tossed in the air without bouncing is predictable. The billiard ball paper I mentioned suggests that it’s the angular uncertainty of bounces that magnifies small uncertainties.
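The amplification mechanism described here can be put in rough numbers. The sketch below just iterates multiplicative growth of angular uncertainty; the factor of ten per bounce and the Heisenberg-scale seed of 1e-30 radians are illustrative assumptions, not figures from Raymond’s paper:

```python
# Each collision or bounce multiplies the uncertainty in the ball's
# direction by roughly L/r (path length between impacts divided by the
# ball's radius); the values here are illustrative assumptions only.
AMPLIFICATION = 10.0           # assumed L/r per bounce
angle_uncertainty = 1e-30      # radians; a Heisenberg-scale seed

bounces = 0
while angle_uncertainty < 1.0:  # until the direction is unpredictable
    angle_uncertainty *= AMPLIFICATION
    bounces += 1

print(bounces)  # 30: a modest number of bounces erases the precision
```

Because the growth is exponential, even an absurdly small initial uncertainty reaches order one after a few dozen bounces.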

• Lou Jost says:

Jon, you say “Quantum mechanics has not shown there are truly random events – it is only the conclusion of those interpreters of QM who (contra Bohm, for example) assume that ignorance of deeper levels of reality entails their absence.”

You can see by the discussion of Bell’s Theorem that you’ve been unfair to quantum physicists. They do not just assume that ignorance of deeper levels of reality entails their absence. On the contrary, they have shown that the mere existence of an extremely broad class of deeper levels of reality would lead to contradictions with experimental results. The only kinds of deeper realities that are consistent with experiment are even more paradoxical and problematic than the indeterminism they are meant to purge.

2. Lou Jost says:

Returning to the reasons why most physicists think that there can be no hidden variables underlying QM (contrary to the seemingly-unconsidered opinion of Jon’s source, Briggs), a good popular summary of the issue is conveniently available in Wikipedia:
http://en.wikipedia.org/wiki/Bell%27s_theorem
The punchline is that local hidden variable theories are incompatible with the predictions of QM and experimental results. It might seem implausible that we could rule out ALL such theories without even having to enumerate them, but this was the genius of John Stewart Bell.
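For anyone following the link, the numerical punchline is compact enough to show here. This sketch assumes the textbook singlet-state correlation E(a, b) = −cos(a − b) and the standard CHSH angle settings; any local hidden-variable theory is bounded by |S| ≤ 2:

```python
import math

def E(a: float, b: float) -> float:
    """Quantum-predicted correlation for analyser angles a, b (radians)
    on a spin singlet pair."""
    return -math.cos(a - b)

# Standard CHSH settings that maximise the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.83, above the local-realist bound of 2
```

Experiments have repeatedly measured values near 2√2, which is what rules out the whole class of local hidden-variable theories at once.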

“Bell’s theorem is the most profound discovery of science.”–Henry Stapp. I agree with that assessment.

I’ll grant that this is still an active subject with lots of differing opinions. But you guys shouldn’t be too heartened by these outlier opinions. Attempts to preserve hidden variables generally involve assumptions that many of you would like even less than indeterminism. For example, one way to allow local hidden variables is to deny the existence of free will.

I should add that in my grad school days I really wanted these hidden variable theories to be true. I corresponded a bit with David Bohm and almost went to London to study under him.

• Jon Garvey says:

Lou, neither I nor Briggs have expressed support for local hidden variables, so I hope your treatment will rule out quantum non-locality, which is the position that I hold and, as far as I can see, he does too (after consideration).

If you can show non-locality is inconsistent with QM, I reckon that’s a Nobel Prize for you. Otherwise you’re working hard to answer a point that hasn’t been made here.

• GD says:

Jon,

An interesting paper that is relevant to your discussion (and also shows the outlook adopted by some atheists) is Josephson and Pallikari-Viras, “Biological Utilisation of Quantum NonLocality,” Foundations of Physics, Vol. 21, pp. 197-207, 1991.

This quote more or less put this in context:

“in Bell’s own words, if nature behaves in accordance with the statistical predictions of quantum mechanics then “there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote”. Experimental results, while not being totally conclusive, are such as to point towards this conclusion being valid. The existence of such remote influences or connections is suggested more directly by experiments on phenomena such as telepathy (the direct connection of one mind with another) and psychokinesis (the direct influence of mind on matter), both of which are examples of so-called psi functioning or psychic phenomena.”

I suggest that telepathy and psychokinesis belong to non-scientific outlooks – even if these invoke many universes, or just “airy fairies” ….. oops is that my lack of English ….???

• Jon Garvey says:

GD

It would seem to be a certain kind of researcher who would drag telepathy and psychokinesis into a paper on quantum non-locality. On the other hand, like fairies, they’re only non-scientific for empirical reasons: they’ve not been caught (and because they’re seen as non-scientific, are not much sought).

The more evidence for non-locality there is – and there have been some interesting results on non-local entanglement in the last few years – the more there seems to be a rationale for psi phenomena, which would justify research. After all, it would be relatively easy compared to building a hadron collider half the size of Europe. And we already have good evidence that biology is quite capable of utilising quantum effects practically, witness the robin’s navigation system.

By contrast, the speculations about the multiverse currently look to lack any hard science that could be employed to provide empirical evidence for a while to come, short of a luckier break in astronomy than occurred last year.

The question is why those interested in the first, like the indefatigable Sheldrake, should have their TED talks censored as “unscientific”, whereas those interested in the second are regarded as in the scientific “mainstream”. That would be a question for the sociologists, I think!

• Lou Jost says:

Jon, you and Briggs both stated the belief that the apparent randomness of QM is due to physicists’ ignorance of the underlying realities. In other words, you claim that hidden variables (which could be gods if you want) exist. I gather from your comment here that you do believe in hidden variables but that they are nonlocal. If you want to say that, great. It will be interesting to follow the consequences of that belief.

I don’t know why you added that last bit about showing that non-locality is inconsistent with QM. I stated the reverse.

3. pngarrison says:

There have been arguments among the electron pushing enzymologists about whether proton tunneling is involved in any enzymatic reactions, and some papers that claim evidence for it. I was not a hard core enough chemist to get into that stuff and I don’t know where it stands now, but it could be a route by which DNA polymerases and other enzymes that modify DNA could have an element of QM uncertainty concerning modification of a specific nucleotide in a single cycle of the enzyme’s activity.

My impression is that radiation damage is normally a relatively minor source of natural mutations. Errors by the main DNA polymerases and especially the repair polymerases, some of which are rather error prone, are the main sources of mutation. I wouldn’t be surprised if QM uncertainty was applicable to these reactions as well. The most common mutations are caused by spontaneous cytosine deamination, which I would guess has exponential decay kinetics – the same as radioactive decay. Maybe Lou knows something about that – I can’t recall seeing any papers dealing with it, but I haven’t gone looking.
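If the guess about deamination kinetics is right, the behaviour is indeed that of radioactive decay: a constant per-site hazard gives exponential survival of unmodified sites. The rate constant and site count below are arbitrary placeholders, not measured values:

```python
import math

RATE = 1e-10           # assumed deaminations per cytosine per hour
N0 = 1_000_000         # cytosine sites tracked

def unmodified(t_hours: float) -> float:
    """Expected sites still unmodified after t hours: N0 * exp(-k*t)."""
    return N0 * math.exp(-RATE * t_hours)

half_life = math.log(2) / RATE  # same form as a radioactive half-life
print(f"half-life ≈ {half_life:.2e} hours")
```

The exponential form follows from any process in which each site has a fixed, memoryless probability per unit time of being hit, whatever the underlying (quantum or classical) cause.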

Vogelstein’s paper a few days ago where they showed a pretty good correlation between number of stem cell divisions and incidence of specific cancer types is consistent with the idea that errors in replication are the main source of carcinogenic mutations. The several fold higher rate of point mutation per generation in males indicates the same for germ line mutations.

• Lou Jost says:

Preston, I agree that replication errors are an important source of mutations. I think those could conceivably have a QM component, but I don’t know….

4. Jon Garvey says:

A nice piece by Ed Feser on the argument against quantum causality here.

Covers many of the issues, and some of the philosophers and scientists, raised over the last year or two here, but I particularly liked the rather obvious point that failure to find one kind of efficient causation leaves the other three kinds of causation completely unaffected: material, formal and final causation are all clearly present in quantum events such as radioactive decay.

I refer interested persons to that thread, since this one has no necessary connection to quantum events and I’ve no real interest in diverting it in that direction.