Learning from uncertainty (according to William Briggs)

Uncertainty – the Soul of Modeling, Probability & Statistics by William Briggs (Springer, 2016). The world really does need a book on the philosophy behind probability, and this is it.

When I was doing medical sciences at university, I was an assiduous student (not that it showed in the exams), attending even 9am lectures six days a week for the first year or so. Accordingly, I did not miss the short series on statistics, knowing how important this is to science. Unfortunately, I found much of it went completely over my head. But I took notes, and after the series ended, pored over them on several evenings to gain insight, but to no avail. I even took the unusual step of writing them up neatly, in case that got the juices flowing. It didn’t.

So when my lazy fellow-medic John, who’d slept in for all the lectures, asked to borrow my notes, I said he was welcome, but that they would be unlikely to help him, being incomprehensible even to me. A few days later he returned them, and to my amazement praised me for explaining statistics so clearly that he had gained an excellent grasp of the subject. Since he went on to be a serious medical researcher with over forty papers to his credit, I suppose it was true. It’s nice to have been equipped to help someone’s career…

Over the years, frequent exposure to probabilities, standard deviations and so on in the medical literature gave me a working familiarity with the science, though I can’t say I ever used it. Perhaps the most positive gain in understanding was in being able to follow the arguments used by wiser heads to show how badly statistical tools were used in much research, even in major studies that were translated into national health policies at great cost. That was scary.

Statistician William Briggs is one of those wiser heads, and I state upfront that his blog helped form the views I have written about quite extensively here on randomness, the limitations of scientific certainty, and so on. All this, and more, is encapsulated in this book, which despite its over-pricing and rather abysmal proof-reading is, in my view, absolutely essential reading for anyone working in the sciences, anyone trusting in the sciences, such as followers of BioLogos and comparable sites, and pretty well anyone who’s interested in what statistical methodologies are used to establish. Since that includes most evolutionary theory, most medical trials, most climate change prediction, most economic planning, most social policy and more, perhaps that should include everyone.

But scientists most of all, because Briggs’s claim that the majority of those using statistical tools make grave and fundamental errors in doing so (even when they accept that “them others” do) is certainly borne out by my experience interacting with people here and at BioLogos. Several working scientists and computer people have expressed incomprehension and disbelief when I’ve said that chance or probability can never be a cause of anything. When one considers that even in Evolutionary Creation, “chance” is more often than not posited as “the cause” of mutations precisely in order to show that they are “natural”, rather than “designed”, it is by no means a trivial or marginal matter. Others fail to appreciate that scientific models are not data, nor do they generate data – which means, as Briggs points out, that unless and until they are calibrated with reference to the real world, they remain only more-or-less useful human inventions.

Briggs’s treatment builds on a basic philosophical understanding of causality (which is Aristotelian), and of logic. Since probability is a branch of logic, and logic is solely about the relationship between propositions, then all probability is conditional on the premises we feed in – not on some absolute reality “out there”. There are no such things as unconditional probabilities.

Probability is also the relationship between sets of propositions, so it too cannot be physical (p69).
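Briggs’s point that probability attaches to premises rather than to objects can be shown with a toy calculation (my own sketch, not an example from the book): the very same event, “the die shows a 6”, receives a different probability under each set of premises we grant.

```python
from fractions import Fraction

# The "probability of a 6" is not a property of the die itself;
# it is fixed entirely by the premises we feed in.
premises = {
    "fair six-sided die": Fraction(1, 6),            # classic symmetry premise
    "die with faces {2,4,6,6,6,6}": Fraction(4, 6),  # different premise, different probability
    "we know this throw will show 6": Fraction(1),   # full information yields certainty
}

for premise, p in premises.items():
    print(f"P(shows 6 | {premise}) = {p}")
```

Change the premise and the probability changes with it – there is no “unconditional” line in the table, which is exactly Briggs’s claim.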

This is fundamental, and (Briggs says from long experience) seldom fully appreciated in scientific work. Yet probability is not subjective, either: once propositions are chosen (by human induction!) the probability relationships between them are entirely objective and logical, and therefore true. Yet they are not “real”:

Mathematical equations are lifeless creatures; they do not “come alive” until they are interpreted, so that probability cannot be an equation. It is a matter of our understanding (ibid.)

Let’s pick out a few snippets from the rest. One of the most surprising conclusions is that, in the end, induction is a more fundamental source of knowledge than deduction:

…people trust mathematicians when they say something is so. But we must never forget that the proofs are true in relation to the premises used. And those premises are true only because of earlier premises in the chain of proof, and so on down to the axioms which everybody believes true conditional on their intuitions (induction). This is what makes for a necessary truth (p21).

This relationship between ontology (what actually is) and epistemology (what we know) really needs to be firmly grasped, together with the realisation that in the end, mathematical statistics notwithstanding, we only have epistemology, and knowledge exists in our fallible minds, not in the world:

If you are certain that “Every proposition is subject to uncertainty” then you speak with forked tongue. Certainty and truth therefore exist. But we must understand that truth resides in our intellects and not in objects themselves, except in the sense of existence. That being so, probability also does not exist physically; it also resides in our intellects and not in things themselves (p2).

Another truth which is relevant to recent discussions in the “origins debate” is that probability does not become meaningless just because one cannot put a mathematical value on it. Here again there is a relationship of probability to the foundational nature of intuition, despite the belief that it is the business of science to overturn our intuitions with hard, counter-intuitive, truths:

… not all probability is, and not all probability should be, quantifiable. Besides the idea of “subjective” probability, which in the next chapter I prove is not viable, there is no way to quantify reasonable doubt. But that, of course, would not bar lawyers and judges under the sway of scientism from inventing some tedious criteria… “Your honor, my opponent’s formula showed a p-value of less than 0.0001, but as you know, the state in Sanity vs. Scientism decided guilt beyond reasonable doubt must have a p-value smaller than 0.00009. Therefore my client is entitled to be acquitted” (p65).

Tedium aside, it would also be true that the apparently scientific and mathematical p-value would certainly produce many miscarriages of justice compared with the jury’s inductive decision. That’s because attempting to quantify the unquantifiable, however strongly one feels that “something must be done”, simply cannot succeed.

I experienced that directly in challenging the concept of “QALY” (“quality-adjusted life years”) in medicine. You cannot, in the nature of things, quantify “quality of life” for people en masse. But I was told that in order to base medical decisions only on firm scientific evidence, such a metric simply had to be devised. Spot the error? Simply the crass scientism of believing that all life-changing decisions can, or should, be made by scientific criteria. Yet the outputs of such models were made into global health policies with which I, as a state health professional, was required to comply, and which I had to inflict on my patients, who were all real people.

There is much more in the book, some of which applies the logical approach to actual statistical theory at a mathematical level only really accessible to those familiar with it – but of course, those sections are particularly valuable in showing scientists how to harden up the findings of real science. For the rest of us, the final chapter, “The Goal of Models”, explains a lot of what is wrong in the use of probability in the real world – including major areas like health policy and climate change – and offers the solution in terms of how statistics should be done to avoid these errors.

Briggs doesn’t mention the population genetics used in modelling past macro-evolution (such as human origins), but it’s hard to think of a science that is more dependent on statistics, and where “verification” depends so much on comparison not with real-world data (which is hard to come by given the sparse fossil record) but with outputs from other models.

The price of these suggested, truly predictive, methodologies, as Briggs readily admits, is more mathematical work instead of dragging statistical tools like p-values off the shelf (“p-values – Die! Die! Die!”), and more importantly a severe reduction in the level of certainty that science appears, but only appears, to give us. The world is a more uncertain place once probability is properly understood:

My experience talking to folks about the predictive methods is that it is a hard-sell. People like the over-certainty provided by classical approaches. Decision making is easy because the software is designed to produce “significance”; and folks don’t like the mental effort and emotional turmoil that comes in being less sure. Those two reasons alone account for how classical statistical methods have become so widespread (p243).
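The over-certainty Briggs describes can be seen in a small simulation (my own illustration, not from the book): run many experiments on pure noise, apply a mechanical “significance” rule, and the software will dutifully manufacture positive findings at roughly the nominal rate – about one in twenty – with no real effect anywhere in sight.

```python
import random

random.seed(1)

def experiment(n_flips=100):
    """Flip a fair coin n_flips times; declare an 'effect detected' if the
    head count strays at least two standard deviations from its mean of 50."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return abs(heads - 50) >= 10  # sd is 5, so this is a ~2-sigma rule

trials = 10_000
false_positives = sum(experiment() for _ in range(trials))
print(f"'Significant' results from pure noise: {false_positives / trials:.1%}")
```

Roughly five percent of these coin-flipping “studies” pass the threshold, and each one would be reported as a finding – easy decisions, as Briggs says, but decisions about nothing.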

But since that over-certainty is nothing but scientism, perhaps it would be no great loss.

Jon Garvey

About Jon Garvey

Training in medicine (which was my career), social psychology and theology. Interests in most things, but especially the science-faith interface. The rest of my time, though, is spent writing, playing and recording music.