William Dembski’s two chapters on Determinism (short) and Contingency and Chance (longer) are useful in delineating ideas often used loosely, and the latter in particular presents some very helpful ideas.
Determinism is examined mainly in physical, rather than theological, terms. It’s useful (as he does) to distinguish local determinism (the universe is set up so that everything in it must be as it is) from global determinism (the universe must be as it is). The theological possibility that physical laws don’t determine reality, but that God’s free actions might, is dealt with only in a passing reference to the Augustinian-Calvinist tradition (and in my view somewhat misleadingly), because it’s the scientific context that interests him most here.
Positing a locally deterministic universe, in the absence of God, inevitably leads to a regress in which the physical nature of the cosmos itself is an unexplained fact. In that sense contingency never goes away – it just gets pushed back to the nature of reality. Of the three possible responses open to materialists – to disengage and say that global contingency is just a brute fact, to try to explain it, or to argue for an underlying global determinacy – the last has always been the most popular from classical times until now.
A surprising number of ancient philosophies argued that every event recurs in an endless cycle (eg the Great Year, revived in Renaissance times), which is directly echoed in the recent (but now discounted) suggestions of an endless cycle of big bangs and big crunches. But the same aim is at work in suggesting inflationary bubble universes and eternal multiverses, and of course the quantum many-worlds interpretation. It is helpful to realise that these are all, in the end, attempts to reclaim a deterministic universe by making all possibilities inevitable – thus eliminating contingency (and especially design, since, remember, information has to do with the elimination of all possibilities except what is actualized).
As D. points out, though, although it makes a “more comfortable setting” for materialism, it is not a conclusion of science but a pure metaphysical prejudice. And, as has been pointed out many times, it actually cuts away the roots of science, since it makes any actual scientific observations and causes arbitrary and local – in some other time or place, any and every other observation might have been made instead. He compares this to solipsism, which similarly avoids all logical contradictions at the cost of making existence meaningless and unintelligible:
Witness: “I saw the defendant with a smoking gun.” Attorney: “No, you saw a cleverly disguised space alien with a smoking gun.”
And so we turn to contingency, which also can be seen in local or global terms. Globally, theism accepts the contingency of the universe as an act of God’s choice. Materialism, however, always feels the pull towards determinism through the infinite multiplication of entities. If, for example, Lawrence Krauss were right in saying the universe arose from a random fluctuation in the quantum vacuum, then presumably it could and would happen again, and again, leading to a deterministic multiverse containing all possibilities.
Despite his theism, though, Dembski demurs from the too-ready acceptance of cosmic fine-tuning arguments for God, on mathematical grounds. I find this interesting as an unexpected example of an IDist being more cautious than TEs, who regard global fine-tuning as a theistic argument free of the taint of “God of the Gaps”, since it refers to events outside the created order.
But as D. reminds us, by definition all we can ever know from science is the conditions inside our world. We have no idea of what, outside the world of our physics, it takes to form a world, and so we can’t know anything about “extra-cosmic” statistics. A fine-tuned fundamental value may be exceedingly unlikely – or it may be absolutely inevitable. We have no way of knowing, and so D. personally regards fine-tuning arguments as no more than “suggestive, as pointers to an underlying intelligent or teleological cause. But I see no way to develop them into rigorous statistical inferences precisely because their possibilities cannot be grounded in any observed process.”
This careful caveat shows something of the confidence he has in statistical arguments within the universe. And this depends partly on taking a different view of contingency and chance from the rather vague concept of “undetermined” used in, say, “free process” theologies of creation. In these, because contingency is seen as a way for “spontaneity” to occur alongside essentially deterministic natural law, yet independently of God’s design, it has to be seen as, in the end, causeless. D. cites an atheist defender of ID arguments, Bradley Monton, as a supporter of this view of absolute chance, or tychism. Even if causelessness is coherent in a theistic universe (or any universe, come to that), it is hard to see what advantage accrues from it either for creation or for God’s glory: “leaving things to chance” is the watchword of poor governance in every human activity, so how can it be wise in God’s?
As I’ve touched upon in other posts, D. takes a much more credible line with contingency when he defines it as “the realization of one possibility to the exclusion of others according to a probability distribution.” He goes on to show how all such events can be seen as ontologically orderly, though epistemologically contingent to us.
So an event like flipping a coin is, scientifically, as deterministic as any other – Laplace’s omniscient demon (or God, of course), by knowing the limitation to two outcomes, the boundary conditions of the system, and the exact forces exerted in any one toss, would know every outcome in advance. We, ignorant of the boundary conditions, see only the tendency towards a 50:50 statistical distribution of results. And in deciding things by a “random” coin toss, we’re simply designing a protocol that will generate one of the two results in a way we can’t predict.
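That epistemic situation – determinism underneath, a stable statistical surface – is easy to simulate. In the minimal sketch below (my own illustration, not Dembski’s), a fixed seed stands in for the boundary conditions the observer cannot see: every toss is settled in advance, yet the 50:50 tendency is all we can detect.

```python
import random

def heads_fraction(n_tosses, seed=0):
    """Fraction of heads in n_tosses simulated fair coin flips.

    The seed plays the role of the boundary conditions: fixed but
    unknown to the observer, it determines every 'random' outcome.
    """
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The fraction of heads drifts towards 0.5 as tosses accumulate.
for n in (10, 1_000, 100_000):
    print(n, heads_fraction(n))
```

Rerun it with the same seed and you get identical “random” results – which is exactly the point: to anyone who cannot see the seed, the sequence is indistinguishable from pure chance.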
It’s possible, he says, to deal mathematically with anomalies in such a “random” distribution. One can predict that there will be episodes in which quite long sequences of coin tosses will show, for example, all heads, or alternating faces, thus deluding one into attributing a wrong pattern to the system.
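That prediction is quantifiable: in n fair tosses the longest run of identical faces is typically around log2(n), so long “suspicious” streaks are not merely possible but expected. A quick sketch (again my own, with an arbitrary seed) checks this:

```python
import math
import random

def longest_run(n_tosses, seed=1):
    """Longest run of identical faces in n simulated fair coin tosses."""
    rng = random.Random(seed)
    best = run = 1
    prev = rng.random() < 0.5
    for _ in range(n_tosses - 1):
        cur = rng.random() < 0.5
        run = run + 1 if cur == prev else 1
        best = max(best, run)
        prev = cur
    return best

# Rule of thumb: the longest run in n tosses is typically near log2(n).
n = 1 << 20  # about a million tosses
print(f"longest run in {n} tosses: {longest_run(n)} (log2(n) = {math.log2(n):.0f})")
```

A run of twenty heads in a million tosses is thus entirely unremarkable – though anyone watching only that stretch would be sorely tempted to infer a biased coin.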
As he says, given enough time you will generate a digital sequence representing Shakespeare’s works. But because you can deal with things mathematically (and not resort to “causelessness”) you can relate the possibilities to the realities of the size and age of the universe: the Shakespeare sequence is too unlikely to happen even in many universes like ours.
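The arithmetic behind that claim is easy to check. Assuming a 27-character alphabet (letters plus space) and a round five million characters for the complete works – both simplifying figures of my own – the probability of hitting one specific sequence can be set against Dembski’s universal probability bound of roughly 10^150 possible physical events in the history of the observable universe:

```python
import math

ALPHABET = 27        # 26 letters plus space (a simplifying assumption)
N_CHARS = 5_000_000  # rough length of Shakespeare's complete works (assumed)

# log10 of the probability of typing one specific N_CHARS-long sequence
log10_p = -N_CHARS * math.log10(ALPHABET)
print(f"P(exact Shakespeare sequence) ~ 10^{log10_p:.0f}")

# Dembski's universal probability bound: about 10^150 events available
# in the whole history of the observable universe.
shortfall = -log10_p - 150
print(f"shortfall: about 10^{shortfall:.0f} too few trials")
```

Even granting every event in our universe as a “trial”, the gap is millions of orders of magnitude; multiplying universes by any number you can write down barely dents it.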
In fact, these things must be considered even in relation to repeated scientific observations, and hence laws. There is ultimately no way to be sure empirically that the whole scientific enterprise has not been built on an aberrant run of results within a different probability distribution (a lucky run of heads, as it were). And that means that science is always inherently probabilistic: D. points out that Hume’s suggestion that a past pattern might not continue into the future was wrong even in assuming that the past pattern of observations had established a statistical relationship:
Indeed, it’s in the nature of probability distributions that they not only permit deviations but also guarantee violent deviations.
D. deals with “fixed” natural laws as special cases of the same thing, in which the probability distribution collapses to “0” or “1”. But mathematically, that is no guarantee in itself that the pattern is real, or will not show exceptions, especially when the law represents, as most do, the statistical aggregate of individual events (eg molecular interactions).
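A toy model (my own sketch, not Dembski’s) makes the aggregate point concrete: each “molecule” below contributes a random ±1, yet the aggregate mean settles ever closer to the lawlike value of zero as the number of particles grows, with fluctuations shrinking roughly as 1/√n but never vanishing entirely.

```python
import random

def aggregate_mean(n_particles, seed=2):
    """Mean of n_particles random +1/-1 contributions.

    The 'law' (mean = 0) is only the statistical aggregate of
    individually contingent events, so fluctuations always remain.
    """
    rng = random.Random(seed)
    return sum(rng.choice((-1, 1)) for _ in range(n_particles)) / n_particles

# The deviation from the lawlike value shrinks as particles multiply.
for n in (100, 10_000, 1_000_000):
    print(n, aggregate_mean(n))
```

At everyday scales the residual fluctuation is far below anything measurable, which is why the law looks exceptionless – but the mathematics never actually delivers a probability of exactly 1.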
He goes on to what I consider one of the most significant arguments in considering divine action in nature, and that is that:
When intelligence acts, it has probabilistic side-effects. Accordingly, chance is no longer sui generis but becomes a probabilistic side effect of intelligence.
He gives the example I have already cited in a previous post of someone writing, intelligently, in English, whose documents will inevitably approximate to a predictable distribution of characters determined by the language. So although design might be revealed by deviation from statistical expectation (he cites the example of Ernest Vincent Wright, who wrote an entire novel omitting the letter “e”), design can also explain the expected probabilities themselves, and that is equally true whether the design is done through “natural law” or through “chance”.
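The character-distribution point can be shown in a few lines. Counting letter frequencies in any ordinary English passage (the sample sentence below is my own) reveals the stable statistical profile that intelligent writing throws off as a side effect – the very profile Wright’s e-less novel deliberately violated.

```python
from collections import Counter

def letter_frequencies(text):
    """Relative frequency of each letter in a text (case-insensitive)."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {c: n / total for c, n in counts.most_common()}

sample = ("Any sufficiently long passage of ordinary English prose "
          "settles toward a characteristic distribution of letters, "
          "whatever the writer intends to say.")
freqs = letter_frequencies(sample)

# The five most frequent letters in this passage, most common first.
print(sorted(freqs, key=freqs.get, reverse=True)[:5])
```

Nobody chooses those frequencies; they simply fall out of choosing to write English – chance statistics as a by-product of an intelligent act, exactly as Dembski describes.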
Of course, one might argue that some chance is associated with the probability distributions of design, whilst some is the result of unintelligent mechanistic processes. But this, he suggests, is an unsustainable position. Materialism always swallows up intelligence into “the motions and modifications of matter.”
On the other hand, from the vantage of even the most generic theism (and I include here deism, process theism, panentheism and pantheism in addition to ethical monotheism), intelligence becomes a fundamental and irreducible feature of reality that has a say in everything. As a consequence, intelligence becomes interwoven throughout the fabric of reality, making it impossible to sever chance from intelligence or rule out that chance is the byproduct of intelligence. So the logic of one’s ultimate metaphysics pushes towards one view or the other, towards chance as devoid of intelligence or towards chance as an expression, albeit indirect, of intelligence. Thus, I would say, the generic theist may regard chance as, in every case, a byproduct of intelligence.
This gives mathematical status to theistic evolutionist David Wilcox’s thoroughly Reformed view that chance is God’s creative signature. But it also renders dubious the idea that one can coherently speak, as a theist, of biological variation being “random with respect to fitness”. There is simply no way to exclude teleology, given that the priorities of the intelligent agent (be that natural teleological law, ultimately willed by God, or external teleology from God himself) are opaque.
Yet of course that does not prevent one from treating, say, a particular species as an outcome of intelligence, ie as information, regardless of the process through which it came, and dealing with it probabilistically. The two aspects are quite separate.
Another consequence of this view of contingency is that it negates the idea that intelligent agents would be constrained to follow underlying probability distributions in their involvement with the world: this is to follow the logic of materialism to put the cart before the horse. Rather, the probability distributions are themselves the products of the free actions of intelligent agents (and remember, “intelligent agents” here can refer to teleological agents in nature as well as external intelligences).
In this way Dembski has given a coherent explanation of contingency at the fundamental physical level – one which I see as key in differentiating a theistic understanding of reality from a materialist one. Given the role assumed for chance in the modern scientific worldview, that is indeed important.