There are probably not many readers here who would fully endorse Brownie McGhee’s dictum that “Blues is Truth”. Whilst you might well agree with me that the blues is a music very expressive of the human condition, you’d probably consider that its truth lies in a fairly restricted sphere. You might even wonder if its practitioners always bare their souls honestly, rather than playing what they know their audiences will pay for. Then again, though Leadbelly said “There never was a white man had the blues”, the music owes its continued popularity, and even existence, to the efforts of a generation of white, British Grammar School kids like Eric Clapton and Brian Jones, who never knew what a mojo was, quite apart from how to get theirs working.
I mention this because scientism, in both its hard and soft forms, similarly trades on the idea that science is unique amongst human activities in being not just useful and worthwhile, or even noble, but “truth” in some absolute sense. Even when the actual weaknesses of actual scientific enterprises are highlighted, this “scientific exceptionalism” is still maintained.
I saw a small example of this in one of the videos included on Rupert Sheldrake’s website, which I was browsing because there’s some interesting stuff there. This one was a discussion at the Hay-on-Wye book festival, in which Sheldrake and others were suggesting various reforms that would benefit the scientific programme. One contributor was a philosopher of science who adapted the famous aphorism about democracy to capture the audience’s attention. “Science,” he said, “is a very bad way of acquiring knowledge … until you consider all the other alternatives.” By way of qualification, he added that science, for all its problems, is self-correcting, to which there was a murmur of assent from the panellists.
Now I don’t want to assert that science is not self-correcting, though there is much even in recent events that shows its self-correction to be far from universal. One could cite the non-reproducibility of a ridiculously high proportion of experiments, or the lack of professional incentive even to attempt replication, or to publish negative results. One could point to the lack of double-blind studies outside of medicine (representing a touching confidence in the lack of theoretical bias in researchers), and in medicine to whatever factor it is that makes the vast majority of published direct drug comparisons favour the products of the companies financing the research. One could add deliberate frauds and a spate of retracted papers, careless or biased peer-review and the big-picture issues like the entrenched defence of ruling paradigms by those with power.
None of these overturns the self-correcting nature of science, because they are all what one would expect to see in any human endeavour. It’s quite legitimate, in contrast, to point to the courage of the scientific community in overturning cherished notions, for example in accepting the Big Bang Theory despite the unwelcomeness, to many, of the theological implications of a non-eternal universe.
But the challenges do require one to embrace the relative nature of Science’s self-correction. Ideally, to try to minimise error one would inquire into what elements in science have yielded to, or resisted, correction, and why. Paradoxically, to take self-correction as an article of faith is the one thing most likely to hinder it – for the price of self-correction, like the price of freedom, is eternal vigilance. But then, in all human endeavour the correctness of what is known is only known because it has yet to be corrected!
The aspect of this “believism” that I want to examine here is the idea that the self-correction of science is unique, and so sets apart the reliability of scientific knowledge from all other epistemological sources. The murmur of approval from Sheldrake’s panel suggested an uncritical acceptance of that as truth. But in fact, it takes little effort to show that virtually every human pursuit, done well, relies on self-correction. It takes very little more to show that a false belief in the infallibility of one’s mechanisms for ensuring truth is the surest way to bigotry and even tyranny.
Take, for example, genealogy, which pngarrison and I have mentioned here recently. Apart, perhaps, from recent DNA studies, genealogy is not a science, but a rather nicely constrained form of history. Unlike most history, one aims at true/false answers from the records, or from witnesses. In that way it is comparable to science in answering only the easier kind of questions. In my own family history study, there was a point at which I was set an excellent example by an American third cousin of my wife’s, whose own research was marked by obsessional referencing of every piece of data. I adopted the practice myself, as a result of which it’s possible not only to correct my mistakes, but to see how they arose.
“Proper” history is far more dependent on interpretation of sources, and so famously prone to bias. The questions it asks are about meaning, and so intrinsically more difficult than those of genealogy or science. To me Edward VI was an enlightened benefactor of my school. To Evangelical historians he was the King Josiah of the English Reformation. To my (Catholic) Oxford History of Britain he was “the Boy Bigot”. But those biases, like those of every other academic discipline, are subject to the scrutiny of referenced sources, enabling one to check factuality, assess partiality and draw one’s own conclusions on the truth.
The usual “skeptical” claim on this matter is that the worst offender (as in everything, of course) is religion, which unlike science never corrects itself but takes everything on authority. On any measure this is nonsense. Liberal theology, for one, spends its whole time correcting the errors of every previous generation on the basis of reason, philosophy, and even science itself (sadly sometimes in last year’s uncorrected version). For conservative scholars, the Bible is indeed regarded as a final source of authority – the theological counterpart to science’s natural world, the object whose truth is being pursued. But the perception of that truth is subject to rigorous self-examination.
Any well-taught preacher questions the biblical text every week to ask what it is really teaching, as opposed to what he’s always assumed, or what he planned to say: I myself seldom prepare a passage for the pulpit without realising I took too much for granted last time around. In theology itself, every tradition has some version of principles like the “Wesleyan Quadrilateral”, in which Scripture is primary, but understood through tradition, reason and experience (and here I indulge in some correction of my own, for the Quadrilateral is often nowadays misrepresented as allotting equal weight to all four).
Historically, what else but critical self-correction was involved in the great Ecumenical Councils, in which hundreds of church leaders from across the world met to hammer out Scripture’s true meaning when, in some cases, almost the whole Church had lapsed into error? One iota of difference is crucial when the iota in question makes “homoiousios” out of “homoousios”, as it did in the Arian controversy. The Arians had the Emperor on their side – the reformers only the deep teaching of Scripture. The Church corrected itself.
But actually, every skilled community engages in self-correction, and always has done. The traditions of herbalists, goldsmiths, stonemasons and iron-forgers have only developed as corrections and improvements were made to existing techniques. Science is only up there among the best of them because of its foundation of good recording and open publication (once you get past the pay-wall) together with its ethic, never under more threat than now, of honesty and self-criticism. In these it is prominent but not unique, and the ability of science to right its errors depends less on its methodology than on its traditional culture and the individual character of scientists – which are not scientifically-founded issues at all.
Likewise it is human failure more than the actual discipline in question that entrenches error and resistance to correction. In theology, get yourself an infallible Pope, or the Assured Results of Critical Scholarship, or reliance on popular writers with insufficient historical perspective, and you’re in potential trouble.
Turn your trade guild from a guarantee of skilled best practice to a repository of privilege, and products and prices are likely to get worse rather than better. And of course, in genealogy if your guiding principle is that you’re descended from Richard III your references will simply reflect the blinkered direction of your research.
But what about science? The criticisms it currently faces show that it has many potential points of attack for the commonest pitfalls. It attracts a huge amount of money. It is now closely allied to, and supported by, government (unlike religion) and financial corporations. It has great societal power, which can devolve to individuals who either deserve it, or who manage to gain it anyway. And most dangerous of all, there is a widespread assumption that it is infallible, which makes it a good tool in the hands of those who want to be shown infallible themselves.
Another link on Rupert Sheldrake’s website is to a piece by philosopher Ted Dace about the vigorous campaign to discredit Sheldrake as a scientist. He sums up my last point on the danger of assuming science’s infallibility in a couple of somewhat polemical sentences:
Rather than admit to their credulous commitment to the metaphysics of mechanistic reductionism and their fear and trembling in the face of real science, pseudo-skeptics cultivate the delusion that they are its foremost defenders. By narcissistically identifying themselves with science, they imagine that anything at odds with their own belief system is therefore contrary to science [that seems oddly familiar – JG]. Much like a cult, they reinforce each other’s confusion and sense of righteousness in the face of an implacable and unreasoning enemy, all the while imagining their efforts at maintaining collective self-satisfaction amount to some kind of noble undertaking.
Come to think of it, the Inquisition was someone’s idea of a means of correction, too.
This was a tongue-in-cheek blog post from a few years ago.
Is NIH a cult?
http://www.michaeleisen.org/blog/?p=1217
Preston
It is a cult, but a self-correcting one.