More on the human limitations of science (especially regarding politics)

My attention was drawn to an important, but rather predictably neglected, 2004 article, "How science makes environmental controversies worse," by Daniel Sarewitz (Environmental Science & Policy 7 (2004) 385–403). It's essential reading.

The article has a lot to say regarding the climate change controversy, by a writer who has contributed to US government reports in that field, but it takes a much broader look at the way that scientific research informs – or more often confuses – the politics of environmental issues, and why that should be.

He begins his demonstration with an example that, scientifically speaking, ought to be a simple matter of counting beans, but which in the end had to be resolved by purely political means: the 2000 US election result in Florida. The closeness of the count between Bush and Gore, on which hinged the presidency, would at first sight seem to have been an elementary problem for science to settle. Sarewitz shows why it did not, why a political solution was necessary and effective, and why permitting more science would probably have made the problem worse.

Because Sarewitz is fully cognizant of issues in both the philosophy and sociology of science, the article actually overlaps with a good number of such issues I’ve dealt with over the years. One is the fact that science can never, ever, be divorced from its human commitments to politics, religion, ideology, metaphysics and so on: the pretence of its objectivity is a myth.

Another is the matter of the inherent uncertainty of science, which, counter-intuitively, tends to increase rather than decrease the more research is done by more people in more disciplines. This escalation of science is, of course, precisely what happens in any matter that becomes politically controversial. At one point he includes a diagram showing how the higher the political stakes, the more scientific and institutional players there are, and the more uncertainty will result:

[W]hen the costs and benefits associated with action on a controversy begin to emerge and implicate a variety of interests, both political and scientific scrutiny of the problem will increase, as will sources of uncertainty, as shown by the climate sensitivity and nuclear waste cases. Moreover, when political controversy exists, the whole idea of “reducing uncertainty” through more research is incoherent because there will never be a single problem for which a single, optimizable research strategy or solution path can be identified, let alone characterized through a single approach to determining uncertainty. Instead, there will be many different problems defined in terms of many competing value frameworks and studied via many disciplinary approaches…

By a number of examples and arguments, Sarewitz incorporates, but goes far beyond, the usual simplistic level of debate about “poor science,” political bias, financial interests and so on to show that uncertainty and disagreement are inherent to the way science itself is organised, and the higher the stakes, the more this is manifest. The multiplicity of disciplines within science is one such factor:

My point is not to excuse conscious manipulation of facts, or to deny that some research simply is not of a very good quality. But the elimination of these two problems would have little if any effect on the phenomenon I am describing. The problem is not “good” versus “bad” but “ours” versus “theirs.”

His argument casts all kinds of light on a whole range of phenomena in the discussions of scientific matters I’ve seen over the years, from the exaggerated claims about a professional consensus on the catastrophic results of anthropogenic climate change, to the ubiquitous accusations in biological discussions that “X simply doesn’t understand evolution” (usually made against those with expertise in a different branch of biology from the accuser’s own).

One simple reason he gives why “more science” doesn’t by any means necessarily mean “less uncertainty” I explored in a general way a number of years ago. The idea that science is a body of knowledge that grows incrementally (and thus progressively becomes more clearly defined and complete) is a fiction. In fact, given the richness of the phenomena nature “throws at us,” science is more like a cloud of separate asteroids in three-dimensional space, whose individual characteristics often cannot be related to each other at all. A classic case is the incompatibility of quantum theory with relativity: our author deals in detail with more specific environmental examples.

He labels this phenomenon, particularly in the multi-disciplinary context, “excess of objectivity”:

As an explanation for the complexity of science in the political decision making process, the “excess of objectivity” argument views science as extracting from nature innumerable facts from which different pictures of reality can be assembled, depending in part on the social, institutional, or political context within which those doing the assembling are operating. This is more than a matter of selective use of facts to support a pre-existing position. The point is that, when cause-and-effect relations are not simple or well-established, all uses of facts are selective. Since there is no way to “add up” all the facts relevant to a complex problem like global change to yield a “complete” picture of “the problem,” choices must be made. Particular sets of facts may stand out as particularly compelling, coherent, and useful in the context of one set of values and interests, yet in another appear irrelevant to the point of triviality.

But his analysis of the problems inherent in science, with particular regard to environmental policy decisions, goes beyond this to the close connections between the way science is organised and the way human action is organised. For example, he describes the controversy between oceanographers proposing a new way of estimating total global warming called ATOC, and biologists fearing that the sonic impulses used could harm marine animals:

The benefits of performing ATOC, as understood and articulated by physical oceanographers, had no bearing on the well-being of marine mammals, as understood by biologists. To put it bluntly, but perhaps not too simplistically, oceanographers’ values were represented by the conduct and outputs of oceanography; biologists’ values were not.

Could scientific orientation be related to the values that one holds? Science divides up the environment partly by disciplinary orientations that are characterized by particular methods, hypotheses, standards of proof, subjects of interest, etc. My point is certainly not that disciplines are associated with monolithic worldviews and value systems. But, while some see a grand unification of all knowledge as an inevitable product of scientific advance, thus far the growth of disciplinary scientific methods and bodies of knowledge results in an increasing disunity that translates into a multitude of different yet equally legitimate scientific lenses for understanding and interpreting nature.

And so he goes on to draw a conclusion that severely relativises the polemics on all sides, including Greta Thunberg’s “Just listen to the scientists,” alarmists’ mantra that “denialists are funded by big oil,” and the latter’s suspicions of a globalist socialist agenda amongst alarmists. Even if connections like this exist (and they do),

This alignment of disciplinary perspective and worldly interests is critically important in understanding environmental controversies, because it shows that stripping out conflicts of interest and ideological commitments to look at “what the science is really telling us” can be a meaningless exercise.

Even the most apparently apolitical, disinterested scientist may, by virtue of disciplinary orientation, view the world in a way that is more amenable to some value systems than others. That is, disciplinary perspective itself can be viewed as a sort of conflict of interest that can never be evaded. In cases such as the Mexican corn controversy, it might be most accurate to look at the scientific debate not as tainted by values and interests, but as an explicit—if arcane—negotiation of the conflict between competing values and interests embodied by competing disciplines.

These problems are in part a reflection of the diversity of human values and interests, but they also reflect the richness of nature, and the consequent incapacity of science (at least in this stage of its evolution) to develop a coherent, unified picture of “the environment” that all can agree on. This lack of coherence goes by the name of “uncertainty.”

Sarewitz writes a whole section on this question of scientific uncertainty, and the reluctance of the scientific community even to define it, let alone to grapple with it. He gives an account of a government paper to which he himself contributed, on the uncertainties associated with climate change prediction. He describes how, after various stages of peer review, rewriting and editing:

In the final report all discussion of uncertainty was removed. Even the word “uncertainty” was stripped from the title… The multiplicities of meaning and use of the word “uncertainty” remain (unacknowledged) in the report, as does the promise, both explicit and implicit, that more research, and better models, will reduce uncertainties [against which proposition Sarewitz has been arguing at length in the current article]. Absent, however, is any discussion of these issues.

There is much more, including highly significant historical examples, which you should read. But his solution, if I may oversimplify and paraphrase, is the opposite of the technocratic idea that politics should become more objective through greater reliance on science. Rather, he suggests that, by fully recognising and declaring the human biases and commitments of scientists upfront, environmental issues such as global warming policies may be better resolved by emphasising ideological commitments and working through them politically, rather than obscuring them by waving the “Science” banner.

His conclusion makes much sense to me:

Any political decision (indeed, any decision) is guided by expectations of the future. Such expectations can in turn be less or more informed by technical knowledge, but the capacity of such knowledge to yield an accurate and coherent picture of future outcomes is very limited indeed. Ultimately, most important decisions in the real world are made with a high degree of uncertainty, but are justified by a high level of commitment to a set of goals and values.

Such past political acts as the passage of civil rights legislation, the reform of the US welfare system, or the decision to invade Iraq were not taken on the basis of predictive accuracy or scientific justifications about what the future would look like, but on the basis of convictions about what the future should look like, informed by plausible expectations of what the future could look like.

People can only make sense of the world by finding ways to reconcile their beliefs with some set of facts about how reality must operate. So politics can isolate values from facts no more than science can isolate facts from values. The nature of this interaction has been a central subject of science studies scholarship.

From these brief discussions I hope to have made clear that there is no reason why environmental controversies must be highly “scientized.” Even if science brings such a controversy into focus (for example, by documenting a rise in atmospheric greenhouse gases), the controversy itself exists only because conflict over values and interests also exists. Bringing the value disputes concealed by—and embodied in—science into the foreground of political process is likely to be a crucial factor in turning such controversies into successful democratic action, and perhaps as well for stimulating the evolution of new values that reflect the global environmental context in which humanity now finds itself.

Science does not thereby disappear from the scene, of course, but it takes its rightful place as one among a plurality of cultural factors that help determine how people frame a particular problem or position—it is a part of the cognitive ether, and the claim to special authority vanishes.

There’s the rub, of course: is either society or the scientific priesthood ready to relinquish science’s claim to special authority, which is actually a religious claim? I venture to suggest that it will only do so by finding, or re-committing to, another religion.

Jon Garvey
