There is still no substantive response to the evidence I gave at Peaceful Science for the misrepresentation of walrus deaths in a David Attenborough film. But “T_aquaticus” has taken it upon himself to echo the “climate denialist” insinuations of others on the thread (I have not denied climate change there), and to sigh in that exasperated tone scientistic types always adopt when they think they are dealing with people who read as little as they do. He writes: “I’m guessing that no amount of information is going to budge you?”
Well, if I accept the charge that I don’t buy drastic anthropogenic global-warming predictions (though he has insufficient evidence for that from what I’ve written at Peaceful Science – one reason I’m posting this here: it shows how little evidence actually matters to these guys), the obvious answer is that it is the quality, not the quantity, of information that “budges” intelligent minds.
In this case, the kind of information I would need includes whatever would show me that all I have learned about scientific computer modelling over the last fifty years is incorrect, since the modelling offered in support of global warming appears to contradict both my scientific training and what I have studied for myself.
Strictly speaking, a scientific law is itself a mathematical model. Knowing in the abstract that E = mc^2 enables you to predict many real events accurately. But “modelling” as it is usually understood is performed where situations are too complex for direct mathematical prediction: where there are too many independent variables or, particularly, where processes are chaotic (the weather, for example). In that case one constructs a model algorithm with simplified assumptions, runs it on a computer, and sees whether it predicts what actually happens. The old adage is that all models are wrong (because they ignore many bits of reality), but some models are useful (because they work anyway).
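That validate-against-observation loop can be put in toy form. Everything below is invented for illustration: a coffee-cup “reality” following Newton’s law of cooling, and a crudely simplified straight-line “model” that is useful within its design range and nonsense outside it.

```python
import math
import random

random.seed(1)

# Hypothetical "reality": a cooling cup of coffee that in truth follows
# Newton's law of cooling, plus measurement noise.
def reality(t, ambient=20.0, start=90.0, k=0.05):
    return ambient + (start - ambient) * math.exp(-k * t) + random.gauss(0.0, 0.3)

# Simplified model: assume a straight-line temperature drop (wrong, but perhaps useful).
def linear_model(t):
    return 90.0 - 2.75 * t  # slope picked by eye from early observations

# Validation: compare model output with observations over the range it was built for.
errors = [abs(linear_model(t) - reality(t)) for t in range(10)]
mean_error = sum(errors) / len(errors)
print(f"mean error over the first 10 minutes: {mean_error:.2f} degC")

# Outside its design range the simplification fails badly: the straight line
# eventually predicts temperatures below ambient, which never happens.
print(f"model at t=60: {linear_model(60):.1f} degC (reality never drops below 20)")
```

Within the first ten minutes the straight line is wrong but useful; extrapolated to an hour it predicts a coffee colder than the room – “all models are wrong, some are useful”, and only within the situation they were designed for.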
In engineering it’s relatively easy: make your aerofoil according to the model, and if in a wind tunnel it performs as predicted, the model may well be right, so you keep using it. If it doesn’t perform as expected, you change the model, perhaps by adjusting some assumption, or perhaps by recognising new theoretical issues. You don’t redesign the wind-tunnel.
But it’s important to appreciate that models are all about utility, not truth. The classic joke about predicting milk yields from a model that assumes cows are spheres operating in a vacuum conceals the truth that such a model might actually be very good, so long as it makes useful predictions in the situation it was designed for. But woe betide you if you use it to design cowsheds.
Computer models are used to predict the future of the solar system because, with multiple bodies, the system Newton’s laws describe behaves chaotically and cannot be predicted analytically. So you build the model around your best astronomical data and Newton’s law of gravity, run the simulation many, many times, and predict that there is only a 2% chance of the solar system becoming unstable before the sun dies. The problem is that you can only validate the model against actual outcomes (data) over a scale of a few years; and whilst that is good enough to fly a space mission lasting, say, twenty years, the model may become wildly inaccurate over cosmic time-scales, and there is no way of knowing. In other words, models never generate data: they process data, and must be modified by data. Them’s the rules.
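The many-runs logic can be sketched in miniature – not a real N-body integrator, just a random walk standing in for chaotic drift, with every number invented for illustration:

```python
import random

random.seed(0)

RUNS = 2000      # number of simulated "histories"
STEPS = 500      # stands in for many orbital periods
THRESHOLD = 5.0  # beyond this the run is declared unstable (arbitrary toy value)

unstable = 0
for _ in range(RUNS):
    x = 1.0  # some orbit-describing quantity, nudged a little each step
    for _ in range(STEPS):
        x += random.gauss(0.0, 0.15)
        if abs(x) > THRESHOLD:
            unstable += 1
            break

print(f"estimated probability of instability: {unstable / RUNS:.1%}")
```

The percentage the simulation prints depends entirely on the assumptions fed in – change the step size or the threshold and the “probability” changes with it, which is exactly why such a figure can never be checked against outcomes on cosmic time-scales.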
Apart from the effects of chaos, your model will also be limited by the very fact that it is algorithmic, whereas reality is not, and that your computer is at root digital and approximates each and every calculation. But there may also be unknown factors – Newton’s laws might simply be inadequate at long time-scales.
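The sensitivity to digital approximation is easy to demonstrate with the textbook chaotic logistic map: two trajectories differing by one part in 10^15 – roughly the rounding error of a double-precision float – part company completely within a few dozen steps.

```python
# Two runs of the chaotic logistic map x -> r*x*(1-x), identical except for a
# perturbation in the fifteenth decimal place.
r = 4.0
a, b = 0.3, 0.3 + 1e-15

max_gap = 0.0
for step in range(60):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    max_gap = max(max_gap, abs(a - b))

print(f"largest divergence within 60 steps: {max_gap:.3f}")
```

The gap roughly doubles each step, so an error at the limit of machine precision grows to order one within about fifty iterations – in a chaotic system, the computer’s unavoidable rounding is itself a source of model failure.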
For example, there is some interesting recent work suggesting that the simple numerical relationships between the orbital periods of some bodies in the solar system are not coincidental. Such relationships turn out to be very common in exo-planet systems, and one must assume some kind of resonance phenomenon exists, which I don’t think is yet fully understood. If that is true, and it is not included in one’s model, all your long-term projections might be entirely wrong, yet you would have no way of knowing. Your simulations showing a 98% chance of the system surviving for ever may, once the new “law” is factored in, turn out to mean a 98% chance of solar-system breakdown within the next few thousand years.
Having established those principles, let’s look at climate. The present climate models were developed half a century ago, ignoring some important factors known even then (like the role of the sun!), and some discovered since, that might influence climate. When David Roberts claims that climate change is “simple, serious, and solvable” it is hard to take him seriously on his very first heading. That climate scientists don’t call him out on it is itself a mystery.
Not one of these models (despite the simplicity of the problem!) has predicted the temperature record actually seen in the data since 1990 – notably, the flattening of temperatures over the last twenty years or so – nor even the pre-industrial indications of previous climate changes. Scientifically, then, I would ask for information explaining why that data does not simply prove model failure, as it would in any other science I know.
In practice, the IPCC has always retrospectively altered the data, in all manner of disparate ways, always resulting in an uncanny match to the existing models. This is not only putting the cart before the horse, but contrary to how models can validly operate. The original models were, or should have been, built on the dataset as it was actually gathered. If they then failed to predict the extension of that dataset, because the data-gathering was allegedly erroneous in some way, a new model should have been developed from the revised dataset and revalidated by new data. Instead, the models have been assumed right – and it is tempting to believe the data has been deliberately adjusted to fit them, because no model is that robust. But hey – what do I know? I’m just a medic, not a research scientist.
A second kind of information I need concerns the fact that the IPCC uses (I think) 73 different models – which accounts for those spaghetti-like diagrams of the warming trend in the literature. The actual reason for this multiplicity is clearly political: each contributing nation has its own model, and the IPCC, being a UN project, cannot be seen to prefer one over another.
This, however, leads to obvious nonsense once one applies the concept of models outlined above. At most one of the 73 models can be validated by the data: the other 72 will inevitably match it less well, and are therefore wrong. Good science would jettison the 72 and retain only the best for the future – and could have done so back in the 1990s, had any of the models performed well. For example, the IPCC’s “fix” for the temperature “pause” squeezes the data until it is in the same ball-park as the lowest-running of the models. Yet for some reason they have used the pessimistic models – thereby proven erroneous – to make their latest predictions, when these should have been ditched altogether. Why, please?
In fact, there is no intrinsic reason why even one model should be right: even the massaged data actually fits none of the models that closely, and the raw data simply contradicts the whole set. I want information as to how that can be good science.
Related to that is the even more absurd practice of producing a line of prediction based on an average of the models the IPCC employs. Scientifically, at best one model can be correct and the others are inevitably wrong, so taking an average can produce nothing but a wrong prediction. Imagine you run an investment company where most of your staff (like most real investment companies!) make predictions little better than chance, but one man appears clairvoyant, because every stock-market call he has ever made has proven right. Why on earth would you then make your investment decisions on a ballot of the whole staff? You’d sack them and double the clairvoyant’s pay, wouldn’t you?
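The investment-company point can be made concrete with a toy: if one model happens to track the data and the rest run systematically hot, the multi-model mean inherits the others’ bias and necessarily scores worse than the best model. All the numbers below are invented for illustration.

```python
# "Observations": a hypothetical warming trend (arbitrary units).
years = list(range(30))
observed = [0.01 * t for t in years]

# Three hypothetical models: one matches the data, two run hot.
models = {
    "model_a": [0.01 * t for t in years],  # tracks the observations
    "model_b": [0.02 * t for t in years],  # runs hot
    "model_c": [0.03 * t for t in years],  # runs hotter
}

# The multi-model mean, as an ensemble average would compute it.
ensemble_mean = [sum(m[t] for m in models.values()) / len(models) for t in years]

def rmse(pred):
    """Root-mean-square error of a prediction series against the observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, observed)) / len(observed)) ** 0.5

for name, series in models.items():
    print(f"{name}: RMSE {rmse(series):.3f}")
print(f"ensemble mean: RMSE {rmse(ensemble_mean):.3f}")
```

The best model fits the data exactly; the ensemble mean does not – which is the whole complaint about averaging one possibly-right model with many wrong ones.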
The last piece of information I’m after (which leads into some of the other, non-modelling, problems I have with the climate-change agenda) is how it is that the scientific sections of the IPCC reports have, over the years, consistently down-graded the predicted range of temperature change – which essentially means affirming a lower sensitivity of the climate to CO2 than was first anticipated.
The scientist-compilers have, correctly, based this on the emerging new data – though they have not, as I discussed above, ditched the failed models and re-addressed the theory to include the newer evidence that CO2 feedback at current levels has a negative, damping, effect rather than the positive feedback leading to the kind of runaway “tipping point” with which we are threatened. (Such a runaway has never occurred on Earth, even when atmospheric CO2 was four or five times present levels, as it has been through most of the time since the Cambrian.) But though, as I say, the IPCC scientists’ predictions of temperature rise have become progressively more modest with each report, the final policy summary produced by the IPCC’s political leadership has, conversely, become progressively more alarming.
I assume it is this that has led to the recent flurry of “climate emergencies” and the like, with the latest conference – and the scariest report yet – upcoming. So I guess the final bit of information I’d like from people like T_aquaticus is how the models can say one thing to scientists (in the little-read sections of the reports), but something much worse to those informing politicians and taxpayers of their multi-trillion-dollar planet-saving duties.
It seems that “listen to the science” actually means ignoring the data, ignoring even the IPCC scientists who alter the data on dubious grounds, and instead listening to the politicians who produce the policy report… or even to the ideological activists who exaggerate the IPCC politicians’ words.
I need to add that amongst these ideologues I am forced to include those claiming to support science, who can nevertheless only parrot memes about “denialists taking oil money” rather than actually exploring from public sources (as I have) where big oil puts its money. Funnily enough, that information has budged me significantly, though not in the direction some people would prefer.