As Australia now has a carbon tax it is worth reviewing just what this policy is supposed to be based on.
The proverbial person in the street may think they know the answer already – that solid scientific projections show that temperatures will increase sharply unless we reduce emissions worldwide. Those same people may have a vague image of scientists plugging a few numbers into a well-established computer system that then produces the frightening numbers that keep getting into the news.
In fact, the temperature projections and the various frightful consequences we often hear about are the result of four interlinked forecasting systems of immense complexity that only partially depend on science. In production order these are:
Emission forecasting systems - These involve forecasting economic growth, combined with guesses about technological innovation and population trends decades into the future, to estimate levels of future emissions.
Concentration projections - Another set of models, simple compared with the others, turns those emissions scenarios for greenhouse gases into concentrations in the atmosphere.
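To illustrate that second tier, here is a deliberately minimal sketch of turning an emissions path into a concentration path. The airborne fraction, the emissions scenario and the starting concentration are all assumed numbers for illustration; real carbon-cycle models are more elaborate, though, as noted, still simple by climate-model standards. (The conversion of roughly 2.13 GtC per ppm of atmospheric CO2 is a standard figure.)

```python
# Illustrative only: a toy emissions-to-concentration conversion,
# assuming a constant airborne fraction. Real carbon-cycle models
# are more elaborate than this.

GTC_PER_PPM = 2.13          # approx. GtC equivalent to 1 ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.45    # assumed share of emissions that stays in the air

def project_concentration(start_ppm, annual_emissions_gtc):
    """Accumulate annual emissions (GtC) into a CO2 concentration path (ppm)."""
    ppm = start_ppm
    path = []
    for emissions in annual_emissions_gtc:
        ppm += AIRBORNE_FRACTION * emissions / GTC_PER_PPM
        path.append(round(ppm, 2))
    return path

# A hypothetical scenario: flat emissions of 10 GtC/yr for five years from 400 ppm.
print(project_concentration(400.0, [10.0] * 5))
```

Even in this toy form the key point is visible: the concentration path is driven entirely by the emissions scenario fed into it, so errors in the first tier flow straight through.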
Temperature forecasting - The most discussed and most complex part of the chain, these systems turn the concentration scenarios into temperature scenarios.
Damage forecasting - This disparate collection of systems is used to forecast all sorts of dire results, including changes in rainfall patterns, increases in sea levels, and so on. One part of the work in this area is to make dollar estimates of the damage to be caused by climate change and then discount them back to present-day values, to see whether there is an economic case for limiting emissions now.
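The discounting step at the end of that chain can be sketched in a few lines. The damage figure, the horizon and the discount rates below are all hypothetical; the point of the sketch is how strongly the present-day answer depends on the chosen rate.

```python
# Illustrative only: discounting a hypothetical future damage estimate
# back to present value. The damage figure, horizon and rates are
# made-up numbers; the choice of discount rate dominates the result.

def present_value(future_damage, discount_rate, years):
    return future_damage / (1 + discount_rate) ** years

damage = 1_000  # hypothetical damage, in billions of dollars, ~80 years out
for rate in (0.01, 0.03, 0.07):
    pv = present_value(damage, rate, 80)
    print(f"rate {rate:.0%}: present value = {pv:,.0f} billion")
```

Shifting the assumed rate from 1% to 7% shrinks the present value of the same future damage by roughly a factor of a hundred, which is why the discount rate is itself one of the contested assumptions in this tier.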
All of these forecasting efforts are unprecedented. Astronomers have been able to forecast the positions of planets and comets on time scales of decades, but the calculations involved are trivial compared with atmospheric models. In fact, the only real precedents for any of this are to be found in business, which relies heavily on forecasts of everything from foreign exchange prices through to traffic patterns on new bridges, albeit typically only a few years out rather than decades.
After much trial and error the small discipline of forecasting (which is a business subject, not a science) has formulated two basic questions, among others, to ask about forecasting systems: what assumptions are involved in making the forecasts? And does the system have a successful track record? Anyone who thinks these questions are unjust or ill-conceived is welcome to argue with those who study forecasting. A good place to start looking is www.forecastingprinciples.com.
The answer to the first question for our multi-tiered forecasting system is that there are quite a number, but two stand out: assumptions about the economic growth rates of very poor nations (in the first tier), and about the amount of water vapour in the atmosphere as temperatures increase (in the third tier). Another curious point is that two further assumptions, both involving economics, are contradictory. The extreme emissions scenarios at the beginning of the process assume that nations which are now very poor will become very rich and populous within a few decades, and so generate more emissions, while the estimates of economic damage at the end of the process assume that many nations with coasts near the equator will become populous but remain poor (and so suffer more from storms, rising sea levels and so on).
We will not discuss the assumptions in detail here, although it is worth noting that in my experience the bulk of the climate scientists who bitterly defend greenhouse theory have no idea that these assumptions exist, let alone the activists. The assumptions have certainly not been properly checked or examined by an independent body, which cannot be the IPCC. But the most important question of all is whether this chain of forecasting systems has a successful track record.
If scientists are ever challenged on this they will point to what amount to back-testing exercises. That is, they have run the systems over past years and decades to reproduce known temperature results. A variation on this argument is to look at all the influences on climate over a certain period and rule out every factor that could have caused a known increase in global temperature, except the increase in carbon dioxide, so carbon dioxide must be to blame. This is back-testing under another name, with the assumption being that all the influences on climate have been identified.
The work of Professor Richard Muller, a professor of physics at the University of California, Berkeley, who was converted to the greenhouse case after carefully analysing ground-based temperature records, is essentially another back-testing exercise. His work, which has received some attention of late, is certainly of use, particularly in confirming that the ground instrument network does show an increase in temperature over the past century or so, albeit not a very significant one. Work by mainstream climate scientists on that point has been so sloppy to date that there was confusion over it. (Not so for the satellite measurements of temperatures in the upper atmosphere, which show a similar, if less dramatic, increase.) Dr Muller also linked the increase to variations in CO2, but as he admits his analysis is no more than an indication.
For as those who study forecasting systems point out, any fool can foretell the past; the real trick is to say something useful about outcomes unknown at the time the forecast was made. So here, finally, we come to the $64 trillion question: what can we say about the forecasting success of these models? Activists and climate scientists will try to brush aside that irritating question by saying that the forecasts are endorsed by experts, but those who have studied forecasting will just as quickly point out that the expertise of those making the forecast is of no relevance. The only way to tell whether a forecasting system (or a string of them, in this case) is of any use is to run it and see how well it does at forecasting a result unknown at the time the forecast was made.
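The point about foretelling the past can be made concrete with a toy example: a "model" that simply memorises past data scores perfectly on a back-test, yet has no forecasting skill at all. The temperature anomalies below are invented numbers, not observations.

```python
# Illustrative only: a deliberately silly "model" that memorises past
# temperatures. It scores perfectly on a back-test while being useless
# for genuine forecasts. All figures are made-up anomalies.

past = {1990: 0.25, 1995: 0.38, 2000: 0.33, 2005: 0.48}   # "training" years
future = {2010: 0.55, 2015: 0.62}                          # held-out years

def memorising_model(year):
    # Perfect on any year it has already seen; a constant guess otherwise.
    return past.get(year, 0.0)

def mean_abs_error(data):
    return sum(abs(memorising_model(y) - t) for y, t in data.items()) / len(data)

print("back-test error:", mean_abs_error(past))     # zero: foretelling the past
print("forecast error:", mean_abs_error(future))    # large: the real test
```

A real climate model is of course nothing like this caricature, but the methodological point stands: in-sample agreement alone cannot distinguish genuine skill from fitting to the known answer.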
The issue of the forecasting accuracy of these systems has been discussed recently in the literature by Robert Fildes and Nikolaos Kourentzes, a professor and a PhD researcher respectively at the management school of Lancaster University in England, who examined one forecasting model used by the Hadley Centre in England, a bastion of the global warming consensus. Their paper in the International Journal of Forecasting (2011, but only recently surfacing in the debate), entitled 'Validation and forecasting accuracy in models of climate change' (abstract here: http://ideas.repec.org/a/eee/intfor/v27y2011i4p968-995.html ), contains an invaluable discussion of climate modelling techniques. The academics conclude that there might be something to global warming, but they point to the problem of back-testing, among others, in verifying climate models. Their efforts drew a considered response from the respected climate scientist Noel Keenlyside, which includes a defence of the back-testing approach. Readers can make of all that what they will, as I am not going to discuss it here, but it is the first instance I am aware of in which any climate scientist has attempted to defend the approach on which the "proof" of the climate consensus rests.