
The Map and the Territory: the problem with models

By Don Aitkin - posted Wednesday, 18 December 2019


This essay is based on a peer-reviewed article by a statistician and a mathematician (who call themselves 'physicists' at the end) in an economics journal. Its title is 'Escape from model-land'. It has been around for a couple of months, but there'll be many readers for whom the article is new and important, so what follows is my summary, which is informed by my own experience.

The authors start their abstract with what seems to me a great and often unrecognised truth.

Both mathematical modelling and simulation methods in general have contributed greatly to understanding, insight and forecasting in many fields including macroeconomics. Nevertheless, we must remain careful to distinguish model-land and model-land quantities from the real world.


Let me give an example from my own work. I was interested in the extent to which local issues and candidates affected election results. Votes are counted first at polling booths, then they are aggregated at the sub-division level, then at the divisional level (the electorate), then at the State or Territory level, and finally for the whole country. I could use a simultaneous-equations model to estimate the average effects at each level, and I did so. That process gave me an estimate of the various 'effects': really local ones, wider local ones, divisional ones, and state ones. So when you said that Labor had won, say, 47 per cent of the vote, you were looking at a lot of separate contributions, both positive and negative, to that outcome.
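To make that concrete, here is a minimal sketch, in Python, of the general idea: splitting a booth-level vote share into nested 'effects' by successive group means. The data are invented, the sub-division level is omitted, and the nested-means decomposition is a simple stand-in for the simultaneous-equations model actually used, so treat it as an illustration of the idea rather than of the method itself.

    from collections import defaultdict

    # (state, division, booth) -> Labor share of the vote (all numbers invented)
    votes = {
        ("NSW", "Sydney",    "Booth A"): 0.52,
        ("NSW", "Sydney",    "Booth B"): 0.48,
        ("NSW", "Cook",      "Booth C"): 0.41,
        ("VIC", "Melbourne", "Booth D"): 0.55,
        ("VIC", "Melbourne", "Booth E"): 0.51,
    }

    def mean(xs):
        xs = list(xs)
        return sum(xs) / len(xs)

    national = mean(votes.values())

    # Group booth shares by state, and by division within state.
    by_state = defaultdict(list)
    by_division = defaultdict(list)
    for (state, division, _), share in votes.items():
        by_state[state].append(share)
        by_division[(state, division)].append(share)

    # Each booth's share then decomposes exactly into nested 'effects'.
    for (state, division, booth), share in votes.items():
        state_eff = mean(by_state[state]) - national
        div_eff = mean(by_division[(state, division)]) - mean(by_state[state])
        booth_eff = share - mean(by_division[(state, division)])
        print(f"{booth}: {national:.3f} (national) {state_eff:+.3f} (state) "
              f"{div_eff:+.3f} (division) {booth_eff:+.3f} (booth) = {share:.3f}")

The point of the sketch is simply that the single headline figure is an aggregate of many smaller, separately estimated contributions, positive and negative.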

And although I was able to give numbers to these 'effects', those numbers were estimates only. More, these effects were highly simplified versions of the real world, where people voted as they did for all sorts of reasons. Not one of them was interested in these 'effects' of mine, but they were affected by how rusted-on their party loyalty was, whether they knew any of the candidates, what they thought of them, what issues in the campaign had resonated with them, if any, how grumpy they felt that day, and so on. My model was the 'map', while the all-too-human reality of election day was the 'territory'. I found the map/territory analogy in the article and think it is a most useful one.

The authors offer what they call 'a short guide to some of the temptations and pitfalls of model-land', a map of which they provide. It has some amusing descriptors. They are concerned that simulations and models are too frequently used to inform policy, when the modellers do not properly explain the limitations of their work, and the policymakers do not understand the limitations anyway. What we then get is policy-based evidence, rather than evidence-based policy. 'Climate change' is one of the areas the authors single out for attention. In their view (and it is mine also) whether or not models are useful for policymaking has to be determined by looking at whether or not the models can explain the past properly, and whether their predictions about the future prove to be correct, 'never based solely on the plausibility of their underlying principles or on the visual "realism" of outputs'.

In model-land, models are tested against one another, simulations against other simulations. This process:

promotes a seductive, fairy-tale state of mind in which optimising a simulation invariably reflects desirable pathways in the real world. Decision-support in model-land implies taking the output of model simulations at face value (perhaps using some form of statistical processing to account for blatant inconsistencies), and then interpreting frequencies in model-land to represent probabilities in the real-world.

One of the problems in so doing is the zero probability of what they call 'the Big Surprise' - an event which often occurs in the real world but not in model-land.
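A toy numerical illustration of the Big Surprise (my construction, not the authors'): fit a plain Gaussian 'model' to data whose real distribution has fat tails, and the model assigns an essentially zero probability to extreme events that the data nonetheless keep producing.

    import math
    import random

    random.seed(1)

    # 'Real world': mostly modest noise, but 1 per cent of days are wild.
    real = [random.gauss(0, 1) if random.random() > 0.01 else random.gauss(0, 10)
            for _ in range(10_000)]

    # 'Model-land': fit a plain Gaussian to the same data.
    mu = sum(real) / len(real)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in real) / len(real))

    def gaussian_tail(z):
        """P(Z > z) for a standard normal, via the complementary error function."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    threshold = mu + 8 * sigma                      # an extreme move
    p_model = gaussian_tail(8)                      # model: ~6e-16, 'impossible'
    n_real = sum(1 for x in real if x > threshold)  # reality: it happens

    print(f"model probability of exceeding the threshold: {p_model:.1e}")
    print(f"observed exceedances in 10,000 draws: {n_real}")

Model-land frequencies say the event should never be seen in the lifetime of the universe; the 'real' data produce it repeatedly.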


The authors are scathing about something that we see again and again in climate science.

For what we term 'climate-like' tasks, the realms of sophisticated statistical processing which variously 'identify the best model', 'calibrate the parameters of the model', 'form a probability ensemble from the ensemble', 'calculate the size of the discrepancy' etc are castles in the air built on a single assumption which is known to be incorrect: that the model is perfect.

Mathematicians thrive in model-land, they say, and can also show that interesting solutions will not hold in the real world. Mathematicians also ask for greater computational power, which has become increasingly available.

But bigger computers don't necessarily lead to greater accuracy in real-world outputs. The authors argue that where the outputs are short-term, say tomorrow's weather, and predictions are easily tested, then crunching large numbers may give you a better handle on the variables. But in 'climate' (conventionally, the average of thirty years' worth of weather) testing predictions about the future may need to wait for another thirty years, or even more. Jumping to conclusions here on the basis of what models and simulations say will most likely lead to bad real-world policy decisions.

The authors have done some predictive work themselves in the domains of weather, energy pricing and nuclear stewardship, and offer some advice to potential users. For example, they use a 72-hour accumulation of knowledge to decide whether a humanitarian crisis is likely after a severe weather event. They warn against using the 'best available' model unless it is also arguably adequate for the purpose. And they ask a set of questions that ought to be answered, and the answers supplied, every time a model is put up as a solution to a real-world issue. Among them:

...is it possible to construct severe tests for extrapolation (climate-like) tasks? Is the system reflexive; does it respond to the forecasts themselves? How do we evaluate models: against real-world variables, or against a contrived index, or against other models? Or are they primarily evaluated by means of their epistemic or physical foundations? Or, one step further, are they primarily explanatory models for insight and understanding rather than quantitative forecast machines? Does the model in fact assist with human understanding of the system, or is it so complex that it becomes a prosthesis of understanding in itself?

There are at least two ways to escape from model-land. One is repeatedly to challenge the model to make out-of-sample predictions and see how well it performs. This is possible where what we are dealing with is weather, or weather-like issues. Here the forecast lead-time is much less than a model's likely lifetime. You could in principle keep using the model to forecast today's weather a year from now, but you'd probably do better just to predict that it will be rather like today's weather.
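A sketch of that first escape route, with invented data: score a forecaster out of sample against the naive persistence baseline ('tomorrow will be rather like today'). The forecaster, the data and the numbers here are all hypothetical; what matters is the procedure of comparing real out-of-sample errors rather than model-land ones.

    import math
    import random

    random.seed(42)

    # Two years of hypothetical daily temperatures: seasonal cycle plus noise.
    temps = [20 + 8 * math.sin(2 * math.pi * t / 365) + random.gauss(0, 2)
             for t in range(730)]
    train, test = temps[:365], temps[365:]

    def mae(pred, actual):
        """Mean absolute forecast error."""
        return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

    # 'Model': forecast each day of year two with the same day of year one.
    model_forecast = train

    # Persistence baseline: forecast each day with the previous day's value.
    persistence_forecast = temps[364:729]

    print("model MAE:      ", round(mae(model_forecast, test), 2))
    print("persistence MAE:", round(mae(persistence_forecast, test), 2))

If a model cannot beat so crude a baseline on genuinely out-of-sample data, its model-land output deserves little weight in a real-world decision.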

In climate-like issues you'd find it useful to employ expert judgment, which is what the IPCC did in its last Assessment Report. Here we also need to consider uncertainty, something that Judith Curry has written about for several years. This is not the uncertainty of the expert judgment, but the uncertainty that exists between model-land and the real world.

This is a most interesting paper. The authors stress that their aim is not to discard models and simulations, but to make them more effective. They conclude:

More generally, letting go of the phantastic mathematical objects and achievables of model-land can lead to more relevant information on the real world and thus better-informed decision-making. Escaping from model-land may not always be comfortable, but it is necessary if we are to make better decisions.

To which I say, 'Hear, hear!'


This article was first published on Don Aitkin's website.




About the Author

Don Aitkin has been an academic and vice-chancellor. His latest book, Hugh Flavus, Knight, was published in 2020.


This work is licensed under a Creative Commons License.
