
The iron law of modeling

20 May 2020 | Science Notes

At a time when COVID-19 models are being lambasted for vastly overstating the mortality rates from SARS-CoV-2, we begin to wonder whether it’s some kind of iron law that every model offered up for policy guidance exaggerates the risk it is trying to analyse. Maybe there is a selection bias process at work, in which no one will pay attention to the model unless its outputs are lurid and terrifying, so modelers oblige with various tweaks and nudges to get the scary outcomes. It’s certainly true of climate forecasts. And not just of temperature. In the piece noted above on global greening, Michaels references a previous Climate Etc. post in which economist Ross McKitrick shows a graph from a 2019 study comparing observed increases in atmospheric CO2 to the forecasts issued regularly since the 1970s. It shows that since 1980 every model has over-predicted the amount of CO2 that would accumulate in the atmosphere as a result of fossil fuel use. So maybe it’s actually a lead balloon law of scary model predictions that they fall back to earth with a dull thud.

The graph McKitrick displays contrasts a sequence of model projections of atmospheric CO2 with actual measured levels since 1970. And it’s not pretty, in the usual way:

Reality is the dark black line. Fiction, otherwise known as the model projections, is the set of coloured lines rising too quickly above it. After about 1980, every model but two predicted more CO2 accumulation in the atmosphere than was observed. (Of the two exceptions, one only ran from 1970 to 2000, and the other, the green line that goes flat after 2000, assumed all CO2 emissions would cease at the start of the 21st century; the same model run under the assumption that emissions would continue is among the ones that overstate CO2 accumulation.)

It’s to be expected that models of complicated systems will make mistakes. But the initial distribution of errors should be more or less random, and the feedback mechanism of measuring model outputs against actual results and fine-tuning the algorithms should reduce both the magnitude and the skew of the errors over time. When instead they keep making the same mistake in the same direction for 40 years running, it’s not mere error. It’s bias.
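The distinction between random error and systematic bias can be made precise with a simple sign test: if a set of forecasts were unbiased, over- and under-prediction would be about equally likely, so a long run of overshoots in one direction quickly becomes implausible. A minimal sketch in Python (the counts used below are illustrative only, not taken from McKitrick's graph):

```python
from math import comb

def sign_test_p(n_high: int, n_total: int) -> float:
    """One-sided binomial sign test: probability of seeing at least
    n_high overshoots out of n_total forecasts if over- and
    under-prediction were equally likely (p = 0.5)."""
    return sum(comb(n_total, k) for k in range(n_high, n_total + 1)) / 2 ** n_total

# Hypothetical example: 10 of 12 model runs err on the high side.
p = sign_test_p(10, 12)
print(f"p = {p:.4f}")  # → p = 0.0193
```

A p-value that small says the overshoots are very unlikely to be symmetric noise; with every run erring high, as the post describes, the probability shrinks to 1 in 2^n.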

The upper-end models of temperature increase, and even of the CO2 increase meant to cause runaway warming, are the ones that frighten policymakers into demanding ever more stringent policies in the hope of getting us down to the low end of the CO2 accumulation forecasts. Those policies in turn bring more grant money to the modelers, who give the politicians more of what they want. It’s a closed circle, impenetrable even to data that is readily available to the participants.

In consequence, the modelers never get around to telling policymakers that we are already at the lower bound, and all indications are that we will remain there, so the extra stringency isn’t needed. The problem is not in our computers but in our modelers.

3 comments on “The iron law of modeling”

  1. The real law of computer modelling is that all models should be considered guilty until proven innocent.
    Incidentally, we don't need to worry about climate change or anything of that nature. I have a computer model that says that the human race will be supplanted by giant mutant cauliflowers by the year 2100. And models are always right, yes?

  2. The exaggerated risk model process works for both the modeller and the policy maker. The modeller gets more money to come up with better models, and the policy maker gets to say how well his policies worked to limit the risk. I for one am tired of being managed by models that scare the pants off people.

  3. Your observation of bias over time is one explanation for the divergence between prediction and reality, with which I can agree.
    However, I wonder if the efficacy of these models might also be affected by the starting month when the model is first applied. I believe that the starting conditions are not the same in each month, and this could easily be checked by running these models another 11 times to see if the error between prediction and reality narrowed for some months. I offer this just as a suggestion, as I do not know what the outcome would be.
