
Some not hot models

20 May 2020 | OP ED Watch

The extreme response of many governments to COVID-19 was driven by… computer models. We’re “following the science”, they told us (wherever it happens to be going today). “Experts say”, they all say. What are you, a pinhead? Unfortunately the models turned out to have been a load of dingos’ kidneys, with significant consequences not just for the climate debate but for public faith in our institutions, scientific and political. To limit the damage, Benny Peiser calls for making a “Red Team” approach to science habitual, in which devil’s advocates are paid to poke holes in hot new research, question supposed consensus, critique standard models and challenge anything else that’s going to be used to silence criticism, override democratic mechanisms and impose drastic measures in haste on the basis of fake certainty.

Some may argue that the pandemic models were actually good, because we responded to the alarming scenarios warning what would happen if we did nothing. Unfortunately the models also had quarantine scenarios, and those didn’t come true either. In some cases the differences are so big as to be laughable, despite their tragic economic and social consequences. Like New Zealand, which was predicted to have 14,000 coronavirus deaths and had 19.

The models were often poorly constructed and nobody thought to look. Thus on his “Rational Optimist” blog Matt Ridley gives them a thorough pounding, starting with the fact that “Professor Neil Ferguson of Imperial College ‘stepped back’ from the Sage group advising ministers when his lockdown-busting romantic trysts were exposed. Perhaps he should have been dropped for a more consequential misstep. Details of the model his team built to predict the epidemic are emerging and they are not pretty. In the respective words of four experienced modellers, the code is ‘deeply riddled’ with bugs, ‘a fairly arbitrary Heath Robinson machine’, has ‘huge blocks of code – bad practice’ and is ‘quite possibly the worst production code I have ever seen’.”
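One of the elementary standards the reviewers accused the code of flouting is reproducibility: a simulation, however stochastic, should return the identical result when rerun with the same inputs and the same random seed, or nobody can check it. A toy sketch of what that discipline looks like (this is an illustrative epidemic toy, not the Imperial model; all parameters and names are invented for the example):

```python
import random

def run_epidemic(seed, population=10_000, beta=0.25, gamma=0.1, days=120):
    """Toy stochastic SIR-style simulation -- illustrative only, not any real model."""
    rng = random.Random(seed)   # explicit, isolated RNG: same seed -> same trajectory
    s, i, r = population - 10, 10, 0
    for _ in range(days):
        p_inf = beta * i / population   # per-susceptible chance of infection today
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r   # cumulative infections at the end of the run

# A well-behaved model gives identical output for identical inputs and seed;
# the reviewers' complaint was that the real code did not.
assert run_epidemic(42) == run_epidemic(42)
```

The design point is the explicit `random.Random(seed)` object: drawing all randomness from one seeded source, rather than from shared global state touched by multiple threads, is what makes a stochastic run repeatable and therefore auditable.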

Interestingly, the same Prof. Ferguson already had a surprisingly spectacular history of failed disaster predictions: he said there’d be hundreds of thousands of deaths from mad cow disease, tens of thousands from swine flu and hundreds of millions from bird flu. Instead, Ridley notes tersely, “The final death toll in each case was in the hundreds.” Yet this dubious track record, with its serious real-world consequences, did not make Ferguson unpopular with the “experts say” crowd. On the contrary, he was treated as authoritative this time as well. And his new program, whose code was apparently altered before being released, predicted 40,000 deaths in Sweden by May 1, off by a factor of 15.

The problem with models is not that Neil Ferguson is a nitwit, even if this particular one was a shambles. More fundamentally it’s that we expect too much of them, and too often. Four centuries ago Galileo uttered one of the foundational maxims of modernity: that mathematics is the language in which the universe is written. Ever since, we’ve sought increasing precision in algebraic representations and predictions not just of chemistry or optics but of the economy, social behaviour and, yes, weather. And despite major advances in the hard sciences over centuries, including computers now being scarily better than the best humans at games like chess that we once thought relied on a kind of intuition and judgement machines could not exhibit, we’ve continued to trust this approach in other areas where the results have been distinctly unimpressive. Like predicting the stock market. In fact mathematizing the humanities into “social sciences” has arguably moved us backward, not forward. Little is now heard of “cliometrics.” And the models have proved unhelpful, deceptively so in fact, even on non-linear processes in the hard sciences.

As Anthony Watts reminds us, not for the first time, Judith Curry talked about the “uncertainty monster” back in 2011. There is just too much we don’t know, about the data and about the relationships between various factors. But the public and the elite expect mathematical certainty and indeed demand it even though it’s not available. And when they don’t get it, they don’t seem to notice, on everything from disease to the revenue impact of some bill.
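The uncertainty monster can be illustrated with grade-school arithmetic: in anything that grows exponentially, a modest uncertainty in the growth rate compounds into an enormous uncertainty in the outcome, which is one reason epidemic and climate projections can be honest about their inputs and still wildly spread in their outputs. A hedged sketch (all numbers invented for illustration):

```python
def project_cases(initial, daily_growth, days):
    """Naive exponential projection: initial * (1 + r)^days. Illustrative only."""
    return initial * (1 + daily_growth) ** days

# The same 100 starting cases projected 60 days out, with the daily growth
# rate known only to within five percentage points -- a modest input uncertainty:
low = project_cases(100, 0.05, 60)    # 5% per day: roughly 1,900 cases
high = project_cases(100, 0.10, 60)   # 10% per day: roughly 30,000 cases

# A small spread in the input becomes more than a fifteenfold spread in the output.
assert high / low > 15
```

That compounding is the whole problem: nobody measuring an epidemic in week two knows the growth rate to within five points, so a point forecast two months out is spurious precision even before any coding bugs enter the picture.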

What the models do, and the politicians welcome, is sacrifice accuracy and honesty for certainty. Including, Merrill Matthews notes, a CBO estimate of the cost of Obamacare that was wildly wrong in the long run but politically useful for Democrats in the short run; of course they’ll be back for more. Or Katharine Hayhoe solemnly intoning that “almost 40% more rain fell during Hurricane Harvey than would have otherwise”, when it’s a rare brave soul who says “It is impossible to make such a calculation.”

Nowadays it’s very hard to beat a model except with another model. It’s very hard to convince people that when it comes to climate, nobody knows what’s going to happen including us. They end up sneering “You’re not a climate scientist” and going for a refill of snake oil. And there’s always some salesman waving a jug of the stuff. But this time we noticed. This time the models did enormous economic and social harm while getting the disease wrong.

If it turns out we did ourselves enormous harm by magical thinking about the humming boxes with the lights and the megaflops tended by egghead sages, whatever will the implications be? It depends in significant measure on whether those who habitually use models to silence critics are willing to admit that it’s not a sensible way to proceed.

The alternative is a bit unnerving. Curry’s paper began by quoting Voltaire: “Doubt is not a pleasant condition, but certainty is absurd.” So can we learn to live with doubt? Or must we persist in being absurd even when it hurts as badly as the excessively severe and protracted lockdowns have done and are continuing to do?

2 comments on “Some not hot models”

  1. Yes, Neil Ferguson was spectacularly wrong on every attempt he made in the past. The question we have to ask is: How did such a spectacular failure become the chief medical advisor to the UK government? Can't government bureaucracies ever pick winners, in any field? And likewise: How did Ferguson progress through the ranks of academe and become so prominent as to be a candidate for the government bureaucrats to select for this task? Is peer review in academe really that bad? Alas, the answer to both questions is yes, yes governments and universities really are that bad at picking winners. They are abysmal. Use your own common sense and you will be better off and further ahead. In fact, doing the exact opposite of what government "experts" advise is probably as likely to work out for you in the long run.

  2. Spectacular scientific failure is spectacular political success, what does science have to do with politics other than revealing that hot air rises to the top? Politics is not science, never is, and never will be, the two are exact opposites. Politickers are driven by belief and agendas that serve themselves, banking, corporations, insurance, monarchy, and, underneath as One, Elitists, reserving religion for the dumb-downed masses; science is driven by the search for truth, still putting its boots on, 100 jesus-morning sun laps behind, common knowledge. Politics is language manipulation to control the cattle subjugated to the Lyar 6000 years ago. Politicians are hot air balloonists who accept only hot air providers as companions, every provider a winner, humanity the loser, as it is written, by them in their world famous Big Books.

    Oh, I remember, we weren't supposed to talk about their fundamental model, politically incorrect, scientific taboo.
