
We know nothing, with great certainty

03 Sep 2025 | Science Notes

It’s a funny thing about climate models. They are essentially worthless at predicting, um, climate. And yet they remain extremely popular with people who like the dismal world they predict. For instance Bloomberg Green just wrote that “The Ghost of Hurricane Katrina Haunts Catastrophe Modelers/ Catastrophe modeling has improved in the last 20 years, but experts say risk assessments are still plagued by blind spots.” Oh really? The models don’t understand or predict? Then why do we keep getting bombarded with spuriously precise attribution of extreme events and the dreadful future that awaits if we don’t ban carbon? As Willie Soon and Kesten C. Green wrote earlier this year, it’s not enough for a model to be tweaked to fit known data. “Evidence that the model provides out-of-estimation-sample forecasts that are more accurate and reliable than those from plausible alternative models, including a simple benchmark, is necessary.” And if yours can’t, don’t tell us it’s a great guide and getting better every day. And don’t tell us what it predicted. Go back and make it work worth a darn, then come back and bother us.

In this regard consider also the Bloomberg Green headline “Life-Threatening Heat Domes Are Confounding Forecasters/ Scientists are working to find atmospheric warning signs that can provide a longer lead time to forecast where and when extreme heat will hit.” Leaving aside the predictably tiresome hype in the story about mass death and the end of civilization as we know it, which didn’t happen, the key point is that the forecasters have no idea what’s happening or why or what will happen next. Which isn’t an advance on scapulimancy, now is it?

And another thing, again picking on Bloomberg Green. But since they do this stuff for a living, they invite comment. They also blared:

“How hot can a heat wave really get? Before June 2021, scientists thought they knew. That’s when one of the most extreme heat spikes ever observed hit western North America, leaving at least 1,400 people dead. Lytton, British Columbia, smashed the 84-year-old Canadian heat record on June 26, reaching 46.6 C (116F). And it smashed that the next day by 1.3C. And smashed that the next day by another 1.7C. And the next day, Lytton burned to the ground. When a team of climate scientists assembled days later to analyze the heat wave, they found that the local historical weather data offered a paradox: Their standard approach for estimating a heat wave’s rarity concluded that the new records were too extreme to occur in the region where they actually did. They were in a sense ‘impossible’ even though they actually occurred, as three American scientists put it earlier this year.”

Again, the nub of their story is that the “settled” science was wrong and the models were useless. But they seem to be saying the opposite, that scientists now know everything including that they need more money:

“They adjusted their method to accommodate the new reality (and use that approach still), but noted that ‘follow-up research will be necessary to investigate the potential reasons for this exceptional event.’”

So to continue to summarize in plain language, their new method is no good either. Moreover, this exceptional event actually was exceptional and has not recurred despite the rush to judgement. Indeed, as we pointed out at the time, the thing about that heat dome was that standard analysis didn’t just show that it was essentially impossible without man-made warming, it was essentially impossible with it. But sometimes really weird things happen and, if weather is involved, they’re usually very bad news.

Still, how often can the modeling zealots write a story about how the models totally fail to understand or predict as a vindication of the theory that designed the models to vindicate it?

There seems to be no limit. The same defects were equally present, and evidently equally obscure to the usual suspects, when we got a headline like Climate Cosmos’s “Why the Climate Emergency May Be Deeper Than Feared”. It would take some doing given the amount of fear being sown. And we’re tempted to make some fairly obvious unkind cracks about being force-fed doom by:

“a versatile author and digital content creator with a focus on travel, climate awareness, and contemporary home design. With a background in media and storytelling, Marcel merges creative insight with editorial precision to deliver content that informs, inspires, and drives engagement.”

But soft. He may not be a climate scientist but it’s OK if you’re an alarmist to substitute storytelling and creative insight for knowing what’s going on. The trouble, mind you, is that you’ll write stuff like:

“This relentless pace of warming has not only outstripped older scientific models but has also forced scientists to rethink the possible speed and scale of future impacts.”

And then someone will say in that case the models were rubbish and the science wasn’t settled. But you didn’t tell us. Or that in fact the models overpredicted warming, not underpredicted it. But again, mere data. Unlike:

“Scientists now project sea levels could rise by up to 1 meter by 2050 if current melting trends hold.”

Or not, since a “could” followed by an “if” isn’t a scientific result and these “scientists” are not named. But if you asked an actual sample of scientists whether sea levels could rise by a meter in the next 25 years, instead of the 75 mm or so at current rates, we suspect they would not endorse it. Home designers might. Except the ones designing seaside homes including for rich climate alarmists.
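The arithmetic behind that 75 mm figure is simple enough to check. A minimal sketch, assuming a current trend of roughly 3 mm per year (the exact rate varies by source and measurement period, so treat the inputs as illustrative):

```python
# Back-of-envelope comparison of the "up to 1 meter by 2050" claim
# against a straight extrapolation of the current trend.
# Assumption: ~3 mm/yr, an illustrative round figure for the recent
# satellite-era trend; the precise number is disputed.
rate_mm_per_year = 3.0
years = 2050 - 2025  # roughly 25 years from the article's date

extrapolated_mm = rate_mm_per_year * years
claimed_mm = 1000.0  # 1 meter, expressed in mm

print(f"Trend extrapolation: {extrapolated_mm:.0f} mm")
print(f"Claimed rise:        {claimed_mm:.0f} mm")
print(f"The claim is {claimed_mm / extrapolated_mm:.1f}x the current-rate figure")
```

At 3 mm per year the extrapolation lands on the 75 mm mentioned above, making the 1-meter scenario more than thirteen times the current-rate projection.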

2 comments on “We know nothing, with great certainty”

  1. I refuse to believe that 1400 people died as a result of a "heat spike" in western North America four years ago! If someone can debunk my statement with evidence, please do. I welcome any such correction.

  2. The problem with extreme event attribution (EEA) is that you need a very long series of events indeed to test the reliability of your EEA system. A simple example of this:
    Tossing a coin, the probability of getting 10 heads in a row is 1 in 2 to the power of 10, or 1 in 1024. However, to test this empirically you would need to show that if you made coin tosses in blocks of 10, then in every 1024 such blocks (for a total of 10,240 individual tosses) there would on average be one block with 10 heads. However, since you are dealing with purely random events you would need to make many such trials and average the results. Let's say you would need 1000 such trials to get a reasonably stable result, which would amount to 10,240,000 individual tosses.
    Now suppose you want to test an EEA system to show that your system can distinguish between a purely random event and one caused by climate change. Good luck with that!
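The commenter's point is easy to demonstrate numerically. A minimal Monte Carlo sketch (the block counts are illustrative, not prescriptive): even for a known, trivially simple 1-in-1024 event, the empirical frequency only settles near the true probability once the number of trials is enormous.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def estimate_all_heads(blocks: int) -> float:
    """Fraction of 10-toss blocks in which every toss comes up heads."""
    hits = 0
    for _ in range(blocks):
        if all(random.random() < 0.5 for _ in range(10)):
            hits += 1
    return hits / blocks

true_p = 1 / 1024  # exact probability of 10 heads in a row

# The estimate wobbles badly at small sample sizes and only
# converges once the trial count is in the millions of tosses.
for blocks in (1_024, 10_240, 1_024_000):
    p_hat = estimate_all_heads(blocks)
    print(f"{blocks:>9} blocks: estimate {p_hat:.6f} vs true {true_p:.6f}")
```

If pinning down a fair coin takes millions of tosses, attributing a one-off weather extreme — where the "true" probability is unknown and there is only one climate to sample — is a far harder problem, which is the commenter's point.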

