
Climate models: A glimpse under the hood

05 Jul 2023 | Science Notes

Or behind the curtain, or wherever they’re telling you not to look because it’s messy or distracting. But Willis Eschenbach, a computer programmer with many decades of experience, just did anyway. He obtained the source code for “Model E”, NASA’s own super-intelligent whiz-bang climate model, and published a report for Net Zero Watch in which he discusses some of the weird kludges, fixes and ad hoc cheats that make it run. After describing some of the more glaring errors Eschenbach notes: “This is what the climate modellers mean when they say that their model is ‘physics-based’. They use the term in the same way Hollywood producers do when they say a movie is ‘based on a true story’”.

The first really weird thing is that the model is written in the ancient programming language Fortran. The reason is that it was first written many decades ago, and it has sprawled in size over the years as more and more pieces got added on, instead of the modelers at some point going back and starting from scratch with cleaner assumptions and functions and a more modern coding system. But it’s a dangerous way to program: as one bit gets added here, another bit over there no longer works right, and ever messier kludges are employed to fix short-run problems, at the cost of making the underlying mechanics more and more obscure, ad hoc and opaque even to the programmers.

Eschenbach gives the example of melt ponds, pools of meltwater that form on top of the sea ice in polar regions. These ponds play an important role in determining how much of the sun’s energy gets reflected back to space rather than warming the surface. But in an earlier version of the model, the ponds could form and remain liquid regardless of how cold the air was. So instead of going back and undertaking a proper fix of the underlying physics, the modelers just added an ad hoc rule limiting how many days the melt pond could last. In the latest version of the model, though, the crude limit on melt days is gone, replaced by a crude limit on the overall reflectivity regardless of the number of melt days. Why? Well, not because it better fits the physics as we now understand it, that’s for sure.
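To make the pattern concrete, here is a minimal sketch of that kind of kludge in Fortran. It is not the actual Model E code; every name and number in it is invented for illustration:

    ! Hypothetical sketch of the kludge described above, not Model E itself.
    program pond_cap_demo
      implicit none
      real, parameter :: ice_albedo   = 0.6   ! bare-ice reflectivity (made up)
      real, parameter :: pond_albedo  = 0.2   ! meltwater reflectivity (made up)
      real, parameter :: albedo_floor = 0.45  ! the arbitrary clamp
      real :: pond_frac, albedo

      pond_frac = 0.8   ! suppose the physics says 80% of the ice is ponded
      albedo = pond_frac*pond_albedo + (1.0 - pond_frac)*ice_albedo

      ! The ad hoc fix: whenever the physics strays past the limit, ignore it.
      albedo = max(albedo, albedo_floor)

      print *, 'effective albedo after clamp:', albedo
    end program pond_cap_demo

The point is not the particular numbers but the shape of the fix: a hard floor pasted over the output, with no physical justification for where it sits.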

As for temperature, which you would hope these models could get right given their purpose, NASA’s Model E, like many climate models, sometimes wanders way out into left field and generates temperatures outside any reasonable range. Does that claim sound harsh? Actually it’s what some of the coders themselves wrote as comments right in the code, at the spot where they added a fix to stop the temperature from wandering around so much:

“This routine makes sure that the temperature remains within reasonable bounds during the initialization process. (Sometimes the computed temperature iterated out in left field someplace, *way* outside any reasonable range.) This routine keeps the temp between the maximum and minimum of the boundary temperatures.”

But when you fix a problem like this one by telling the program not to go there even if it wants to, you are ignoring the fact that its representation of the world is obviously wrong in order to convince people its representation of the world is right. Which is cheating.
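For the curious, the fix the quoted comment describes boils down to a one-line clamp. Here is a minimal, self-contained Fortran sketch of it; the routine and variable names are ours, not Model E’s:

    ! Hypothetical sketch of the clamp the quoted comment describes.
    program clamp_demo
      implicit none
      real :: t_bounds(4) = (/ 250.0, 260.0, 255.0, 258.0 /)  ! boundary temps (made up)
      real :: t

      t = 9999.0   ! an iterate "way outside any reasonable range"
      call bound_temp(t, t_bounds, size(t_bounds))
      print *, 'temperature after clamp:', t   ! prints 260.0

    contains

      subroutine bound_temp(t, t_bounds, n)
        integer, intent(in)    :: n
        real,    intent(in)    :: t_bounds(n)
        real,    intent(inout) :: t
        ! Drag any runaway value back between the minimum and maximum
        ! of the boundary temperatures.
        t = min(max(t, minval(t_bounds)), maxval(t_bounds))
      end subroutine bound_temp

    end program clamp_demo

Note what the clamp does not do: it never asks why the iteration wandered off in the first place.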

The same trick also appears in the part of the code that determines wind speed. Same problem, same fix, same warning in the comments. It’s what some people mean by “settled science”.

Another problem in climate models is that one part of the model estimates how much energy comes in from the sun while other parts determine how much gets sent back out to space. They are all supposed to agree, but they don’t. That would be OK except that they can come up with such different numbers that the virtual planet either fries or turns into an ice ball. Rather than fix whatever the problem is in the underlying representation of climate physics, the modelers just take the energy imbalance and spread it all over the globe like icing.
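Again, a minimal sketch (invented names and numbers, not the actual code) shows how the smearing works:

    ! Hypothetical sketch of spreading an energy imbalance over the globe.
    program smear_demo
      implicit none
      integer, parameter :: ncells = 5
      real :: energy_in(ncells), energy_out(ncells)
      real :: imbalance

      energy_in  = (/ 340.0, 342.0, 339.0, 341.0, 338.0 /)  ! W/m^2, made up
      energy_out = (/ 336.0, 345.0, 335.0, 344.0, 334.0 /)

      ! The global mismatch between incoming and outgoing energy.
      imbalance = sum(energy_in - energy_out) / real(ncells)

      ! The kludge: rather than find out why the books don't balance,
      ! subtract the same correction from every cell on the planet.
      energy_in = energy_in - imbalance

      print *, 'residual after smearing:', sum(energy_in - energy_out)
    end program smear_demo

After the smearing the books balance perfectly, and the reason they didn’t is as mysterious as ever.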

Eschenbach renders the following verdict:

“Bottom line? The current crop of computer climate models (which should really be referred to as ‘climate muddles’) is far from being fit to be used to decide public policy. To verify this, you only need to look at the endless string of bad, failed, crashed-and-burned predictions that they have produced. Pay them no attention. They are not ‘physics-based’ except in the Hollywood sense, and they are far from ready for prime time. Their main use is to add false legitimacy to the unrealistic fears of the programmers.”

Pay no attention to that Mann behind the curtain, or his pals at the computer keyboard, or the digital ravings of their GIGO machines.

5 comments on “Climate models: A glimpse under the hood”

  1. As a former software engineer and senior manager, I find Eschenbach's observations to be unsurprising. I have no doubt that the climate model code has grown over the years like cottage plumbing. That's how big, complex code bases tend to evolve. It's simply too much work to rewrite them, and attempts to do so often end up in failure -- at huge expense.

    One criticism I could make of this write-up, though, is the implication that models would be better if written in a "modern programming language" instead of the "ancient programming language FORTRAN". FORTRAN is indeed one of the oldest programming languages, but it's also very effective for scientific and mathematical applications. It remains in use for physics and climate modelling because it still provides the fastest numerical computation of any high-level language, especially when parallel processing is needed (such as in vector arithmetic). Supercomputer CPUs are actually optimized for FORTRAN. (I know this because I have been privy to briefings from IBM chip engineers who said so.) Numerical computing, whether it's done in Python (NumPy) or C or FORTRAN, relies on libraries of prewritten functions that have been around for decades, are finely tuned, and have been proven correct. Much of that code was written in FORTRAN, and calling it from a FORTRAN program is also more efficient than from other languages (see the short sketch after the comments). When the problem requires a lot of compute power on supercomputer-scale machines, every efficiency counts.

    So the fact that the models are written in FORTRAN isn't really an issue, except that it makes the code harder for each new generation of programmers to understand, because FORTRAN is now a niche language that's not taught in schools.

    FORTRAN aside, the rest of the article is spot on.

  2. I'm prepared to be corrected on this - but surely a model should reflect real-world data and observations, and make assumptions based upon these (subject to regular data additions - say, temperature data), rather than the other way around? Or has 'The Science' changed since I studied it 50 years ago?

  3. I have seen this in real life. If the data or field measurements don't support the “right” conclusions, revert to the data that was assumed. So when the model doesn't perform as anticipated, the temptation is to say that the model is correct but the data must have a problem. So the data is changed, or limited to exclude outliers, etc. Well, if measured correctly, data is data. Given that this was written in FORTRAN, one might wonder if they lost a punch card. A complete redo using an approach like system dynamics, backed up by centuries and centuries of data, might be a better choice.

  4. Isn’t part of the issue that output from various climate models is taken as ‘data’ and treated as if it were actual real-world observations, and not just numbers (temperatures) pumped out by a computer? I remember reading something about that several years ago. Isn’t another issue with the models the fact that calculated values represent very large areas, thousands of square kilometres, thereby leading to potentially large errors? The list goes on, but one other thing I read about was the butterfly effect within the programming: extremely small changes in the inputs lead to very large changes in the output after the numerous calculation iterations the models perform. Aren’t models created to help understand the potential effect on a result of specific changes to an input, not to create output that is taken as a prediction of future reality? There is a big difference between stating emphatically ‘the oceans are going to boil’ and saying ‘Houston, we have a problem’.

  5. It's pretty clear to me that the models and data have been tampered with for years, if not decades, by climate alarmists like Mann, NASA, NOAA, and others. It's all about power and control.
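The sketch promised in comment 1: the decades-old, finely tuned numerical libraries mentioned there are the likes of BLAS and LAPACK, and calling them from Fortran is a one-liner. A minimal example using the standard BLAS routine DGEMM (it must be linked against a BLAS library, e.g. gfortran demo.f90 -lblas):

    ! Sketch: calling the decades-old BLAS routine DGEMM from Fortran.
    ! DGEMM computes C = alpha*A*B + beta*C for matrices A, B, C.
    program dgemm_demo
      implicit none
      integer, parameter :: n = 2
      double precision :: a(n,n), b(n,n), c(n,n)

      a = reshape((/ 1.0d0, 3.0d0, 2.0d0, 4.0d0 /), (/ n, n /))
      b = reshape((/ 5.0d0, 7.0d0, 6.0d0, 8.0d0 /), (/ n, n /))
      c = 0.0d0

      ! 'N','N' = use A and B untransposed; alpha=1, beta=0 gives plain C = A*B.
      call dgemm('N', 'N', n, n, n, 1.0d0, a, n, b, n, 0.0d0, c, n)

      print *, c
    end program dgemm_demo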
