
Climate fingerprinting done right

19 Nov 2025 | Science Notes

Or at least done less wrong. A new paper in Climate Dynamics by University of Guelph professor Ross McKitrick called “Consistent climate fingerprinting” is a follow-up to a paper of his that we reported on back in 2021. In the earlier paper he exposed a mathematical error in the IPCC’s “Optimal Fingerprinting” methodology that, he said, caused it to yield biased and unreliable results. And since that method has been used by the IPCC for 25 years to tie climate change to greenhouse gases and underpins their longstanding claims that carbon dioxide is responsible for virtually all the warming since the start of the 20th century, if the math is wrong they need to redo the analysis. Alas, they showed no sign of doing so despite McKitrick’s critique and his efforts to engage with them, so after years of waiting he did it for them. In the new paper he presents a way to do the fingerprinting analysis that, unlike the IPCC method, satisfies all the requirements of mainstream statistical theory. And it confirms what other researchers have been arguing: the models warm way too much in response to greenhouse gases and don’t put nearly enough weight on natural variability.

Now “Fingerprinting” sounds extremely scientific, liable to lead to the arrest of the guilty party, and “Optimal” is a good word too. The sort of word you’d like to see in, say, the name of a mutual fund. But what is it? According to a leading practitioner, Benjamin Santer of the Woods Hole Oceanographic Institution, “Climate fingerprinting seeks to identify the unique patterns of climate change associated with different environmental forces: the sun, volcanoes, and human-produced greenhouse gases, for example.” Supposedly, therefore, they can tell by, say, the way different layers of the atmosphere warm whether it was due to the sun, wicked human GHGs, benign natural ones or something else.
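For readers who want to see the machinery behind the jargon, the usual detection-and-attribution setup boils down to a regression: take the observed temperature record, take the response patterns the models simulate for each forcing, and ask how much each pattern has to be scaled to reproduce the observations. Here’s a minimal sketch of that kind of regression in Python; the numbers are placeholders we made up for illustration, not anything from the paper:

```python
# Minimal sketch of a fingerprinting-style regression: observed temperature
# changes regressed on model-simulated "fingerprint" patterns for greenhouse
# gases and natural forcings. All numbers here are invented placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_years = 120  # e.g. annual values since 1900 (illustrative only)

ghg_pattern = np.linspace(0.0, 1.2, n_years)           # model-simulated GHG response
nat_pattern = 0.2 * np.sin(np.arange(n_years) / 8.0)    # model-simulated natural response
observed = 0.6 * ghg_pattern + 0.8 * nat_pattern + rng.normal(0, 0.15, n_years)

X = sm.add_constant(np.column_stack([ghg_pattern, nat_pattern]))
fit = sm.OLS(observed, X).fit()

# The slope coefficients are the "scaling factors": near 1 means the model's
# pattern matches observations as-is; well below 1 means the model overstates
# that forcing's effect; above 1 means it understates it.
print(fit.params)
```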

If, that is, their assumptions are correct when regurgitated by the model as evidence. Including their assumptions on the key issue of how to separate “noise”, data that isn’t part of a pattern, from “signal”, data that is key to understanding why something is happening. Because the whole problem is knowing which details are evidence and which are distraction, which are patterns that reveal causes and which are random variations that conceal them.

We asked McKitrick for a brief explanation of the new method he is proposing, and here’s what he told us:

“It’s not a new method, it’s actually older than what the IPCC did. The method invented by the IPCC authors was ‘new’ in 1999 but not improved. Instead it was wrong. They were trying to address a problem with their data called ‘heteroskedasticity’, or unstable noise patterns, and didn’t realize the solution had been worked out 20 years earlier in econometrics. The IPCC picked a new method no one outside climatology uses and I showed that it doesn’t actually solve the problem and yields unreliable answers. They also talked about the problem of measurement error and once again ignored the fact that techniques to deal with it had been developed many decades earlier by mainstream statisticians. The IPCC once again invented their own method and never checked whether it’s correct, and I showed that it yields biased results.
For statisticians or econometricians reading this, all I’m doing is applying Instrumental Variables regression with HAC standard errors, and adding additional covariates to address omitted variables bias. For those to whom that explanation does not simplify things, let me just say that these methods are taught in introductory econometrics courses at university, including my own, and were long ago proven to be ‘consistent’, a technical term meaning they are as precise and unbiased as possible given the available data.”
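For those who’d like to see what those off-the-shelf econometric tools look like in code, here is a minimal sketch of instrumental-variables (two-stage least squares) regression with HAC standard errors using the Python linearmodels package. The variables and the choice of instrument below are illustrative placeholders of our own, not McKitrick’s data or his actual specification:

```python
# Sketch of IV (two-stage least squares) regression with HAC standard errors,
# the kind of "consistent" estimator described above. All data are invented.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(1)
n = 120
instrument = rng.normal(size=n)                       # placeholder instrument
signal = instrument + rng.normal(scale=0.5, size=n)   # regressor measured with error
covariate = rng.normal(size=n)                        # extra covariate against omitted-variable bias
y = 0.6 * signal + 0.3 * covariate + rng.normal(scale=0.4, size=n)

data = pd.DataFrame({
    "y": y, "signal": signal, "covariate": covariate, "instrument": instrument,
})
data["const"] = 1.0

# Arguments: dependent variable, exogenous regressors, endogenous regressor(s),
# instruments. cov_type="kernel" requests HAC (Newey-West style) standard errors.
model = IV2SLS(data["y"], data[["const", "covariate"]],
               data[["signal"]], data[["instrument"]])
result = model.fit(cov_type="kernel", kernel="bartlett")
print(result.summary)
```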

McKitrick put together a data set on observed global temperature changes since 1900 and matched it to predictions from climate models. He then ran the IPCC-style optimal fingerprinting analysis and got the usual answer: greenhouse forcing explains all the warming, and while natural variability might play a role, it is not statistically significant. But he then ran standard statistical tests on the model results, something climate scientists never do, and showed that those answers are biased and unreliable.
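For the curious, “standard statistical tests” here means the routine diagnostics any econometrics student runs after a regression, such as checking the residuals for heteroskedasticity and serial correlation, either of which would make the reported confidence intervals untrustworthy. A minimal sketch with statsmodels, again on invented data rather than the paper’s:

```python
# Sketch of routine post-regression diagnostics: tests for heteroskedasticity
# (White) and serial correlation (Breusch-Godfrey) in the residuals of a
# fingerprinting-style regression. All data here are invented placeholders.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white, acorr_breusch_godfrey

rng = np.random.default_rng(2)
n = 120
x = np.column_stack([np.linspace(0, 1.2, n), 0.2 * np.sin(np.arange(n) / 8.0)])
y = x @ np.array([0.6, 0.8]) + rng.normal(0, 0.15, n)

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()

# Small p-values signal heteroskedastic or autocorrelated residuals, i.e. the
# regression's standard errors cannot be taken at face value.
_, white_p, _, _ = het_white(res.resid, X)
_, bg_p, _, _ = acorr_breusch_godfrey(res, nlags=4)
print(f"White test p-value: {white_p:.3f}, Breusch-Godfrey p-value: {bg_p:.3f}")
```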

He then ran the consistent method and got new results that showed the GHG effect in climate models needed to be scaled down by more than half, and the role of natural variability needed to be scaled up about four-fold, to fit the observed changes. He then repeated the analysis just using the post-1980 data. This time he found the GHG effect in models still needed to be scaled down by about one-third, and the natural variability effect needed to be scaled up by about 2.4-fold, to match the observations.

Then he used his results to work out an estimate of climate sensitivity to CO2, the famous ECS (equilibrium climate sensitivity) showing how much absolute warming in degrees Celsius will result from a relative doubling of atmospheric carbon dioxide. In many ways ECS, if indeed it’s a constant, is the holy grail of climate science. But when he ran that test McKitrick arrived at the same number UK statistician Nic Lewis had estimated in a 2022 study that we reported on here, a number which, as we mentioned at the time, is much lower than the IPCC’s estimate and implies that CO2 is not nearly the problem alarmists have been claiming.
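As a purely schematic illustration of how a scaling factor gets translated into a sensitivity number (our arithmetic, not necessarily the exact calculation in the paper): if the regression says a model’s greenhouse response has to be cut roughly in half to match observations, the implied ECS is roughly that model’s own ECS cut in half.

```python
# Purely illustrative arithmetic: translating a GHG scaling factor into an
# implied climate sensitivity. The numbers below are hypothetical placeholders,
# not figures from McKitrick's paper or from Lewis's 2022 study, and the real
# translation is more involved than a simple rescaling.
model_ecs = 3.5           # a model's own equilibrium climate sensitivity, in deg C
ghg_scaling_factor = 0.5  # hypothetical: regression says halve the model's GHG response

implied_ecs = ghg_scaling_factor * model_ecs
print(f"Implied ECS: {implied_ecs:.2f} deg C per doubling of CO2")
```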

A lot of observers including us have long suspected that the IPCC’s knack for always finding CO2 responsible for everything was probably an artifact of dodgy statistical practice rather than real-world effects. McKitrick’s new analysis confirms our suspicions and adds to the evidence that in the real world outside climate models CO2 is not the control knob on global climate and natural variability matters a lot more than we’ve been told.

And it confirms our suspicions that, far from being “settled”, rigorous and, oh yeah, “simple”, climate science neglects basic tools of the trade, from sound statistics to the practice of testing hypotheses by trying to disprove them rather than confirming them at all costs.

One comment on “Climate fingerprinting done right”

  1. I watched a YouTube video of McKitrick explaining the original 1999 paper within which these false data sets were used to arrive at suspicious CO2 responsibilities. It's cogent, logical and well explained, aimed at numerate but essentially lay people. It's compelling, and I advise anyone interested to search for it on the website.
