Chapter 8 in last summer’s U.S. Department of Energy contrarian climate report looks at the strange alchemy of attribution, the ways by which climate scientists gaze at the entrails of climate data and inevitably conclude that whatever happened, it’s your fault. The IPCC would have you know that they are very sure of their results. In fact the opening sentence of the 2021 AR6 Summary for Policymakers says “It is unequivocal that human influence has warmed the atmosphere, ocean and land.” Unequivocal, no less. Funny then that, as the DoE team points out, the selfsame IPCC in 2007 said “unequivocal attribution would require controlled experimentation with the climate system. Since that is not possible...” the best they could do, they explained then, was to run statistical models. And since these models can never account for all the uncertainties, “ultimately expert judgement is required”. So according to the IPCC, attribution of climate change to human causes is unequivocal. Also according to the IPCC, unequivocal attribution is not possible, so expert judgement is required. And expert judgement is to say that things that cannot be said should be said. If that fails, we’ll read the entrails.
The contrarian report’s Chapter 8 explains that there are several methods used to blame you for the bad weather. “Optimal fingerprinting” is the most common statistical method, but there are also other statistical methods and “process-based” methods, which are used more for regional case studies. Now “optimal fingerprinting” sounds pretty optimal. Fingerprinting catches the culprit. But there’s more to science than fancy words. And the DoE team discusses three areas where the IPCC’s conclusions have come in for heavy criticism: inadequate treatment of natural variability, dodgy statistical methods, and discrepancies between models and observations, the last of which the team discusses in a separate chapter.
The IPCC’s treatment of natural climate variability is, roughly speaking, to ignore it. Expert judgement, orthodox-enforcement style. For instance, they dismiss the sun as a source of climate warming. But the DoE team refers to studies that have concluded the sun might be playing an important role, one that goes undiscussed in IPCC Assessment Reports. They also discuss the effects of large-scale ocean circulations, which models have trouble replicating. And if you know where to look, the IPCC admits models do poorly at handling these crucial phenomena, which raises uncertainties in attribution findings. And here again the contrarian report points to studies that argue a lot of the observed warming since the 1970s is likely attributable to natural oscillations. Studies the IPCC expertly judges unworthy of attention.
As for the term “optimal fingerprinting”, it refers to a statistical method invented by climatologists some 25 years ago, on which they have relied heavily to conclude that climate warming is due to greenhouse gas emissions. The DoE report points out that climatologists have been using this method for a long time without supervision by statistical experts:
“there is very little literature examining the statistical properties of the results it generates. One of its inherent weaknesses is that results depend on assumptions about the accuracy of climate models, especially regarding their representation of natural variability.”
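To see what fingerprinting actually does under the hood, here is a minimal synthetic sketch. All the data are made up for illustration: the “signal” shapes, amplitudes, and noise level are assumptions, not output from any real climate model. The core idea is just a regression of observations on model-derived signal patterns, where the fitted coefficients (“scaling factors”) say how much each pattern must be stretched or shrunk to match observations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # e.g. 120 years of annual-mean anomalies (hypothetical)

# Hypothetical model-derived "fingerprint" patterns (assumptions, not real data):
# a slow greenhouse-gas trend and a multidecadal natural oscillation.
t = np.arange(n)
ghg_signal = 0.01 * t
nat_signal = 0.1 * np.sin(2 * np.pi * t / 60)

# Synthetic "observations": both signals at full strength, plus internal
# variability represented here as white noise.
true_beta = np.array([1.0, 1.0])
obs = true_beta[0] * ghg_signal + true_beta[1] * nat_signal \
    + 0.05 * rng.standard_normal(n)

# Fingerprinting reduces to regressing observations on the signal patterns;
# the fitted betas are the scaling factors. (Real studies use generalized
# least squares, weighting by a noise covariance estimated from model
# control runs -- which is exactly where the model-dependence enters.)
X = np.column_stack([ghg_signal, nat_signal])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print("estimated scaling factors:", beta)  # both close to 1 by construction
```

The point of the sketch is the last comment: the weighting step depends on the model’s own estimate of natural variability, which is the circularity the DoE quote is driving at.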
So the models confirm one another… or themselves. Data is another story.
The contrarian team discusses a series of papers published in climate journals since 2021 by Canadian economist Dr. Ross McKitrick, who showed that the math behind optimal fingerprinting is flawed and its results unreliable. According to his analysis, if the numbers are handled correctly:
“the model response to greenhouse gases needs to be scaled down by about half to optimally match observations. The natural forcing signal coefficient, by contrast, ...natural forcing need[s] to be scaled up two- to four-fold to match observed climate change.”
So models overstate the effect of CO2 and underestimate the effect of natural variability, and other models that make the same errors duly agree with them. McKitrick’s analysis, for its part, closely matches results by Nic Lewis showing much lower real-world climate sensitivity to greenhouse gases than models usually claim.
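One strand of the statistical critique is easy to demonstrate directly: if natural variability is persistent (autocorrelated) but your significance test assumes white noise, you will “detect” trends far more often than the nominal 5%. The sketch below is a generic simulation of that effect, not a reproduction of McKitrick’s actual calculations; the AR(1) persistence value of 0.9 and the sample size are assumptions chosen to make the problem visible:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 2000
x = np.arange(n) - (n - 1) / 2.0  # centered time index (trend regressor)

false_positives = 0
for _ in range(trials):
    # Trendless AR(1) "internal variability" with strong persistence.
    e = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = e[0]
    for i in range(1, n):
        y[i] = 0.9 * y[i - 1] + e[i]

    # Naive OLS trend test that wrongly assumes white-noise residuals.
    beta = x @ y / (x @ x)
    resid = y - beta * x
    se = np.sqrt(resid @ resid / (n - 1)) / np.sqrt(x @ x)
    if abs(beta / se) > 1.96:  # nominal 5% two-sided test
        false_positives += 1

rate = false_positives / trials
print("false-positive rate:", rate)  # well above the nominal 0.05
```

The same logic applies to fingerprint regressions: understate the persistence of natural variability and the confidence intervals on the scaling factors come out too narrow.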
Another set of tools is called Time Series Analysis. One of its applications (which we discussed here) shows that over the 440,000 year Vostok ice cores, warming precedes CO2 changes but not vice versa, undermining the usual claims about the direction of causality.
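The lead–lag technique behind that kind of result can be sketched in a few lines. Everything below is synthetic; the series are stand-ins, not Vostok data, and the lag of 8 steps is an arbitrary assumption. The method is simply to compute the cross-correlation between the two series at a range of lags and see which side of zero the peak falls on:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Synthetic stand-ins for ice-core series (assumptions, not real data):
# "temperature" wanders slowly; "CO2" follows it with a fixed delay,
# mimicking a temperature-leads-CO2 relationship.
temp = np.cumsum(rng.standard_normal(n))
lag_true = 8
co2 = np.roll(temp, lag_true) + 0.5 * rng.standard_normal(n)
co2[:lag_true] = co2[lag_true]  # pad the wrapped-around start

def lag_corr(a, b, k):
    """Correlation of a(t) with b(t + k)."""
    if k > 0:
        return np.corrcoef(a[:-k], b[k:])[0, 1]
    if k < 0:
        return np.corrcoef(a[-k:], b[:k])[0, 1]
    return np.corrcoef(a, b)[0, 1]

# Scan lags; a peak at positive k means temperature leads CO2.
best = max(range(-20, 21), key=lambda k: lag_corr(temp, co2, k))
print("correlation peaks at lag", best)  # positive: temperature leads CO2
```

Real analyses must also contend with dating uncertainty in the ice cores and with the fact that lead–lag correlation alone does not settle causality, which is why such results remain contested.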
In sum, no matter which technique is used, attribution studies end up being weaker and more uncertain than the IPCC likes to claim. Which is why a Red Team was necessary in the first place. Because expert judgement in these matters has surrendered to dogmatism.
Next week: the case of the mysteriously declining albedo.



"So models overstate the effect of CO2 and underestimate the effect of natural variability" ... models do overstate the effects of CO2, but they understate the effects of less air pollution and the decline in the percentage of cloudiness ... the change in air pollution is man-made, but it is not known if the change in cloudiness is man-made or natural ... it is not even known if the change in cloudiness is a climate forcing or a climate feedback ... the models do not ignore natural causes of global warming ... there is far less evidence of natural warming in the past 50 years than the evidence for man-made warming ... science deniers falsely believe that CO2 does little or nothing
"over the 440,000 year Vostok ice cores, warming precedes CO2 changes but not vice versa, undermining the usual claims about the direction of causality."
The usual silly conservative science, not recognizing the difference between CO2 as a climate feedback to changes in ocean temperature and man-made CO2 emissions as a relatively new climate forcing.
I've been reading this myth since 1997
it's so old it's turning grey