A piece in “The Conversation,” whose slogan is “Academic rigour, journalistic flair”, just hyped the sexy new “attribution science” that claims to attribute specific weather events to climate change even though you can’t. For instance, the July 2021 European floods: “A team of climate scientists with the group World Weather Attribution analyzed the record-breaking storm, dubbed Bernd, focusing on two of the most severely affected areas. Their analysis found that human-induced climate change made a storm of that severity between 1.2 and nine times more likely than it would have been in a world 1.2 degrees Celsius (2.1 F) cooler.” Sure, if you say so. But since they never predict things, only jump up after the fact and claim to have an explanation for what just happened, there’s nothing to go on but their say-so. And even so, all we get is a guess that something is somewhere between 1.2 and nine times more likely. If that’s your best estimate of the odds that you will roll a 7 before you roll another 6, you must not go to Las Vegas.
It is in fact 1.2 times more likely that you will “crap out” before “making your point” if your point is six. But people read such articles, and indeed write them, and do not realize that what they’re looking at is less reliable than a monkey throwing darts. Which we also strongly urge you to avoid, BTW.
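In case you’d rather check our craps arithmetic than take our word for it (and we just told you not to take people’s word for things), here is a quick sketch of our own, assuming ordinary fair dice and having nothing to do with the attribution study itself: a 7 can be rolled six ways out of 36 and a 6 only five ways, so the odds of the 7 turning up first are 6 to 5, which is that same 1.2.

```python
# Our own back-of-the-envelope check, not anything from The Conversation piece:
# with fair dice a 7 has 6/36 ways to appear and a 6 has 5/36, so
# P(roll a 7 before a 6) / P(roll a 6 before a 7) should come out to 6/5 = 1.2.
import random

def prob_seven_before_six(trials=100_000):
    sevens_first = 0
    for _ in range(trials):
        while True:
            roll = random.randint(1, 6) + random.randint(1, 6)
            if roll == 7:
                sevens_first += 1
                break
            if roll == 6:
                break
    return sevens_first / trials

p7 = prob_seven_before_six()
print(f"P(7 before 6) is about {p7:.3f}")        # ~0.545, i.e. 6/11
print(f"Ratio of the two is about {p7 / (1 - p7):.2f}")  # ~1.20, i.e. 6/5
```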
In this case the author is a “climate scientist” and he insists that “A decade ago, scientists weren’t able to confidently connect any individual weather event to climate change, even though the broader climate change trends were clear. Today, attribution studies can show whether extreme events were affected by climate change and whether they can be explained by natural variability alone. With rapid advances from research and increasing computing power, extreme event attribution has become a burgeoning new branch of climate science.”
As Eric Worrall comments tartly, “How can anyone say with a straight face that attribution science is adding value to the conversation, when the best they can achieve is an uncertainty of 900%, and a bottom limit of no change in severity whatsoever? The bottom limit of 1.2 times worse seems indistinguishable from business as usual, or even a slight reduction in the severity of weather, with an uncertainty of that magnitude. … in my opinion a branch of scientific analysis which apparently cannot distinguish between an unfolding catastrophe and business as usual is way too immature to add any value to the public discussion about climate policy.”
It’s like the headline “Climate change: Up to 95% of ocean surface climates may disappear by 2100” and then you read the press release text and it turns out that “Between 35.6% and 95% of 20th century ocean surface climates — defined by surface water temperature, pH and the concentration of the mineral aragonite — may disappear by 2100, depending on how greenhouse gas emissions develop in the first half of the 21st century”. That the authors are using RCP4.5 and RCP8.5 instead of something that might really happen is not far short of fraud at this point. As is not knowing whether it will be one in three of them or nearly all of them and still claiming to know anything. And “The authors conclude that while some marine species currently keep pace with changing ocean climates by dispersing to new habitats, this may no longer be possible if existing ocean climates disappear, forcing species to either adapt rapidly to new climates or disappear.” In fact the “climates” may change, as they always have. But they won’t disappear. (The fine print speaks of “new climates with higher temperatures, more acidic pH, and lower saturation of aragonite.” Sure. But how much higher?) We’ll always have the ocean.
Then there’s the whole issue of how often even “peer reviewed” research is based on numbers so bad that a former editor of the British Medical Journal just declared that “It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary”. On the other hand, when your margin of uncertainty is a factor of nine, way bigger than your result, what does it matter if the data is also rubbish?