Among the devices used to shut down open discussion on climate is the elevation of “peer review” to a silver bullet. Unfortunately it seems to be a clay ball instead. As Colby Cosh just wrote in the National Post, “Psychological science has been suffering a ‘replication crisis’ for a decade now, but this week there was a particularly newspaper-friendly example of a famous finding failing to hold up on re-examination.” Specifically, a famous, often-cited paper on dishonesty relied on faked data. As Cosh says, “The lede writes itself”.
The paper in question was published in 2012, reporting that insurance-company customers were more honest about their odometer readings if asked up front to promise not to lie. Unfortunately, when the data was released, in an Excel spreadsheet, after a failed 2020 effort to replicate the result, it turned out to have been generated using… Excel’s random-number generator. So it wasn’t just bogus, it was badly faked.
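The actual forensic analysis was far more careful than this, but the core giveaway of RNG-generated data is simple: a uniform random number generator spreads values flatly across its range with a hard cutoff, whereas real-world quantities like miles driven pile up around a typical value. A loose illustration in Python (all numbers and distributions here are hypothetical, not the investigators’ code or the study’s data):

```python
import random

random.seed(0)

# Hypothetical illustration: "miles driven" faked with a uniform RNG,
# versus a more plausible bell-shaped distribution of real driving.
fake = [random.uniform(0, 50_000) for _ in range(10_000)]
real = [random.gauss(12_000, 4_000) for _ in range(10_000)]

def bucket_counts(data, width=5_000, top=50_000):
    """Histogram counts in fixed-width buckets from 0 up to `top`."""
    counts = [0] * (top // width)
    for x in data:
        if 0 <= x < top:
            counts[int(x // width)] += 1
    return counts

fake_counts = bucket_counts(fake)
real_counts = bucket_counts(real)

# Uniform RNG output fills every bucket about equally; realistic data
# concentrates near its mean. The max-to-min bucket ratio exposes the
# difference without any heavy statistics.
print(max(fake_counts) / min(fake_counts))              # close to 1: suspiciously flat
print(max(real_counts) / max(min(real_counts), 1))      # much larger: natural shape
```

No peer reviewer ran anything like this on the original submission; it took outsiders with the raw spreadsheet to notice.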
As Cosh says, “of course, this is science working the way it’s supposed to, in principle. The only reason people were combing over an old result was that everyone is now aware of the ‘replication crisis’ and there is more funding available for replications of celebrated psychology experiments (which often turn out to be turkeys).” But what about the test scores behind, oh, say, Michael Mann’s hockey stick? Contrary to what his defenders say, Mann has never released them (though others long ago figured out what they would show if he did).
According to Cosh, the outcome with regard to this dishonest study of dishonesty was unusual partly because “It’s unusual that the bad Excel sheet was preserved for a decade, and unusual that it was ever openly distributed. This is not yet a strong ethical norm in the soft sciences… Many economics journals now insist on this, they add, but social psychology largely doesn’t.” Which surely underlines that “peer review” does not mean what the naïve think it does, namely that experts in a field take the original study and try to poke holes in it using elaborate methodology and the original data. Instead they skim it and go “yeah, could be”, especially if it’s also “pal review”, where they know and like the author and in due course will have them peer-review something of their own.
Claiming magical powers for “peer review” is by no means the dirtiest trick in the alarmists’ bag. But it is an example of a mix of credulity on their part and cynical exploitation of the credulity of others. Which is definitely dirty. Real sciences are increasingly confronting the problems of replication and peer review. Time for climate science to do so too.