This not just in: variations in solar cycles affect climate in measurable ways. In fact, another interesting tidbit from Ron Barmby’s overview Sunlight on Climate Change (p. 50) is that the phenomenon has been public knowledge since Adam Smith. Literally. “From 1779 to 1818, a world-famous astronomer, William Herschel, noticed a correlation between sunspots and the price of wheat in England, as independently published by Adam Smith in the Wealth of Nations. Years with more sunspots produced better harvests, and the wheat price dropped.” It is not clear what to say about this fact, now with us for 250 years. But silence clearly won’t do.
It is hard to believe that anyone would deny that the sun affects climate. It’s kind of obvious that the giant hot yellow ball up in the sky plays a major role in how hot it is, and that it varies by time of day and of year. And that it actually supplies 99.99% of all the heat energy on the planet. Which in some sense everyone concedes.
Having done so, many proceed to walk it back, by excluding the sun from their models. Or rather, excluding solar variability. They admit the sun is there, and shining, but make it a constant source of a basically invariant quantity of radiation.
There are other odd things about the way the models treat the sun. In keeping with the assumption that climate is stable unless bad people mess it up, they assume that outgoing radiation matches incoming radiation, keeping temperature at whatever the ideal level is, or at least that it did until the mid-20th century. And the number of things wrong with this claim is so large that it’s hard to know where to start. Although one tempting place is the ever-present, ever-ignored margin of uncertainty or, in technical terms, the “error bars”.
As we observed last year, satellites measure incoming energy only to within +/- 4 Watts per square metre, that is, with a potential variation of 8 W/m2. And yet we are then told that incoming energy is 342 W/m2, and outgoing also 342, except that man-made CO2 is now decreasing the latter figure by 0.1. A figure whose spurious precision conceals that it is just one-eightieth of the margin of uncertainty.
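The arithmetic behind that “one-eightieth” is straightforward; a minimal back-of-envelope sketch, using only the figures quoted above (the ±4 W/m2 satellite uncertainty, the 342 W/m2 flux and the claimed 0.1 W/m2 CO2 signal, none of them independently verified here):

```python
# Figures as cited in the text above (not drawn from any dataset).
incoming = 342.0         # claimed incoming radiation, W/m^2
uncertainty = 4.0        # satellite measurement uncertainty, +/- W/m^2
band = 2 * uncertainty   # full width of the uncertainty band: 8 W/m^2
co2_signal = 0.1         # claimed reduction in outgoing radiation, W/m^2

# The signal is a tiny fraction of the band it must be detected within.
fraction = co2_signal / band
print(f"Signal is 1/{band / co2_signal:.0f} of the uncertainty band")
# The uncertainty band is also a noticeable slice of the total flux:
print(f"Band is {100 * band / incoming:.1f}% of the incoming 342 W/m^2")
```

Which is the point: the claimed effect is a factor of eighty smaller than the stated measurement uncertainty, and the uncertainty band itself is over two percent of the quantity being measured.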
Another issue is that if it’s really true that incoming and outgoing radiation were kept naturally in balance for a very long time, it means the system has important self-correcting features that ought to be able to cope with humanity’s fairly small addition to the natural carbon cycle. How that feature got switched off in the Nixon years has never been explained.
Then there’s the problem that we know, or think we know, that temperatures were rising from 1850 or even 1700 as the natural Little Ice Age ended. Again we’re not sure, because of the limitations of temperature measurements, including the Urban Heat Island phenomenon and some increasingly brazen adjustments of inconvenient temperatures, like the hot 1930s. But if it is true, even in part, then it’s obvious that incoming and outgoing radiation were not perfectly in balance. And they certainly weren’t over the thousands, and millions, of years in which no sane person disputes that temperatures changed significantly.
Which brings us back to Barmby, Smith and sunspots. Since the Little Ice Age, people have noticed that solar activity fluctuates, and that those fluctuations affect weather conditions on Earth. How did we ever get to the point that the scientists-who-say deny what Adam Smith blurted out a quarter of a millennium ago? And what shall we say about that bizarre phenomenon of discarding evidence to preserve the theory, the way, as modern cultural myth has it, people did in the Dark Ages of superstition before Enlightened science came along to seek truth from facts?