1. Dr John Clauser’s ICSF/CLINTEL Lecture
On 5th October 2024, Dr John Clauser, winner of the Nobel Prize for Physics in 2022, gave a lecture to the Irish Climate Science Forum (ICSF) and the Climate Intelligence Foundation (CLINTEL) that was promptly denounced by climate scientists around the world and led to the cancellation of a presentation he was due to give to the IMF. The lecture, which you can find here, is in two parts, the first of which is devoted to a detailed argument to the effect that the current consensus that the earth's climate is warming at a critical rate – or indeed warming at all – is unsupported by the evidence.
The argument is centred on what is often referred to as radiative flux, which concerns fluctuations in the balance between the shortwave energy entering the earth's atmosphere in the form of sunlight and the two forms in which this energy is then returned to space: shortwave energy reflected from the upper surfaces of clouds – known as the albedo effect – and longwave infrared radiation emitted from the earth's surface after it has been warmed by the sun.
As will be obvious but must nevertheless be stated, the importance of this balance is that, if the amount of energy entering the atmosphere exceeds the net amount of energy leaving it, this will result in global warming – indeed, it is the definition of ‘global warming’ – whereas if the amount of energy leaving the atmosphere exceeds the amount of energy entering it, this will result in global cooling. This issue is therefore fundamental to the whole climate debate. For there is no point in trying to determine why the earth is warming – whether it be due to an increase in greenhouse gases, for instance – unless we know that the warming is actually happening. Similarly, there is no point in trying to determine what is causing an increase in greenhouse gases – whether it be due to the burning of fossil fuels by human beings, for instance – unless we are first certain that there is indeed an imbalance between incoming and outgoing radiation.
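In symbols – my own gloss, not Dr. Clauser's notation – the balance at the top of the atmosphere can be written as:

```latex
\Delta F = F_{\mathrm{SW,in}} - \left( F_{\mathrm{SW,reflected}} + F_{\mathrm{LW,out}} \right),
\qquad \Delta F > 0 \Rightarrow \text{warming}, \qquad \Delta F < 0 \Rightarrow \text{cooling}.
```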
The problem is that determining whether or not this imbalance exists is a lot more difficult than one might imagine. For all three of the variables in the equation – the one form of energy coming in and the two forms of energy going out – are constantly fluctuating. Due to the earth's eccentric orbit, even the amount of sunlight reaching the atmosphere varies throughout the year, which, in turn, has an effect on cloud cover, thereby affecting the amount of shortwave radiation being reflected back into space. Both of these phenomena then affect the earth's surface temperature and the amount of infrared radiation being radiated from it.
The problem is then made even more difficult by the fact that, while the absolute amounts of energy both coming in and going out are huge – around 340 W/m2 (watts per square metre), according to the IPCC – the imbalance is only 0.7 ± 0.2 W/m2, or just 0.2% of the input energy, which Dr. Clauser argues is far too small to measure with sufficient accuracy to be certain whether there is an imbalance at all.
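To see the scale of the problem, here is the arithmetic from the figures just quoted, as a minimal Python sketch (the required-accuracy framing in the final comment is mine):

```python
# The IPCC figures quoted above: huge gross flows, a tiny claimed imbalance.
incoming = 340.0     # W/m^2, mean solar energy entering the atmosphere
imbalance = 0.7      # W/m^2, claimed net imbalance
uncertainty = 0.2    # W/m^2, stated uncertainty on that imbalance

print(f"imbalance as a fraction of input:   {imbalance / incoming:.2%}")    # ~0.21%
print(f"uncertainty as a fraction of input: {uncertainty / incoming:.3%}")  # ~0.059%

# Dr. Clauser's point: to confirm a 0.7 W/m^2 signal riding on 340 W/m^2
# flows, every instrument in the chain must be absolutely calibrated to a
# few tenths of a percent or better.
```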
This argument he then backs up by examining a series of studies used by the IPCC in making its assessment, most of which sampled data over fairly short periods – typically three or four months – which is not enough time to measure radiative flux over one complete circuit of the earth's eccentric orbit, during which the amount of sunlight reaching the earth actually varies by ± 9 W/m2. That is to say that it can be as much as 349 W/m2 or as little as 331 W/m2. Assuming that the output energy is not entirely determined by this variable input, which Dr. Clauser assures us is the case, it follows that if the data used by a study is obtained during that part of the orbit when the earth is closest to the sun and receiving the maximum amount of sunlight, it is highly likely to show a positive imbalance, whereas if the data is obtained during that part of the orbit when the earth is furthest from the sun and receiving the minimum amount of sunlight, it is far more likely to show a negative imbalance.
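As a rough check on that orbital figure – my own back-of-envelope calculation, not Dr. Clauser's – sunlight falls off with the square of the earth–sun distance, which ranges between a(1−e) and a(1+e), where e ≈ 0.0167 is the textbook value of the orbital eccentricity:

```latex
\frac{S_{\max}}{S_{\min}} = \left( \frac{1+e}{1-e} \right)^{2} \approx 1 + 4e \approx 1.07,
```

a swing of roughly 7% peak to peak, or about ±11 W/m2 on a 340 W/m2 mean: the same order as the ±9 W/m2 cited, the exact figure depending on how the global averaging is done.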
It is also worth noting that the eccentricity of the earth's orbit is not constant, varying with the position and gravitational influence of the other planets in the solar system, particularly Jupiter and Saturn. Sometimes, as a consequence, its elliptical path is stretched out, taking it even further from the sun, while at other times the orbit is almost circular. This also means that the sun is never at the centre of the orbit but sits at one focus of the ellipse, so that the earth is closer to it at one end of the orbit than the other, the nearest and farthest points being known respectively as the perihelion and the aphelion. What all this means, therefore, is that, in order to gain an accurate view of the earth's fluctuating energy inputs and outputs, one has to obtain more than one full year of measurements, the point being that, one year, the flux may show a positive imbalance, the next year a negative one. This being the case, it is also possible that, even if there is currently a positive imbalance, it might all balance out over time. Based on sample data from just three or four months, however, it is impossible to tell.
What is particularly surprising, therefore, is that, since the first two studies in the series – by Stephens et al. in 1981 and Ramanathan in 1987 – no one has attempted to rectify this problem by extending the sampling period. Instead, all subsequent studies have taken the extraordinary step of augmenting the work of Stephens and Ramanathan – both of whom based their calculations entirely on Top of Atmosphere (ToA) data obtained from satellites – with Ocean Heat Content (OHC) data, which, unless one has a perfect understanding of the relationship between the two, runs the risk of comparing 'apples to oranges'.
Not that there weren’t some seemingly good reasons for attempting this methodological hodgepodge, the first of which was that, having sampled data from two different years, six years apart, the findings of the first two studies were extremely inconsistent, with Stephens et al. concluding that there was a positive radiative imbalance of 9 W/m2, while Ramanathan claimed to have found no imbalance at all. In fact, the discrepancy, as Dr. Clauser points out, is actually slightly worse than this. For while Stephens et al. did, in fact, find a positive imbalance, if one checks the figures, it was only 6.8 W/m2, while Ramanathan actually found a negative imbalance of -3 W/m2.
This led Loeb et al., in their 2009 paper, to conclude that it was impossible 'to provide an absolute measure of ToA radiation imbalance to the required level of accuracy': an assertion which is true if one only samples three or four months' worth of data. The fact that no one thought to extend the duration of the study, therefore, leads one to suspect that the real reason all subsequent studies have chosen to mix and match datasets has been to avoid the fate of Professor Ramanathan, whose findings were cited in a 2003 report by the US National Academy and National Research Council as not meeting 'quality standards'.
2. Problems in the Culture of Modern Science
Importantly, it should be pointed out that Dr. Clauser does not prove that there is no imbalance between the radiation entering the earth’s atmosphere and the radiation leaving it or that the earth is not therefore warming, whether that be as a result of human beings burning fossil fuels or some other cause. All he is saying is that the standard of the science involved in these studies is so poor that one cannot conclude anything from them at all. The data samples are too small, the combining of datasets too methodologically suspect and the mathematical errors too numerous for the findings to carry any weight.
The question, therefore, is why the conclusions drawn from these studies should have been allowed to pass unchallenged and form the basis of the IPCC's recommendations on combating climate change. In this regard, however, we need to tread very carefully. In the past, I have suggested that one of the primary impulses behind the global warming scare lay in the interests of certain organisations, especially NASA, which, after the end of the Apollo programme and the mid-air loss of the space shuttle Challenger, needed to repurpose itself. But this is not to say that there was anything cynical about its choice of climate change as the vehicle for achieving this, or that Dr. James Hansen, Director of NASA's Goddard Institute for Space Studies (GISS), who led NASA's climate change programme, did not genuinely believe in the Anthropogenic Global Warming (AGW) theory he presented to Congress in 1988.
It may well be that once NASA had jumped on this bandwagon there was no turning back, and that some senior executives may have understood this; but to suppose that everyone involved was – and still is – complicit in some grand deception is not only very unlikely but a gross oversimplification of something that is far more complicated: something, indeed, that starts with the nature of science itself.
I say this because, as I have reminded readers of this blog on more than one occasion, according to Sir Karl Popper’s famous dictum, first put forward in ‘The Logic of Scientific Discovery’, originally published in German in 1934, no scientific theory can ever be proven, only disproven or falsified. This is because, no matter how long a theory has withstood the test of time, it is always possible that tomorrow we may come across empirical evidence that proves it false. The problem is that, while all scientific theories are thus provisional, in that they can never be proven, in practice it is also very difficult to disprove them.
This is because, if we come across evidence casting doubt on a particular scientific theory, we do not immediately assume that the theory is false. We check the data again, rerun the test, and even if the results remain the same, we then try to think of reasons why this anomaly should have occurred, explaining it perhaps in terms of a particular set of conditions which make it an exception to the rule rather than something that overturns the rule. It is only when these exceptions start to build up that we begin to question the theory itself and, even then, history is littered with scientists who have stubbornly adhered to their favoured theory long after it had passed its sell-by date, Joseph Priestley probably being the best known example.
Nor is it hard to understand why this happens. For most successful scientists are intensely invested in their work. It is not just their careers that are at stake if a theory with which they are associated is undermined; it's also their reputations, their legacy. It's why scientific disputes can become so heated and why some scientists will grasp at almost any straw to defend a theory with which they are closely identified. One of the most vociferous supporters of combining ToA and OHC data in calculating radiative flux, for instance, was James Hansen, who would have been only too well aware that if someone were to undertake a study of sufficient duration based entirely on ToA data, there is a chance that, like Professor Ramanathan, they would discover a negative imbalance or, far more likely, no imbalance at all, thereby bringing the AGW theory into question and very possibly destroying Dr. Hansen's reputation. He would never have admitted that such a consideration influenced his judgement, of course, and we do not know that it did. The point, however, is that scientific research is a human endeavour and, without rigorous and objective oversight – an issue about which Dr. Clauser is much exercised – it will inevitably reflect far more human failings and weaknesses than those with a more idealistic view of science care to acknowledge.
Indeed, the temptation to place one’s reputation above one’s objectivity – to which I have no doubt that at least some prominent scientists have occasionally succumbed – is only one of at least three different ways in which human nature affects both the character and quality of modern science. The second, and arguably the more pervasive, comes about as a result of the fact that, today, nearly all scientific research is carried out, not by individuals, but by teams, which are almost invariably led by a prominent scientist with an established or growing reputation in the field, who will also very probably be responsible for obtaining the team’s funding. As such, it will be this person who not only sets the overall direction for the team and decides on the particular lines of research to be followed, but also selects who gets to have a place on the team, thus giving him or her almost god-like power.
What this means, therefore, is that it takes a lot of courage, along with the accumulation of deep concerns about either the direction of the project or the quality of the work being done, before anyone lower down the order will even consider questioning the team principal's judgement – not only because doing so could well jeopardise his or her career, but because other members of the team may feel betrayed or even threatened by it, thereby damaging group cohesion and sowing resentment.
In fact, as soon as scientific research becomes a group activity, in which both competitiveness and loyalty play a part in the group dynamics, it becomes almost impossible for the critical and objective oversight which all scientific research needs to come from within the team itself; it has to come from outside. The problem is that this external oversight has traditionally been provided by a system of 'peer review', which Dr. Clauser quite rightly believes is breaking down, if it hasn't already completely collapsed.
The primary reason for this is that, in order to continue receiving funding, a research team has to be able to demonstrate results, which it can only do by getting those results published. If a team has previously failed to publish the results of its work, funding bodies tend to take the view that those providing the funding, typically taxpayers, are not getting value for money and are consequently less willing to provide more. This puts enormous pressure, not just on team principals, but on publishers to publish.
The problem with this, however, is that most scientific papers have very few readers, especially in highly specialised fields, which means that most scientific publishers are unable to make a profit, or even cover their costs, from readers' subscriptions alone, even when the publications are solely online. This has led to an inversion of the normal economic relationship between the producers of a product and its consumers. Instead of the consumers – in this case the readers – paying for what they consume and the producers – in this case the authors – being paid for what they produce, it is now the authors who pay for their papers to be published, while the readers largely read them for free. This arrangement is only made possible by the fact that the authors include the cost of publication in their applications for funding, which the funding bodies are happy to accept because it ensures that the results of the research they are funding will be published, covering them against any accusation of wasting taxpayers' money on research which doesn't produce results.
Because most of the funding comes from those who are not involved, moreover, everyone inside the system is perfectly happy with it, especially the publishers, who are now spared the burden of having to be selective about which papers they publish. In fact, the more papers they publish, the more they get paid, even if no one reads them: a clear contravention of a basic law of economics, which has inevitably led to a seemingly inexorable growth in the number of scientific papers published each year – rising to 5.14 million worldwide in 2022, more than 4 million of them in English and most of them paid for by taxpayers.
Regardless of whether or not this is a species of fraud upon the public, however, an even bigger problem is the fact that, while the quantity of scientific papers published each year has continually increased, their quality has almost certainly declined, not least because the last thing publishers want is for some over-zealous peer reviewer to reject a paper merely because the figures in it don't add up, as in the case of the papers by Stephens et al. and Ramanathan cited earlier. As peer reviews are conducted on a purely voluntary basis, without financial remuneration, reviewers have every reason to be lenient or charitable, knowing that the author of the paper they are reviewing this week may well be reviewing their own paper next week. The scientific world being as small as it is, especially in highly specialised fields, the reviewer and reviewee may even know each other and will certainly have closely aligned views; otherwise the publisher would not choose them to review each other's papers. Indeed, many papers today are almost certainly accepted for publication purely on the basis that the lead author is known to the reviewer, who may do little more than read the summary paragraph at the beginning of the paper, just to ensure that there are no surprises in it.
What is really disheartening about this whole situation, however, is the fact that no one actually expects any surprises. For given the fact that most scientific papers today are written and published for the sole purpose of securing future funding, the last thing the authors want to do is rock anyone’s boat. They will want to add something new to their field of research, of course, so as to justify their work, but the one thing of which one can be sure is that it won’t be anything controversial.
3. Dr. Clauser’s ‘Thermostat’ Theory of Climate Regulation
If the way in which science has been structured over recent years has thus produced a level of autocracy, cronyism and sheer sloppiness, in which negligence and fraud are not uncommon, the real losers in all this are not just those mavericks who refuse to toe the line – if any there be – but we the public, who pay to maintain a scientific culture which has become increasingly moribund, with whole areas of research being declared off limits. The prime example is climate science, where anyone wanting to research any aspect of our climate that is not related to manmade global warming will get pretty short shrift, and where, despite all the money being poured into it, no significant progress has been made for over thirty years.
Of course, it will be said that this is because the science is 'settled'. What this really means, however, is that anyone with an alternative theory is effectively excluded from the climate debate, including Dr. Clauser, who, in the second part of his lecture, does in fact put forward an alternative theory of how our climate works: one which not only explains why, despite the presence of greenhouse gases in the atmosphere, there is no radiative imbalance, but also why the earth's climate has been stable for long enough for life to have evolved here.
The answer, he says, lies in the fact that over 70% of the earth's surface is covered in water, which means that over 70% of the sun's energy reaching the earth's surface is used not to heat the land but to evaporate water. The warm, moist air which results then rises into the colder air above, where its energy is transferred to the other gases in the atmosphere, primarily nitrogen and oxygen. This causes the water vapour formed by evaporation to cool and condense, first into aerosols and then into droplets, which form clouds, the upper surfaces of which reflect more of the sun's shortwave energy back into space, reducing the amount of energy reaching the earth's surface and hence the amount of water evaporated. In a continuous feedback loop, this then leads to fewer clouds being formed, more of the sun's energy reaching the earth's surface and more evaporation. In short, what Dr. Clauser suggests is that the earth's climate is self-regulating, with clouds working rather like a thermostat, either allowing more energy to reach the earth's surface or reflecting more of it back into space, thus keeping the climate more or less stable.
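To make the logic of the feedback concrete, here is a deliberately crude toy model – my own illustration, with invented coefficients, not Dr. Clauser's mathematics – in which cloud albedo rises with surface temperature and the system settles to the same equilibrium from any starting point:

```python
# A toy cloud 'thermostat': warmer surface -> more evaporation -> more cloud
# -> higher albedo -> less absorbed sunlight. All coefficients are invented
# purely for illustration; this is not Dr. Clauser's model.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 340.0         # mean incoming shortwave at the top of the atmosphere, W/m^2

def albedo(temp_k):
    # Hypothetical cloud response: albedo rises with temperature, clamped.
    return min(0.9, max(0.1, 0.30 + 0.005 * (temp_k - 288.0)))

def step(temp_k, dt=0.1):
    absorbed = S * (1.0 - albedo(temp_k))       # shortwave actually absorbed
    emitted = 0.61 * SIGMA * temp_k**4          # crude greenhouse factor 0.61
    return temp_k + dt * (absorbed - emitted)   # relax toward balance

for t0 in (278.0, 288.0, 298.0):
    temp = t0
    for _ in range(2000):
        temp = step(temp)
    print(f"start at {t0:.0f} K -> settles near {temp:.1f} K")
```

The point of the sketch is simply that a sufficiently strong negative feedback pins the temperature wherever the system starts, which is all a 'thermostat' needs to do.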
Importantly, this is not the first time this theory has been put forward. I first heard it, I think, more than ten years ago. The difference is that Dr. Clauser is able to express it with far more mathematical precision than I have seen or heard before, with the result that one cannot help but be impressed by the sheer scale of this thermostatic feedback system. I say this because, even according to the IPCC's own figures, the amount of energy reaching the earth's surface as a result of variations in cloud cover fluctuates by as much as 37 W/m2. According to Dr. Clauser, however, the range is actually closer to 73 W/m2, which is roughly one hundred times greater than the IPCC's estimated radiative imbalance between the amount of energy entering the earth's atmosphere and the amount of energy leaving it.
Of course, it may be argued that there is no direct connection between the amount of energy reaching the earth's surface as a result of clouds and the amount of energy being retained within the earth's atmosphere as a result of greenhouse gases. Apart from the fact that this is not entirely true – in that the lower the input energy, the less outgoing energy there is for greenhouse gases to absorb – sometimes the sheer difference in the size of two systems means that one totally eclipses the other without there needing to be a direct connection.
One can see this more clearly, perhaps, if one considers the relative amounts of different greenhouse gases in the atmosphere, of which water vapour is by far the largest, being thirty times more abundant than CO2 and accounting for nearly 95% of the 33°C which, according to the Stefan-Boltzmann formula, greenhouse gases contribute to the earth’s mean temperature. That is to say that if one were to reduce the amount of water vapour in the atmosphere by just 5%, which, if I read Dr. Clauser correctly, is well within its normal range of fluctuation, it would outweigh the heating effect of all the CO2 in the atmosphere. Similarly, the variations in the amount of energy being reflected into space by clouds so far exceeds the radiative imbalance estimated by the IPCC that the latter is simply lost within the former. It’s just a drop in the ocean.
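For reference, the 33°C figure is the standard Stefan–Boltzmann calculation: the temperature at which a planet with the earth's albedo (α ≈ 0.3) would radiate away exactly what it absorbs, compared with the observed mean surface temperature:

```latex
T_{\mathrm{eff}} = \left( \frac{S_{0}(1-\alpha)}{4\sigma} \right)^{1/4}
= \left( \frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right)^{1/4}
\approx 255\ \mathrm{K};
\qquad T_{\mathrm{obs}} \approx 288\ \mathrm{K},
\quad \Delta T \approx 33\ \mathrm{K}.
```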
That’s not to say, of course, that this drop in the ocean does not exist. In order to determine whether or not it does, however, one would not only have to even out the variations in the amount of energy entering the atmosphere as a result of the earth’s eccentric orbit, one would also have to strip out all the variations in the amount of energy leaving the atmosphere as a result of fluctuations in the albedo effect. And the chances of anyone actually doing this are virtually zero. This is because for anyone to fund such a study, they would have to question the fundamental assumption that there is an underlying imbalance in the radiative flux which has nothing to do with either the earth’s orbit or with Dr. Clauser’s water based thermostatic control system, but is solely caused by an increase in greenhouse gases, which no one, at present, is going to do.
Thus it is that the current state of our scientific culture effectively traps us in a classic 'Catch-22', in which our institutionally enforced adherence to the prevailing orthodoxy prevents us from doing the work which might prove that orthodoxy false. While we persist in adhering to this self-perpetuating dogma, however, not only do we risk destroying our economy by trying to eliminate something so negligible that it is well within the margin of error of any means of measuring it, in the longer term we also risk doing ourselves an even greater disservice. For while our self-regulating climate control system may work pretty well most of the time, it is not perfect. There have been at least two periods during recorded history – during the 5th and 6th centuries, and then again during the 16th and 17th centuries – which were appreciably colder than the periods either side of them. This suggests that there are other factors involved which may interfere with the basic thermostatic system. Unless we are allowed to look for these other factors and determine how they interact with the basic system, however, not only will we never know how the complete system works, but we will not be able to predict when cooler periods may occur in future, and thus take measures to mitigate them.
4. The Proxy War
At this point, of course, it will be objected that, since the publication of the 'Temperature Record of the Last 2000 Years' by Michael Mann et al. in 2006, it has been generally accepted that neither of the two historical periods cited above, colloquially known as 'The Dark Ages' and 'The Little Ice Age', was any colder than the periods before and after it, and that the earth's climate remained more or less stable until the 19th century, when it started warming as a result of industrialisation and increased CO2 emissions. The problem with Michael Mann's study, however, along with the more than two dozen other studies undertaken since then, is that they all mix and match data from multiple proxy sources – tree rings, lakebed sediments, ice core samples and corals – which are then combined using various statistical methods to reconstruct the climate, not just of a particular region, but globally. The result is that it is very hard to determine how methodologically sound these studies are. Just like the studies used by the IPCC to calculate the earth's radiative imbalance, however, we are simply expected to accept them at face value, even though Michael Mann, in particular, has consistently refused to publish either his data or his precise methodology.
There is, however, a form of proxy data which we can all access and which does not need to be statistically manipulated in order to reconstruct the climate of a period. It may not be very precise, and most of it is not even quantifiable, but most of it is highly reliable. This is because people do not tend to lie when creating contemporaneous records, whether these be in the form of diary or journal entries or line items in a ledger. In fact, the latter are among the most reliable data one can obtain, especially when generated within the Roman Empire, where maintaining crop yields and stable food prices was essential to overall stability and where Roman officials were therefore extremely meticulous in their record keeping.
This is reflected in the level of detail Kyle Harper is able to provide in his 2017 book ‘The Fate of Rome – Climate, Disease & The End of an Empire’, where he notes that a cooling of the climate first began to affect crop yields in the 3rd century AD, leading to higher food prices which successive emperors then tried to combat by debasing the coinage and minting more of it. This, of course, only made the inflation worse, until the Emperor Diocletian started taxing people, not in coin, but in kind, with the result that every bushel of wheat and jar of olive oil was counted and recorded like never before.
Fortunately for Rome, recorded harvests began to recover in the early 4th century, strongly suggesting that the climate did too, allowing the Emperor Constantine to enjoy a new golden age. This, however, was short-lived. By the end of the century, temperatures were falling once again, resulting in mass migrations from areas of northern and eastern Europe where agriculture was only marginally viable. The Goths, for instance, migrated from Scandinavia and the Baltic to the Danube, while the Vandals, who originated in what is now Poland, first moved west, breaking through the Roman defences on the Rhine, before heading south to Spain and North Africa, which was then much greener and more hospitable than it is now.
The cooling trend then greatly accelerated in the early 6th century, mostly as a result of a single event in 536, which was documented by numerous authors at the time. Procopius of Caesarea, for instance, a prominent Greek scholar and historian then on campaign in Italy with Belisarius, wrote that 'during the whole year, the sun gave forth its light without brightness, like the moon, and it seemed extremely like the sun in eclipse'. Another writer, John of Ephesus, says that 'the sun darkened and stayed covered in darkness a year and a half, that is eighteen months. Although rays were visible around it for two or three hours [a day], they were as if diseased, with the result that fruits did not reach full ripeness.'
What all the chroniclers of these events very clearly describe, therefore, is a sun blotted out by a massive cloud of ash, almost certainly the result of a volcano somewhere in the northern hemisphere, although no one knows quite where. To make matters worse for those living through the famine which followed, another volcano then erupted somewhere in eastern Asia in either 539 or 540, belching massive clouds of sulphates into the stratosphere, reducing global summer temperatures by an estimated 2.5°C and making the decade from 535 to 545 the coldest in the last 2000 years: the entire period covered by Michael Mann's study, although it does not figure at all in his reconstruction of the climate.
Of course, it may be argued that volcanos are not, strictly speaking, climatological events, in that they are not systemically part of the climate. One would imagine, however, that any event which had such a profound effect upon the climate would be detectable in the proxy data used to reconstruct it. After all, there is enough proxy data for geologists to be absolutely certain that these two volcanoes did, in fact, erupt around the time that contemporaneous historians recorded their effects.
Proponents of the AGW theory consequently fall back on a secondary argument: that, while these effects were largely confined to Europe, Michael Mann's study covered the entire planet and would not necessarily, therefore, reflect local anomalies. Apart from the fact that the 539/40 eruption occurred somewhere in eastern Asia, however, we also have other forms of proxy data which suggest that the effects were far more widespread. This is because mass migrations are not just movements of people; the people are almost invariably accompanied by their livestock and other animals, along with various pests and diseases, for which we therefore have epidemiological data, especially in this case. For the most devastating of all the diseases which these 6th century migrations spread across the world was bubonic plague, which first arrived in Europe in 541, just two years after the second volcano erupted, the first unmistakeable case being recorded at Pelusium, on Egypt's Mediterranean coast to the east of the port of Alexandria.
From there it spread all across the Mediterranean, arriving in Constantinople in 542, where it raged for four months, killing around 300,000 people out of an estimated population of half a million. In successive waves over the next 150 years, it then decimated every part of Europe for which we have written records. The city of Rome, which had once boasted a population of more than a million, was reduced to just 10,000 people – so reduced, in fact, that by the end there were too few literate people still alive to document the devastation. Indeed, it's why we call this period 'The Dark Ages': because there was no one left to illuminate them.
Of course, one should not confuse these effects with what ultimately caused them. The Dark Ages went on long after the first wave of bubonic plague had died out, and this first visitation itself went on long after the mass migrations which brought it to Europe had ended. What is really significant about the spread of bubonic plague, however, is not the devastation it caused, or even what brought it to our shores, but the place from whence it came. For the bacterium which causes it, Yersinia pestis, originated on the Qinghai-Tibet Plateau in central Asia, over four thousand miles away.
That’s not to say, of course, that anyone from the Qinghai-Tibet Plateau actually brought it to Europe. As I have explained elsewhere, it is far more likely that deteriorating agricultural conditions in the Himalayan uplands drove at least part of the population to migrate south onto the warmer Indian plain, where they not only infected the locals but intercepted one of the two main trade routes carrying silk, cotton, spices and gemstones to the Roman Empire, the latter stages of which involved shipping the goods across the Indian Ocean from ports on India’s west coast, sailing them up the Red Sea to the port of Berenice, caravanning them across the desert to Coptos and then sailing them down the Nile on Arab dhows to Alexandria, which explains why the first reported case was recorded at Pelusium. The fact that what precipitated this journey was a displacement of people on the Qinghai-Tibet Plateau, however, casts considerable doubt on the claim that the climatological effects of the volcanic eruptions of 536 and 539/40 and, indeed, the period of cooling that preceded them, were confined to that part of the world where it just so happens they were most thoroughly documented.
5. Climate Fluctuations and the Sun’s Magnetic Field
At this point, therefore, I am almost inclined simply to rest my case by asking readers to choose which reconstruction of the earth's climate history they find more credible: one based on a reinterpretation of tree rings, lakebed sediments, ice core samples and the like, or one based on the contemporaneous testimony of people who actually lived through the economic failures and social upheavals which a cooling climate has occasionally wrought upon the world. The unfortunate fact, however, is that while those who claim that there were no major changes in the climate until around 150 years ago do not have to explain this remarkable stability, anyone who claims that there have been at least two periods during the last 2000 years in which the earth became significantly cooler is more or less obliged to explain this variability. This matters all the more because my purpose in drawing the reader's attention to these variations is my belief that, while Dr. Clauser's thermostatic climate control theory is almost certainly correct, it is incomplete, in that significant variations in the climate still occur for which an explanation is required. The good news is that the best such theory available – at least with respect to the last 2000 years – not only provides this explanation but is entirely consistent with Dr Clauser's theory and could even be described as an adjunct to it.
This is the theory that another major factor affecting the earth's climate is the sun's magnetic field, the evidence for which was first noticed during the second significant period of cooling in the earth's recent history, known as The Little Ice Age. Again, of course, it will be objected that this was an entirely localised phenomenon. Given the existence of paintings and other records of 'Ice Fairs' on a frozen River Thames, it can hardly be denied that, during the second half of the 16th century and throughout the 17th century, the climate of northern Europe did in fact get colder. For proponents of the AGW theory, therefore, the only possible response is to claim that this cooling, too, was confined to Europe. From the vast amount of evidence provided by Geoffrey Parker in his 2013 work 'Global Crisis – War, Climate Change & Catastrophe in the Seventeenth Century', however, it is fairly clear that the climate cooled everywhere else as well, most notably in China, where the three-hundred-year-old Ming dynasty collapsed as a result of an invasion by Manchurians from the north, where, as a result of climate change, the agricultural economy could no longer sustain the population, which, like the inhabitants of the Qinghai-Tibet Plateau a thousand years earlier, had no choice but to migrate south.
With respect to The Little Ice Age, however, we do not have to rely purely on historical evidence that the whole planet underwent a period of cooling at this time. For we have well documented empirical evidence, not just of the cooling itself, but of its possible cause. This evidence primarily consists of astronomical data showing that, during the 28-year period between 1672 and 1699, fewer than fifty sunspots were observed on the surface of the sun, around a thousand times fewer than the number observed during any period of similar length since the early 18th century. Known as the Maunder Minimum, after the solar astronomers Edward Maunder and his wife Annie Russell Maunder, who published papers on the subject in 1890 and 1894, the significance of this very low level of sunspot activity is that, sunspots being magnetic vortices or whirlpools in the sun's plasma, their absence indicates that the sun was generating a much weaker magnetic field than usual.
Not, of course, that fluctuations in the sun's magnetic field are, in themselves, unusual. The sun's magnetic poles actually flip every eleven years, with the north pole becoming the south pole and vice versa. When the magnetic poles are located at the geographic poles – which is to say 'poles apart' – the magnetic field is at its strongest. When the magnetic poles are in transit, by contrast, the magnetic field becomes chaotic and much weaker, as is evidenced by the number of sunspots observed over the course of each eleven-year solar cycle. At no time in the last three hundred years, however, have sunspots been as scarce as they were during the Maunder Minimum, which strongly suggests that the sun's magnetic field between 1672 and 1699 was as weak as it has ever been since astronomers first started making these observations.
Of course, we still don’t know why the sun’s magnetic field was as weak as it was during this period, or if this is what made the 17th century so cold. In fact, we still don’t know that these two phenomena were connected. A few years ago, however, a team of astrophysicists at Northumbria University, led by Professor Valentina Zharkova, discovered that the sun actually has two magnetic fields, generated at different depths within the plasma as a result of different layers rotating at different speeds. What’s more, these two magnetic fields are not always in sync. When they are in sync, they reinforce each other and the overall magnetic field is at its strongest. When they are not in sync, they partially counteract each other and the magnetic field is at its weakest. This is known as a Grand Solar Minimum (GSM) and happens every 350 to 400 years, which, if the Maunder Minimum was the last such event, strongly suggests that we are in for another one sometime this century.
Again, this still does not explain why the earth's climate should become cooler during one of these events or how the two phenomena are related. The obvious place to look for an answer, however, is the most significant effect the sun's magnetic field has upon the earth, which is actually far more significant than one might suspect. For just as the earth's magnetic field protects us from the solar wind – a stream of charged particles emanating from the sun which, without our own magnetic field, would actually strip the earth of its atmosphere – so the sun's magnetic field protects us, and indeed the whole solar system, from what is generally called cosmic radiation: mostly high-energy protons and atomic nuclei blasted into space by exploding stars, which, on striking the upper atmosphere, produce showers of secondary particles, including free neutrons. One of the consequences of fluctuations in the sun's magnetic field, therefore, is that there are also fluctuations in the amount of this radiation getting through it and entering our atmosphere.
In fact, we have known about these variations in the amount of cosmic radiation reaching the earth since the 1960s, because they affect the results of radiocarbon dating. This is because all the 14C in the atmosphere is created by free neutrons striking the nuclei of nitrogen atoms in the upper atmosphere. A nitrogen nucleus consists of seven protons and seven neutrons. Occasionally, an incoming free neutron is captured by the nucleus and knocks out a proton in its place, transforming the nitrogen atom into the radioactive isotope of carbon, 14C, which has a half-life of 5,700 ± 30 years and is the basis of radiocarbon dating.
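Written as reactions – the standard textbook rendering, nothing specific to the studies discussed here – the production and eventual decay of 14C are:

```latex
n + {}^{14}\mathrm{N} \longrightarrow {}^{14}\mathrm{C} + p,
\qquad
{}^{14}\mathrm{C} \longrightarrow {}^{14}\mathrm{N} + e^{-} + \bar{\nu}_{e}
\quad \left( t_{1/2} = 5700 \pm 30\ \text{years} \right).
```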
In order for radiocarbon dating to be accurate, however, one has to know how much 14C – which, once created, quickly binds with oxygen to form CO2 – was in the atmosphere at the point when a tree, for instance, absorbed it through photosynthesis and used it to build its own cells, so that an archaeologist, a thousand years later, might work out the age of a wooden artefact by measuring the amount of 14C still in the wood. Ironically, Willard Libby and his colleagues, who pioneered the use of radiocarbon dating, worked out how to calculate this when they observed that non-linear variations in the levels of 14C in tree rings were in inverse proportion to the rings' width and density: narrower tree rings, produced in years with poorer growing seasons, contained more 14C than would have been expected from a simple linear progression from the oldest wood at the core of the tree to the youngest wood just under the bark. What's more, they established that, while there were localised variations in the quality of growing seasons, both the annual pattern of growing seasons and the amounts of 14C in the atmosphere each year were the same throughout the world, thereby establishing a clear correlation between the amount of 14C in the atmosphere – and hence the relative strength of the sun's magnetic field – and the global climate.
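The dating step itself is just the exponential decay law; the hard part, as described above, is knowing the starting level. A minimal sketch (the function name and sample values are mine, for illustration):

```python
import math

T_HALF = 5700.0  # years: the half-life of 14C quoted above

def radiocarbon_age(fraction_remaining, t_half=T_HALF):
    # Invert the decay law N/N0 = exp(-ln2 * t / t_half) to solve for t.
    # Crucially, this assumes we know the INITIAL atmospheric 14C level,
    # which is exactly what the tree-ring calibration has to supply.
    return (t_half / math.log(2)) * math.log(1.0 / fraction_remaining)

print(radiocarbon_age(0.5))    # one half-life: 5700.0 years
print(radiocarbon_age(0.885))  # roughly a 1,000-year-old sample
```

If the atmospheric 14C level was higher in a given century because the sun's magnetic field was weak, a raw age computed this way would be systematically wrong, which is precisely why Libby's tree-ring correction matters.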
What Libby and his colleagues did not establish, of course, was a causal connection between the two. That was left to the Danish physicist Henrik Svensmark, who, in 1996, discovered that the intensity of cosmic radiation entering the earth's atmosphere correlated closely with variations in global cloud cover and put forward the hypothesis that the increased ionization of the atmosphere caused by cosmic radiation increased the transformation of water vapour into aerosols, the first stage of cloud formation. Recognising another possible threat to the AGW theory, climate scientists at the time immediately dismissed the idea, claiming that the aerosols formed around charged particles were too small to go on to form droplets. These claims, however, were based purely on mathematical models. The published results of experiments conducted by Svensmark and others in 2013 – which were later confirmed by the CLOUD experiment at CERN in 2016 – showed that this was not the case, and that increased cosmic radiation as a result of a weak solar magnetic field does indeed increase global cloud cover.
6. A Complete & Unified Theory of the Earth’s Climate?
The question this instantly raises, of course, is how this mechanism fits with Dr. Clauser's thermostatic climate control theory. And the answer is that it essentially sets the range within which Dr. Clauser's continuous feedback mechanism works. The sun's energy reaching the earth's surface still evaporates water but, during periods of increased ionization, this produces more clouds than usual, which increases the albedo effect and thus reduces both the amount of energy reaching the earth's surface and the amount of water being evaporated. Even with less water vapour, however, the increased ionization goes on producing clouds at more or less the normal rate, which continue to restrict the amount of energy reaching the earth's surface and so prevent it from warming as much as it usually would. Add to this the fact that water vapour is a greenhouse gas, of which there is now less, and the planet gets colder still.
In fact, the danger here might seem to lie in the possibility of a runaway feedback loop in which the temperature just keeps on being ratcheted down. Because Dr. Clauser's climate control mechanism is still working, however, the system can never reach the point at which the amount of energy reaching the earth's surface is so reduced that it cannot evaporate enough water to maintain the cloud cover; any fall in cloud cover would, in itself, allow more energy to reach the earth's surface and evaporate more water, thereby creating more clouds and restoring a stable balance. The only difference is that, during periods in which the sun's magnetic field is weaker, allowing more cosmic radiation to enter the earth's atmosphere and increase its ionization, the whole system operates at a lower range of temperatures, making the planet significantly cooler.
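Extending the toy model from earlier – again, my own illustration with invented numbers, not anything from the lecture – one can add a hypothetical ionization term that shifts the cloud response upward and watch the same stable feedback settle at a lower temperature rather than run away:

```python
# Extending the toy thermostat above with a hypothetical 'ionization' term
# that raises cloud albedo at any given temperature, per Svensmark's
# mechanism. All numbers remain invented, purely for illustration.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 340.0         # mean incoming shortwave, W/m^2

def albedo(temp_k, ionization=0.0):
    # More cosmic-ray ionization -> more cloud at any given temperature.
    return min(0.9, max(0.1, 0.30 + ionization + 0.005 * (temp_k - 288.0)))

def equilibrium(ionization):
    temp = 288.0
    for _ in range(2000):
        absorbed = S * (1.0 - albedo(temp, ionization))
        emitted = 0.61 * SIGMA * temp**4
        temp += 0.1 * (absorbed - emitted)
    return temp

print(f"strong solar field (normal ionization): {equilibrium(0.00):.1f} K")
print(f"weak solar field (raised ionization):   {equilibrium(0.02):.1f} K")
```

The feedback still stabilises the temperature; it simply does so around a lower set point, which is the behaviour described in the paragraph above.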
Thus, we now have two theories which, joined together, first explain how the stability of the earth’s climate is maintained and then explain why there are variations in the range of temperatures over which this thermostatic mechanism works, thereby giving rise to the possibility that we now have a single, unified and complete theory of how the climate works. Unfortunately, this is almost certainly not so. For while I have no doubt that this combination of the Clauser and Svensmark theories is correct, it still leaves us with a number of unanswered questions.
The first and most obvious of these arises from the fact that the Maunder Minimum, from 1672 to 1699, occurred at the end of The Little Ice Age rather than at the beginning, raising the question as to what caused the earlier cooling. Of course, it could be that the sun’s two conflicting magnetic fields only reduce the strength of the overall magnetic field very gradually, with the result that the full GSM only happens at the end of this process. While this makes sense, however, we don’t actually know that this is the case.
Then there is the problem that, if GSMs happen every 350 to 400 years, there should have been at least two more cold periods between the start of the cooling at the beginning of the Dark Ages and the start of the cooling at the beginning of the Little Ice Age. There was, of course, the Wolf Minimum, from 1280 to 1350, and the Spörer Minimum, from 1460 to 1550, but not only were their effects upon the climate fairly minimal, neither of them really fits the timeline.
What this suggests, therefore, is not only that we still don't fully understand the dynamics of the sun's magnetic field, but that, when prolonged and severe cold spells occur, other factors, such as volcanos, have to be involved. This then raises the question as to whether the involvement of these other factors is purely a matter of coincidence or whether there is a connection between these phenomena, as, indeed, is suggested in a 2023 paper by Komitov and Kaftan, which found that, during the period from 1551 to 2020, the number of volcanic eruptions each year correlated closely with the normal 11-year solar cycle, with eruptions being particularly powerful every 33 years, though no one knows why.
In fact, we don’t know why a weakened solar magnetic field should be associated with volcanic eruptions at all, which is rather the point of this essay. For when it comes to our climate we really don’t know very much. We don’t know all the variables involved or how much weight to attach to each one. We don’t even know what general trajectory the climate is on, except that it has been getting cooler ever since the Holocene Climate Optimum around 8,000 years ago. Of two things we can be absolutely certain, however. The first is that our climate is far more complicated than proponents of the AGW theory would have us believe. The second is that, while we continue to believe that manmade greenhouse gases are the sole cause of changes in the climate, climate science will not be permitted to properly explore other avenues and will therefore remain in this appalling state of ignorance.