Friday, 4 July 2025

The Goldilocks Planet

1.    The Drake Equation

In one of the first essays I wrote for this blog, back in January 2011, I produced a critique of the Drake Equation: a formula devised by Frank Drake in 1960 in order to estimate the number of technologically advanced civilisations (N) likely to be found in our galaxy. The equation has seven variables and states that:

N = R* · fp · ne · fl · fi · fc · L

where:

R*    is the number of stars formed in the galaxy each year.

fp    is the fraction of those stars with planets.

ne    is the average number of planets in each of those planetary systems capable of supporting life.

fl    is the fraction of those planets on which life actually develops.

fi    is the fraction of those planets on which intelligent life then evolves.

fc    is the fraction of those planets on which advanced communications technology then develops.

L     is the length of time, in years, for which such technological civilisations then endure.

While, for reasons that have always eluded me, this equation has long been accepted as a useful heuristic device for estimating N, the most critical phase of its deployment is, of course, the assignment of values to each of the variables, the most critical of which is ne, the number of planets in each planetary system capable of supporting life. This is because, given enough time, a planet capable of supporting life will very probably produce it, and that life will then very probably evolve into intelligent life capable of producing advanced technology. It is with respect to ne, therefore, that one has to be particularly careful in assigning a value, not least because, while the precise set of conditions necessary for a planet to support life is unknown, it could be fairly extensive, thereby greatly limiting the number of planets which qualify.
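
To make this sensitivity concrete, here is a minimal sketch in Python of how N is computed. Every value below is a purely illustrative assumption, neither Drake's figures nor mine; the point is simply that N is a product, so it scales linearly with each variable, and halving ne halves N:

    # The Drake Equation as a simple product. Every value here is an
    # illustrative assumption, not Drake's 1960 estimate or my own.
    R_star = 1.0    # stars formed in the galaxy per year
    f_p    = 0.5    # fraction of stars with planets
    n_e    = 0.1    # planets per system capable of supporting life
    f_l    = 0.5    # fraction of those planets on which life develops
    f_i    = 0.1    # fraction on which intelligent life evolves
    f_c    = 0.1    # fraction developing communications technology
    L      = 1000   # years such a civilisation endures

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(N)        # 0.25 with these inputs; halve n_e and N halves too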

To appreciate what this set of necessary conditions for a planet to support life might include, one can start by just looking at some of the more obvious parameters, such as the physical and chemical composition of the planet, a gas giant, for instance, being very unlikely to support life. Then there is the type of star the planet is orbiting, the amount of energy the star radiates and the radius and eccentricity of the orbit itself. A planet orbiting too far away from a red giant, for instance, is unlikely to be warm enough to support life, while a planet orbiting too close to a white dwarf is likely to be too hot.

Then there are the slightly less obvious parameters, such as the planet’s size and whether it has a molten iron core. This is important because it is the churning of a planet’s molten iron core which generates its magnetic field, without which its atmosphere, if it ever had one, would be torn away by the stream of charged particles released from its star’s outermost atmospheric layer: a stream of particles which, in the case of the sun, we call the solar wind.

This is why the size of the planet is also important. For in order to maintain an atmosphere, a planet has to be large enough to exert enough gravitational pressure on its core to keep it molten. A good example of a failure in this regard is Mars, which is generally thought to have once had a molten iron core and an atmosphere. Being considerably smaller than the earth, however, it could not maintain enough gravitational pressure on its core, which consequently cooled and solidified, collapsing its magnetic field and allowing its atmosphere to be stripped away.

Then there are the planetary attributes necessary for life which I am fairly sure Drake didn’t even consider but which have become far more salient in recent years as a result of our study of the climate. One of these is the need for a planet’s atmosphere to be largely composed of nitrogen and oxygen, with the nitrogen being just as important as the oxygen. This, as I have already explained a couple of times in recent essays, is because most of the carbon on earth has been created in the upper atmosphere by the free neutrons in cosmic radiation striking the nuclei of nitrogen atoms, thereby dislodging one of each atom’s protons and turning the nitrogen into carbon. Each newly formed carbon atom then binds with an O2 molecule to form CO2, which, being roughly 40 to 60% heavier than oxygen and nitrogen respectively, then descends to the earth’s surface, where it is absorbed by plants through photosynthesis. Without both oxygen and nitrogen in the atmosphere, there would consequently be no carbon dioxide and no plant life on earth, which is to say no life at all.
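
The weight comparison is easy to verify from standard molar masses; a quick sketch:

    # Molar masses (g/mol) of CO2 and the two main atmospheric gases.
    CO2 = 12.01 + 2 * 16.00    # 44.01
    N2  = 2 * 14.01            # 28.02
    O2  = 2 * 16.00            # 32.00
    print(f"CO2 is {CO2 / N2 - 1:.0%} heavier than N2")  # ~57%
    print(f"CO2 is {CO2 / O2 - 1:.0%} heavier than O2")  # ~38%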

Another planetary attribute which is almost certainly necessary for life or, at least, ‘life as we know it’, is the presence of large bodies of water. These are necessary not just to provide a medium in which life is more likely to evolve, but because sunlight falling upon large bodies of water creates water vapour, which, as I have also explained in recent essays, plays two vital roles in maintaining a stable and habitable climate. The first of these roles is as a greenhouse gas. In the case of the earth, in fact, it is the most abundant greenhouse gas in our atmosphere, accounting for more than 90% of the infrared radiation retained within it and raising the mean surface temperature from minus 18°C, at which it is very doubtful whether the earth could support life, to a balmy plus 15°C, at which temperature life flourishes.

The second vital role that water vapour plays, as Dr John Clauser explained in his famous 2024 lecture, is in regulating atmospheric temperature, keeping it within certain bounds. It does this by forming clouds, which limit the amount of sunlight reaching the earth’s surface, thereby limiting the amount of water turned into water vapour by evaporation. This, in turn, reduces the number of clouds being formed, allowing more sunlight to reach the surface once again: a continuous feedback loop in which a warmer surface causes more clouds to form, thereby cooling the surface, which then causes fewer clouds to form, thereby warming the surface. The result is that, for most of the time, when not affected by extraneous factors such as volcanoes, the entire system is more or less self-regulating.
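
The self-regulating character of such a loop is easy to illustrate with a toy iteration. This is emphatically not a climate model: every number below is invented purely to show how a negative feedback pulls the system back towards its equilibrium:

    # Toy negative feedback: warmer surface -> more cloud -> less
    # absorbed sunlight -> cooler surface. All parameters are invented.
    def cloud_fraction(temp_c):
        # assumption: cloud cover rises with surface temperature
        return min(1.0, max(0.0, 0.3 + 0.02 * (temp_c - 15.0)))

    def step(temp_c):
        absorbed = 240.0 * (1.0 - 0.5 * cloud_fraction(temp_c))
        target = (absorbed - 174.0) / 2.0   # invented linear response
        return temp_c + 0.5 * (target - temp_c)

    t = 25.0                  # start well above equilibrium
    for _ in range(30):
        t = step(t)
    print(round(t, 1))        # settles back to ~15.0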

With so many varied and often intricate conditions needing to be met for a planet to support life, it is a wonder, therefore, that there are any planets in our galaxy with life on them at all, let alone the 50,000 technologically advanced civilisations Frank Drake believed to exist in 1960. Back in 2011, when I myself undertook this exercise, assigning far more conservative values to Drake’s seven variables than he had done, I came up with the number of 10. Today, factoring in the need for a mostly nitrogen and oxygen atmosphere and for a large percentage of the planet’s surface to be covered in water, I am surprised that there is even one, making the earth not just a goldilocks planet, where all the conditions for supporting life are ‘just right’, but very possibly ‘the’ Goldilocks Planet: the only one in our entire galaxy.

2.    The Religious & Materialist Perspectives

In my last essay, ‘On Stupidity’, I argued that the most dangerous form of stupidity is societally based. After all, we do not learn or discover for ourselves most of the things we believe; we acquire them from the society of which we are a part. What’s more, we tend to regard a general consensus in favour of a particular proposition or conjecture as a strong indication that it is true, even if there is no independent evidence for it and we have never actually thought about it ourselves. As a result, we can come or be led to believe some of the stupidest things imaginable, such as the now almost universally accepted view that greenhouse gases, essential for all life on earth, are ‘bad things’, which, in the case of humanly produced carbon dioxide, which we exhale with every breath, must be eliminated. To emphasise just how stupid this is, in fact, I then half-jokingly quipped that, far from being the work of the devil, the existence of both water vapour and carbon dioxide in our atmosphere is probably the best argument in favour of the existence of God I have ever come across.

Not that I actually thought very much about it at the time: it was just a way of making a point. In the weeks that have followed, however, my thoughts have continually returned to it. Indeed, it is what brought me back to the Drake Equation. For if the odds against the existence of our goldilocks planet are as great as I suspect they are, it is not unreasonable to consider whether that existence might not be entirely accidental. Yes, I know what most people will say to this: that such a line of thinking embodies one of the oldest fallacies in the book. For even if the odds against a goldilocks planet are a billion to one, with over a hundred billion stars in the galaxy, there is a good chance that at least one goldilocks planet will in fact exist. This argument, however, is also entirely specious. In fact, it closely resembles an argument I used to hear when I was growing up: that if one gave an infinite number of typewriters to an infinite number of chimpanzees and allowed them to keep pressing keys at random for long enough, eventually they would clatter out the complete works of Shakespeare. This, however, is simply false. For an infinite number of chimpanzees could produce an infinite number of random combinations of letters, numerals and punctuation marks without one of them being the complete works of Shakespeare. Similarly, our galaxy could produce an infinite number of planets without one of them being a goldilocks planet.

The only sound argument in this regard, in fact, is the one that says that if, within a closed system, something is not impossible, then no matter how improbable it may be, once it has occurred, one can explain it as the result of random chance without positing the existence of an external agency. The problem with such an explanation, however, especially in the context of something as momentous as the existence of our goldilocks planet and therefore ourselves, is that it is not really an explanation at all. For if the answer to the question, ‘Why are we here?’ is ‘Random chance’, then what we are actually saying is that there is no reason for our existence, which, given how both significant and unlikely that existence is, is somewhat less than satisfactory, especially as it has such profound implications for how we live our lives.

I say this because, as I explained in my essay on Dostoyevsky’s ‘Notes from Underground’, this materialist way of looking at both the universe and ourselves inevitably leads us to the conclusion that our lives are pointless, which just as inevitably leads to the kind of nihilism which Dostoyevsky predicted would result in the totalitarian regimes and death camps of the 20th century. If, on the other hand, we believe that it was not purely random chance that produced us, suggesting, therefore, that we were created for a reason, this then leads to the formation of a completely different set of values, which necessarily includes the need to understand what that reason might be.

Nor is this the only modification in our attitude to both ourselves and the universe that flows from our adoption of what we might call the ‘religious’ perspective, a term I use in a very narrow sense merely to denote these changes in the way we think about the world as a result of not viewing our existence as purely accidental. In regarding our goldilocks planet as not just special but extraordinary, for instance, we inevitably see ourselves as also having a duty to look after it, not in the sense of today’s climate activists, many of whom will have almost certainly adopted this modern-day mission in life as a way of injecting some meaning into an otherwise meaningless existence, but more in the sense of not wanting or needing to waste the earth’s resources on acquiring material possessions which we then just throw away when something newer and more fashionable comes along: a trait in human beings which, as Dostoyevsky noted in ‘Notes from Underground’, is another more or less inevitable consequence of a purely materialist outlook on life.

Another way in which our behaviour is modified by this change in our perspective is the emergence of an urge to pass on this sense of the extraordinary to our children. In part, this urge arises in order to help fend off the tide of materialism which has engulfed western civilisation since the middle of the 19th century and which has significantly impaired the quality of the lives we now live. Of even greater importance, however, is the gift of meaningfulness and purpose which a sense of the extraordinariness of our existence confers on all those who embrace it. For if our goldilocks planet was created so that life might evolve here, and if we are the pinnacle of that evolution, being the only beings of whom we are aware who are not just sentient but self-aware, there is a sense in which we are the medium through which the universe has actually achieved consciousness, making it incumbent upon us to at least try to understand this extraordinary phenomenon, which is simply impossible from a purely materialist perspective.

Of course, at this point it will be objected that we cannot just choose what we want to believe on the basis that it might afford us a better, more meaningful life. Similarly, we cannot choose not to believe in something simply because it leads to nihilism. We believe what we believe because, based on reason and evidence, we believe it to be true. And, apart from its extreme improbability, there is no evidence to support the view that our goldilocks planet is the result of anything other than random chance. More to the point, we cannot even say what would count as evidence either for or against any assertion to the contrary, making any such assertion not just unscientific but a threat to what we regard as our enlightened age, in which it is one of our most important guiding principles that all issues are to be settled, not by dogma, but by evidence and reasoned argument.

The problem with this argument, however, is that dogmatism is not an intrinsic attribute of the religious perspective as I have so far defined it and can just as easily arise in societies that are entirely immersed in the materialist perspective. Take, for instance, the belief that anthropogenic emissions of carbon dioxide are a threat to the planet, which is so dogmatically held by the vast majority of people that anyone who dares speak out against it, even a Nobel laureate such as John Clauser, is instantly cancelled. It is neither the religious nor the materialist perspectives in themselves, therefore, which give rise to dogmatism but, far more commonly, power and corruption, which are used to play on our collective stupidity in order to keep the powerful in power.

That’s not to say, of course, that the natural sciences, to which the materialist perspective has given rise, haven’t, in the past, greatly helped to overcome dogmatism. They taught us to think critically by teaching us to question everything. The problem was that they were actually too successful, especially with regard to the huge range of new technologies to which they gave rise, leading us to believe that the material universe they described was the only reality and that this description, itself, was the only and absolute truth. We consequently forgot that for a theory, explanation or description to be scientific it has to pass the test which all assertions based on the religious perspective manifestly fail to pass: it has to be falsifiable. That is to say that we have to be able to specify what empirical evidence would prove it false, which means that no scientific theory, explanation or description is true in any absolute sense. For they are all provisional, pending falsification.

This is something of which we have been very dramatically reminded over the last two and a half years, in fact, since the launch of the James Webb Space Telescope (JWST) in December 2021. This is because the JWST has a mirror with a diameter 2.7 times larger than that of the Hubble Space Telescope and can therefore detect objects, principally galaxies, much further away than the Hubble can. This means that the light from these objects originated much further back in time, with the result that they are often referred to as very ‘old’. The fact is, however, that they are actually very young. The light from the most distant galaxy which the JWST has so far discovered, MoM-z14, began its journey across the universe just 280 million years after the Big Bang, making the galaxy a mere baby at the time its light set out.

What this also means is that these newly discovered galaxies should exhibit all the typical features of very young galaxies, the two most significant of which are, firstly, that the stars within them should be composed almost entirely of hydrogen and helium and, secondly, that their form should be rather chaotic, without the spiral arms which are a feature of older galaxies such as our own. What is very odd, therefore, is that the stars in MoM-z14 contain large quantities of nitrogen and that, while considerably smaller than our galaxy, it has a very similar shape, for which there would appear to be only two possible explanations.

The first is that both stars and galaxies can develop in ways contrary to what we have previously believed. The problem with this explanation, however, is that there is nothing in our current physics that would explain such a divergence in development timelines. The second possible explanation, therefore, is that the universe is actually a lot older than the 13.8 billion years we previously believed it to be. In fact, some people have suggested that it could be as much as twice as old. The problem with this, however, is that, if the universe has been consistently expanding at the rate specified by the Hubble constant, 67 kilometres per second per megaparsec (a megaparsec being 3.26 million light years), then, at 27.6 billion years of age rather than 13.8 billion, it should be a lot bigger than it is.
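
The timescale in question is easy to check on the back of an envelope: the reciprocal of the Hubble constant (the so-called Hubble time) gives the age a universe expanding at that constant rate would have. A minimal sketch, with rounded constants, which deliberately ignores any acceleration or deceleration:

    # 1/H0 in years, for H0 = 67 km/s per megaparsec.
    KM_PER_MPC  = 3.0857e19   # kilometres in one megaparsec
    SECS_PER_YR = 3.156e7     # seconds in a year
    H0 = 67.0                 # km/s/Mpc
    hubble_time_years = KM_PER_MPC / H0 / SECS_PER_YR
    print(f"{hubble_time_years / 1e9:.1f} billion years")  # ~14.6

That this naive figure lands close to 13.8 billion years is precisely why doubling the universe’s age creates the size problem described above.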

To complicate matters further, measurements taken using the JWST have confirmed Dr Adam Riess’ 1998 discovery, for which he shared the Nobel Prize in Physics in 2011, that the rate of the universe’s expansion is actually accelerating: something which, given that a decline in the gravitational pull which galaxies exert on each other as they move apart is already built into the Hubble constant, should not be happening.

As a result of the JWST, in short, the standard cosmological model, including the Big Bang theory itself, is rapidly unravelling and, as yet, no workable alternative has been put forward to replace it. That’s not to say, of course, that one won’t eventually emerge. I’m sure it will. However, I am also fairly sure that it will involve a very substantial paradigm shift, painting a very different picture of the universe than the one with which we are currently familiar and making it very hard to believe that it will actually take us any closer to the absolute and final truth.

3.    The Transcendental Perspective

The irony is, of course, that while we can never fully believe in any proposition emanating from the religious perspective, in that it will always be unfalsifiable, so too we should never fully believe in any proposition emanating from the scientific/materialist perspective, in that it will always be provisional, pending falsification. Our biggest mistake, however, is not just in assuming that these two perspectives are in permanent opposition and that we have to choose between them (with the result that, based on its enormous success, we have now almost universally committed ourselves to the materialist perspective and believe that everything science tells us is true in an absolute rather than provisional sense), but in believing that these are the only two perspectives that exist. There is, however, another perspective which transcends both the religious and materialist perspectives and which actually puts them into perspective, allowing us to see that, while they both have their limitations, they both have their value.

This transcendental perspective is grounded in the work of the 18th century German philosopher Immanuel Kant, who actually began his career as a physicist and only turned to philosophy in his fifties, when he felt the need to ground the laws of physics and, indeed, of all science in something that would give them absolute certainty, which he knew was not to be found in their ever-shifting empirical content. After all, we have spent the last sixty or seventy years thinking that all young stars are composed purely of hydrogen and helium and then, in the space of just two years, we discover half a dozen galaxies comprising very young stars which actually contain nitrogen. The answer, Kant therefore realised, had to reside in those laws or forms of knowledge which underpin all the sciences, regardless of their empirical content, and which we can know to be true with absolute certainty precisely because they are not part of the empirical world, being expressions, in fact, of the way our minds work.

To those unfamiliar with Kant, this idea that one could possibly ground scientific laws with absolute certainty, not by empirically studying the material universe itself, but by looking at ourselves, will probably seem somewhat less than credible. It will hopefully become slightly more so, however, when one considers that the first of these laws of thought, as Kant called them, comprises the laws of logic, the cornerstone of which is the twin law of non-contradiction and excluded middle: the law which says that any unambiguous proposition p is either true or untrue, that it can’t be both, and that it can’t be neither. It is also, non-coincidentally, the basis of all computing, as demonstrated in the basic architecture of standard microprocessors, all of which comprise binary switches which are either open or closed, on or off, true or false, and can’t be both and can’t be neither.
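
Purely by way of illustration, both halves of that law can be checked mechanically over the only two truth values there are, which is exactly what makes them computable:

    # Excluded middle: p or not p always holds (never neither);
    # non-contradiction: p and not p never holds (never both).
    for p in (True, False):
        assert p or not p          # can't be neither
        assert not (p and not p)   # can't be both
    print("both laws hold for every truth value")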

Of course, it may be argued that human beings are not as restricted as computers in the way we think and are not only capable of holding contradictory beliefs but can simultaneously believe that a proposition is both true and false. These two forms of contradiction, however, are actually very different. For while the former is a kind of cognitive dissonance, brought about by a combination of self-deception and strict compartmentalisation, the latter is invariably the result of the proposition being ambiguous, such that, interpreted in one way, it appears true, while interpreted in another way, it appears false. In order to resolve the apparent contradiction, therefore, one has to start by resolving the ambiguity, which one does by formulating the proposition in a more precise way, which is also a very common feature of science.

As demonstrated by Bertrand Russell in his 1903 book ‘The Principles of Mathematics’ and expanded upon by Russell, himself, and Alfred North Whitehead in their three volume work ‘Principia Mathematica’, published between 1910 and 1913, the laws of logic are also the foundation of all mathematics, which means that by grounding the laws of logic in the way our minds work, Kant effectively, if indirectly, grounded the laws of mathematics in the same way, thereby going a long way towards achieving his goal of grounding the laws of physics with absolute certainty.

Of course, it may be asked why grounding the laws of logic and mathematics in the way our brains work gives them this certainty. Unless we have suffered brain damage, however, we cannot really doubt the reasoned arguments and calculations we make as a result of our brains’ normal functioning. Indeed, to question the validity of what we have concluded on the basis of what we believe to be sound reasoning would, itself, be dysfunctional.

The problem gets a bit more tricky, however, when we come to Kant’s next category of synthetic a priori knowledge, which is to say knowledge about the world which we know to be true prior to empirically studying the world. This is because it concerns our innate senses of space and time, which Kant argued are just as hard-wired into our brains as the laws of logic, his argument being that, because we cannot conceive of space not extending infinitely in three dimensions, or of time not proceeding in one direction without a beginning or an end, these are categories of knowledge which we cannot have learnt from experience. For if this knowledge were empirical rather than a priori, not only would we have to examine every cubic inch of space to find out whether it was actually three dimensional, but it would not come as a surprise to us to one day encounter the edge of space or discover that time had come to an end. The fact is, however, that neither of these experiences is actually possible.

To see this more clearly, try to imagine encountering the edge of space, for instance. What would that be like? Would it be like encountering a wall? If so, wouldn’t we wonder what was on the other side of it or try to find a way round it? And if so, wouldn’t that be like not encountering the edge of space? Similarly, try to imagine discovering that time had come to an end. The difficulty here is that this would entail experiencing a point in time when time was still continuing and then experiencing another point at which it had stopped. To experience a point in time after time had stopped, however, would mean that, for us, time was still continuing.

If our concepts of space and time are thus hard-wired into our brains, however, this then raises a question which did not arise in the case of the laws of logic, for the simple reason that our conception of the laws of logic is not filtered by the way our brains work: the laws of logic are the way our brains work or, at least, part of the way our brains work. In the case of space and time, however, the role of our brains as a kind of filter is very much an issue. For if the dimensionality and infinitude of space and time are aspects of the way we perceive and conceive of the universe, one has to question whether space and time, as they exist in themselves, beyond our perception and conception of them (what Kant calls the noumenon, or the universe’s noumenal reality), are the same as our phenomenal experience of them. And the answer, of course, is that we cannot know. For regardless of their noumenal reality, we will always perceive and conceive of space as extending infinitely in three dimensions and of time as flowing from the past into the future without a beginning or an end.

Of course, it will be objected that modern cosmology does, in fact, conceive of time as having a beginning and the universe as being finite. This, however, may be one of its problems. For in postulating something which violates the laws of thought and then combining this postulation with empirical observations of a phenomenal universe which is actually shaped by these laws of thought, it was more or less inevitable that the standard cosmological model would throw up the kind of inconsistencies which the JWST has discovered.

To counter this argument, one could of course argue that if the standard cosmological model is wrong, it is not because it violates the laws of thought, but because it is inconsistent with empirical observations and that this is therefore a purely scientific matter not a philosophical one. This, however, is to ignore the fact that our empirical observations will always be consistent with the laws of thought for the simple reason that, as demonstrated with respect to time and space above, we could not experience anything that was inconsistent with them. Nor can it be argued that the standard cosmological model doesn’t have to be consistent with the laws of thought because what it describes is actually noumenal, a common mistake made by undergraduates. For the noumenon is simply a label we attach to that which, to us, is unknowable. It doesn’t have any attributes and cannot be used to explain the phenomenal universe as the standard cosmological model purports to.

Despite the noumenon simply marking the limits of our knowledge, however, there is a slightly odd asymmetry in our relationship to it. For while we can neither say what the noumenon is, nor what it is not, our inability to say what it is not produces a number of anomalous consequences, which are best illustrated by Kant’s last main category of synthetic a priori knowledge, which concerns ‘causality’. This, he argues, is as hard-wired into our brains as our senses of space and time, his argument being that, just as we do not have to check every cubic inch of space to find out whether it is three dimensional, so we do not have to check everything that happens to find out whether it had a cause. We may need to study it to find out what that cause was, formulating hypotheses and testing them until we arrive at a satisfactory explanation, but even if a satisfactory explanation is not forthcoming, we would never conclude that it just happened spontaneously without something or someone causing it.

Nor is this merely a feature of our scientific age and our materialist perspective upon the universe. For even in more superstitious times, when we believed in magic and miracles, the magic or miracle always occurred as a result of some form of agency, whether that be another human being, a demon or, indeed, God. Indeed, the very idea of God, or of gods in the plural, may have been invented simply to satisfy our need to identify a cause when none could otherwise be found. Today, in contrast, we simply assume that, even when we do not know precisely what caused something to happen, there must have been some causal chain of events that led to its occurrence. Thus, in the case of our goldilocks planet, we may not know exactly how it got to be the way it is, but we assume that it must be the result of a series of random and highly unlikely but nevertheless causally determined chemical transformations in the earth’s atmosphere, for instance. The point is, however, that while causality must always be an attribute of the phenomenal universe, if it is primarily a law of thought, like space and time, it may not be an attribute of the noumenal universe as it exists in itself, in which case it is perfectly possible that, in the noumenal universe, things could come into existence spontaneously, without a cause, or as a result of forces of which we have no knowledge and could not comprehend even if we did.

That is not to say, of course, that this is actually what happened with respect to our goldilocks planet. Because we do not know what the noumenon is, indeed, it is pointless even to speculate about this. Because we do not know what the noumenon is not, however, we cannot entirely discount it, the asymmetry between these two sides of the knowability coin thereby opening the door to the religious perspective.

4.    The Mistakes Religions Make

While simultaneously limiting the scope of the materialist perspective, Kant thus provided the first and so far the only sound philosophical basis for entertaining the possibility that our existence may not be purely accidental. The problem, of course, is that some unknowable and incomprehensible force or intelligence within the noumenon is not what people typically mean when they talk about God. For in most religions, God not only has knowable attributes, such as gender, but also intentions which are comprehensible in human terms, even if they are slightly mysterious. This is because, in most religions, of course, God is not noumenal, most religions having been founded long before Kant wrote the ‘Critique of Pure Reason’, during eras when it was perfectly normal for people to project human characteristics on to their gods, even when they became monotheistic.

Even more importantly, Kant’s transcendental idealism is both hard to understand and difficult to accept, with the result that, except for a brief period during the 19th and early 20th centuries, it was largely shunned outside of Kant’s native Germany, which was the only country in which it gained any traction within Christian theology. Not only has this general failure to entertain Kant’s transcendental vision been one of the worst philosophical mistakes of the modern era, however, it has also been something of a cultural disaster. For unless one accepts that the phenomenal universe is at least partly a product of the way our minds work, one will not be able to see that the materialist perspective does not represent the ultimate reality, thus leaving us with no basis for the religious perspective. For it is only when one accepts that there is more to the universe than the laws of physics that one can entertain the possibility that there are things within it that are beyond our understanding.

Of course, it may be argued that we maintained a religious perspective long before Kant showed us that the materialist perspective has its limitations. This, however, was at a time before science dominated our view of the universe, when the materialist perspective encompassed little more than the world of our daily lives and when it was quite possible to combine it with an entirely fabulous religious cosmology in which people believed simply because everyone else did. Today, in contrast, there is no room for heaven and hell within the standard cosmological model and no one literally believes that God created heaven and earth in six days and then rested on the seventh.

The Church may contend, of course, that this has some metaphorical significance. But having entirely embraced the materialist perspective, from which all meaning and significance has been excluded, it would probably be hard pressed to say what exactly this metaphorical significance is. For in a world in which our existence is now regarded as purely accidental and we are only constrained in how we live by the ever-shifting pressures of a fickle media, not only does it make no sense to ask why we are here or how we should live our lives, but there is very little room for the answers the Church used to give to these questions based on the teachings of its founder. The result is that it, too, now largely takes its lead from the shifting social mood, championing every woke cause which grabs the headlines, from the climate ‘emergency’ to transgender rights.

More to the point, it does not understand that these are not concerns which proceed from the religious perspective. For having no real belief that our existence is anything other than the result of natural laws and random chance, it no longer has that sense of our extraordinariness from which the religious perspective springs and cannot therefore fully grasp it, let alone embody it. Instead, it wears it like the liturgical vestments it puts on to preside over the now largely empty rituals it still performs to mark our births, deaths, and marriages, but which, today, have very little religious significance.

It is why so many people now regard Christianity, especially as it is embodied within the Church of England, as an irrelevance: a sad but entirely predictable state of affairs which recently led a friend and regular reader of this blog to ask me what the Church of England needed to do in order to regain people’s respect. I responded rather inadequately by saying that I didn’t think that there was anything it could do but that, at a minimum, it should start by taking itself seriously. What I should have said, however, is that it should start by taking the religious perspective seriously, by which I mean that it should publicly proclaim that we are not here by accident… and should actually mean it.

The problem, of course, is that the Church of England has no philosophical basis for this and would not be able to back it up in any intellectually rigorous way. For the only sound philosophical basis upon which this claim can be sustained is Kant’s transcendental idealism. And while there have been German theologians, from Friedrich Schleiermacher to Dietrich Bonhoeffer, who have based their theology on Kant’s work, I doubt whether there is anyone in the Church of England today who is sufficiently steeped in Kant to do this.

Even if there were enough members of the Church who understood enough of Kant’s philosophy to know that it could provide the intellectual basis necessary to maintain a true religious perspective, moreover, it is questionable whether they would choose to go down this path or could persuade enough others to follow them. For if God is noumenal, an even more shocking corollary of this transcendental perspective is that so are we. We may experience ourselves and others as phenomenal beings in a phenomenal universe, but that’s just the way our minds work. Like everything else, we have both a phenomenal and noumenal existence, which means that we are essentially unknowable to ourselves and will only find out who we really are, perhaps, when our phenomenal existence comes to an end.

Even more significantly, it’s perfectly possible that a noumenal God has, at some time, projected part of its being into the phenomenal realm in phenomenal form in order to communicate with us. Indeed, this would be the most logical interpretation of the Christian message from a Kantian perspective. To many Christians, however, especially non-German Christians, reinterpreting traditional Christian teaching in this way may well be a bridge too far. We Anglo-Saxons, in particular, prefer something a little less mind-boggling and more down to earth. Many members of the Church of England may well, therefore, baulk at this Kantian option, suspecting that it could easily drive more people away from the Church than it would actually attract.

Given the philosophical battle between Christianity and western materialism, however, which Christianity has all but lost, this holding on to the past, even when combined with the latest woke virtue signalling, is almost certainly a recipe for continued decline. With the standard cosmological model collapsing under the weight of discoveries by the JWST, therefore, this just might be the time for both physicists and theologians to abandon their opposing positions and come together, perhaps, in a meeting of minds over Kant.

 

Saturday, 24 May 2025

On Stupidity

 

1.    A Model of Stupidity that is also its Perfect Illustration

I recently came across a 1976 essay called ‘The Basic Laws of Human Stupidity’ by the Italian economic historian Carlo Cipolla, in which he argues that the greatest threat to a civilisation is not evil but stupidity, which, being an economist and thinking in economic terms, he defines by positioning all human beings along two axes according to the benefits and losses they cause themselves and the benefits and losses they cause others: a scheme which, according to Cipolla, produces four basic character types (sketched in code after the list):

1.      Bandits who benefit themselves at the expense of others;

2.      The helpless or naïve, who may benefit others but often do so at their own expense, typically because others take advantage of their naivety in order to use them;

3.      The intelligent who, realising that there is more advantage to be gained by working with others rather than against them, look for win/win solutions to problems so that both they and others benefit from the outcome;

4.      The stupid who act to the detriment of others but accrue no benefit to themselves, effectively producing a lose/lose outcome.
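
Here is a minimal sketch of that two-axis scheme (my own illustrative encoding, not Cipolla’s notation): classify a person by the sign of the payoff their actions produce for themselves and for others:

    # Cipolla's four character types as quadrants of a payoff plane.
    def cipolla_type(payoff_self: float, payoff_others: float) -> str:
        if payoff_self > 0 and payoff_others > 0:
            return "intelligent"     # win/win
        if payoff_self > 0:
            return "bandit"          # gains at others' expense
        if payoff_others > 0:
            return "helpless/naive"  # benefits others at own cost
        return "stupid"              # lose/lose

    print(cipolla_type(+1.0, -1.0))  # bandit
    print(cipolla_type(-1.0, -1.0))  # stupid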

While this classification of societal character types is sometimes illuminating, there are, however, a number of problems with it. For not only will all of us probably fall into each of the first three categories at some time (and probably all four, as Cipolla himself admits), but the system is not quite as symmetrical as it initially seems, not least because no sane individual, with the possible exception of someone driven by an overwhelming desire for revenge, would deliberately act to the detriment of others without at least believing that they would themselves gain some benefit from it. This means that category 4 actually comprises two different groups of people, the first of which are merely failed bandits: people who try to benefit themselves at others’ expense but are just not smart enough to pull it off. It is the second group, therefore, which is the more interesting. For these are people who do not deliberately act to the detriment of others but are too stupid to realise that that is actually what they are doing. In fact, stupid people are often at their most dangerous precisely when they think they are helping others.

An even bigger problem with this classification, however, is that it is just that, a classification, which does not attempt to understand why people act stupidly and doesn’t therefore explain or define what stupidity actually is. In fact, Cipolla’s second law of stupidity says that ‘the probability that a certain person is stupid is independent of any other characteristic of that person’, which would seem to imply that the probability of being afflicted with stupidity is completely random. He also says that stupidity can be found in every class and walk of life, additionally suggesting, therefore, that it is not determined by any external factors such as education or upbringing but is entirely innate.

His most extraordinary claim, however, is that, while stupidity is randomly distributed throughout the population, the percentage of stupid people in any given population is constant. When I first read this, in fact, I was utterly astonished by it. It stems, however, from the fact that Cipolla’s whole system is grounded in economics and statistics, rather than psychology or sociology, which not only gives rise to his very distinctive characterization of stupidity but also explains why he thinks that stupidity is a greater threat to society than evil. For of all his four character types, the stupid person is the only one who contributes a negative balance to society’s benefits and losses account. The successful bandit, for instance, merely transfers benefits, which we more commonly refer to as wealth, from others to himself, while both the intelligent and naïve person may actually increase overall societal wealth. By depriving others of benefits while gaining nothing for himself, the stupid person is thus the only one who actually destroys wealth: a defining characteristic of stupidity which is not only central to Cipolla’s entire thesis, but consequently places a limit on the number of stupid people any society can tolerate. For there are only so many wealth-destroying people a society can withstand before that society is, itself, destroyed. Hence the greater threat to society posed by stupidity rather than evil.

While this consequence of his classification may be central to Cipolla’s whole thesis, however, it causes him a very serious problem. For while it follows from his classification system that the number of stupid people in a society cannot rise above a certain threshold level without that society ceasing to exist, this does not mean that the number could not be below this level or that it is constant. In fact, it could vary from zero to whatever the threshold number is. In order to explain such variations, however, Cipolla would have to introduce external, societal factors into his theory which are outside the scope of his model and, indeed, his field of expertise.

One of the problems his not doing this causes, however, is that he cannot explain, for instance, why we find more stupid people in positions of power in societies that are in decline than in societies which are on the rise, other than to suggest that the stupid people in positions of power who cause the decline then tend to appoint other stupid people to positions of power. While this may be true, however, statistically, it is far more likely that an increase in the number of stupid people in prominent positions is actually a reflection of an increase in stupidity in the population as a whole. What’s more, it is also highly likely that this relationship between the stupidity of a population and the stupidity of its representatives contains an element of circularity. For not only are stupid people more likely to elect stupid leaders, but stupid leaders are far more likely to introduce social measures which actually increase the level of stupidity in their populations, thus increasing the threat to society about which Cipolla is trying to warn us but cannot state explicitly within the confines of his model.

The irony of this, however, is not just that Cipolla’s way of characterising stupidity draws our attention to a threat to society which it simultaneously prevents us from properly understanding, but that his whole approach to the subject is one of the most perfect examples of stupidity I have ever come across. For in constructing this heuristic artifice in which to explore stupidity, he so constrains his way of thinking that he cannot step out of it in order to see that it is totally divorced from our phenomenal experience: a ‘disconnect’ which takes us much closer to the real essence of stupidity than anything Cipolla actually tells us about it.

2.    Stupidity as an Affliction of the Collective rather than of the Individual

This detachment from reality is evident, in fact, from the very beginning of the essay, in Cipolla’s pronouncement of his first law of stupidity, which states that ‘always and inevitably everyone underestimates the number of stupid individuals in circulation’, a claim on which he further elaborates by observing how ‘repeatedly startled’ we are to discover that ‘people whom we had once judged to be rational and intelligent turn out to be unashamedly stupid’.

The purpose of his assertion that we are ‘repeatedly startled’ by such discoveries is, of course, to make us think about just how many undiscovered stupid people may still be out there. In all my seventy years, however, I have never actually had this experience, either in my private life or at work, which, if such startling discoveries were real and commonplace, I’m fairly sure I would have done. What this indicates, therefore, is that Cipolla’s rather simplistic two-dimensional model simply doesn’t reflect the complicated, multi-dimensional nature of our real world relationships with others, not least because they are dynamic rather than static. After all, we only get to know most of the people with whom we are acquainted slowly, over time, and even then we usually only get to see certain aspects of their characters. Does this mean that they sometimes surprise us? Of course, they do. But often in a good way, as when we are suddenly struck, for instance, by their tact and considerateness in a particularly delicate situation. Do they sometimes disappoint us? Yes, to that too. But not because of their stupidity. As long as the consequences are not too serious, in fact, a friend who does something stupid usually makes us smile or even laugh. It is the gradual revelation of a character trait such as petty vindictiveness that makes us feel uneasy and may cause us to withdraw from someone’s company, not stupidity.

That’s not to say, of course, that stupidity does not have its darker side or that it may not, in fact, be a greater threat to civilization than evil. This kind of stupidity, however, is nothing like what Cipolla thinks it is. For in addition to making the mistake of characterizing stupidity in terms of an over-simplified model, he makes the further mistake of assuming that stupidity is primarily an affliction of individuals, who then threaten society, whereas, in fact, it is primarily an affliction of society, itself, which then infects individuals and makes them stupid.

This is because one of the primary characteristics of stupidity is an adherence to false beliefs and/or faulty ways of thinking, which lead us to arrive at false conclusions. In most cases, however, we, as individuals, do not originate these false beliefs and faulty ways of thinking. Yes, at some point, there must have been someone who did originate them; but most of us are not that person. Most of us acquire our beliefs and ways of thinking from the society which collectively subscribes to them. Thus it is that, if those beliefs and ways of thinking are false or lead us to arrive at false conclusions, it is society that is the primary source of our stupidity.

In fact, this is merely the reverse side of a coin which, on the other side, has been entirely to our advantage, constituting one of the most important factors in our success as a species. This is because society acts as the repository of all that society’s collective knowledge and wisdom, which it then transmits from generation to generation. Without it, each individual would not just have to learn everything from scratch but discover everything from scratch, from which plants are edible and which poisonous, to when to sow seeds, to how to forge metals. The problem is that, along with all this useful knowledge and wisdom, society also acquires and stores a lot of far less useful stuff, including superstitions, old wives’ tales and beliefs that are simply false, along with ways of thinking that are not just stupid but downright dangerous, leading to wars, religious persecutions and ritual killings.

Given how pernicious this kind of stupidity can be, one might therefore ask why, throughout most of our history, we have allowed it to be perpetuated. The answer is that it is perpetuated by exactly the same character traits in human beings that perpetuate our more useful beliefs and ways of thinking, primarily an innate conservatism, which has also played an important part in our evolutionary success. This is because, throughout most of our history, most of the more useful stuff stored in a society’s repository of knowledge and wisdom has been of a practical nature, concerning such things as how to grow crops, how to store them and how to prepare food. Once these techniques have been perfected, therefore, people are very reluctant to change them. After all, the wastage of food as a result of poor storage and preparation could be a death sentence. Primitive societies have always, therefore, been resistant to change, with the result that anyone suggesting change is regarded as not just stupid but dangerous.

The problem with this is that, because, in most cases, primitive people do not know why certain practices are efficacious, they do not know which practices actually are. They probably know, for instance, that watering a crop is important; but they don’t know that chanting a prayer while doing so is not. So they do everything in the same way every time, leaving nothing out, with the result that some very barbaric and harmful practices are maintained to the detriment of some people and the benefit of none: an equation which exactly fits Carlo Cipolla’s definition of stupidity but with, I hope, a little more substance behind it and a little more understanding of what stupidity actually is.

3.    The Rise & Fall of Critical Thinking

Because societally based stupidity is so entrenched in our evolutionary psychology and so hard to shift, it is little wonder, therefore, that it took thousands of years for those rare individuals who wanted to develop different, more rational ways of thinking to actually do so, or that they were often persecuted for it. Nor did it help that in order to develop new ways of thinking about the world around them, they also had to develop new ways of thinking about thinking, itself, making it also quite natural that, in ancient Greece, for instance, the development of the natural sciences was accompanied by the development of philosophy or that, at their most basic level, these two ways of thinking should share the same fundamental structure, comprising two main aspects or stages: the analytic and the synthetic.

In fact, all forms of what is generally referred to as ‘critical thinking’ follow this same pattern. In the natural sciences, for instance, we first make individual observations and collect data (analysis). We then construct a hypothesis or theory which both explains and brings this data together to form a consistent view of how some part of the world works (synthesis). Similarly, in philosophy, we start by looking for self-evident truths, which we then synthesise into more complex propositions using logical argument. In fact, the same structure is to be found in just about every scientific or rigorous academic discipline. Historians, for instance, start by gathering the known facts which they then piece together to form a coherent account of something which happened in the past.

Importantly, both the analytical phase of this process and its synthesising phase are equally essential. If one doesn’t get the facts right, or starts from false or unsound premises, then one is hardly likely to arrive at a sound or meaningful conclusion. If all one has is a few facts, however, or the basis for an argument which one cannot actually build, one hasn’t really got anything meaningful at all. Analysis without synthesis is a bit like taking a motorcycle apart and then not being able to put it back together again. Indeed, one of the best tests of whether someone really understands something, whether it be an object or a text, is not just to get them to analyse it but to explain it, the explanation being a kind of reconstruction.

The same is also true with respect to the creative arts, most of which have two sides to them in the same way as critical thinking. Someone who has a basic understanding of music, for instance, will be able to analyse it with respect to its musical form, key and time signature, etc. The best proof that someone really understands music, however, is their ability to compose, which is to say ‘synthesise’, a wholly new piece.

This also reveals something else which critical thinking and the creative arts have in common. For neither of them comes easily to us. Unlike imitation, which is central to the societal transmission of practical knowledge, critical thinking, like learning to read and write music and play a musical instrument, is something in which we first have to be trained and which we then have to practise constantly, requiring a disciplined regime which is made all the more difficult by the fact that most people, even within the education system, do not understand what critical thinking is and cannot therefore teach it: an obstacle to its cultivation which raises the question as to how, over the last five hundred years or so (since the Reformation, in fact), it has so manifestly managed to flourish.

The answer is that we learn it (or, at least, used to) without actually knowing that we are learning it, primarily through the reading of books and other long form texts. It is for this reason, indeed, that the Reformation was so important in its development. For before the Reformation, most people in Europe couldn’t read. The one book of which they knew anything was the Bible, which was written in Latin and explained to them by the Roman Catholic Church, which thus placed all the power inherent in the possession of this knowledge in the hands of the clergy. In fact, it was the desire of many people to be able to read the Bible for themselves, in their own language, and therefore decide for themselves what it meant, that was one of the main drivers of reform, as well as one of the principal motives behind the opposition to reform by those in power. For the ability to acquire knowledge by reading a book is the most liberating and empowering ability one can ever acquire, not because of the knowledge contained in any one book, but because reading teaches one to think for oneself rather than simply being told. In short, it teaches one critical thinking.

This is because, in reading a book, whether we are conscious of it or not, our first task is to analyse the text in order to extract from it the individual propositions or pieces of information of which it is comprised. We then synthesise this information in order to work out the meaning of the text as a whole: a two-stage process which is made even more starkly clear if we think about it in terms of hermeneutics, a theory about what we actually do whenever we read a long form text, first put forward in the early 19th century by the German theologian and biblical scholar Friedrich Schleiermacher, which states that, in understanding any long form text (and, indeed, many other things as well), we interpret the whole on the basis of our understanding of the individual parts, but also interpret the individual parts on the basis of our understanding of the whole.

We most often notice this, in fact, if we read a book more than once. For even though we start synthesising our overall understanding of a book from the moment we start reading it, on our first reading that understanding is almost entirely informed by our interpretation of individual passages. Only when we have got to the end of the book do we usually have a fully formed understanding of the book as a whole. If we then go back to the beginning, however, and start reading it again with this overall understanding now in our possession, we often find that our first interpretation of some of the early passages was incorrect. Not having known the significance of some of these early passages, indeed, we may even have totally overlooked them. This means that we now interpret them afresh in the light of our new overall understanding of the book, which, in turn, can alter that understanding once again, forcing us to reinterpret other individual passages in what is potentially an infinite feedback loop.

Of course, we only usually undertake such multiple readings with regard to particularly difficult or important books, but the principle holds for all long form texts, even novels, which we may only read for pleasure but to which we still apply this same continuous process of analysis and synthesis. In fact, crime or detective novels provide us with some of the best opportunities to practise what is, in essence, critical thinking. For alongside the detective, we are constantly being presented with new clues and vital bits of information, which we continuously attempt to synthesise into a theory of ‘whodunit’. Because we read novels for pleasure, moreover, and may spend quite a lot of time doing so, they also help us improve our reading skills and our ability to sustain concentration.

This is important because, with the decline in the reading of books among generations younger than my own, many university lecturers now report that their students cannot manage more than one book a week, if that. Instead of handing out reading lists at the beginning of their courses, most lecturers have therefore taken to handing out photocopied passages from key texts at the beginning of each lecture, during which, of course, they then provide an analysis of these selected passages. This means that students no longer have to undertake either of the two essential analytical tasks involved in reading an entire book: determining which are the key passages in a given text and working out exactly why they are so important. They are simply told.

To compound this further, in many subjects the long form essay, which students used to have to write in answer to questions designed to reveal their level of understanding, has now given way to multiple choice questionnaires or questions requiring only short form answers. This means that they no longer have to take the results of their analysis and synthesise them into a coherent explanation of why something is the case.

This decline in both aspects of critical thinking has also resulted in a collapse of the distinction between information, which is passively received, and knowledge, which has to be attained by forming connections or putting things together: a process which, from a neurophysiological perspective, involves the laying down of new neural pathways, which tend to be more durable than simple memories. This is why techniques designed to improve one’s memory nearly always involve the making of connections, using such constructs as ‘memory palaces’, for instance. In most cases today, however, we don’t even bother to commit information to memory, knowing that, if we need to retrieve it, we can simply look it up on the internet, society’s new repository of all knowledge and wisdom, to which most of us resort fairly uncritically, assuming that most of the information stored there is true or correct without really even thinking about it.

After five hundred years of rising literacy, increased reading and ever more widespread critical thinking, it now seems, indeed, as if our failure to understand what critical thinking actually is, a change in technology and a consequent decline in long form reading may actually be causing us to slip backwards.

4.    Stupidity & Mob Rule

If stupidity is once again on the rise, however, it is not entirely due to either changes in technology or the debasement of an education system which never understood critical thinking in the first place. Our evolutionary psychology still plays a massive part. For even though departures from the societal consensus may no longer mean the difference between life and death, we are still extremely resistant to them, with the result that, having lost the ability to think critically, the societal consensus now has as big a hold over us as it has ever had. Once a stupid idea has been adopted by a society and become part of that consensus, it is consequently very difficult to get rid of. What’s more, anyone who tries to do so is subject to the same mob rule as ever. Today, contrarian critical thinkers may not be lynched, but their lives can be destroyed in other ways.

In my last but one essay, for instance, called ‘The Overly Narrow Focus of Today’s Climate Science’, I reviewed a lecture by Dr John Clauser, winner of the 2022 Nobel Prize in Physics, in which he demonstrates quite clearly that none of the papers which have supposedly contributed to the IPCC’s findings on radiative imbalance actually supports those findings. In fact, he demonstrates that the science contained in these papers is so poor that one cannot draw any conclusions from them at all, a finding which, in itself, is very disturbing. Instead of being congratulated for his clear-sighted analysis, however, Dr Clauser was universally castigated for casting doubt on the Anthropogenic Global Warming (AGW) theory. He even had a presentation he was due to give to the IMF cancelled.

Regardless of the merits or otherwise of the AGW theory, this travesty of the principles of disinterestedness and objectivity in science is so incomprehensible that, if it weren’t actually happening, I suspect it would be very hard to believe. After all, it is the duty of every scientist to critically examine everything, including the work of other scientists. To persecute a scientist for merely doing his job thus marks, in a very real way, the end of our scientifically based culture and civilisation. What’s more, the fact that we do not seem to understand this, or what it means, is another very clear indication of just how stupid we have become. For the material consequences of turning our backs on that which made European civilisation the most successful civilisation there has ever been are likely to be very serious indeed.

Just recently, for instance, the British government’s Advanced Research and Invention Agency (ARIA) announced funding for experimental research into Solar Radiation Modification (SRM), which will include planes releasing tiny particles into the stratosphere to reflect more of the sun’s radiation back into space: something which, if done on a large scale, could be very dangerous. The reason for this, as Dr Clauser coincidentally explains in the second half of his lecture, is that this reflection of the sun’s radiation away from the earth’s surface is something that is already being done… by clouds. In fact, clouds reflect up to a third of the sunlight entering the atmosphere, an amount I both hope and suspect we could never match. For the most important thing about the way in which clouds do this job is that it is self-regulating. This is because, by limiting the amount of sunlight reaching the earth’s surface, over 70% of which is covered in water, they limit the amount of water vapour created by evaporation, which then reduces the number of clouds being formed. With fewer clouds, more sunlight then reaches the earth’s surface, causing more evaporation and hence more clouds.

What we have, in effect, is a continuous feedback loop, in which a warmer surface causes more clouds to form, thereby cooling the surface, which then causes fewer clouds to form, thereby warming it again. In this way, the entire system is kept in balance. By artificially reflecting more of the sun’s radiation back into space, however, we will ensure that less sunlight reaches the earth’s surface, thereby reducing the amount of evaporation and the number of clouds, for which we will then have to compensate by artificially reflecting even more of the sun’s radiation back into space. In fact, by ratcheting this up, we could easily find ourselves taking over the job of clouds completely, but doing it less efficiently and at far greater cost.
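
For readers who like to see such loops written down, here is a minimal toy simulation of the mechanism just described, in Python. Every number in it, including the cloud response rule, the albedo values and the effective emissivity standing in for the greenhouse effect, is an illustrative assumption rather than a measured quantity; it is a sketch of the feedback’s logic, not a climate model.

```python
# Toy energy balance with the cloud feedback described above: a warmer
# surface produces more cloud, more cloud reflects more sunlight, and the
# surface cools again. All parameter values are illustrative assumptions.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 342.0     # mean sunlight arriving per m^2 of surface (S0/4)
EPSILON = 0.61    # crude stand-in for the greenhouse effect

def cloud_fraction(temp_k):
    """Hypothetical rule: cloud cover rises as the surface warms."""
    cf = 0.6 + 0.01 * (temp_k - 288.0)
    return min(max(cf, 0.0), 1.0)

def step(temp_k, artificial_albedo=0.0):
    """Nudge the temperature towards radiative balance for the current albedo."""
    albedo = 0.10 + 0.35 * cloud_fraction(temp_k) + artificial_albedo
    absorbed = SOLAR * (1.0 - albedo)
    emitted = EPSILON * SIGMA * temp_k ** 4
    return temp_k + 0.05 * (absorbed - emitted)

for extra in (0.0, 0.02):   # second run mimics artificial reflection (SRM)
    temp = 288.0
    for _ in range(1000):
        temp = step(temp, extra)
    print(f"artificial albedo {extra:.2f}: equilibrium {temp:.1f} K, "
          f"cloud fraction {cloud_fraction(temp):.2f}")
```

Run as written, the second pass settles at a cooler equilibrium with a lower cloud fraction: the model’s clouds thin out in response to the artificial shading, partially offsetting it, which is precisely the compensation spiral described above.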

What’s more, clouds don’t just regulate the temperature of the earth’s surface; they also produce rain to water our crops, without which we’d all starve. In fact, meddling with this system has got to be one of the stupidest and most dangerous ideas anyone has ever had, especially as nature has been successfully maintaining this state of balance for millions of years without any help from us.

‘But,’ you say, ‘we have already undermined this natural state of balance by pumping so much carbon dioxide into the atmosphere. We have therefore got to do something to rectify this.’ The problem with this argument, however, is that it is largely a myth. Yes, we pump CO2 into the atmosphere by burning fossil fuels. But most of the CO2 in the atmosphere is produced the way it has always been produced: by cosmic radiation, in the form of free neutrons, striking the nuclei of nitrogen atoms and knocking out one of their protons, thus turning nitrogen atoms into carbon atoms, which then bind with oxygen to form CO2 molecules. Being substantially heavier than both nitrogen and oxygen, around 57% and 38% heavier respectively, these then descend to the earth’s surface, where they are either dissolved in the oceans or absorbed by plants, which extract their carbon by photosynthesis to build their own cells while releasing the oxygen back into the atmosphere. All we do is recycle a little of this carbon by burning the fossilised remains of plants which absorbed the CO2 millions of years ago.
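
The weight comparison, at least, is easy to verify with a little arithmetic, using standard molar masses (textbook values, nothing assumed):

```python
# Molar masses in g/mol, from standard atomic weights (N ~ 14, O ~ 16, C ~ 12).
masses = {"N2": 2 * 14.007, "O2": 2 * 15.999, "CO2": 12.011 + 2 * 15.999}

for gas in ("N2", "O2"):
    excess = 100 * (masses["CO2"] / masses[gas] - 1)
    print(f"CO2 ({masses['CO2']:.1f} g/mol) is {excess:.0f}% heavier than {gas}")
# -> CO2 (44.0 g/mol) is 57% heavier than N2
# -> CO2 (44.0 g/mol) is 38% heavier than O2
```

The exact percentages depend only on these atomic masses, so the qualitative point, that CO2 is the heaviest of the three gases, is secure.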

Of course, it will be argued that while we cannot stop carbon being created naturally in the atmosphere, we can keep the carbon that is safely buried in the ground where it is, instead of burning it to form a greenhouse gas that is dangerously overheating the planet. This argument, however, is based on two premises, both of which are false. The first is to suppose that CO2 makes a significant contribution to this warming effect, when its contribution is actually quite minor, accounting for only around 3% of total warming by greenhouse gases. A far greater contribution is actually made by water vapour, which is thirty times more abundant than CO2 and accounts for over 90% of all greenhouse gas warming.

The argument, of course, will then be that, while we cannot do anything about the amount of water vapour in the atmosphere, except by reducing the amount of sunlight reaching the earth’s surface as described above and thereby reducing evaporation from the oceans, we can and should do something about our emissions of CO2. This is despite the fact that these emissions constitute only a small percentage of all the CO2 in the atmosphere, which, in turn, accounts for only a very small percentage of the overall warming caused by greenhouse gases. As should be clear, however, this argument is already beginning to look pretty thin.

It is the second false premise upon which these arguments are based, however, that really tips the balance. For this second false premise is the widespread assumption that greenhouse gases, which actually keep the planet warm, are essentially a problem. What no one bothers to ask is how much cooler the planet would be without them. According to the Stefan-Boltzmann equation, the answer is 33°C. That is to say that, instead of a mean surface temperature of around 15°C, without greenhouse gases the earth would have a mean surface temperature of -18°C. In fact, most if not all of the surface would be covered in ice, making it doubtful whether it could support any life at all. Instead of constituting a threat to the planet, therefore, greenhouse gases are what makes life here possible, with water vapour making by far the greatest contribution, not only keeping us relatively warm at a mean 15°C, but constantly adjusting cloud cover to keep the temperature stable.
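
The 33°C figure is straightforward to reproduce. Without greenhouse gases, the surface would warm only until it radiated back exactly the sunlight it absorbed, and the Stefan-Boltzmann law lets us solve for that temperature directly. The solar constant and albedo below are standard textbook values:

```python
# Radiative equilibrium with no greenhouse effect:
#   absorbed sunlight = emitted heat  =>  (S0/4) * (1 - albedo) = sigma * T^4
# The factor of 4 arises because the earth intercepts sunlight on its disc
# (pi R^2) but radiates from its entire surface (4 pi R^2).

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
S0 = 1367.0        # solar constant at the earth's orbit, W/m^2
ALBEDO = 0.3       # fraction of sunlight reflected straight back to space

absorbed = (S0 / 4) * (1 - ALBEDO)      # ~239 W/m^2
t_eff = (absorbed / SIGMA) ** 0.25      # equilibrium temperature, kelvin

print(f"effective temperature: {t_eff:.0f} K = {t_eff - 273.15:.0f} C")
print(f"greenhouse effect: {15 - (t_eff - 273.15):.0f} C")
# -> effective temperature: 255 K = -18 C
# -> greenhouse effect: 33 C
```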

If water vapour is the stand-out performer in keeping the planet habitable, however, CO2 comes in a close second. For while its contribution to keeping the planet warm may be relatively small, it is, of course, essential to all plant life and hence all life on earth. In fact, the existence of both water vapour and CO2 in our atmosphere is probably the best argument for the existence of God I’ve ever come across. And yet we have somehow got it into our heads that these greenhouse gases are ‘bad things’, which rather raises the question as to how this could have happened. And the answer, of course, is just stupidity. We believe that CO2 is a problem because everyone else believes it, our reasoning being that if everyone believes something, then it must be true. This, however, is the very essence of societally based stupidity. And, the way things are going, it could well be the cause of our destruction. For in this one regard, of course, Carlo Cipolla was almost certainly correct: stupidity really is a greater threat to civilisation than evil.