Friday 1 December 2023

How The Whole World Can Come To Believe Things That Are Not True

 

1.    Experience, Authority & Resistance to Change

One of the main differences between home cooking and restaurant cooking is the fact that most restaurant dishes today are cooked to order in individual portions. As a result, a typical restaurant chef may cook the same dish half a dozen times or more in a single evening, with the further consequence that he very quickly becomes good at it. Constant repetition also promotes efficiency of thought and movement and hence speed. With all the ingredients prepared earlier, an experienced chef can therefore be plating up within minutes of receiving an order. In fact, most restaurant dishes today are designed for this kind of turnaround.

Most homemade food, in contrast (especially the kind of homemade food I ate when I was growing up, sixty-odd years ago), is cooked for the whole family and very often consists of slow-cooked dishes such as stews and casseroles, puddings and pies. The home cook, as a consequence, doesn’t cook each dish anywhere near as often as a restaurant chef, a feature of home cooking which, when I was a boy, more or less determined the size and scope of most home cooks’ repertoire.

I say this because if, at that time, a home cook’s repertoire was too large, she (and I say ‘she’ here because, in those days, the home cook was nearly always mum) could not cook each dish often enough to ensure that it always turned out to the same high standard. This then created both positive and negative feedback loops. For the more often a mum cooked a particular dish, the better she got at it; the better she got at it, the more approbation she received from her family and the more she was therefore encouraged to cook that particular dish again. Thus the dishes she cooked regularly and was good at became family favourites, while the dishes she cooked less regularly and was consequently less good at gradually disappeared from the menu.

Just as important to overworked, multitasking mums in the 1950s and 60s was the fact that the more often a mum cooked the same dish, the more quickly she was generally able to prepare it. Something she could put together while simultaneously comforting a crying three-year-old with a grazed knee was therefore infinitely preferable to something upon which she had to concentrate all her attention. It would never even have occurred to most mums at that time, therefore, to venture much beyond the tried and tested dishes they knew their families loved.

At some point in the late 1970s or early 1980s, however, the distinction between home cooking and restaurant cooking began to blur. This was partly due to the fact that, with greater prosperity, people were eating out more and then wanted to try their favourite restaurant dishes at home, and partly due to the rise of celebrity chefs, who sold recipe books based on television series in which they encouraged their viewers to believe that, by following these recipes, they could cook as well as any professional chef. The only problem was that, while, in some cases, some of these recipes turned out so well that they were incorporated into the home cook’s regular repertoire, most of the recipes were only ever cooked once, with the result that each foray into the kitchen was a journey into the unknown made all the more stressful by the fact that this new form of home cooking inevitably gave rise to a fashion for dinner parties, which, in many cases, became extremely competitive.

What really made this new form of home cooking so taxing, however, was the fact that we were no longer ‘in’ what we were doing. When mum used to cook for her family in the 1960s, she didn’t weigh out ingredients or follow exact instructions; she knew when the consistency of a batter or cake mix was right by look and feel. Because we were now merely following instructions, however, we could no longer make such sensory judgements. Worse still, because we didn’t really ‘know’ how to cook the dishes we were preparing, we could not always tell when something was wrong, either with the instructions themselves or with the way we had followed them. This is why the process became so fraught. Because instead of simply immersing ourselves in something we’d done a hundred times before and could almost do without thinking (which is actually quite relaxing), we now had to constantly break off to consult the damn recipe and try to work out whether we were doing it right.

Our real mistake in all this, however, was the philosophical one of failing to draw the proper distinction between practical knowledge, or the knowledge of how to do something, and factual knowledge, or the knowledge that something is the case. As such, we confused our factual knowledge that Yorkshire puddings, for instance, are made from eggs, flour, milk and a pinch of salt, with our assumed ability to actually make one as good as our grandmother used to make, which, even if one has a list of ingredients and knows in theory the required technique, is far more difficult than most people think.

Of course, it may be argued that the factual knowledge of what goes into a Yorkshire pudding and how it should be cooked must nevertheless have been present among our grandmother’s store of factual beliefs in order for her to have successfully made one. This idea, however, that we have this store of factual knowledge, which we apply in our practical dealings with the world, is based on a very distorted model of how the human mind actually works, especially in driving our actions. When our grandmother made her Yorkshire pudding, for instance, she would have known that, because Yorkshire puddings do not contain a raising agent, she had to get her oven as hot as it could be in order to get the air in her aerated batter to expand as quickly as possible, thereby causing the pudding to ‘puff up’ before almost simultaneously setting. She would not, however, have actually thought this. She would not have said to herself: ‘Because Yorkshire puddings contain no raising agent, I have to get the oven as hot as possible so that the air in the batter expands’. She would have simply waited for the oven to get hot enough before pouring in the batter.

Similarly, she would have had her own way of gauging when the oven was sufficiently heated. Typically, this is done by waiting until blue smoke is visibly rising from the fat in the pan. Again, however, she would not have said to herself: ‘Ah! There’s blue smoke coming off the pan. That means it’s hot enough to pour in the batter.’ She would simply have seen the smoke, poured in the batter and closed the oven door as quickly as possible in order to prevent the temperature dropping too much.

In fact, most of what we do in life we do in this way: without explicitly thinking about it. The problem is that while this makes our most common cognitive processes much simpler than we tend to imagine, it also makes them far less easy to represent. For they are not like reasoned arguments which can be laid out sequentially in a series of logical steps. Not only do they often omit or fail to make explicit many of the premises on which the rationality of our actions depends but, more often than not, we only give voice to the thoughts or reasons behind our actions when someone asks us to explain them. And even then our explanations may not be very helpful.

If you had asked your grandmother why she always used a hand whisk rather than an electric whisk to whisk her Yorkshire pudding batter, for instance, she would probably have told you that it was because one has to get the consistency of the batter ‘just right’, the implication being that she could judge this better when using a hand whisk than with an electric whisk. She may even have added, by way of explanation, that if a Yorkshire pudding batter is too stiff, the pudding doesn’t rise enough to make it light and crisp, while if it is too thin, it doesn’t hold,  causing the pudding to collapse as soon as it is taken out of the oven. None of this information is of very much use, however, if you do not know what the term ‘just right’ means in the context of making a Yorkshire pudding.

It’s why, when teaching practical skills of this type, it is usually better to show the student how to do something rather than try to tell them how to do it. The teacher may then stand over them, offering guidance while they attempt to do it themselves, but ultimately all such skills can only be learnt by experience, which necessarily includes the experience of repeatedly getting it wrong.

It’s also why this kind of knowledge is so prized. Because it is always so hard earned. Factual knowledge, which one can gain simply by reading a book, is cheap in comparison. Because it cannot be gained by reading a book, however, those seeking a particular form of practical knowledge usually need a teacher: someone who has the requisite skills and a level of competence sufficient to earn them the trust and respect of their students. This is especially true in the case of disciplines such as haute cuisine and the playing of a musical instrument, where students are often willing to go to great lengths in order to sit metaphorically at the feet of an acknowledged master.

While such celebrated specialisms have always generated a revered status, however, historically this kind of respect and resultant authority have also been accorded to those who had practical knowledge of a more mundane nature. This was especially true with respect to agricultural knowledge, where someone’s ability to tell when a crop needed to be brought in because the weather was about to change may have been critical to the economic wellbeing and even survival of an entire community, and would have been even more esteemed not only because, in many cases, the person who had this knowledge may not have been able to say how he knew what he knew but because, in some cases, the knowledge may not have even existed.

This may sound slightly odd, but just as it is possible to make a perfectly good Yorkshire pudding without knowing why one has to aerate one’s batter and get one’s oven as hot as possible (as long as one actually does so), so it’s possible to know that one has to follow certain agricultural practices without knowing why.

When our hunter-gatherer ancestors first started cutting down areas of forest in order to plant crops, for instance, they would have very quickly noticed that, after they had grown the same crop in the same field for a few years, the land ceased to be quite as productive as it had been before, forcing them to clear another area of forest and start again. At some point, however, someone must have also noticed that if they went back to one of the old pieces of land which had lain fallow for a while, its fertility would have returned to what it had previously been. This eventually led to the practice of crop rotation and of regularly turning fields which had previously been used to grow cereals into pasture for animals: a system which was used by farmers for hundreds of years and clearly worked even though it is to be seriously doubted whether any of the early farmers who practiced it knew why.

What’s more, this lack of any factual basis upon which to explain why the crop rotation system worked would not have necessarily undermined the authority of those who knew from experience the order in which the crops needed to be rotated and when a field needed to be turned into pasture, especially if any part of this knowledge were derived from such sensory clues as the smell and texture of the soil, which, in themselves, would have been very difficult to explain. In fact, the very inexplicability of someone’s expertise in making these judgements would have almost certainly enhanced their reputation and authority still further, which, in turn, would have made them even less inclined to attempt an explanation. After all, knowledge is power.

In addition to thus increasing the authority and mystique of those who were believed to have knowledge most people lacked, this absence of any factual basis upon which to explain why certain practices worked while others did not would have also had one further major consequence for the general population: it would have made them extremely conservative and resistant to change. For if one doesn’t know why a particular practice works, one cannot know whether any modification to it will improve it or make it less effective, thereby rendering all such changes inherently risky. For an agrarian population whose very survival depended on the success of their harvest, therefore, doing what had always worked in the past while deferring to those whose judgement had consistently been proven correct was clearly the optimal strategy.

Indeed, the general ignorance which pervaded our agrarian past, along with the extreme caution it promoted, has probably done more to shape humanity over the last ten thousand years of our history than any other evolutionary factor, with the result that these two character traits, deference to authority and resistance to change, are now almost certainly part of our genetic make-up. We can actually see the results of this in society today, where independently minded risk-takers are still very much in the minority.

The problem is that, while playing it safe and going along with the herd may have had a certain survival value during the agrarian era, when practical knowledge was dominant, in an information age, dominated by factual knowledge, this may no longer be the case. Indeed, there is a distinct possibility that uncritically accepting what those in authority tell us is now a threat to our very existence, not because those in authority have any malign intent, but because their authority is no longer based on our belief in their ability to do something, about which it is very hard for us to be mistaken, but on our belief that they have certain factual knowledge, about which it is very easy for us to be wrong.

2.    How Things Go Wrong

I say this because, with respect to practical knowledge, it is very easy for us to ascertain whether someone can actually do something: we can simply ask them to do it. More to the point, practical knowledge is not about truth or falsehood, but about whether or not something works. If a particular way of doing something produces positive results, we continue using it; if it doesn’t, we abandon it or it never becomes a common practice in the first place. We do not say that it is true in the one case and false in the other.

Factual knowledge, in contrast, is always either true or false. What’s more, there are a whole variety of different ways in which we can be deceived into forming a false belief, making it very easy for us to do so.

The first two such ways are mistakes we make ourselves, the first of these being observational errors. Typically, we think we have seen something when, in fact, we haven’t, or take it to be one thing when it is actually something else. The second type of mistake then consists of logical errors. We correctly see something but then infer from it something that is not actually the case.

The mitigating characteristic of both these types of error is that, because we make the errors ourselves, their correction is also in our own hands. We can take a closer look at the thing we mistakenly thought we saw, for instance, or, in the case of a logical error, we can go over our reasoning once again and see how we might have jumped to the wrong conclusion. It is far more difficult to correct our false beliefs, however, when their source is someone else and when they have come to us in the form of information, the falseness of which can originate in three different ways, the first and most obvious of which is when someone tells us a lie.

The biggest problem we have with respect to lies, of course, is that, most of the time, we do not know when someone is lying to us. In fact, it is part of the definition of a ‘lie’ that it causes us to form a false belief, which means that it must go undetected for at least some period of time. After all, if we know that something is a lie as soon as we hear it, it is not a very good lie and the liar has actually failed in his intention. The critical issue for most of us, therefore, is not in determining whether a particular piece of information is true or false, which may be difficult to tell, but in deciding whom we can trust. This is why being lied to often feels like a betrayal. Because the liar has betrayed our trust. It is also why in some cultures, such as the Zoroastrian Achaemenid and later Parthian empires, lying was regarded as such a heinous crime that people were actually executed for it.

Most of the false information we receive, however, is not intentionally false. In fact, much of it comes about as a result of observational or logical errors. Someone thinks they saw something when they didn’t but honestly reports what they think they saw. If we regard the source as reliable, this then causes us to form a false belief which, in all good faith, we may then relate to someone else who also forms a false belief. Depending on the velocity of communication, this false belief can thus multiply and spread exponentially, which, of course, is one of the reasons why it is possible, given the current state of our communications technology, for the whole world to believe something that is not true.

What it is important to note here, however, is that this kind of viral spread is not, on its own, a sufficient condition for the creation of a universal false belief. This is because most of what is spread in this way is so ephemeral that it is almost instantly forgotten. For something to stick, it has to have one or more of a certain set of attributes, such as being amusing, for instance. What’s more, not all such false beliefs have the same capacity to cause us harm. To have a seriously deleterious effect they have to make us see the world in a significantly different way such that it affects our behaviour, which brings us, therefore, to the third and most significant way in which false beliefs can be created: through invention, not for the purposes of deception, but with the intention of entertaining, offering moral guidance or helping to explain the world through the use of stories.

The most innocuous of these different uses of fictional narrative is, of course, entertainment. Even here, however, there is still a danger that some people, perhaps even a lot of people, may be led to form false beliefs, not, perhaps, about the actual existence of the characters in a particular story or about whether the events in the story actually occurred, but certainly about the possibility of real people doing the kind of things that the characters in the story do. If one watches enough science fiction, for instance, it is quite possible that one could come to believe that, one day, human beings might actually travel to planets in distant star systems, totally impossible though this is and always will be.

Stories can also give us a very false picture of what human beings are really like. This is especially true with respect to ancient myths and modern action films, in which the characters are often very one-dimensional, single-minded and untroubled by conflicting interests, motives, beliefs or emotions. This over-simplification of the human experience can then be further exacerbated if at least part of the intention behind a story is to provide some form of moral or salutary lesson, which it can be very difficult to draw out or make unequivocal if the characters are complex human beings with both strengths and weaknesses, virtues and vices. Simplifying the characters so that they exemplify just one virtue or vice makes it very much easier, therefore, to use the characters to convey moral principles. The problem is that this can also lead to a rather Manichean view of the world in which people are either all good or all bad and do evil things simply because they are evil.

That’s not to say, of course, that such over-simplified representations of human beings are the only cultural artefacts in our media-rich environment that can cause us to see the world purely in terms of black and white. The more we are exposed to these artefacts, however, the more prevalent this Manichean worldview seems to be, as has been fairly clearly demonstrated in recent months by our rather odd reaction to recent world conflicts. For instead of calling for peace, which would have been the standard response of third parties in the past, especially among the young, our overwhelming response, at least as represented in the media, has apparently been to take sides, blaming the conflict entirely on one side, which we then consistently portray as evil (Vladimir Putin being the perfect example) while completely exonerating the other, whom we then portray as entirely innocent. Not only are international conflicts seldom this simple, however, but representing them in this way also has some very negative consequences. For not only does the taking of sides make it more or less impossible for us to broker a peace, but it positively encourages the side being supported to continue wars in which thousands more people are killed.

If fictional narratives can thus cause us to both lose touch with reality and create moral oppositions which can have fatal repercussions for whole populations, these unfortunate consequences pale into insignificance, however, when compared to the dangers inherent in using stories to explain the world to ourselves, especially as, throughout most of our history, the world we have attempted to explain in this way has included natural phenomena such as storms and earthquakes, which would have otherwise been completely inexplicable to us.

Nor is it difficult to understand why the use of stories to explain these natural phenomena should have posed more threats to us than any other use of fictional narratives. For if one knows nothing of the causal laws which govern an essentially inanimate universe, then one has very few other options for explaining violent storms, for instance, than to animate them, depicting them either as something akin to rampaging beasts or as something unleashed by a hidden but nevertheless sentient being whose anger might have been diverted from ourselves if we had only offered it the right inducements.

Not that even this, on its own, would have necessarily led to the kind of horrific consequences we see littered throughout much of our history. For while all human failings were liberally represented among the gods and spirits that were then seen as animating nature, including our susceptibility to both bribery and flattery, there were also ways in which this anthropomorphised conception of the natural world actually had an uplifting and beneficial effect upon the human spirit. Consider, for instance, the Romans’ practice of making small offerings, particularly honeyed oatcakes, to their household gods, which, more than anything else, had the effect of reminding Romans of their blessings: the roofs over their heads, the food on their tables, etc.

If the Romans were an exception in this respect, however (as I suspect they were), it is because what most conspicuously characterised so many Roman religious practices was the fact that, being centred on the family (the familia), they were essentially private and personal. All Roman households, for instance, had a shrine somewhere within them where family members could not only leave offerings for the spirits they felt inhabited the world all around them, but where they could pray or talk to their ancestors and recently departed loved ones, thereby endowing their relationship to the spirit world with an intimacy which more or less precluded violence. It is only when a religious practice or, more especially, the perceived need to offer up sacrifices, becomes communal and public (usually because the whole community believes it has a problem) that the more destructive aspects of such practices are unleashed.

Suppose, for instance, that a village has had three bad harvests in a row which cannot be explained by any of the well-known natural hazards associated with farming. Worse still, food shortages have led to a greater susceptibility to disease and a higher mortality rate, which the villagers see as a quite separate punishment being inflicted upon them. So they go to see their most experienced elder (the one who tells them when to sow their seed grain and when to let a field lie fallow) and ask him why this is happening, what they have done wrong and what they must do to amend the situation.

This, however, puts the elder in a very difficult position. For he has no practical solution to offer his people, who have been farming in the same way for hundreds of years, during which their crop rotation system has always served them well. If he admits that he has no idea what they should do, however, this would mean a loss of his authority. So he tells them that, in practical terms, they are doing everything right but that they have neglected to properly acknowledge a particular spirit, to whom they must make redress by offering up a substantial sacrifice before the next growing season.

This they do by sacrificing one of their few remaining pigs, which appears to have the desired effect in that the following year’s harvest is back to normal. Believing that they have already done enough to appease this angry god, however, they do not repeat the sacrifice the following year, with the apparent result that they have yet another poor harvest. Realising their mistake, the next year they therefore sacrifice a pig once again. But the harvest is still poor. And so they go back to their elder and ask him again what’s wrong, to which he replies that, after they had omitted the sacrifice the second year, the god was made even more angry, with the result that a pig was no longer enough to propitiate him. The sacrifice needed to be of something more precious to them: in short, it had to be one of them. And so, from that year on, without fail, they assiduously sacrifice one of their own number in order to assure a good harvest, establishing a ritual from which they cannot now retreat. For to cease a practice on which they believe their lives depend, and in which they have been instructed by the person upon whom they have bestowed authority, would be to once again invite disaster.

Nor can one fault their logic. For if one does not know what actually causes a harvest to be good or bad but believes that crop rotation, allowing fields to lie fallow one year in four and the making of human sacrifices are all equally necessary, one simply has to go on doing all three. Driven by fear, in fact, the assiduousness with which the sacrificial ritual is performed becomes a kind of obsession, with the further consequence that anyone who questions it is regarded as a heretic and a threat to the community, and is very probably the next person to be sacrificed. Very soon, therefore, people are simply too afraid to speak out, which then allows those in charge of the ritual to continually expand it, turning it into a sacred rite with a rationale which goes way beyond the reasons it became established in the first place.

During the reconsecration of the Great Temple of Tenochtitlan in 1487, for instance, it is said that, over the course of four days, the Aztecs sacrificed more than 80,000 prisoners to their sun god Huitzilopochtli in order to keep the sun shining. Some historians have questioned this number, putting it at closer to 20,000, but whether the number was 80,000 or 20,000, the ritual had clearly developed a life and internal logic of its own which bore little relation to whatever real-world experience had originally brought it into being.

What makes the Aztec practice of sacrificing human beings in such numbers truly shocking, however, is the fact that 1487 is only 536 years ago, an elapsed period of time which, without a fairly dramatic change in the environment, is simply not long enough for any significant evolutionary changes to have occurred in the human species. That is to say that, in genetic terms, we are exactly the same today as our Aztec cousins, as is further demonstrated by the fact that, during the 20th century, over 100 million people were murdered in concentration camps, all of them, no doubt, on the authority of someone who said that their sacrifice was necessary for society’s greater good.

3.    Science to the Rescue?

Of course, it will be pointed out that, although we may not have evolved appreciably over the last five hundred years, we have nevertheless changed, most notably by abandoning our mythological explanations of natural phenomena and devoting ourselves, instead, to discovering the causal laws which underlie and explain how the natural world actually works. To suppose that this fundamental change in the way we think about the world has somehow saved us from ourselves, however, is to ignore the fact that science is as capable of getting things wrong as any other form of factual knowledge and does so for the very same reasons, starting with observational errors.

In the case of science, these typically come about not as a result of thinking we saw something when we did not or taking it to be one thing when it was actually something else, but as a result of faulty measuring equipment, for instance, or by not basing our data on a statistically large enough sample. Again, as with observational errors in general, these mistakes can be corrected, or more usually avoided, by simply taking more care, which, in scientific terms, translates into adherence to the highest standards of scientific rigour. The problem with this, however, is that disciplining oneself to take more care is dependent upon a belief in and commitment to a certain set of values which can be very easily undermined if the institutions of science themselves become corrupted, which, as I shall endeavour to demonstrate later, can also happen very easily.

As in the case of all factual knowledge, the second category of error into which science can fall then comprises logical errors, which are intrinsically more difficult to avoid than observational errors in that scientific laws are based on inductive rather than deductive reasoning. That is to say that if we observe something ninety-nine times in a row and what happens is always the same, we then assume that the same thing will happen on our one hundredth observation. Indeed, it is consistent correlations of this kind upon which most scientific laws are based. The only problem with this is that this kind of inductive reasoning is never 100% certain. For just because the same thing has happened each time one has observed something in the past doesn’t mean that the same thing is going to happen the next time one observes it. For there is always the possibility of an exception, which one will then have to explain.

It is with respect to the explanatory aspects of science, however, that the biggest problems lie. For even if there are no exceptions (or none that have yet been observed), science is incomplete unless it can explain why a certain correlation always or usually holds. This means going behind the observed reality and positing an unobserved reality that would explain it. That is to say that just as our ancient ancestors explained the world to themselves by telling themselves stories, so we tell ourselves a story: a different kind of story, of course, but a story nonetheless.

In 1522, for example, one of Ferdinand Magellan’s ships completed the first ever circumnavigation of the globe, thereby proving conclusively that, rather than being flat, the earth is round. However, it also rekindled the eternal question as to why, if the earth is round, everyone, other than those at the top of the sphere, doesn’t just slide off: a mystery which was not satisfactorily explained until the publication of the Philosophiæ Naturalis Principia Mathematica in 1687, in which Sir Isaac Newton presented his now famous theory of universal gravitation, which posits that every particle in the universe attracts every other particle with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centres.
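
For readers who like to see the law written out, the verbal definition above corresponds to the formula F = G × m1 × m2 / r², and the minimal Python sketch below simply checks that it gives sensible numbers. The values used for G and for the earth’s mass and radius are standard textbook figures, not anything taken from this essay, and the snippet is purely illustrative.

    # Newton's law of universal gravitation: F = G * m1 * m2 / r**2
    # Sanity check: the force the earth exerts on a 1 kg mass at its surface
    # should come out close to the familiar 9.8 newtons.
    G = 6.674e-11        # gravitational constant, N m^2 kg^-2
    m_earth = 5.972e24   # mass of the earth, kg
    r_earth = 6.371e6    # mean radius of the earth, m

    def gravitational_force(m1, m2, r):
        return G * m1 * m2 / r**2

    print(gravitational_force(m_earth, 1.0, r_earth))  # roughly 9.8 N, i.e. the weight of 1 kg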

The most important thing to note about this definition, however, is not just the fact that, by basing it on quantifiable relationships, Newton was able to formulate a mathematical equation for calculating the gravitational force between two bodies for the first time, but that he conceived of gravity as an attractive force, a bit like magnetism, which we now believe to be false. This is because astronomers in the 19th century discovered that Newton’s mathematical equation did not accurately predict the orbit of the planet Mercury, which no one could explain in terms of any additional factors. All they knew, therefore, was that there had to be something wrong with the theory. Without an alternative, however, scientists continued using this now falsified theory, along with Newton’s mathematical formula, for more than half a century, until a replacement finally became available with the publication of Albert Einstein’s theory of general relativity in 1915.

My point in telling this story, however, is not just to give substance to Sir Karl Popper’s dictum that scientific theories can never be proven, only disproven, and that when a theory is disproved, eventually someone has to come up with an alternative, but also to demonstrate how little we really understand science: a point which is perfectly illustrated by the fact that, today, a full century later, most people either still believe that gravity is an attractive force or hold to the completely incoherent view that, even though Einstein’s theory conceives of gravity as a warping of space, both theories are somehow true, which is entirely impossible. Either gravity is an attractive force or it is a warping of space; it cannot be both. What’s more, there is a fairly high probability that it may be neither. For there are numerous critics of modern theoretical physics, including the Austrian-born philosopher of science Paul Feyerabend, who have cast doubt on whether the mathematics of Einstein’s theory predict Mercury’s orbit any better than Newton’s did.

Not, of course, that this is necessarily a problem. After all, we worked with a disproven theory of gravity for over half a century and nothing untoward happened. Now, in all probability, we just have two such theories, with either of which we can work quite happily as long as we have the honesty and humility to accept that scientific theories are the stories we tell ourselves to explain observed correlations in the world and that from time to time they have to be amended or even replaced in order to accommodate new observations. Our problems only begin when, for whatever reason (possibly the loss of religious faith), we start to believe that scientific theories are true in some absolute sense and that scientists therefore speak with absolute authority: a misconception which is partly due to the fact that we do not understand science and partly a result of the fact that a need for some such authority is part of our genetic make-up. It is also, however, a betrayal of science: one which, as I aim to demonstrate, is the main cause of our increasing loss of scientific integrity.

To understand this, however, we need to go back in history once again, to the year 1660, when the Royal Society was founded in London, thereby ushering in a new scientific era the aims and ideals of which were enshrined in the society’s founding charter. Nothing expressed these aims and ideals better, however, than the choice of ‘Nullius in Verba’, usually rendered as ‘Take Nobody’s Word for It’, as the society’s motto, which was unequivocally understood at the time as an explicit rejection of authority (as then exemplified by the church) as the basis of belief. Their vision of science was rather one in which all scientists not only had the right but also a duty to review the evidence for themselves and form their own judgements. The problem was that they were only able to do this because most scientists in the 17th century were generalists rather than specialists and could all, therefore, review each other’s work.

One of the best examples of this was Robert Hooke, the Royal Society’s first Curator of Experiments, whose work took in just about everything from physics to zoology. He even designed and built his own microscope in order to study and draw microscopic lifeforms invisible to the naked eye.

As the scope of science expanded, however, it inevitably became more specialised, with the result that the 17th century’s encyclopaedic approach to science became less and less feasible. Worse still, this fragmentation of science into more and more finely defined specialisms also had the inevitable consequence of each discipline developing its own dedicated vocabulary, which meant that scientific papers, which might once have been read by everyone, became more and more inaccessible to anyone not immersed in each highly specialised language. This then created a vicious circle. For knowing that their readership would be restricted to colleagues within their own field, scientists gradually gave up even trying to communicate to a wider audience, finding it easier to write using the shorthand jargon of their own specialism rather than try to explain themselves to the uninitiated. The overall result was that scientific writing in general became ever more impenetrable, with the further consequence that scientists in other fields had little choice but to defer to the acknowledged experts in each specialism, thereby creating figures of authority which, one suspects, most scientists actually found rather congenial.

However, the rot didn’t stop there. In fact, an even more unfortunate consequence of this fragmentation of science was the way in which the primary purpose of scientific writing now also changed. For as its function as a method of communicating new scientific knowledge to the world waned, its main value came to be seen in terms of the standing it conferred upon the author or authors, especially when the latter were applying for jobs or research funding. The result was that most scientists now had to find a publisher for their work in order to further their careers, which then had the further consequence of causing a proliferation in the number of journals being published, which themselves became ever more specialised.

Because these more specialised journals naturally had a more limited readership, however, this then led to many of them charging their authors for the privilege of seeing their work in print, a practice which the authors were obliged to accept for the simple reason that they needed to be published in order to secure funding. Because most of them would not have been able to afford this on a university lecturer’s salary, however, this in turn led to the practice of including the cost of publication in their funding applications, thereby passing the cost on to the taxpayer rather than to the journals’ readers.

The obvious flaw in this arrangement, however, was that authors, from whom the journals now obtained their income, became more important to the journals than readers, which had the further inevitable consequence of journals becoming less selective about what they published. This then had the further consequence that the readership for most scientific papers contracted still further, until it became a standard joke within scientific publishing that most scientific papers only ever have three readers: the authors, themselves, the editor of the publishing journal and the peer reviewer, who, in most cases, merely checks the summary at the beginning of the paper in order to make sure that it is roughly in line with most other research being undertaken in the field.

Which, of course, it always is. For apart from the fact that the majority of research funding always goes to those holding the majority opinion, if one’s sole objective is to get one’s paper published in order to add another title to the list of publications on one’s CV, then the last thing one wants to do is cause the editor of the publishing journal or the peer reviewer to question its soundness by including findings inconsistent with the general consensus. The result is that, apart from adding a few confirmatory findings of their own (mostly in order to demonstrate that their work makes an original contribution to the field), most authors of scientific papers will carefully ensure that their conclusions align with the views of both their peers and the funding bodies, even going so far as to cherry-pick or statistically manipulate their data in order to achieve this end.

In fact, such practices have become so commonplace in academic science today that, as I pointed out in ‘Problems in the Culture of Modern Science’, most young scientists do not even realise that they are doing something wrong when they statistically manipulate their data to make it fit the prevailing theory. They simply do what everyone else does and are able to get away with it, not only because going along with the herd and not making waves is also what everyone else is doing, but because it doesn’t matter. For if one’s work more or less aligns with everyone else’s, and if hardly anyone is going to read it anyway, it makes no difference whether one assiduously adheres to the highest standards of scientific rigour or simply makes it all up. One may be committing fraud and one’s results may be entirely bogus but, in most cases, it is not going to affect anything.

The problem is that once science has reached this level of corruption, occasionally it will throw up fraudulent results in a field of study that is not only of general public interest but one in which governments feel that they have to get involved. What’s more, governmental involvement almost invariably brings with it more funding, thereby increasing both the potential rewards and the dangers for those working in the field. For while there may be more money available, there are also likely to be more people with an interest in scrutinising one’s work, with the result that those committing fraud may no longer be able to rely on the passive corruption of institutions simply allowing things to slide and may therefore have to take more active measures, as has been only too well demonstrated by those leading the campaign against climate change.

4.    Climate Change & The Repurposing of NASA

Because most people in the world today believe that the earth’s climate is warming catastrophically as a result of manmade greenhouse gases, it is highly likely that any expression of scepticism in this regard on my part will result in many readers ceasing to read this essay at this point, concluding that I am some kind of nutcase and a climate change denier. Any open-minded review of climate science, however, very quickly reveals that the issue is not only far from settled but that the Anthropogenic Global Warming (AGW) theory which currently dominates the science has actually been disproved multiple times in its chequered history.

That history began in 1896 when the Swedish physicist and Nobel laureate, Svante Arrhenius, noted that, while the two most plentiful gases in the earth’s atmosphere, nitrogen and oxygen, are essentially transparent to infrared radiation, there are some gases, most notably carbon dioxide, methane and ozone, which do absorb radiation at the much lower frequencies and longer wavelengths of the infrared part of the spectrum. These gases came to be known as greenhouse gases for the very good reason that, while visible light is emitted by very hot objects, in this case the sun, infrared radiation is emitted by much cooler objects, in this case the earth’s surface, which, having been warmed by the sun, then reradiates this energy back out into space… unless, of course, it is absorbed by one or more of those gases in the atmosphere capable of absorbing radiation at these lower frequencies and longer wavelengths, thereby causing an increase in atmospheric temperature.

In fact, according to the Stefan-Boltzmann law, which describes and quantifies the relationship between the temperature of an object and its thermal radiation, without greenhouse gases in the atmosphere the average surface temperature of the earth would be -18°C, 33°C colder than its current +15°C. What this means, therefore, is that, in all probability, without greenhouse gases, there would be no life on earth at all. In fact, even just without carbon dioxide, which all plants absorb through photosynthesis and on which they all therefore depend, the earth would be lifeless.
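
To show where the -18°C figure comes from, the following minimal sketch balances absorbed sunlight against emitted thermal radiation using the Stefan-Boltzmann law. The solar constant and albedo values below are conventional textbook round numbers (my assumptions, not figures taken from the essay), and the calculation lands at roughly 255 K, i.e. about -18 to -19°C, in line with the figure quoted above.

    # Effective (no-greenhouse) temperature of the earth from the Stefan-Boltzmann law:
    # absorbed sunlight = emitted thermal radiation, i.e. S * (1 - albedo) / 4 = sigma * T**4
    S = 1361.0       # solar constant, W/m^2 (standard value)
    albedo = 0.3     # fraction of sunlight reflected straight back to space (conventional estimate)
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
    print(f"{T:.1f} K, i.e. {T - 273.15:.1f} degrees C")  # about 254.6 K, roughly -18.6 degrees C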

This therefore raises the question as to why greenhouse gases have got such a bad reputation. And the answer is that Arrhenius made a mistake. For despite the fact that water vapour is thirty times more plentiful in the earth’s atmosphere than carbon dioxide and is by far the biggest contributor to the overall greenhouse effect, he omitted it from his calculations. As a consequence, he thought that carbon dioxide contributed far more to atmospheric warming than it actually does, thereby giving rise to his concern that increasing levels of carbon dioxide might lead to runaway global warming of the kind we see on the planet Venus, where carbon dioxide actually constitutes 96.5% of the atmosphere rather than the mere 0.04% it does on earth.

The actual contribution of carbon dioxide to the earth’s retention of energy was far more accurately calculated by the German physicist Karl Schwarzschild. He worked out that, while greenhouse gases as a whole slow down the rate at which infrared radiation is radiated into space by about 30%, such that, at any one time, 30% of this energy is retained within the atmosphere, carbon dioxide, because it represents only a fraction of total greenhouse gases and only absorbs infrared radiation at a limited number of wavelengths, accounts for only around 10% of that retained energy, which is to say 3% of the total. On this basis it also contributes only around 3.3°C of the 33°C by which the earth is warmed by greenhouse gases according to the Stefan-Boltzmann law.
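
Taking the essay’s round figures at face value (they are the author’s estimates, not established values), the arithmetic behind the 3% and 3.3°C claims is simply the following.

    # The essay's figures, taken at face value:
    retained_fraction = 0.30         # share of outgoing infrared said to be retained by all greenhouse gases
    co2_share = 0.10                 # CO2's claimed share of that retained energy
    total_greenhouse_warming = 33.0  # degrees C of warming attributed to greenhouse gases overall

    print(round(retained_fraction * co2_share, 2))         # 0.03, i.e. 3% of the outgoing energy
    print(round(co2_share * total_greenhouse_warming, 1))  # 3.3, i.e. 3.3 degrees C attributed to CO2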

What’s more, increases in atmospheric carbon dioxide do not increase temperature linearly. This is because the way in which greenhouse gases work is not the way in which they are usually represented as working in most diagrams, where absorbed infrared radiation radiated from the earth’s surface is shown as being reradiated back down to the earth’s surface again. When gases reradiate absorbed radiation, however, statistically they do it in all directions. Even more importantly, before reradiation occurs, molecules of gases warmed by the absorption of infrared radiation conduct this energy to molecules of gases which do not absorb infrared radiation and are therefore cooler. That is to say that it is not just the greenhouse gases that are warmed in this process but the whole atmosphere, with the greenhouse gases acting as conductors. Their capacity to do this, however, does not increase linearly with volume but logarithmically, such that each incremental increase in volume has a diminishing effect. In the case of carbon dioxide, for instance, half of the 3.3°C which it contributes to atmospheric temperature at its current volume of 400 ppm (0.04%) was contributed by the first 20 ppm (0.002%). The second 20 ppm then contributed around 0.38°C, the third a further 0.19°C, and so on. This means that the contribution of the next 20 ppm to be added to the current volume of 400 ppm will be so negligible as to be hardly measurable.
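
Purely to illustrate the shape of a logarithmic response (and not to endorse any particular numbers, the essay’s included), here is a minimal Python sketch of the generic form ΔT = k·ln(C/C0). Both the constant k and the 20 ppm reference concentration are arbitrary illustrative assumptions; the point is simply that each successive 20 ppm slice adds less than the one before it.

    import math

    # Generic logarithmic response: warming(C) = k * ln(C / C0).
    # k and C0 are purely illustrative; only the diminishing increments matter here.
    k = 1.0    # "degrees per log unit" (illustrative, not a measured value)
    C0 = 20.0  # reference concentration in ppm (illustrative)

    def warming(concentration_ppm):
        return k * math.log(concentration_ppm / C0)

    for c in range(40, 421, 20):
        added = warming(c) - warming(c - 20)
        print(f"{c - 20:3d} -> {c:3d} ppm adds {added:.3f} (illustrative units)")
    # The 400 -> 420 ppm slice adds only a small fraction of what the 20 -> 40 ppm slice did.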

By the beginning of the first world war, therefore, when all this was more or less understood, the AGW theory was effectively dead. By then, in fact, Arrhenius had actually withdrawn his original paper and admitted his mistake. It was briefly revived in the 1950s but then shelved again in the 1960s when scientists working at research stations in Antarctica developed new ways of calculating both the age of ice core samples and the temperature of the atmosphere at the time when the ice was formed. Using these techniques, they then made the rather disconcerting discovery that what used to be regarded as the last ice age, which ended around 11,000 years ago, wasn’t an ice age at all but merely the latest cycle of glaciation in a whole series of such cycles stretching back around 2.5 million years, each cycle lasting roughly 100,000 years. They also discovered that during most of each cycle, around 90,000 years, most of the northern hemisphere lies under a sheet of ice about a mile thick. It is only during brief warm periods of around 10,000 years, at the top of each cycle, that the northern hemisphere is actually habitable. Given that the current warm period, known as the Holocene, has already lasted for 11,000 years and reached its peak 6,000 years ago, this would therefore suggest that the earth is due to enter another period of glaciation fairly soon.

In fact, during the early 1970s, when this information was first made known, this was the great climate fear, made all the more alarming by the fact that the most plausible theory put forward to explain these cyclical changes in the climate would suggest that there is absolutely nothing we can do to stop them happening. This is because the theory in question is based on cyclical changes in the earth’s orbit of the sun, which were first worked out by the Serbian mathematician and astronomer Milutin Milanković during the first world war, the three most important of which are as follows:

1.      Cyclical changes to the actual shape of the orbit due to the gravitational pull of other planets, principally Jupiter and Saturn. This can make the orbit either more round or more elliptical and also results in the sun seldom being at the centre of the orbit, usually being closer to one end of the ellipse, known as the perihelion, than the other, known as the aphelion.

2.      The difference between the length of the sidereal year, the time it takes for the earth to complete one full orbit of the sun, and the calendar year. This results in the phenomenon known as precession, or a falling back of the orbit over time, such that over a cycle of roughly 21,000 to 26,000 years, the northern hemisphere’s winters take place in different segments of the orbit, sometimes occurring when the earth is at the perihelion, making these winters mild, sometimes occurring when the earth is at the aphelion, making them very much colder.

3.      Cyclical changes in the earth’s axial tilt, which varies from 22.1° to 24.5°, thereby either increasing or decreasing the temperature differences between winter and summer.

Importantly, nobody has ever actually done the work necessary to fully demonstrate a positive correlation between these cyclical variations in the earth’s orbit and its 100,000-year cycles of glaciation and glacial retreat. Moreover, there are other theories that have been put forward to explain the latter, including one which links cyclical variations in the climate to cyclical changes in the sun’s magnetic field, a correlation which has been positively demonstrated. The reason none of these theories have received the attention they deserve, however, is that, at some point in the 1970s, someone dug up the old AGW theory once again.

Initially, this was done merely to raise the possibility that increases in greenhouse gases over the last hundred and fifty years might counteract and prevent another cycle of glaciation: a possibility which governments were willing to fund scientists to investigate on the basis that it might allay people’s fears. At some point, however, the possibility that global warming rather than global cooling might be the real threat was raised once again: a possibility which governments were equally happy to pay scientists to investigate on the basis that, whereas global cooling, caused by changes in the earth’s orbit of the sun, was not something they could do anything about, global warming as a result of manmade greenhouse gases was clearly something they could prevent through the introduction of policies of which they rightly believed their electorates would approve.

The result was a whole flurry of institutions coming forward with research proposals, one of the most surprising of which was an institution which, up until the mid-1980s, had only had a subsidiary role in climate science, but which, rather tellingly, needed to diversify its activities in order to secure its rather precarious hold on government funding.

This was the National Aeronautics and Space Administration (NASA) which, during the 1960s, had been provided with an ever-expanding budget in order to fund its Gemini and Apollo space programmes, which culminated in the successful landing of a manned spacecraft on the moon in July 1969, followed by the even more important feat of bringing all three members of its crew back home again safely. The only problem was that this whole adventure served absolutely no economic purpose. It was a pure vanity project designed to showcase America’s scientific and technological supremacy but was otherwise utterly pointless. In December 1972, with worldwide television audiences growing tired of watching astronauts playing golf on the moon, the Apollo programme was consequently brought to what was probably an overdue end, leaving NASA with the considerable problem of finding some other way to justify its huge budget.

This it did by developing a reusable space shuttle, which, during its first phase of operation, from 1981 to 1986, was primarily used to put commercial satellites into orbit. The problem once again, however, was that this made absolutely no economic sense. For with a crew of between three and seven people along with all their life-support systems it took an absolutely massive launch vehicle to get it off the ground, which, in turn, added even more weight to the combined assembly and even more cost to each flight. The result was that each kilogram of cargo or payload that was put into orbit cost $65,000, which simply wasn’t commercially viable.

In contrast, the unmanned and therefore much smaller and lighter Soyuz spacecraft, which the Russians used to put satellites into orbit, cost just $21,000 per kilogram of payload, raising the question, therefore, as to why the usually more business-minded Americans didn’t adopt the same strategy. So ridiculously expensive was NASA’s alternative, in fact, that the only way to explain it is to assume, once again, that it had something to do with PR. For not only did a reusable, manned spacecraft seem light years ahead of the old Soyuz rockets, which had been around since the 1960s, but it also seemed to hold out the promise of future manned missions of a more ambitious nature, possibly to other worlds, totally fanciful though this was.

Setting aside the exorbitant cost of each shuttle mission, however, it was the manned nature of these flights that ultimately brought the future of NASA into question, when, in January 1986, the Space Shuttle Challenger blew up 73 seconds after launch, killing all seven members of its crew and creating one of the worst PR disasters in US governmental history, especially when it was subsequently discovered that there were serious design flaws in some of the shuttle’s components, about which NASA’s senior management had known but had done nothing to correct, ignoring the documented warnings of its top engineers.

Given the overall level of criticism meted out by the board of enquiry which investigated the accident over the next three years, it is a wonder, in fact, that NASA actually survived, strongly suggesting, therefore, that the decision to keep it going was largely political, no one in the US government wanting to admit that one of its most prestigious agencies was not fit for purpose. In addition to the belated establishment of an Office of Safety, Reliability, and Quality Assurance, however, it was forced to make some very major changes. The most significant of these was that, when operations were resumed three years later, it had to stop using the shuttle to launch commercial satellites. This not only saved a huge amount of money but also reduced the overall number of missions flown, thus reducing the risk of another catastrophe. The most far-reaching effect of this, however, was that NASA was reduced still further to being a mere showcase for US technology, with the shuttle only really now being used for such PR exercises as the launch of the Hubble Space Telescope in 1990.

It was this curtailment of NASA’s primary function which then almost certainly led to the decision to diversify the organisation’s activities, principally by getting it more involved in other areas of national scientific interest, one of the most conspicuous of which, of course, was the rapidly expanding field of climate science. In James Hansen, Director of NASA’s Goddard Institute for Space Studies (NASA GISS), moreover, it had the perfect candidate to take a leading role in this highly visible area of public concern. For not only had he been involved in the climate change debate since the late 1970s, but he had actually written his PhD thesis on the atmosphere of Venus and so knew all about carbon dioxide.

In fact, this rather raises the question of how he could have failed to differentiate between Venus, which is very hot because its atmosphere is nearly all carbon dioxide and absorbs all the infrared radiation given off by the planet’s surface at the relevant wavelengths, and the earth, where a very small percentage of carbon dioxide warms the atmosphere by conducting the energy it absorbs to other gases, and where, for this very reason, even a doubling of its current concentration would have a negligible effect. Given his background and his position as Director of NASA GISS, however, Hansen’s presentation to Congress in 1988 was accepted as absolutely authoritative, making the climate change bandwagon more or less unstoppable as it sped towards Rio de Janeiro, where, in June 1992, the first international environmental treaty to combat ‘dangerous human interference with the climate system’ was signed by 154 states, thereby opening up the spigots of governmental funding to institutions all around the world.

5.    The Unquestionability of a Science Hardly Anyone Knows

Of course, it may be thought that the mere fact that this happened casts doubt on my analysis. For why would 154 countries sign up to something unless their own scientific advisers were absolutely certain that it was supported by the underlying science? Then again, why would the UK’s top medical officer advise people to wear surgical masks during the Covid pandemic when, at 600 microns, the mesh or weave of a surgical mask is just too loose to prevent viruses with an average diameter of 5 microns passing through it?

Part of the answer, of course, is that, just as it was politics which largely drove our response to Covid in 2020, so, by 1992, there was already a considerable level of political momentum behind the climate change agenda. This meant that there was money available for any institution that made the right political noises. An even bigger factor, however, was the already described compartmentalisation of science into ever more discrete specialisms, which meant that there were very few people, even among the scientific community, who really understood the atmospheric physics affecting our climate. This had the further consequence that most scientists were quite happy to defer to those scientists, such as James Hansen, who claimed that they did understand the underlying physics, especially as many of the projects now being funded were carried out by multi-disciplinary teams in which atmospheric physicists were often a very small minority, most of the team being made up of specialists in other disciplines, especially computer science.

A good example of this was the Coupled Model Intercomparison Project (CMIP), which began in 1995 and initially involved 102 institutions from all around the world, with the objective of developing computer simulations of the climate for the whole of the following century. Although each of the national institutions was nominally independent, however, giving the enterprise the semblance of a truly global project, in terms of the underlying climate science all the participants were under the coordinated leadership of the Lawrence Livermore National Laboratory in California, which not only provided the climatological datasets which all participants were required to use but also instructed them to assume a 1% increase in atmospheric carbon dioxide per year.
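
That last assumption is worth dwelling on, because a 1% increase compounded year on year adds up quickly: it implies that atmospheric carbon dioxide doubles in roughly seventy years. Here is a minimal sketch of the arithmetic; the 360 ppm starting value is simply a rough mid-1990s figure used for illustration and is not taken from any CMIP dataset.

```python
# Illustrative arithmetic only: what a 1% per year compound increase in
# atmospheric CO2 implies. The 360 ppm starting value is a rough
# mid-1990s figure, used here purely for illustration.
import math

annual_growth = 0.01    # the 1% per year assumption mentioned above
start_ppm = 360.0       # illustrative starting concentration (ppm)

# Years needed to double under compound growth: ln(2) / ln(1 + r)
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"Implied doubling time: {doubling_years:.0f} years")   # ~70 years

# Concentration after 20 years of compounding at this rate
after_20_years = start_ppm * (1 + annual_growth) ** 20
print(f"After 20 years: {after_20_years:.0f} ppm")             # ~440 ppm
```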

Even more importantly, all participants had to share in the collective ethos of the programme if they wanted to go on receiving their funding, with the result that, of the original 102 participants, only 42 survived to reach Phase 5 and only 13 made it to Phase 6, at which point only the most zealous institutions remained. Given this structure and organisation, it is hardly surprising that most of the 1995 forecasts for the mean temperature of the earth over the following twenty years, up to 2015, were wildly inaccurate, with some of the projections being a full 1°C hotter than actually turned out to be the case.

What made this whole programme so insidious, however, was the fact that, while these computer models merely reflected the assumptions programmed into them – including the assumption that carbon dioxide was the main or even sole driver of the climate – their projections were regarded as having the same status as empirical data. Instead of being thought of as forecasts, which may or may not come true, they were treated as facts. After all, computer models don’t lie: they are the truth! So if your computer model tells you that, without some dramatic changes in our behaviour, the earth’s mean surface temperature is going to rise from around 15°C in 1995 to more than 16°C in 2015, you believe it. It’s why Al Gore predicted that summer Arctic sea ice would have disappeared by 2014.

What’s more, this then gave rise to further downstream predictions. For if summer Arctic sea ice was going to have disappeared by 2014, this also implied that a substantial proportion of the Greenland and Antarctic ice sheets would have melted by then, with a commensurate rise in sea levels. It’s just simple maths. With the flooding of low-lying coastal areas and the consequent loss of habitat for plants and wildlife, it was then also fairly easy to predict which species were going to become extinct. In fact, most of the downstream science was more or less faultless. The correctness of each of the downstream predictions, however, depended not only on the correctness of each of the predictions that preceded it, but ultimately on the correctness of the underlying atmospheric physics, which very few people working in the downstream disciplines really understood and which therefore went largely unquestioned.

From the public’s perspective, this misleading projection of an irrefutable chain of consequences was then further compounded by the fact that commentators did not distinguish between the different sciences involved or make it clear that each prediction was conditional on the original premise from which everything else flowed. Instead of qualifying their reports with any indication that what was being reported depended on the AGW theory being true, they simply stated that, according to ‘scientists’ – as if all scientists were the same – all kinds of bad things were going to happen as a result of manmade global warming, as if manmade global warming were a fact and not a prediction based on a repeatedly disproven theory.

This therefore transformed what was an essentially scientific issue into a moral and political one. For if it is a fact that whole species are going to be wiped out and large parts of the planet made uninhabitable by something we, ourselves, are doing, then we have to stop doing it. Indeed, as moral arguments go, it is pretty unassailable. As a result, it has placed enormous pressure on politicians who may not understand the science, themselves, but know enough about politics to know that, if they want to get re-elected, they cannot be seen as causing the death of millions of furry animals.

Not only did this reinforce the public’s widespread false beliefs on these matters, however, but it also opened the door to another round of corruption. For if governments were going to stop global warming, then, according to the AGW theory, they had to phase out fossil fuels in favour of sources of energy which do not produce carbon dioxide. The problem was that, if these sources of energy were commercially viable, someone would already have been exploiting them. The fact that they were not therefore means that they are almost certainly non-viable and have to be subsidised in order to make their development and deployment possible. As I have explained elsewhere, however, subsidies do not make non-viable technologies viable. They merely make it possible for corporations to make money out of them, which large, politically influential corporations are usually very happy to do, in that the very existence of the subsidies guarantees that they will make a profit. Eventually, however, someone still has to pick up the tab, that someone usually being the consumer or the taxpayer, both of whom are impoverished while the recipients of the subsidies are enriched.

The fact that most people do not understand this and are happy to see governments hand out subsidies to all and sundry is again, of course, due to our general ignorance. For just as very few of us understand atmospheric physics – or, indeed, very much science at all – so we don’t understand economics either. We therefore do what we have done throughout history: we leave decisions on such matters to those whom we believe do understand them, even though most of these ‘experts’ are just like the village elders in our agrarian history, who didn’t really understand the crop rotation system any better than anyone else but had the political nous to make it seem as if they did. The problem is that, just as those in authority in our distant past often led us to disaster, so this is precisely what is happening today. For by giving up cheap fossil fuels and adopting renewable technologies which actually consume more energy than they generate, we are effectively destroying our economies.

Not, of course, that our quest for net zero is the only factor in this equation. The fact that we believe we can go on borrowing money and living beyond our means indefinitely is also a major part of the problem. There is, however, a difference between these two delusions. For whereas our belief that we can create a robust economy simply by printing and borrowing money is merely wishful thinking, our belief that we are destroying the planet by burning fossil fuels, and that we are therefore guilty of some terrible sin, creates its own reality: one very similar to that created by the Aztecs, who, having convinced themselves that they had to sacrifice their fellow human beings in order to save their world, proceeded to turn that world into a version of hell.

Of course, it may be argued that I have not actually proven that our belief in the AGW theory is false, or that the changes we are making to our lives in response to it are going to end in catastrophe. Nor have I really explained how, if our belief in the AGW theory is false, the whole world has come to believe it. Indeed, it may well be argued that I still haven’t explained how the whole world comes to believe things that are not true in general, and so haven’t fulfilled the objective implicit in the title of this essay. Ask yourself, however, what evidence you have for believing the AGW theory. After all, there was no less summer Arctic sea ice in 2023 than there was in 1988. In fact, none of the predictions which James Hansen made in his address to Congress thirty-five years ago have come true. So why do we still believe in the theory upon which those predictions were based? The answer is simple. We believe it because everyone else does!

But surely, you say, someone must actually know whether it is true or false. No! No one can know whether a scientific theory is true because scientific theories cannot be proven, only falsified. There are, however, people who do actually know that the AGW theory is false: Richard Lindzen, Emeritus Professor of Meteorology at MIT, for instance, and William Happer, Emeritus Professor of Physics at Princeton University, on whose work most of the scientific information in this essay is based. In fact, any physicist of distinction and integrity will tell you that it is false. If they want to keep their jobs or their funding, they just won’t go public with that information. Being retired, Richard Lindzen and William Happer, however, face no such restrictions.

That is not to say, of course, that every proponent of the AGW theory in employment is corrupt. It is just that very few of them have sufficient knowledge of the specific science. Most of them simply repeat, in good faith, what they are told. It is because they repeat what they are told in good faith, however, that we believe them. If we thought they were lying, we would not. Because we believe them, however, we then form a false belief, which, in all good faith, we may then relate to others. Indeed, if one wants to know how the whole world can come to believe things that are not true, this is how. Throughout our entire history, in fact, most of the things we have believed that were not true have been believed not by ourselves alone but by all of us. The problem, today, is that we now have the technology to propagate false beliefs on a scale and at a speed that has never been possible before, such that, even if it is not the AGW theory that proves to be our ultimate undoing, unless we understand this fundamental truth about ourselves, eventually a false belief will take hold in such a way that it brings disaster upon us.