Sunday, 11 January 2026

The Insidious Nihilism of Kurt Vonnegut’s ‘Slaughter House Five’

1.    A Personal Memoir?

A little while ago I mentioned that I have a lot of books: so many, in fact, that I have run out of places to put them and decided, therefore, that, for the most part, I would stop buying new ones and would start rereading books from what is, effectively, my own library.

One of the first books I chose for this journey through the bookshelves of my past was ‘Notes from Underground’ by Fyodor Dostoyevsky: a quite extraordinary book in which Dostoyevsky creates a Christian allegory through which to conduct a debate on the subject of free will and materialistic determinism. I don’t think I really understood it during the winter of 1974/75, when I first read it, but it so impressed me on second reading that I decided to write an essay on it, which you can find here.

Since then, I have reread a number of very good books which I am pleased to have revisited, including ‘Earthly Powers’ by Anthony Burgess, which I regard as one of the best first-person semi-autobiographical historical novels I have ever read. None of them, however, has inspired me quite enough to want to share my thoughts about them with the readers of this blog… until now! My reason for trying my hand at literary criticism once again, however, is not because ‘Slaughter House Five’ by Kurt Vonnegut is in the same class as ‘Notes from Underground’ but because, while being every bit as extraordinary, it is probably one of the most insidious books ever written. It draws us in with its quirkiness; it ensnares us in its non-chronological, labyrinthine structure; it lulls us into a false sense of security with its understated gentleness; but, most of all, it captivates us with its sheer cleverness: an attribute most clearly made manifest by the fact that, on the surface at least, it would appear to combine a personal memoir of the Second World War with a work of science fiction, which most people would generally assume to be impossible.

After all, a personal memoir is supposed to be factually based. The author’s memory may be unreliable at times with the result that he or she may get some of the facts wrong. But that is not by design. A work of science fiction, on the other hand, is actually intended to present us with a largely imaginary world.

Nor is the dissonance this creates entirely dissipated by the fact that, on first reading, the book would appear to be divided into two parts: a main part, which is written in the third person and is about the strange life of Vonnegut’s central character, Billy Pilgrim, and what appears to be an introduction or preface, which is written in the first person and is about Vonnegut’s struggles to write the book at all. This impression is significantly undermined at the end of this apparent introduction, however, when Vonnegut makes two rather odd statements. Firstly, he tells us that after numerous unsatisfactory drafts, all of which he threw away, he finally finished the book, which is something we already know. After all, it is the book we are actually reading. It would have been far more helpful, therefore, if he had told us what the impediments to finishing it had been and how he had overcome them. But this he doesn’t do. Then he tells us that the first line of the book is ‘Billy Pilgrim has come unstuck in time’, which seems even odder, not only because there doesn’t seem to be any reason why he would tell us this but because ‘Billy Pilgrim has come unstuck in time’ is in fact the first line of the second chapter of ‘Slaughter House Five’ as it is printed. This is because the first section of the book is not labelled as an introduction; it is labelled ‘Chapter One’ or, rather, just ‘One’, in that Vonnegut does not use the word ‘chapter’ in his chapter headings.

Of course, it could be argued that Vonnegut’s intention in telling us the first line of the novel is to make it clear that the first numbered section is, indeed, an introduction and not part of the actual novel. But then why didn’t he just label it ‘Introduction’? Unless, of course, that’s not what it is: a possibility which, I have to admit, I didn’t even consider when I first read the book in the summer of 1974. Like most people, I simply took what seemed to be an introduction at face value. Having read it a second time, however, I now realise that, when it comes to ‘Slaughter House Five’, nothing should ever be taken at face value: a realisation which, when it hit me, actually led me to look up Vonnegut’s biographical details on the internet to check whether he was even in the army during World War II. And, sure enough, he was part of a reconnaissance unit which was captured by the Germans in the Ardennes Forest during the Battle of the Bulge, after which he was duly shipped off to Dresden, where, along with other American PoWs, he was quartered in a disused abattoir designated Schlachthof Fünf (Slaughter House Five), in the deep subterranean meat locker beneath which he and his fellow PoWs took shelter during the fire-bombing of Dresden in February 1945: an act of pointless destruction by the Allies which, again on the surface, would seem to be the main focus of the book.

Just because what Vonnegut tells us about his time as a PoW generally accords with the known facts about his life, however, does not mean that everything he tells us in his ‘introduction’ is equally factually based. At one point in what he describes as his struggles to recount his experiences in Dresden, for instance, he tells us that he telephoned an old army buddy, Bernard O’Hare, who was also a PoW in Dresden, in order to get his opinion on how the book should be structured. In particular, he tells us that he asked Bernard what he thought about making the execution of a certain Edgar Derby the climax of the book, the implication being that Edgar Derby is not just a character in Vonnegut’s novel but someone both Vonnegut and O’Hare knew in real life, and whose absurd execution, for salvaging a miraculously preserved china teapot from the rubble of a bombed-out building, they both witnessed.

The problem with this very natural interpretation of what Vonnegut tells us, however, is that, whatever else he may have been, Edgar Derby is a character in the novel we are reading and a rather important one at that. Being a high school teacher in civilian life, he is older than the rest of the PoWs housed in Schlachthof Fünf and adopts a somewhat paternalistic attitude towards them, especially the hapless Billy Pilgrim, whom he befriends. Billy Pilgrim, on the other hand, is as hapless as he is precisely because he has become unstuck in time, a rather fantastical notion which only makes sense in a work of science fiction, which rather implies that his existence is entirely fictional.

Not, of course, that mixing fictional and real-life characters in a novel is entirely without precedent. Vonnegut also tells us, however, that the Edgar Derby in his novel is a devoted husband and father who spends a lot of his time writing imaginary letters to his wife in his head, a very specific detail about Derby’s inner life about which Vonnegut could not have known if, as PoWs together in Dresden, he had merely observed him from the outside. Of course, it is not unknown for writers to embellish characters they have taken from real life with additional habits and traits foreign to the person on whom they are based. But if Edgar Derby had indeed been a real person, what would his wife have thought about Vonnegut using her husband in this way, especially as he does not even change his name, something upon which Vonnegut’s publishers would surely have insisted?

If the character of Edgar Derby is as fictional as that of Billy Pilgrim, however, what this also means is that the telephone conversation between Vonnegut and O’Hare in which they discuss Edgar Derby’s execution is also fictional, as the character of Bernard O’Hare, himself, may well be. In fact, as soon as one teases out one thread from this interwoven fabric, the whole thing begins to unravel, raising the question, therefore, as to what purpose this supposed introduction to the novel is actually intended to serve: a question which, had we ever raised it (which most of us did not), Vonnegut clearly wanted us to answer by focusing on a key passage towards the end of the introduction in which he describes taking his young daughter and one of her friends on a road trip to visit Bernard O’Hare and his family in Pennsylvania, the primary purpose of which is to allow Vonnegut to continue trawling O’Hare’s memories of Dresden.

Assuming that the character of Bernard O’Hare is a fiction, however, so too must this whole episode be, along with the character of Kurt Vonnegut within it, as becomes patently obvious as the episode unfolds. For no one could be as naïve and insensitive as Vonnegut presents himself as being while, at the same time, so astutely describing the rather strange behaviour of O’Hare’s wife, Mary, who makes her displeasure at Vonnegut’s visit felt as soon as he and his little party enter her home. After ushering the two men into the kitchen, where they have to sit at the kitchen table rather than in the comfortable leather armchairs in O’Hare’s study, she more or less orders her own children to take Vonnegut’s two girls upstairs to play and watch television, making it clear that she doesn’t want any of them listening to Vonnegut and her husband talking about the war.

Not that there is much chance of that, as O’Hare insists from the outset that he doesn’t remember very much, when what he really means, of course, is that there is not much he wants to remember. As a result, the two men quickly lapse into an awkward silence while they listen to Mary stomping around in the adjoining living room, clearly very unhappy.

Nor does Vonnegut have to wait very long to find out what he’s done to make her so mad at him. For he is fairly sure that it is he who is the cause of her wrath, not Bernard. And he’s right. As her anger reaches a crescendo, she storms back into the kitchen to vent her fury at Vonnegut, not just because he has come there to dig up memories her husband would rather remain buried but because of something more primal. ‘You were just babies’, she says, ‘when you went to war. Like those upstairs’, to which Vonnegut has to admit that she probably has a point, both he and O’Hare being just boys, fresh out of school, when they enlisted. ‘But you are not going to write it that way, are you!’ she goes on. ‘You’ll pretend you were men and you’ll be played in the movies by Frank Sinatra and John Wayne or some of those other glamorous, war-loving old men. And war will look just wonderful, so we’ll have a lot more of them. And they’ll be fought by babies like the babies upstairs.’

The fictional Vonnegut, of course, is both stunned and slightly cowed by this, not because he was planning to write the kind of book Mary has accused him of planning to write but because, up until this point, we are supposed to believe that he didn’t know what kind of book he was going to write at all and that it is Mary’s outburst that lifts the scales from his eyes. This, however, is quite clearly a fiction. For when ‘Slaughter House Five’ was published in 1969, at the height of the Vietnam War, Vonnegut had already had twenty-five years to forge his opinions on war and certainly didn’t need a blast of Mary O’Hare’s anger to help him make up his mind on the subject. It just makes for a better story. It only works, however, if the revelation Vonnegut is depicted as having in Mary O’Hare’s kitchen is then made clear. The only idea which Vonnegut and O’Hare subsequently discuss, however, is that of the Children’s Crusade of 1212, in which children from Germany and France were manipulated into travelling to Italy, from where they were supposed to be shipped to the Holy Land to convert the Muslims to Christianity, but were actually shipped to Tunis to be sold as slaves.

As such, this clearly resonates with Mary O’Hare’s angry statements about old men glorifying war and sending babies to their deaths. The only problem is that, despite Vonnegut choosing ‘The Children’s Crusade’ as the subtitle of his book, there is absolutely nothing in the main text of the novel that relates to it in any way.

In fact, trying to find any connection between the strange life of Billy Pilgrim and any of the subjects Vonnegut writes about in the introduction is actually very difficult. One possibility, of course, is that one could view Billy Pilgrim’s becoming unstuck in time as a metaphor for the ways in which wars change those who fight in them. Indeed, we get a strong hint of this earlier in the introduction when Vonnegut tells us that his first job after leaving the army was as a reporter in Chicago, where he covered a story about a lift operator who was crushed to death by his own lift. When he gets back to the office, the stenographer to whom he dictated the story over the telephone asks him how he managed to stay so calm in the face of something so horrific. ‘It must have been a terrible sight’, she says, to which he replies that he saw far worse things during the war.

One of the worst things he tells us he saw in Dresden was a group of schoolgirls who had been boiled alive in a water tower. During the fire-bombing, they had climbed up into the tower to escape the fire storm below and had got into the water because they thought it would both save them from being burned and keep them cool. It didn’t.

Later on, in the main body of the novel, he then describes how, when the American PoWs came up from the meat locker beneath Schlachthof Fünf the following day, the sky was still black with smoke, the sun a little pink dot trying to poke its way through. The whole city, which had once been one of the architectural jewels of Europe, he describes as looking like an eerie moonscape, with hardly a single building still standing. Even more shocking is the figure the novel cites for the death toll: 135,000 people killed in Dresden in just that one night, nearly twice as many, we are told, as were killed by the atom bomb dropped on Hiroshima. For the PoWs, however, the worst was yet to come. For with so many people dead, they were naturally put to work digging out bodies from under the rubble, most of them in cellars where they had gone to shelter while their houses burned down above them, but which had become super-heated by the fire storm, killing those entombed within them. Within days, as a consequence, the miasma floating above Dresden became so putrid from the decomposing corpses that many of the PoWs were made seriously ill by the constant retching it induced, with the result that the Germans had to bring in soldiers with flame throwers to finish incinerating the bodies where they lay.

After such experiences, it is hardly surprising, therefore, that many soldiers returned home profoundly changed by what they had been through and were consequently unable to reassimilate into ordinary life, which they now found themselves observing from the outside: detached, remote… a bit like Billy Pilgrim. The only problem with this interpretation of Billy’s symptoms, however, is that his almost constant state of disorientation and confusion didn’t start during the war; he is described as having always been like this. What’s more, his condition makes him more or less immune to the kind of psychological trauma other soldiers suffered, not only bringing into question whether Vonnegut’s book could really be about the effects of wars on those who live through them, but raising the question as to whether it is actually about war at all.

After all, the only reason we have for believing that this is a book about war, and more specifically about the fire-bombing of Dresden, is that, throughout the introduction, Vonnegut constantly tells us that it is. If we look at the main body of the novel, however, events in Dresden only occupy a couple of chapters towards the end. If we accept that the introduction is as much a work of fiction as the novel as a whole, moreover, it begins to look as if the purported subject of the book might also be a fiction and that the purpose of the introduction is therefore to serve as a source of misdirection of the kind magicians use on stage to draw our attention away from what they are actually doing. The question, therefore, is what Vonnegut is actually doing or, more specifically, what ‘Slaughter House Five’ is really about.

2.    The Tralfamadorians

Another reason for doubting whether Billy Pilgrim’s coming unstuck in time is a metaphor for some kind of war-induced PTSD is that, when he returns home from the war, he resumes his training to be an optometrist as if nothing had happened to interrupt it. In 1948, he admittedly suffers a brief and unexplained mental breakdown, but even this doesn’t prevent him from completing his studies, finishing third in his class, and marrying the daughter of the owner of the optometry school, who duly sets him up in business as an optometrist, at which he is very successful. Over the next twenty years, as a result, he and his wife are able to enjoy the kind of comfortable American middle-class lifestyle depicted in Hollywood films of the 1950s and 60s. They have two children (a boy and a girl), a nice home and a new Cadillac every other year. In fact, Billy Pilgrim couldn’t have been more ordinary if he’d tried.

It is not until 1967 that three things happen which completely change his life. Firstly, he is one of only two survivors of a plane crash in Vermont, which leaves him with serious head injuries requiring major brain surgery. Secondly, his wife dies of carbon monoxide poisoning as a result of a traffic accident which she, herself, causes while rushing to the hospital to be at Billy’s bedside. Then, to cap it all, he is abducted by aliens from the planet Tralfamadore.

Of course, there will be some who may suspect that his belief that he was abducted by aliens might have something to do with the brain damage he suffered as a result of the plane crash, especially as no one else knows anything about his abduction. This, however, he explains as being due to the fact that the Tralfamadorians are able to travel in time as well as space, with the result that, even though he spends several months on Tralfamadore, being studied in a kind of zoo for captured alien species, when the Tralfamadorians finally return him to earth, they do so at a point in time that is only a few seconds after he was abducted.

Not, of course, that the question as to whether Billy Pilgrim ‘really’ was abducted by aliens or whether he developed this delusion as a result of brain damage is one that gains very much traction in a book in which just about everything is subject to multiple interpretations. What is far more important is the effect that what Billy believes to have been his abduction by aliens has on his life, partly as a result of the abduction itself and partly as a result of what the Tralfamadorians teach him. For they are not only able to travel in time but actually experience time in a way that differs significantly from the way in which human beings experience it. Instead of experiencing it linearly, as a series of moments, each one following the one before in a sequence which extends infinitely into both the past and the future, they see it all at once. What’s more, they can move about in it, selecting which moment they want to inhabit.

This therefore makes them effectively immortal, not because they live for ever (they don’t) but because, having visited the last moment of their lives, which they can do an infinite number of times, they can simply go back to some other moment in their lives. Because they experience all time as present time, what’s more, they not only know when their own lives come to an end, they even know when and how the universe, itself, comes to an end. In fact, they actually cause it, when a Tralfamadorian test pilot, testing a new rocket fuel, presses a button to start the engine of his spacecraft and causes the universe to blink out of existence.

Billy, of course, asks them why, if they know this, they don’t stop the test pilot from pressing the button, which the Tralfamadorians find very amusing. In fact, they find it so amusing that they flock to the zoo at precisely that moment to hear him ask the question again. The answer, however, as they try to get him to understand, is because the test pilot always presses the button, has always done so and will always do so. The moment is simply structured in that way. To the Tralfamadorians, however, this is not a problem because they can simply avoid inhabiting moments in which bad things happen and inhabit, instead, moments in which good things happen, like listening to Billy ask hilarious questions.

To Billy, after long and careful consideration, this seems like a very enlightened and intelligent approach to life. He, however, has a problem which the Tralfamadorians don’t have. For although travelling through time to reach Tralfamadore also seems to have given him the ability to travel in time, unlike the Tralfamadorians, he is not able to control it or choose which moments in his life he visits. One moment, he’ll be walking through a door in 1958; the next, he’ll be back in the Ardennes Forest in 1944. This keeps him in a constant state of anxiety, which he describes as being like an actor with permanent stage fright, never knowing which part of his life he is going to have to play next. This also explains why, throughout his life, he has always appeared to others as somewhat shocked and confused, looking around him in a daze as if wondering where he is. It also explains why he has always been extremely passive, a mere observer of the world rather than someone who is engaged in it, and why he has always attracted bullies, people who see his passivity and lack of agency as weaknesses of which they can make fun.

Nor does it help that, unlike the Tralfamadorians, most of the moments in his life he randomly revisits are precisely those he would prefer to avoid: like the time his father tried to teach him how to swim by throwing him into the deep end of a swimming pool; or the time he was being beaten up in the Ardennes Forest by a bully called Roland Weary, who might have beaten him to death if the Germans hadn’t intervened and taken them both prisoner; or, of course, the moment he and his fellow American PoWs emerge from the meat locker beneath Schlachthof Fünf to find the once architecturally beautiful city of Dresden reduced to a moonscape under a blackened sky.

What this also suggests is that, even before his abduction by the Tralfamadorians in 1967, when his ability to travel in time is initially acquired, he has actually been time travelling all his life, a strange form of temporal paradox which is further confirmed by the fact that he knows in advance when the Tralfamadorians are going to abduct him and actually goes out into his garden to greet them. Like them, he also knows when and where he is going to die and does nothing to prevent it. Indeed, he actually walks into it. For he is assassinated in a convention centre in Chicago in 1976, having gone there to give a well-publicised lecture on the Tralfamadorians and time travel, which alerts a Chicago gangster called Paul Lazzaro to his forthcoming presence in Chicago, enabling Lazzaro to hire a hitman to shoot Billy for an offence Lazzaro wrongly believes Billy committed in 1944.

At this point, of course, you may be wondering what Billy is doing in 1976 giving public lectures on time travel and the Tralfamadorians, something which his daughter, Barbara, believes is another symptom of his brain damage. Having experienced his own death multiple times, however, and each time having jumped to some other point in his life, Billy decides that he wants to teach all his fellow human beings what the Tralfamadorians taught him: that death is not something to be feared, that it is just one moment in a life one eternally continues to inhabit.

In this, of course, he is wrong. For even if his abduction by aliens and his travels in time are real, and not symptoms of the injuries to his brain he suffered in the plane crash, he acquired his ability to travel in time by being abducted by the Tralfamadorians, which means that other human beings, who are not abducted by the Tralfamadorians, may not have the same ability and may not jump to another point in their lives when they die. To Billy’s followers, however, this is not something they choose to consider. They rather choose to believe that, when they die, they will jump to another point in their lives because this is what they want to believe.

That is to say that what Billy actually creates is, in effect, a new religion, one in which those who believe in him are promised eternal life. And if that sounds familiar, it should. For just like Dostoyevsky’s ‘Notes from Underground’, ‘Slaughter House Five’ can also be read as a Christian allegory, in which the Christ figure, Billy Pilgrim, suffers all the horrors and pains, traumas and torments of all human life, but transcends them through his fatalistic acceptance that that is just what life is. Like the Tralfamadorians, he understands the futility of asking ‘Why?’ when what exists cannot be explained and must simply, therefore, be endured. At one point, he even tells his followers when and where he is going to be killed, to which they respond by shouting ‘No’. But then he tells them that if they cannot accept this, then they have not understood a single word he has been telling them.

3.    Heaven or Hell?

The problem with Billy’s new religion, of course, is that while it may liberate its followers from the fear of death, it has moral consequences which Vonnegut does not even address. For if one really believes that one is condemned to go on reliving one’s life for all eternity, one has to be very careful what kind of life one makes for oneself, especially if, like Billy, one cannot control which parts of one’s life one jumps to after one dies. The problem is that this doesn’t necessarily mean that those who subscribe to this belief will devote themselves to living rich and fulfilling lives or even lives of continual pleasure. For while they may desire to create a heaven on earth for themselves, which they can then enjoy throughout eternity, given the kind of terrible things that can happen to people in this world, they may be far more driven by the fear that they may accidentally create an eternity in hell for themselves: a fear which could make them so risk-averse that they may not be able to do anything at all.

In this regard, I am reminded of another novel in which ‘time’ is a major theme, ‘The White Hotel’ by D. M. Thomas, which, as far as I can remember (not now having a copy of the book to which I can refer), is about a young Jewish opera singer who is referred to Sigmund Freud for analysis because she is suffering from chronic psychosomatic pains in her left breast and ovary. In line with his standard methodology, Freud duly attempts to identify some trauma in her past that would explain these pains, but fails to do so, at least to her satisfaction. For the one thing Freud does not consider, of course, is that the trauma may not lie in her past but in her future.

In 1941, however, she is captured by the Germans and taken to a place just outside Kiev called Babi Yar, where, along with thousands of other prisoners, she is made to strip naked and line up on the edge of a ravine, where she and the other prisoners are then machine-gunned, a mode of execution which results in one bullet passing through her left breast and another through her left ovary. The way D. M. Thomas describes it, it is one of the most horrific scenes in all literature, made all the more so by the fact that, although she topples into the ravine with all the other victims, she is not dead, only wounded, and is actually killed by the hundreds of naked bodies which subsequently fall in on top of her, crushing the life out of her and burying her alive.

Given D. M. Thomas’ particular vision of how time sometimes operates, with the future sometimes affecting the past, she is not, of course, condemned to relive this experience over and over again for all eternity; she is merely haunted by it throughout her life. Imagine how terrified one would be,  however, if one actually believed in Billy Pilgrim’s new religion and feared that something like this might be one’s own fate. One can imagine that some people might even take their own lives to avoid it.

Of all the negative consequences that could possibly flow from believing in Billy Pilgrim’s new religion, however, this one is relatively mild compared to what could be brought about by the possible existence of a group of people who not only believe in Billy’s religion but so hate another group of people that they are willing to make the lives of this group hell so that they will, indeed, have to relive them for all eternity. They even build concentration camps where they work and starve these people to death over as long a period as possible, so as to prolong their suffering, and inflict on them every form of cruelty imaginable.

The good news is that, despite the existence of a real historical parallel, this is probably one of the least likely consequences of believing in Billy’s new religion to actually occur. In fact, it is far more likely that a widespread adoption of this belief would actually reduce the amount of cruelty in the world. I say this because the hate-filled vindictiveness which drives the imaginary concentration camp guards in the above scenario differs significantly from the values and beliefs which determined the behaviour of real concentration camp guards in places like Auschwitz, most of whom were not sadistic killers who inflicted suffering on people for the sake of it, but merely did what they were told because they largely accepted the propaganda they had been constantly fed that these people were sub-human vermin who had to be eradicated for the good of society. It does not excuse them, of course, but, for the most part, they did not believe that they were evil or that what they were doing was evil. Indeed, it is this that makes the Holocaust so dreadful: that it was ordinary men and women, like you and me, who largely carried it out.

This, however, could not be said in the case of anyone who deliberately inflicted suffering on others because they believed that this suffering would be repeated throughout eternity. For anyone who did this would surely have to know that what they were doing was both evil and irrational. For they would also have to know that the hell they were creating would not just be for their victims but for themselves as well. For they, too, would be trapped in it forever, endlessly repeating the same acts of cruelty for all eternity, thereby turning themselves into what most Christians would describe as being quite literally devils: something which no sane person would surely ever choose to be,  not least because there can be no forgiveness or absolution for those who commit atrocities which have no prospect of ever coming to an end.

More to the point, a devil is not what most of us want to be. Most of us like to consider ourselves at least halfway decent human beings, a characteristic of being human which thus highlights the real flaw in Billy’s new religion. For while the Tralfamadorian belief that all time is present time means that nothing can be changed, most of us want to believe that we can not only choose the way we act but the kind of person we are.

In this regard, ‘Slaughter House Five’ can thus be seen as a vehicle for the same debate between free will and materialistic determinism we find in ‘Notes from Underground’. The only difference is that Vonnegut and Dostoyevsky are on opposite sides. For while, at the end of ‘Notes from Underground’, its unnamed narrator sacrifices his own interests in order to avoid inflicting himself on another human being, thereby exercising free will, Vonnegut would argue that this act, like the Tralfamadorian test pilot’s pressing of the button that ends the universe, is entirely determined and thus involves neither freedom nor choice.

In fact, Vonnegut’s determinism (or the determinism that results from the Tralfamadorian view of time) lies at the heart of any interpretation or assessment we make of ‘Slaughter House Five’. For what Vonnegut does not seem to have understood is that it actually results in a contradiction within his overall message. I say this because what makes the Tralfamadorian view of time so liberating, of course, is the fact that, if nothing can be changed, then nothing we do really matters, from which can be derived the central nihilist precept that nothing matters at all except the knowledge that nothing matters. This is because it is this knowledge, that nothing matters, that sets us free from such values and social structures as those which have led men to fight and die in wars since the beginning of time. After all, if nothing matters, then there is nothing worth fighting and dying for. The problem is that our liberation from traditional roles and values further implies that we have the freedom and ability to act outside and contrary to these roles and values. That is to say that it implies that we can choose to live in a different way. Indeed, it is this vision that made Vonnegut a hero on university campuses all across America throughout the 1960s and 70s. Not only does this contradict the entire deterministic world view from which this whole line of reasoning flows, however (which means that there has got to be something wrong with it somewhere), but its adoption raises questions with which we are still struggling today. For if our new freedom from traditional roles and values allows us to now choose how we live our lives and, hence, who we are, the question this poses, of course, is ‘How do we want to live our lives and who do we actually want to be?’

4.    Meaning & Identity

Indeed, it is this question, which, today, is usually posed in terms of meaning and identity, that is the primary legacy handed down to us by the 1960s, not because, at some point in the 1970s, we all started reading Kurt Vonnegut, but because, in 1960, the US Food & Drug Administration (FDA) approved the release of a safe and reliable oral contraceptive, which, as many predicted, led to an inevitable decline in traditional marriage and a weakening of the once very distinctive roles of husband and wife that men and women traditionally played. This, in turn, then led both men and women to question more profoundly their purpose in life, a very destabilising process which has almost certainly affected men more deeply than women.

I say this because when a man was the husband of his wife and the father of his children, he not only knew who he was, but this very identity bestowed on him various responsibilities. It was his duty, for instance, to provide for his family, to put food on the table and keep a roof over their heads, duties which gave him a very clear purpose in life and imbued it with meaning. By no longer inhabiting these roles (or, at least, not to the same extent), both men and women have therefore had to define both their identity and their purpose in life in other ways.

One of the most obvious of these, of course, is through their careers, which have become more and more important, especially to women, as family roles have declined. Indeed, it is probable that most of us now define ourselves far less in terms of our place within a family and more in terms of our position within the working world. The problem with this, however, is that most people simply do not have careers that are important enough to carry the full burden of life’s meaning. Many of us pretend that we do, of course, and our employers pander to our need to be of significance by giving us fancy titles; but most of us know that the world wouldn’t come to an end if we didn’t turn up for work tomorrow.

To compensate for this, of course, many people simply throw themselves into their social lives, not least because being a member of a particular social group confers on one a certain group identity. The problem with this, however, is that most social groups are either based on shared activities and interests, or come about merely as a result of people being at the same school together or drinking in the same pub. People drift away to go to university, for instance, or to take up a new job, with the result that such groups are essentially ephemeral and, as we get older, tend to become little more than nostalgic relics of a distant past.

The few close friendships most of us have do, of course, last longer and are thus more meaningful. The problem here, however, is that what usually makes a friendship meaningful is the role friends play in supporting us in other areas of our lives, especially our careers and marital relations. Without problems to discuss and other people to moan about, friendships therefore tend to become more like routines or habits, providing us with someone with whom we can go out and have a drink rather than sit at home watching TV.

That’s not to say, of course, that this is a bad thing. Friends certainly make our lives a little less empty. But they don’t provide us with the purpose in life our families used to, making it hardly surprising, therefore, that, as marriage and the family have declined, more and more people have sought to make their lives meaningful by taking up a social or political cause, which, because such causes usually involve some sort of group activity, also provides the participants with an additional group identity.

The problem with basing even a part of one’s identity and purpose in life on a social or political cause, however, is that it tends to have three very unfortunate consequences, both for the individual concerned and for society as a whole. Because choosing a cause to which to devote oneself is also a choice of one’s identity, and because one wants to think of oneself as a good and righteous person, the first of these unfortunate consequences is that the choice of a cause is essentially a moral one, which means that anyone who opposes one’s advocacy of that cause either doesn’t know what they are doing, and is therefore stupid, or knows full well what they are doing and is therefore immoral. This, however, is extremely divisive. For if one believes that another’s opposition to one’s cause is due to either their stupidity or their immorality, it is very difficult to tell them that, while one disagrees with their views, one respects their right to their own opinion. Not being able to agree to disagree consequently makes it very difficult to part on amicable terms.

The second unfortunate consequence results from the fact that, if one’s belief in a particular cause is central to one’s identity, then anything that threatens to undermine that belief is effectively an existential threat to oneself. This therefore makes rational discussion of the belief very difficult if not impossible. For whatever factual evidence or rational argument another puts forward in opposition to one’s belief, in order to protect oneself, one has to resist it at all cost, most commonly through aggressive denial.

In 1989, for instance, James Hansen, then Director of NASA GISS, told Congress that, due to global warming, Arctic summer sea ice would be a thing of the past by the end of the century. Apart from normal annual fluctuations, however, the amount of Arctic summer sea ice has remained more or less constant for the last thirty-seven years. It is very doubtful, however, that any climate change activist would accept this as evidence that their beliefs about carbon dioxide and global warming are wrong. They are far more likely to dismiss the undiminished presence of Arctic summer sea ice as disinformation and to accuse anyone who repeats it of either being stupid, for believing such lies, or a climate change denier and hence immoral.

The third unfortunate consequence of basing one’s identity on a political or social cause then consists in the fact that, unlike roles assigned to us by society, roles or identities we choose for ourselves require constant validation. If I am a father, for instance, I may continually question how adequately I perform this role, but I will not question the validity of the role itself, not because the role of being a father is the inevitable consequence of a biological fact, as may be thought, but because normative social rules often carry as much weight as biological imperatives.

I say this because, up until around 11,000 years ago, most human societies were matriarchal, which meant that fathers were not recognised as such and played no role in bringing up their children. Unlike the role of a child’s mother, therefore, which has a fairly obvious biological basis, the relatively recent role of the father would appear to be largely a social construct, which requires constant social reinforcement, usually in the form of intense public censure of any man who shirks his paternal responsibilities, in order to be maintained. In accepting those responsibilities, therefore, men traditionally didn’t need society to validate their choice, because society didn’t really give them one.

Of course, it will be argued that this just shows how tyrannical traditional roles and values were and how much we have gained by freeing ourselves from them. It may be doubted, however, just how much more tyrannical traditional roles and values were than the values and behaviour demanded by many of today’s activist groups, which can be highly censorious of anyone who deviates from current orthodoxy and absolutely scathing of anyone who actually leaves the fold. This is further compounded by the fact that causes we choose for ourselves tend to be less grounded in everyday life than responsibilities which are thrust upon us, like looking after small children, where keeping them safe and contented is demonstrable evidence that we are doing something right. Choosing what kind of food we should eat to save the planet, in contrast, is always going to be subject to shifting opinion, with which individual members of a group can easily fall out of step, thereby bringing even more censure on themselves.

What makes this even more pernicious, however, is the fact that if one’s identity is dependent on membership of a cause-based group, one may well feel that one has no choice but to conform to the group’s current orthodoxy, not because one is truly convinced by it, but because refusing to conform may well mean being cancelled and therefore stripped of one’s identity. The result is that members of the group come to think and believe what they are told to think and believe in a way that is entirely without substance. They will deny this, of course. They will scream and shout and vehemently affirm that they really do believe what they say they believe, but this, of course, is because to do otherwise would pose an existential threat to who they believe themselves to be.

Not, of course, that Kurt Vonnegut can be blamed for any of this. He wasn’t to know that our abandonment of traditional roles and values in the 1960s would lead to today’s nihilistic nightmare. But Dostoyevsky knew. Writing a hundred years earlier, he knew that the materialist determinism of the 19th century would lead not only to nihilism but to the collectivist ideologies of the 20th century, which ultimately led to the gulags and the concentration camps and the fire-bombing of Dresden in February 1945. So, instead of writing ‘Slaughter House Five’, perhaps Vonnegut’s time would have been better spent reading ‘Notes from Underground’.

 

 

Tuesday, 11 November 2025

Why ‘The West’ Is Broken

 

1.    The Making of ‘The West’

In order to understand why ‘The West’ is broken, we first need to understand what we actually mean by ‘The West’ when we capitalise it between inverted commas in this way. For the one thing it is not, of course, is a geographical location. Nor is this simply because, in common parlance, both ‘east’ and ‘west’ are primarily used to indicate relative rather than absolute position, such that, were I to travel ten miles to the east of my current location, for instance, a point five miles to the east of where I am now would become a point five miles to the west of where I would then be. That is to say that ‘east’ and ‘west’ are primarily expressions of one’s own position or point of view. What’s more, this applies as much to one’s cultural perspective as it does to one’s geographical location. For while ‘The West’, as we commonly understand it, may now extend far beyond the borders of the continent which gave it birth, most Westerners still see themselves as culturally European and therefore define themselves in contradistinction to cultures they regard as foreign, most notably the cultures of Asia, which we commonly refer to as ‘The East’.

This European cultural identity, however, is not drawn from all of Europe. This is because ‘The West’ also has an ideological dimension, which is rooted firmly in the Reformation. This is most clearly illustrated by the fact that, while North America, which was initially settled by immigrants from Protestant Britain, Germany and Scandinavia, is definitely part of ‘The West’, the Catholic countries of Central and South America are not.

This is because the Roman Catholic Church has always been a fundamentally authoritarian institution, having been intentionally created that way by the very manner in which it was brought into being, which, as I have explained elsewhere, happened on Christmas Day in the year 800, when Pope Leo III crowned Charlemagne as the first Holy Roman Emperor in what was essentially an act of mutual recognition. I say this because while, in crowning Charlemagne Holy Roman Emperor, Leo recognised Charlemagne as the legitimate heir to the Roman Emperors of the past, in allowing himself to be crowned by Leo, Charlemagne recognised Leo’s authority as head of the Church to so crown him, with the result that the Holy Roman Empire, the Roman Catholic Church and the Papacy, itself, were all effectively created by this single act.

Of course, it will be objected that Popes existed long before Leo III crowned Charlemagne Holy Roman Emperor, each one being the heir to St. Peter. This widespread misconception, however, is the result of a deliberate rewriting of history by the Roman Catholic Church in order to legitimise the role and status of the Papacy. I say this because, prior to Charlemagne’s recognition of Leo as head of the Church, there had been no overall leadership or hierarchical structure within the Church as a whole. Each diocese was essentially independent, its bishop chosen entirely by its congregation. This meant that no bishop had any more institutional authority over the wider Church than any other and that those who rose to eminence did so purely on the basis of their erudition, depth of theological understanding or holiness.

In the 4th century, for instance, the bishop who was accorded the greatest authority in the wider Church was Athanasius, Archbishop of Alexandria, who not only wrote an open letter to the Church each Easter, which was copied and distributed to every diocese in Europe, the Middle East and North Africa, but who used one such letter to nominate the books that were to be included in the New Testament. In the 5th century, this position of primacy then fell to Augustine, Bishop of Hippo, who not only hosted the synod which finally ratified Athanasius’ selection of the New Testament texts but was regarded as the greatest theologian of his generation, writing several books of his own, including ‘The City of God’, ‘On Christian Doctrine’, and ‘Confessions’, all of which are still read today. To suppose that either of these giants of the early Church was in any sense subordinate to contemporaries who just happened to be Bishops of Rome but made no impression on the Church beyond their own diocese is therefore simply absurd.

Yes, after the fall of the western Roman Empire, during what are now called the Dark Ages, when bubonic plague wiped out most of the population of Europe, it is true that the Bishops of Rome did take on more secular power and were rewarded by their congregations with the affectionate title of ‘Il Papa’. In a city whose population had fallen from over a million to just 10,000, however, this was largely due to the lack of anyone else with sufficient authority to fulfil this role and was entirely confined to Rome, itself, and the surrounding region, both northern and southern Italy having been conquered by a coalition of the Lombards and other northern tribes and reverted to paganism. In fact, Rome was almost completely isolated from the rest of the Christian world during most of the 7th and 8th centuries, making it quite impossible for any Bishop of Rome to have exercised authority over the wider Church even if that authority had existed. It was only when the Lombards were defeated by Charlemagne’s father, Pepin the Short, in fact, that Rome was finally reunited with the rest of Christendom, and only when Charlemagne and Leo hatched their master plan for world domination that any Bishop of Rome assumed the role of Christianity’s supreme pontiff.

There were, however, two major flaws in this plan. The first was that, initially, Leo was in no position to assert authority over the wider Church on his own. In fact, he needed Charlemagne to more or less do it for him. Because Charlemagne needed Leo to be recognised as head of the Church in order to legitimise his own position as Holy Roman Emperor, he was, of course, happy to do this, at least within his own domains. The problem was that, while his domains were extensive, covering all of France, most of Germany and parts of both Spain and Italy, they didn’t extend to all of Christendom. In particular, they didn’t extend to ‘The East’, where there was already a long-established Roman Emperor based in Constantinople. What’s more, this emperor could trace his lineage back to the Emperor Constantine and didn’t therefore need a bishop to give him legitimacy. More to the point, he certainly didn’t want his own bishops being made answerable to a Bishop of Rome who was nothing more than the puppet of a rival Emperor.

From the very moment the Roman Catholic Church and the Holy Roman Empire were simultaneously brought into being by Charlemagne and Leo’s act of mutual recognition, therefore, a rift between the Western and Eastern Churches was more or less inevitable. It didn’t happen immediately, of course, not least because it took some time for Leo and his successors to develop the Papacy into a fully functioning organisation with both the ability and confidence to assert its authority without the support of the Holy Roman Empire. Once it reached that point, however, there was only ever going to be one outcome.

This point of inevitable sundering was finally reached in 1054 when, in an attempt to exercise the authority over the eastern Church which the Papacy had always pretended it had, Pope Leo IX sent a papal nuncio to Constantinople, ostensibly to resolve two matters of doctrine which the Roman Catholic Church had introduced but to which the eastern bishops strongly objected. These concerned the nature of purgatory, which the eastern bishops regarded as far too punitive, and the canonisation of saints, which the Pope claimed to be his sole prerogative. While this made it seem as if the dispute was purely theological, however, in reality, the demand that the eastern bishops accept doctrines imposed on them by the Church of Rome without their even being consulted was actually nothing less than a demand that they recognise the Pope as the head of a universal Christian Church and submit to his authority. With the support of their Emperor, however, this was something they quite predictably declined to do, initiating what became known as the Great Schism between the Roman Catholic Church and the Eastern Orthodox Church. This schism was to have enormous consequences for Europe throughout the centuries that followed, the most significant of which was the failure of the Christian world to come together to defend Europe against Islamic encroachment, leading to the fall of Constantinople in 1453 and the subsequent Muslim invasion of the Balkans, creating fault lines in Europe which are still a source of friction and conflict today.

If the first major flaw in Charlemagne and Leo’s plan led to the Great Schism, the second major flaw had even more significant consequences and arose from the fact that it didn’t just take time for successive Popes to develop the Papacy into a fully functioning operation, it also took a great deal of money. A body of senior clerics had to be developed to help shape policy and doctrine, a secretariat had to be employed to copy documents and have them distributed to every diocese in Europe, courts had to be established to resolve disputes, hear appeals and punish offenders, and a cadre of ambassadors or papal nuncios had to be recruited, not just to keep subordinate bishops in line, but to represent the Church’s interests throughout Europe’s royal courts. It was a bit like setting up and running a multinational corporation, the costs of which continually expanded as the influence and ambition of the Church grew.

Not, of course, that Leo could have realistically foreseen this as a problem when he first embarked upon this course, not least because the Emperor Constantine had left the Lateran Palace to the Roman diocese in his will, which meant that, initially, the Papacy had ample accommodation and office space. What’s more, most of its daily running costs could be met by donations from wealthy parishioners and by rents and taxes levied on the Papal States, which had been gifted to the Church by Charlemagne for this purpose. As the Church continued to grow, however, the unwelcome truth would have gradually crept up on whoever ran the Papal Treasury that these revenues were largely fixed and did not, as a consequence, grow in line with costs. What it needed, therefore, was not just other forms of income, but other forms of income that were based, not on side-lines such as rents on properties, but on its core activities, the most central of which, of course, was spreading the gospel and guiding people towards salvation.

The good news was that the foundations for monetising this service, if one may call it that, had already been laid. As early as the 4th century, for instance, theologians such as Basil of Caesarea had begun encouraging priests to hear their parishioners’ confessions and grant absolution on the fulfilment of a suitable penance: a practice which was commended partly on the basis that it kept parishioners forever conscious of their sinfulness, but also because it gave priests the power to bestow that which had once been the sole preserve of Christ himself: forgiveness.

This power was then further enhanced when it began to be realised that absolution was most vital at or around the time of death, to which the Church consequently responded by developing the dual procedures of the Viaticum and Extreme Unction, jointly known as the Last Rites, which enabled the dying to meet their maker with their sins expunged, thereby ensuring that they would be allowed to enter heaven. The problem was, of course, that demanding money from a dying man or his family before a priest agreed to administer the Last Rites was not likely to go down well with most Christians and could not, in itself, therefore, solve the Church’s financial problems. A possible solution, however, became fairly obvious once people started to think about what actually happened to those of the faithful who were unfortunate enough to die without being given the Last Rites and whose sins therefore remained unexpunged: a question which had already been fairly persuasively answered by Augustine in his book ‘The City of God’, in which he suggested that heaven might have a kind of antechamber in which the unexpiated sins of those who had died unabsolved might be purged in some other way: specifically through fire.

Importantly, Augustine’s answer to what he saw as a purely theological question was not entirely original to himself: it had actually been discussed for at least three centuries before he took it up. His reputation, however, ensured that, when it was put forward again sometime during the 10th century, not only was it not rejected out of hand as being too dreadful to contemplate, but someone in the Lateran Palace recognised it as the perfect solution to all the Church’s financial difficulties. As, indeed, turned out to be the case. For as soon as the existence of purgatory was officially adopted as Church doctrine, people were so terrified of dying in a state of sin that they were not only prepared to confess to every possible sin they may have committed, so as not to leave any out, but were willing to make cash payments to the Church in expiation instead of undergoing one of the more traditional forms of penance.

That we are unable to say when, exactly, this occurred is, of course, slightly unsatisfactory, not least because it has led to a great deal of confusion on the subject. In researching this essay, for instance, I came across one article which suggested that it was still under discussion by the Church as late as the 12th century. We know that this cannot be correct, however, because we know that the punitive nature of purgatory was one of the reasons for the Great Schism in 1054. Not only must the doctrine have been introduced before then, therefore, but there is every reason to suspect that its introduction must also have occurred before 31st January 993, when Pope John XV canonised the first saint, a Prince-Bishop of the Holy Roman Empire called Ulrich of Augsburg.

Again, it will be objected that there were saints before 993, going all the way back to the apostles, in fact. While these saints may have lived before 993, however, they weren’t canonised until after 993. For unlike beatification, which generally happened spontaneously by popular acclaim within the local diocese of the person being beatified, formal canonisation, which was created specifically to allow saints to bypass purgatory and go straight to heaven, only became necessary once the Church’s doctrine on purgatory had, itself, been adopted, which we can therefore assume must have happened sometime before 993. Indeed, it was for this reason that the two issues of purgatory and canonisation were raised together by the eastern bishops in 1054. For if they were ever going to accept the Church’s doctrine on purgatory, they certainly didn’t want the Bishop of Rome to be the only person on earth with the power to circumvent it, almost certainly wanting their own churches to have some say in the matter.

While we do not know when, exactly, the existence of purgatory became official Church doctrine, what we do know is that, by the Fourth Lateran Council in 1215, the sale of 'indulgences', reducing the amount of time people had to spend in purgatory, had become what was probably the biggest extortion racket in history. In fact, so big had it become that, by then, the Church had already accepted the need to sub-contract the entire business out to professional agents called quaestores, better known as 'pardoners', who went from diocese to diocese, parish to parish, terrorising people into parting with their money by describing the agonies they would suffer in purgatory if they did not pay up. The problem was that, by 1215, the Church had become so dependent on this source of income that all the Lateran Council could really do was regulate how the business operated, curbing some of the pardoners' worst excesses by limiting how much they could charge for each category of sin and specifying what percentage of the proceeds should go to the local diocese and how much should be sent back to Rome.

The result was that, by the early 16th century, the Roman Catholic Church had not only become the wealthiest organisation in the world but, in some places, it was also the most detested. For the lavish lifestyles and conspicuous consumption of the Prince-Bishops of the Holy Roman Empire in particular were inevitably paid for by those who could least afford it, engendering a level of outrage which eventually drove one man, Martin Luther, to send a copy of his 'Ninety-five Theses' to Albrecht von Brandenburg, the Archbishop of Mainz, who had recently appointed the infamous pardoner, Johann Tetzel, to sell indulgences on behalf of his diocese, both to raise the money the diocese needed to pay its required contribution to the rebuilding of St. Peter's Basilica in Rome and to pay off the Archbishop's own debts, an arrangement to which the Pope had actually agreed.

Apart from the shamelessness demonstrated by this blatant act of corruption, what Luther really objected to, however, was what he believed to be the church's implicit downplaying of faith. For by selling forgiveness, which Luther believed it was for God alone to grant, and making it dependent on acts of penance and donations to the Church, he believed that the Church was undermining the simple act of surrender – of putting oneself entirely in God's hands – which, for Luther, was at the very heart of Christian faith. What he advocated, instead, was a much simpler, more personal form of Christianity, in which the individual's relationship to God was not dependent upon the intermediation of the Church. And it was to this end that he therefore set about producing his own German translation of the bible, which was published in September 1522 and which, far more than his Ninety-Five Theses, is what really made him anathema to the Church. For it was precisely through its intermediation in the relationship between God and man, of course, that the Church gained its power and wealth.

Luther's real crime, therefore, was not in questioning the Church's authority, as is sometimes said, but in questioning its necessity: by telling people that they didn't need it, that they could read the bible for themselves and make up their own minds as to its correct interpretation. And it was this that truly frightened, not just the Church, but everyone in a position of authority they wished to keep. Indeed, it is why the religious wars of the 17th century, most notably the Thirty Years War, from 1618 to 1648, and the English Civil War, from 1642 to 1651, were as much about politics as they were about religion. For they were about the sovereign rights and freedoms of the individual in the face of an authoritarian Church and State.

What Martin Luther really set in motion, therefore, was not a reformation but a revolution: one which had two main strands, the political and the intellectual, both of which stemmed from the act of reading the bible in one's own language, which was both a political assertion of one's right to form one's own opinions and not be told what to believe and an intellectual commitment to thinking for oneself. More than that, it also sparked a broader cultural revolution. For as printed translations of the bible proliferated throughout Europe, this caused a dramatic increase in literacy, a skill which, once acquired, is not confined to the reading of just one book. Other ancient texts, from Aristotle to Euclid, were also translated into Europe's modern languages, to be joined by the output of contemporary writers, from courtly poets and political pamphleteers to the pioneers of the emerging new sciences. This in turn prompted the most rapid period of scientific development in history, culminating in the foundation of the Royal Society in London in 1660, the motto of which, as I have mentioned before, is 'Nullius in Verba' or 'by nobody else's word', which declared to the world that the work and findings of its members would be based solely on evidence and reason, not on 'authority'.

This anti-authoritarian stance was then further reinforced by the development of a new political philosophy which rejected the old world order in which everyone had a preordained station in life above which one could not rise. Instead, philosophers like John Locke asserted the existence of a natural right or freedom to make of one's life what one would. Importantly, this did not mean that one could do anything one liked, as is sometimes suggested. For in a civilized society, Locke argued, one's own personal liberty had to be bound by a mutual respect for the rights and liberties of others. It also meant that one had to take responsibility for oneself economically. For if one is economically dependent on another, that other may reasonably feel entitled to exert some control over how one lives one's life, especially with respect to the level of expenses one incurs. All of this, however – both the natural rights Locke championed and the responsibilities they entailed – appealed enormously to the growing middle class, which this new political philosophy had itself brought about and which was the driving force behind the many new manufacturing industries which emerged during the 18th and 19th centuries as a result of both the technological developments to which the new sciences had given rise and the hard work and entrepreneurial spirit of the new middle class itself.

These new manufacturing industries then gave a further boost to foreign trade, as merchants from industrialised Europe were able to sell advanced manufactured goods to less technologically advanced countries in exchange for the commodities and raw materials which were used in Europe's factories to produce even more advanced manufactured goods. It also led to the expansion of competing trading empires which eventually found their way into almost every corner of the world, with the result that, by the middle of the 19th century, European civilization – or what, with the addition of the United States, we now call Western Civilization – more or less dominated the entire planet and could justifiably claim to be the greatest civilization in history.

2.    The Breaking of the West (Part 1): Economics

Given that the West had attained such a level of dominance, the question this raises, of course, is 'What went wrong?' For while this may not be obvious to the politicians and bureaucrats in London, Brussels and Washington, who still cling to the increasingly outdated belief that their liberal world order still runs the planet, to most people in our increasingly multipolar world it is fairly clear that 'The West' is in decline, and is so largely because we have abandoned the very values that made us so successful. Not only have we mostly given up on self-reliance as the price that must be paid for independence but, in much of the West, we have actually embraced a culture of dependency. Even more baffling is the fact that we have not only stopped thinking for ourselves, believing almost anything those in positions of authority tell us, but have accepted a level of authoritarianism which, in many countries, including the UK, has more or less criminalised free speech and the airing of alternative views.

That we have a problem, not just in understanding this strange regression, but in even recognising it, however, is not only due to the fact that we are still living through it and cannot therefore see it as clearly as our historical past, but because, unlike the logical progression that took us from Martin Luther's rejection of Roman Catholic authoritarianism to the scientific discoveries of the 17th century, our regression has had two distinct phases with completely different causal origins.

What makes this even more confusing is the fact that our current rejection of self-reliance and independence actually has its roots in exactly the same melting pot of ideas as John Locke’s classical liberalism. For at almost the same time that Locke was arguing that, if one wanted to exercise personal freedom, one had to take responsibility for oneself economically, there were others who regarded such aspirations as no better than the avarice and greed of the ruling class that had previously oppressed them. In Germany, for instance, the Anabaptists believed that owning property was actually the source of almost every recognised sin, especially greed and envy, and so held all property in common, including, in some cases, women! During the English Civil War, the Diggers held much the same view, especially with respect to land, which they frequently seized from the rightful owners and farmed communally.

Of course, none of these cults or political movements lasted very long. Once the religious wars of the 17th century were over and order had been restored, property rights were once again enforced. After all, Oliver Cromwell, Lord Protector of the short-lived English Commonwealth, was himself a moderately wealthy landowner and had no interest in sharing his property with anyone. More to the point, it was successful middle class businessmen, who, unlike the old aristocracy, had worked hard to amass their wealth, who now dominated much of civic life, thereby creating a new class division between the working and middle classes in which the political philosophy of people like the Levellers became embedded within the working class identity. This then hardened over time such that, by the beginning of the 19th century, the glaring inequalities between the workers who produced the wealth and the businessmen who owned it had brought about a state of such moral outrage and enmity within the working class that it was only a matter of time before some national calamity, such as the loss of a world war, would trigger another revolution: not this time one that would set people free by teaching them to read and think for themselves but one that would trap them in a prison of tyrannical dependency of precisely the kind that John Locke described.

Nor is it hard to see why this should be so. For as the Anabaptists and Diggers clearly understood, the most obvious solution to the problem of inequality is to bring all property into public ownership. By holding property in common, however, it is the collective rather than the individual that decides where resources are to be allocated: a decision to which the individual is simply obliged to submit. The problem, however, is that the power of a collective, whether it be an Anabaptist commune in 17th century Germany or a collective farm in 20th century Russia, is inevitably vested in actual people.

In the case of an Anabaptist commune, for instance, power almost always lay in the hands of the commune’s spiritual leader, a man whose stern judgment and implacable will very often made him something of a cult figure, whose followers obeyed him both out of fear of eternal damnation and in order to eat. Indeed, it was the cult-like nature of many Anabaptist sects which, in addition to their unorthodox views on property, caused them to be so persecuted throughout the 16th and 17th centuries, both by Roman Catholics and by other Protestants, particularly Lutherans. For, as in the case of cults today, people were frightened that their children would be drawn into them and thus lost to their families. And it was this that ultimately drove the Anabaptists out of most of Germany. For in order to escape both the ostracism and persecution their beliefs brought down upon them, they were more or less forced to emigrate, most of them going to America, where Anabaptist sects such as the Amish still survive today.

Not, of course, that the cult-like status of its leaders was a problem for the Soviet Union, not least because very few of the local party functionaries who came in contact with the general public had the kind of charisma that gives rise to this rather unusual effect. Indeed, it was precisely this that was the real source of the Soviet Union's most serious problem. For in an economy in which all property was held in common, it raised the question as to how local managers were to get higher productivity out of workers who could neither be incentivised by the promise of greater personal gain nor inspired to make greater sacrifices by the power of their local manager's personality. Those in charge could always threaten them, of course, and even have them sent to a Gulag – the secular equivalent of eternal damnation – but all this usually did was teach people to keep their heads down and avoid being noticed. Indeed, it actually deterred people from taking responsibility. For who would want to be a factory manager in a system in which the manager had responsibility for achieving a set quota but none of the commercial leverage required to see that components of sufficient quality actually reached his factory in time to meet that quota?

The result was that, throughout much of the history of the Soviet Union, productivity actually fell, leading to shortages of just about everything. Indeed, I have actually seen this first hand. For while working in Finland during the 1970s, my girlfriend and I made frequent visits to St. Petersburg, where we were constantly struck by the sight of people queuing outside shops, making us wonder what exactly they were queuing for. One day my girlfriend, who spoke a little Russian, consequently decided to go up to someone in the queue and ask, and was told that they were queuing for oranges, a delivery of which had recently arrived in St. Petersburg for the first time in six months. On another occasion we actually discovered that most of the people in the queue didn't even know what they were queuing for; they just saw a queue and joined it on the off chance that they might be able to buy some of whatever was on sale before it was all sold out.

Another oddity I remember from our first trip to St. Petersburg is the fact that none of the taxis had proper windscreen wiper blades, which, given the fact that it was snowing most of the time, was more than a little disconcerting. For most of the drivers had been forced to improvise windscreen wiper blades out of bits of cardboard, which didn’t work very well to say the least.

The point, however, is that this is the kind of Alice-in-Wonderland world one creates if, instead of allowing thousands of independent businesses to compete for customers by striving to be the best at meeting those customers' needs, one attempts to implement a centrally planned economy in which no one has a personal stake. For not only is an entire economy just too complicated to be controlled by a group of central planners, but its operation needs the constant monitoring and adjustment of people who have a vested interest in it and therefore care about making it work. Without this kind of attention, what one gets is the kind of chaos in which there are occasional gluts of goods that no one wants but chronic shortages of everything else: a state of affairs which makes the development of a black market – one from which its operators stand to make a gain and which therefore functions properly – more or less inevitable.

In fact, like many tourists visiting the Soviet Union during the 1970s, my girlfriend and I were among the many beneficiaries of this illegal trade, visiting St. Petersburg not only to take in the sights of one of the most beautiful cities in the world, but to enjoy long weekends consuming large amounts of vodka, caviar and excellent Georgian champagne, all paid for by selling stuff on the black market.

This also brought with it a certain frisson of fear and excitement. The first time we participated in this great Finnish tradition – in fact now sadly defunct – I had visions of us being arrested by the KGB and thrown into some dark, damp cellar, where we would be forced to listen to the tortured screams of our fellow inmates. By our second trip, however, I started to suspect that, although the Soviet authorities put on a great show of cracking down on the black market, searching our luggage at the border and sternly impressing on my girlfriend the fact that, while in the Soviet Union, it was illegal for her to sell any of the dozen or so pairs of brand new lacy knickers she had brought with her, in reality they more or less turned a blind eye to the whole business.

Nor was this simply because so many people were in on it. Indeed, they had to be for it to be so well organised. Within minutes of checking into our hotel, for instance, a chamber maid would knock on our door to ask us what we had to sell. As soon as we took a seat in the hotel’s bar, a waiter would come up to us, not just to take our drinks order, but to tell us the black market exchange rate for whatever hard currency we had, which was always substantially higher than the official exchange rate. The operation was so slick, in fact, that it was fairly obvious that the young men and women who worked the hotels were not working alone. They were buying so much merchandise and currency that they had to be part of a larger organisation, the principal activity of which, of course, was selling all this stuff to wealthy Russians who could afford to buy it at what one imagines was a fairly healthy profit. This therefore suggested that the whole operation was almost certainly run by an organised criminal gang, which meant that the police and other Soviet officials also had to be getting their cut.

The more I thought about it, however, the more I began to suspect that a few criminal gangs and one or two corrupt local officials weren't the only ones benefitting from this trade and that the state, itself, was also, very probably, a knowing beneficiary. For what the Soviet Union was effectively doing was importing western goods of a type or quality which it did not manufacture itself, and paying for them, not in western currencies, but in rubles, which the recipients then spent on vodka, caviar and Georgian champagne, thereby boosting the economy in three ways. Firstly, it increased the supply of manufactured goods which Soviet consumers could buy without shipping money abroad. In fact, far from expending hard currency on these goods, the black market trade in hard currencies actually increased their availability to the Soviet Union itself. For while these hard currencies initially went to the criminal gangs, they then had to be laundered in some way, which meant that, one way or another, they entered the Soviet financial system where they boosted the Soviet Union's foreign reserves.

Even more importantly, however, the fact that Finnish tourists were getting long weekend holidays for little more than the coach fare meant that, every Friday afternoon, dozens of Finnish coaches disembarked their passengers onto the forecourts of St. Petersburg’s bustling hotels, greatly boosting the city’s tourist, catering and hospitality industries, from which hundreds, if not thousands of people benefitted. What’s more, it was fairly obvious that the Soviet Union also gained politically from this. For by giving Soviet consumers access to goods they would have otherwise been denied, it was effectively using the black market to make up for the deficiencies in its own centrally planned economy. In fact, without the black market filling people’s pockets and supplying them with luxury goods they couldn’t buy in their own shops, it is highly likely that there would have been a lot more political disaffection than was actually the case and that a combination of shortages and political dissent might even have brought the system down, as, of course, it eventually did.

The problem we had in the West, however, is that we didn’t understand any of this. We thought that the Soviet Union collapsed because of its tyrannical government, endemic corruption and unrestrained organised crime. We didn’t realise that all three of these problems were merely by-products of a faulty economic system. When the Soviet economy finally collapsed, therefore, we didn’t learn anything from it. Even after the Soviet Union broke up, in fact, there were Labour politicians in Britain who still called for more parts of the British economy to be taken into public ownership. Worse still, huge swathes of the population actually agreed with them, partly because many people still harboured some vestige of the Puritan belief that private wealth is immoral, but also, one suspects, for fear of the alternative. For taking responsibility for oneself and standing on one’s own two feet can be scary. Starting one’s own business, for instance, knowing that there will be no one there to catch one if it all goes wrong, requires a great deal of courage. It is only human, therefore, to want there to be someone who will ultimately look out for us. Beneath this perfectly human desire, however, there is a hidden snare. For in an age in which few of us now believe in God and in which families have become far more fragmented, the only institution left to catch us when we fall is, of course, the state.

In the aftermath of the second world war, when people throughout Europe were thinking about what kind of world they wanted to return to, most countries therefore opted for what was effectively a compromise between classical liberalism and socialism which we labelled the ‘mixed economy’, in which most businesses, especially small businesses, remained in private ownership, while most essential services, including health and education, along with certain strategic industries, such as electricity generation and the railways, were nationalised. The problem with this, however, was not just that these state run services and industries eventually became as inefficient, wasteful and dysfunctional as their Soviet counterparts but that, because public services were free at the point of use, people were more than happy to avail themselves of them, with the result that the expansion and improvement of these services was the most common and popular electoral promise made by politicians of nearly every political affiliation.

The problem, of course, is that these services are not actually free; they are paid for out of taxation, which, whether the taxes are levied on income or expenditure, takes money out of people’s pockets, leaving them with less to spend elsewhere. This then has two main effects. The first and most obvious is that workers demand higher wages at a time when increased taxation is already reducing demand. Without being able to increase sales, this means that, in order to pay these high wages, employers have to increase their prices, thereby increasing inflation.

Of course, it will be argued that, if taxation is used to pay public sector workers, then it doesn't actually take money out of the economy and should not therefore reduce overall demand. Not only does it most definitely reduce the demand generated by those being taxed, however – who have to be in the majority in order to make the public sector supportable – but most public sector workers do not produce anything which anyone actually buys. That is to say that, by governments redirecting resources away from commercial production and towards publicly funded services, their economies not only produce more of these services than would be in demand if they were not free at the point of use, but, without a compensatory increase in productivity, it reduces the supply of purchasable goods, thereby increasing their cost and hence reducing demand.

This then leads to the second negative effect of raising taxes to pay for public services. For by increasing the cost of producing goods and services in other parts of the economy, domestic industries find it increasingly difficult to compete with foreign imports from countries which spend less on public services. This then leads to industrial decline, making it no accident, for instance, that the West has lost most of its manufacturing industry to China and other countries in the East.

Western governments, of course, have tried to cover up this industrial decline by pointing to the creation of new jobs in the service sector. Not only are many of these new service jobs actually in the public sector, however, but many that are not are in some way dependent on government. Take, for instance, the huge number of management consultants the government constantly employs to improve the performance of the NHS. These may work for private firms but the taxpayer still pays for them.

This, however, has another unfortunate knock-on effect. For while industrial and economic decline in most western countries has effectively reduced the tax base, the increase in public service jobs and jobs dependent on government has further increased public expenditure. In 2023-24, for instance, UK government spending actually amounted to 44.7% of GDP. In France, this year, the figure is expected to rise to 57%, the highest in the world. The real problem, however, is that there is an upper limit to the amount any government can tax an economy before raising taxation rates ceases to increase tax revenues. This is because raising taxation rates, whether it be on income or expenditure, changes people's behaviour. If one raises income tax above a certain level, for instance, people do less overtime, judging that the financial rewards of doing extra work are not commensurate with the loss of free time. If, alternatively, the government increases sales tax, then people either buy things on credit and get into debt, which causes them to reduce expenditure even more sharply later on, or they adjust their expenditure to what they can afford, either by doing without certain items altogether or by finding cheaper alternatives.
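For readers who like to see how this works in practice, the following is a minimal sketch – written in Python, with entirely made-up parameters – of the behavioural point being made here: if the tax base shrinks as the rate rises, total revenue eventually peaks and then falls, however hard the rate itself is pushed.

    # Toy illustration only: the numbers are invented, not official statistics.
    # If the tax base contracts as the rate rises, revenue = rate * base
    # eventually stops growing and then falls.

    def revenue(rate, base_at_zero=100.0, elasticity=1.5):
        """Hypothetical revenue curve: the base shrinks as the rate rises."""
        base = base_at_zero * (1.0 - rate) ** elasticity  # assumed behavioural response
        return rate * base

    for rate in (0.20, 0.30, 0.38, 0.45, 0.55):
        print(f"rate {rate:.0%}: revenue {revenue(rate):5.1f}")

    # With these (invented) parameters, revenue peaks at around a 40% rate
    # and declines thereafter: the 'ceiling' described in the text.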

No matter how governments try to increase revenues by raising taxes, in fact, eventually they hit a ceiling, which, in most countries, is around 38% of GDP. If the governments of those countries are spending anywhere between 44.7% and 57% of GDP, therefore, they can only do so by borrowing the difference, which most western countries have been doing for the last twenty years or so, with the result that the British government's debt currently stands at around £3 trillion or 97% of GDP, while the US federal government's debt stands at a staggering $38 trillion or 120% of GDP. Worse still, the GDP figures of most countries are, themselves, greatly inflated by the fact that they actually include government expenditure. If one measured GDP purely in terms of the income generated by productive industries, government debt to GDP ratios would be much higher.
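The borrowing gap implied by these figures can be set out in a few lines, using only the numbers quoted above (all of them rounded and used purely for illustration):

    # Back-of-the-envelope arithmetic using the figures quoted in the text.
    revenue_ceiling = 0.38    # ~38% of GDP: the practical ceiling on tax revenue
    spending_uk     = 0.447   # UK government spending, 2023-24, as a share of GDP
    spending_fr     = 0.57    # projected French figure quoted above

    gap_uk = spending_uk - revenue_ceiling   # what must be borrowed each year
    gap_fr = spending_fr - revenue_ceiling
    print(f"UK:     borrowing roughly {gap_uk:.1%} of GDP every year")
    print(f"France: borrowing roughly {gap_fr:.1%} of GDP every year")

On these figures, the UK has to borrow roughly 6.7% of GDP every year, and France roughly 19%, simply to keep spending at its current level.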

Needless to say this is unsustainable. At present, however, there is very little sign that any government in the West is seriously trying to cut expenditure, the one part of the fiscal equation over which they have any real control. As increasing tax rates has already become counterproductive in most western countries, this means that they are simply forced to go on borrowing. By some estimates, for instance, British government debt is increasing by around £5,000 per second, while annual interest on the accumulated debt is now over £100 billion. What’s more, bond markets throughout the West are well aware of the dangers inherent in this continued trend, leading to bond prices falling almost everywhere, causing yields to rise and forcing governments to issue new bonds with even higher yields in order to sell them. It is only a matter of time, therefore, before falling demand for government bonds reaches a point at which at least one western government will find itself unable to sell enough new bonds to cover the cost of redeeming those falling due and will therefore be forced to default, unleashing a wave of panic across bond markets which will cause other western governments to fall like dominoes.
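The per-second estimate is easier to grasp once it is annualised. Taking it at face value – and it is only an estimate – the conversion is straightforward:

    # Rough annualisation of the quoted estimate of £5,000 of new debt per second.
    per_second = 5_000                      # pounds per second (the estimate quoted above)
    seconds_per_year = 60 * 60 * 24 * 365   # ignoring leap years

    annual_borrowing = per_second * seconds_per_year
    print(f"~£{annual_borrowing / 1e9:.0f} billion of new borrowing per year")
    # ≈ £158 billion a year, on top of the £100 billion+ already spent
    # servicing the existing debt.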

Of course, it may be thought that this won't happen because the IMF will step in to bail out any government on the brink of default. The problem with this argument, however, is that the IMF is largely funded by those very same western governments which are themselves close to bankruptcy. More to the point, the scale of the debt in many cases is well beyond that which can ever be repaid, reducing the efficacy of any bail-out to that of a sticking plaster on a wound that is simply too large to heal, which rather raises the question as to how we didn't see this coming. After all, there is nothing in the above explanation as to how we got here that is particularly difficult to understand and wasn't entirely predictable. So why didn't we do anything about it while there was still time? How could we have been so stupid as to just let it happen? For mass stupidity, I think, is the only possible explanation.

3.    The Breaking of the West (Part 2): Stupidity

As I have explained elsewhere, there are, in fact, many different types of stupidity, probably the most common of which results from the fact that, for much of the time, we just don’t think. Based on some common misconception or false assumption, which, if we thought about it for even a moment, we would quite clearly see was false, we just blunder ahead and are then surprised when the outcome is not quite what we intended.  

Another fairly common form of stupidity is based on wishful thinking, which is very often combined with self-deception. We want so desperately for something to be the case that, even though it may be intrinsically unlikely, we place all our hopes in it and are often quite distraught when the hoped for outcome does not materialise, not least because we realise how stupid we were to bank so much on something that was so extremely improbable.

Both of these examples of stupidity, however, are largely characteristic of individuals rather than groups. It may, of course, be argued that groups can also suffer from wishful thinking. After all, governments throughout the West have spent the last twenty-odd years telling themselves that they can go on indefinitely spending money they don’t have without it leading to financial collapse. Governments, however, including both politicians and bureaucrats, tend to be closed groups which defend themselves by preventing group members from asking awkward questions. As such, the members  submit to a group think which they deceive themselves into believing is both rational and sound. If a group is large and open enough, however, it is far more difficult to completely suppress critical thinking and therefore prevent the members from realising that they are actually fooling themselves.

There is, however, one form of stupidity which, even though it is most commonly induced by an individual or group of individuals, is essentially collective, being the stupidity of the herd. Typically, it will start when a population is repeatedly told something that is not true by those they not only regard as being in authority but do not regard as having any malign intent towards them. This then leads most open-minded people to at least entertain the possibility that what they are being told might be true, with the result that, if it is repeated often enough, they eventually come to believe it. Once this happens, moreover, it then spreads like a contagion. For we falsely believe that, if everybody else believes something, then it must be true. This then leads us to believe it and so spread the contagion further.

A good example of this is the mediaeval belief in purgatory. Sometime during the 10th century, those in positions of authority in the Lateran Palace decided to adopt Augustine’s terrifying vision as official Church doctrine and did whatever was necessary to disseminate and enforce a belief in purgatory’s existence. Once the population had attained the required threshold of belief, however, the Church didn’t have to do any more to convince people that purgatory was real because everyone now believed it because everyone else did.

Of course, you may say that this kind of herd-like behaviour no longer applies to us. But human nature hasn’t changed over the last thousand years; we are just as gullible today as we were then. A good example is our collective belief that human beings are causing the earth to warm by burning fossil fuels which releases carbon dioxide into the atmosphere. ‘Ah’, you say, ‘but that is true!’ The only reason you believe it to be true, however, is because everybody else does. What’s more, I know with absolute certainty that this is the only reason you believe it. Because I also know that you have no independent evidence to support this belief. And how do I know this? Because no such evidence exists.

Yes, carbon dioxide is a greenhouse gas which absorbs infrared radiation emitted by the sun-warmed surface of the earth, thereby warming the atmosphere. But it is a very minor greenhouse gas, accounting for only a very small percentage of the total atmospheric warming due to greenhouse gases. More than 90% of that warming, in fact, is caused by water vapour, which is thirty times more abundant in the atmosphere than carbon dioxide and contributes more than thirty times as much to the overall warming effect. What's more, most of the wavelengths of infrared radiation absorbed by carbon dioxide between 2.7 and 4.3 microns are also absorbed by water vapour. In fact, all the infrared radiation emitted by the earth's surface at these wavelengths is already absorbed by a combination of water vapour and carbon dioxide, which clearly means that, at these wavelengths, adding more carbon dioxide to the atmosphere cannot lead to the absorption of any more radiation.

The only wavelength at which carbon dioxide uniquely absorbs infrared radiation, in fact, is 15 microns. This, however, is a very long wavelength, which means that the objects which emit it are not actually very warm. In fact, they wouldn’t show up on most night vision goggles. For in order for an object to emit infrared radiation at 15 microns it has to be -70°C.
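For those who want to check the temperature figure, a back-of-the-envelope calculation using Wien's displacement law – treating 15 microns as the peak emission wavelength, which is an assumption rather than something stated above – gives a temperature in the same region:

    # Wien's displacement law: lambda_peak * T = b, with b ≈ 2.898e-3 metre-kelvins.
    # If 15 microns is treated as the peak emission wavelength, the implied
    # temperature of the emitting body is:
    b = 2.898e-3            # Wien's constant, metre-kelvins
    wavelength = 15e-6      # 15 microns, expressed in metres

    temperature_k = b / wavelength
    print(f"{temperature_k:.0f} K, i.e. about {temperature_k - 273.15:.0f} °C")
    # ≈ 193 K, or roughly -80 °C: a very cold body, as described above.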

Admittedly, molecules of nitrogen and oxygen at this temperature are to be found in the upper atmosphere. But there are two reasons why this should not worry us. The first is that carbon dioxide is 50% heavier than air and is therefore concentrated closer to the ground, its concentration decreasing by ten parts per million for every thousand metre increase in altitude. This means that the only carbon dioxide to be found above 40,000 metres is that which is produced there by cosmic radiation striking the nuclei of nitrogen atoms, knocking out single protons and turning what were nitrogen atoms into carbon atoms, which then bond with oxygen atoms to form carbon dioxide. More to the point, the amount of energy stored in objects at -70°C is so small that it could not appreciably warm the earth’s atmosphere no matter how much more carbon dioxide was added to the atmosphere to absorb it.
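Taking the stated gradient at face value, the 40,000-metre figure follows from simple arithmetic. A present-day surface concentration of roughly 420 parts per million is assumed in the sketch below; the text above does not give one:

    # Illustrative arithmetic only: extrapolating the gradient stated in the text.
    surface_ppm = 420      # assumed surface concentration (not given in the text)
    ppm_per_km  = 10       # stated decrease per 1,000 metres of altitude

    altitude_km_at_zero = surface_ppm / ppm_per_km
    print(f"Concentration would fall to zero at ~{altitude_km_at_zero:.0f} km")
    # ≈ 42 km, a little above the 40,000-metre figure quoted above.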

Not, of course, that most people are aware of any of this. All they know is what the ‘experts’ tell them: that human emissions of carbon dioxide cause global warming. Not really knowing anything about climate science themselves, moreover, they have no basis upon which to suspect that these experts might be lying to them. After all, why would they?

The answer to this question, however, is exactly the same as the answer to the question as to why the mediaeval Church told people that there was this place called purgatory in which they would have to spend years, decades or possibly even centuries in a constant state of torment unless they handed over substantial quantities of their hard-earned cash. The only difference between this mediaeval extortion racket and today's climate change scam is that, during the 1980s, the institution which most desperately needed a new source of income was NASA, which, after the termination of the Apollo programme and the disastrous failure of the space shuttle, needed to repurpose itself in order to secure its continued funding. After more than a decade in the wilderness, it finally did this in 1989, when James Hansen, Director of NASA's Goddard Institute for Space Studies, convinced Congress that, unless it continued giving NASA billions of dollars each year to monitor the situation, by the end of the century much of America's Atlantic coastline, including Washington DC, would be under water due to anthropogenic global warming (AGW).

The real question we need to ask, therefore, is not why climate scientists in the 1980s should have lied to us about the dangers of global warming – I think that is fairly obvious – but why, if there is as little foundation to the climate scare as I contend there is, other scientists should have so meekly gone along with it? What happened to 'Nullius in Verba'? For when it comes to fields of science not their own, it would appear that most scientists today do just accept the word of those they regard as experts in those other fields.

One of the main reasons for this, of course, is that science has changed considerably over the last 365 years, not least in the way it has become ever more specialised. When the Royal Society was founded in 1660, most scientists, in fact, were generalists, studying whatever piqued their curiosity in a way that would be more or less impossible in a modern university or research institute. Robert Hooke's initial interest in optics, for instance, enabled him to develop a microscope, which he then used to discover and subsequently study microscopic lifeforms, effectively creating the field of science we now call microbiology. Not only did his lack of any disciplinary straitjacket thus greatly accelerate the growth of science in the 17th century but it also meant that most Royal Society members, like Robert Hooke, felt themselves both able and obliged to critically address the work of other scientists, whatever their field of study, something which most scientists today would find very difficult to do.

This is because specialisation also causes fragmentation, with each specialism developing its own concepts and terminology in a way that can make it more or less impenetrable to scientists in other fields. How many microbiologists today, for instance, would even be able to read a paper on laser optics, let alone offer their views on it? Not only does this lead to a situation in which most scientists today do not even try to communicate their work to scientists in other fields, however, but it also leads to the creation of closed worlds in which those who feel they have proprietary rights over a particular discipline use their established positions within it to influence both funding bodies and publishers to see off interlopers who might take their science in a different direction.

Indeed, this is very similar to the closed groups comprising most governments which I described earlier, where membership depends on submission to a group think which effectively prevents the members from questioning the group's principal beliefs. The main difference is that, in most fields of science, the only negative consequence of this is that it tends to make the field moribund, in that no new ideas are permitted. Because most scientific research is publicly funded, however, particularly in Britain, some fields of science have a political dimension and an influence outside of themselves, which, if they have become closed, can effectively mean that the outside world becomes closed as well.

Again, climate science provides the perfect example of this, not least because, for some reason, the idea that human beings might constitute a significant threat to the planet resonates very strongly with the general public, chiming closely with the widespread belief that we are a major source of pollution from which the earth consequently needs to be saved. As a result, governments did not need a lot of convincing before they started funding research into climate change on a major scale, which not only increased the number of scientists with a vested interest in ensuring that the climate change bandwagon continued, but made it more or less impossible for any scientist who questioned the AGW theory to obtain any form of government funding.

Some organisations such as the BBC even banned 'climate change deniers' from appearing on their programmes, which meant that audiences only ever heard one side of the debate, thus further reinforcing most people's belief that the threat from climate change was both real and serious. The strongest boost to this belief, however, came when governments started adopting 'Net Zero' policies. For the mere fact that they were prepared to go to so much trouble and expense to replace perfectly functional coal-fired power stations with intrinsically unreliable wind and solar farms clearly implied that the governments in question thought such steps were necessary. After all, governments would have been insane to give up low cost fossil fuels in favour of highly subsidised renewables if the scientific basis of the AGW theory had not been incontestable. What most people failed to realise, however, is that government ministers not only have no more knowledge of climate science than we do, relying on scientific advisers who are paid to tell them exactly what they want to hear, but that, just like the rest of us, their main reason for believing in climate change is that everyone else does.

What we have created, in fact, is a classic vicious circle in which reciprocal feedback loops constantly reinforce each other. This pattern of reciprocal reinforcement, however, is not just to be found in such classic examples as the mediaeval belief in purgatory or our modern belief in climate change, but in almost every instance in which erroneous beliefs and their herd-like adoption infect entire populations. Most significantly, it is to be found in the mass stupidity which has destroyed the West's economy.

The first loop in this particular vicious circle starts, of course, with the natural human desire to access essential public services, free at the point of use, whenever we need them and to be provided with an adequate welfare safety net should we run into difficulties. The problem is that, while voters consistently say that this is what they want, they don’t like paying the taxes required to support such a system. This therefore places governments in something of a dilemma. For the optimum solution, of course, would be to try to find a balance between these two requirements by providing the best possible public services one can afford while keeping taxes as low as possible. The trouble with this kind of compromise, however, is that it usually pleases no one. Prioritising their desire for re-election above all other considerations, therefore, governments throughout the West have increasingly resorted to hiring obliging economic advisors to show them how to effectively square the circle of improving and expanding public services without raising taxes.

This, of course, is impossible: one simply cannot have high quality public services, free at the point of use, without raising sufficient taxes to pay for them. Given that there is a limit to how high one can raise taxes before it becomes counterproductive, this also means, therefore, that there is a limit to how much one can expand and improve public services. No paid economic advisor is ever going to tell their finance minister this, however, for the simple reason that, if they did, they wouldn't remain a paid economic advisor for very long. So, instead, they advise their ministers to instruct the governors of their only nominally independent central banks to lower interest rates in order to make borrowing cheaper for both the government and consumers, the rationale behind this being that this will stimulate the economy, which, in itself, has two effects. The first is that it increases GDP, which means that increased government borrowing will not result in an increased debt to GDP ratio. The second is that a higher GDP not only results in higher tax revenues without the government having to raise taxation rates, but allows both governments and consumers to then pay off their debts out of the increased wealth created.
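It is worth pausing on the arithmetic behind that first claim, because it only holds if nominal GDP grows fast enough to offset the new borrowing. A minimal sketch, using the figures quoted earlier in this essay (a debt ratio of about 97% of GDP, and the 6.7%-of-GDP borrowing gap implied above), shows how demanding that condition actually is:

    # Illustrative only: how fast nominal GDP must grow for new borrowing
    # not to raise the debt-to-GDP ratio.
    debt_ratio = 0.97      # existing debt as a share of GDP (UK figure quoted earlier)
    borrowing  = 0.067     # new borrowing as a share of GDP (spending minus the revenue ceiling)

    # Next year's ratio is (debt + borrowing) / (1 + growth); solve for the
    # growth rate at which the ratio is left unchanged.
    required_growth = (debt_ratio + borrowing) / debt_ratio - 1
    print(f"Growth needed just to hold the ratio steady: {required_growth:.1%}")

    for growth in (0.02, 0.04, required_growth):
        new_ratio = (debt_ratio + borrowing) / (1 + growth)
        print(f"nominal growth {growth:.1%} -> debt ratio {new_ratio:.1%}")

With 2% or even 4% nominal growth the ratio still rises; only growth of around 7% a year would hold it steady.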

The only problem with this is that, like the AGW theory, Modern Monetary Theory (MMT), as it is called, is just nonsense, only gaining what little credibility it has by drawing a false analogy between government borrowing and business borrowing, where businesses borrow money to invest in new plant and equipment in order to increase productive capacity and hence revenues, out of which they will then be able to repay their loans. Neither governments nor consumers, however, borrow money to invest in productive capacity from which future revenues flow; they both largely borrow money for current consumption. Yes, there is a resultant increase in GDP. But most of this will be accounted for by increased public expenditure and retail sales. What's more, most western countries are currently running a trade deficit, especially in manufactured goods, which means that most of the increased amount of money spent on retail goods is spent on imports and ends up going abroad rather than to local businesses. Indeed, apart from increasing government and consumer debt, all MMT does is increase the balance of payments deficit while papering over the cracks in a systemically faulty economy.

So why have so many western governments gone down this path? The answer, however, is simple. For most government ministers don't know any more about economics than they know about climate science; they simply accept what their expert advisors tell them, especially when their expert advisors tell them what they want to hear, which, of course, they always do.

The more important question, therefore, is why we go along with it. The answer to this question, however, is also very simple. For we no longer understand economics the way we did when people learned about it, not from economics textbooks, but from hard-earned experience, which taught us that we can't spend what we don't have and that getting into debt only makes the situation worse. The problem was that we thought we could put an end to this fear-laden economic reality by putting ourselves in the hands of those who told us they had a magical solution to the problem, but whom we now discover are just self-serving charlatans who have no idea what they are doing and, like mediaeval pardoners, don't actually give a damn about us.