Thursday 14 October 2021

Death, Original Sin and the End of Western Civilization (Part III)

 

In Part II of this essay, I explained how the way in which Christianity was originally spread – with evangelists seeding new churches which then remained largely autonomous – gave the early church a very flat structure with no significant hierarchy above the level of the individual diocese. At a time when Christianity was still struggling to take hold, this structure clearly had its benefits, not least in allowing each church to operate independently, such that if a church was suppressed in one area, this did not necessarily impact churches elsewhere. As the church became more established, however, this non-hierarchical structure, in which individual churches were only loosely associated with each other, inevitably began to reveal its inherent flaws. For without any overarching organisation to restrict the ways in which individual churches developed, there was always the possibility that these autonomous institutions would begin to diverge on matters of doctrine and hence fragment the wider church into what could easily have become a multiplicity of competing sects.

With the emergence of Arianism in the early fourth century, this, indeed, was already beginning to happen, forcing the Emperor Constantine to intercede on the church’s behalf in order to resolve the crisis and prevent a serious schism. This he did by calling an ecumenical or ‘worldwide’ council of the church’s bishops, which met in Nicaea in north-western Turkey in the summer of 325.

Even within this ecumenical council, however, the structure remained essentially collegial, with each attending bishop having nominally the same status, their position and precedence within the council being determined solely by the regard in which they were held by their peers rather than the diocese they represented. For at this point, no diocese held precedence over any other, not even Rome. In fact, the Bishop of Rome in 325, Silvester I, didn’t even attend the Council of Nicaea and so had absolutely no say in the formulation of the Nicene Creed, the council’s main output. Whenever the people of Rome started to affectionately refer to their bishop as ‘Il Papa’, therefore, the one fact of which we can be absolutely sure is that in 325 the then incumbent had no greater status in the wider church than any other bishop and no jurisdiction beyond his own diocese. In short, he was not ‘The Pope’.

In fact, the first Bishop of Rome who is recorded as acting in anything like a papal capacity was Innocent I (401 to 417). As I further explained in Part II, however, any incipient papacy that may have been starting to emerge in the early 5th century was very quickly snuffed out by climate change and the rapid disintegration of the Western Roman Empire, as its borders on the Danube and the Rhine were repeatedly breached by multiple peoples from the north and east looking for better agricultural conditions in the south and west. The chaos which ensued – effectively precluding any centralised leadership of the church – was then further compounded in the 6th century when, as a side effect of the worldwide migrations to which global cooling had given rise, bubonic plague arrived in Europe for the first time, killing around 60% of the populations it infected and returning to reinfect them time after time for the next two hundred years, thereby reducing the population of Europe to such an extent that there simply weren’t enough people left with the requisite education or skills to adequately record the period we now consequently refer to as the Dark Ages.

It was during this period, however, that the foundations for what later became the papacy began to be laid. For the depopulation of Europe, which left vast swathes of agricultural land unworked, quite naturally continued to attract further migrants, especially to Italy, which, after two invasions by the Goths and two attempts at reclamation by the Byzantine Empire, was now invaded once again, this time by a loose confederation of northern peoples led by the Lombards. Far from entering the promised land, however, the Lombards and their associated followers very quickly found themselves in a living hell, as their arrival quite predictably rekindled the plague once again, providing it with a whole new population to feed on and thus reducing the peninsula’s conquerors to the same enfeebled and impotent state as their predecessors. Even more significantly, it weakened the Lombards, themselves, to such an extent that they were no longer able to maintain control over their coalition, which consequently broke up into its constituent parts, none of which, on their own, were strong enough to capture Rome, which was therefore able to maintain its independence for another 150 years even though it was surrounded by a sea of hostile invaders.

The result was that Italy was now effectively broken up into a patchwork of duchies with no overall secular government – or none, at least, that was particularly effective – in the middle of which sat this one small remnant of the past: an anachronistic relic of the old Roman Empire which was still, in fact, ruled by the old Roman senate, and was equally lacking, therefore, in effective secular leadership. At a time of war, pestilence and famine, it was hardly surprising, therefore, that some measure of leadership should have fallen upon its spiritual figurehead, or that during this whole protracted period, Rome should have developed a very distinctive constitutional structure: one which might best be described as an oligarchic theocracy, in which the city’s senatorial class effectively shared power with its spiritual leader, who, in any case, was nearly always one of their own. To suppose, however, that this spiritual leader could have also been the head of a pan-European church, operating beyond the confines of this tiny besieged enclave, is surely beyond the bounds of credibility.

It was only in the 8th century, in fact, when the climate began to warm up again and the plague abated, allowing normal communications with the outside world to be restored, that this even became a possibility. And even then, it required a rift between the city’s senatorial class and its spiritual leader, resulting in a breakup of the oligarchic theocracy, followed by an alliance between its spiritual head and a far stronger secular leader from outside, for a pure theocracy in the form of the papacy to emerge. Neither of these two latter conditions, however, happened of necessity or was in any sense predictable. In fact, this whole seismic shift in the structure and nature of the church could easily be regarded as yet another instance of historical happenstance – a bit like Yohanan Hyrcanus dying before he could conquer Galilee – if it weren’t for the fact that the seminal event which precipitated this transformation was both clearly intentional and well planned. For as I further explained in Part II, it all happened during a very carefully choreographed piece of pantomime which took place on Christmas Day in the year 800, when, in an act of mutual recognition, the Frankish king, Charlemagne, allowed himself to be crowned Holy Roman Emperor by the then Bishop of Rome, Leo III, thus not only acknowledging Leo’s authority to bestow the crown upon him, but legitimising both of the institutions – the Holy Roman Empire and the papacy – which this act of reciprocity simultaneously brought into being.

Even more significantly, it actually bound them together in a symbiotic relationship which was to last for more than seven hundred years, the ritual of coronation, itself, becoming the repeated symbol of the interdependence between the two institutions: each successive Emperor needing to receive the crown from the Pope’s hand in order to obtain legitimacy; each successive Pope needing the political and sometimes even military support of the Emperor in order to maintain papal authority. And therein lay the basis for all the problems that were to beset, not just the church, but Christianity, itself, throughout the next twelve hundred years. For if future Bishops of Rome really were going to function as the heads of a universal church – that is to say, as Popes – they not only had to assert their authority over all the other churches nominally under their rule – which was difficult enough – they also had to attain a certain level of diplomatic influence, not just over the Emperor, who, once crowned, was liable to become a little less subservient to the church’s wishes than before, but over all the secular rulers of Europe who might otherwise have curtailed or even thwarted the Church of Rome’s authority over the churches in their respective realms, all of which cost money.

Not, of course, that the Church of Rome was ever short of a bob or two. For ever since the Emperor Constantine had gifted the diocese the Lateran Palace, many wealthy Romans had made bequests to the church in their wills, allowing it to accumulate a large portfolio of property from which it could garner rents. This in turn enabled it to house, feed and clothe a sizeable establishment, which not only comprised dozens of ordained priests, but domestic staff, craftsmen, artists and scholars such as Jerome of Stridon, who, as described in Part II, was initially hired to translate Athanasius’ recommended list of canonical texts from Greek into Latin, but who ended up staying on for more than eighteen years, doing more or less whatever he liked.

Even during the Dark Ages, moreover, when contact with the outside world was at a minimum, the diocese had also supported a number of senior churchmen who acted as diplomatic representatives – at that time referred to as apocrisiarii – whose principal function was to maintain contact with the Byzantine governor of Ravenna, on whom Rome relied for military support, but who also undertook diplomatic missions to some of the city’s less friendly neighbours, thereby helping to maintain its independence for so long.

We also know that in the second half of the 8th century, two different Bishops of Rome made separate appeals for help to the two Frankish kings, Pippin the Short and Charlemagne, which would therefore have involved sending apocrisiarii to either France or Germany. What all this tells us, therefore, is that even during the darkest of times, Rome was not entirely cut off. Even more importantly, however, it also tells us that, due to the isolated circumstances in which it had found itself, the Church of Rome had already started to put in place the kind of organisation a full papacy would later require: one which almost certainly included a substantial secretariat for maintaining correspondence, very probably involved a network of trusted couriers or agents to ensure that this correspondence was securely delivered, and retained an experienced cadre of apocrisiarii – later known as ‘papal nuncios’ – who would have also needed quite significant entourages, both to provide them with protection and to ensure that papal authority was projected in a suitable manner, all of which would have placed ever-growing demands on the church’s coffers.

Of course, Charlemagne’s gift of land would have helped, in that, from the end of the 8th century onwards, the papacy had the Papal States on which it could levy taxes. While this almost certainly funded its early expansion, however, not only would the tax revenues from the Papal States have been relatively small, even compared to many of the secular states which surrounded them, but they were also perpetually vulnerable to the acquisitiveness of these very same neighbours. For while the church may have had a political and military patron in the form of the Holy Roman Emperor – and, at various times, in other secular princes throughout Europe – it could never really rely on any of them. What it needed, therefore, was a source of income that was entirely within its own purview and thus immune to the vagaries of secular politics.

Of course, one such source of revenue already existed in the form of the tithe: one tenth of the income of every household in each parish, which went to maintain the parish church and feed and house the parish priest, thereby allowing him to perform his duties without needing to earn an independent living. While some part of this income would have also been passed up to the local diocese, to meet the costs of the bishop and his staff, there was no way, however, that it could have been stretched to fund a papal hierarchy above the diocesan level, especially as much of it was paid in kind rather than coin and therefore had to be consumed locally. Worse still, from the point of view of extracting revenue from its flock, the very existence of the tithe more or less precluded the church from charging its parishioners for any of the regular services – such as baptisms, weddings and funerals – which it was already the duty of priests to perform.

This meant that any service to be charged for had to be new. It also had to be something which more or less everyone wanted. And this, therefore, pointed in only one direction. For in this worst of all possible moral universes which Christianity had created, in which human beings are deemed to be sinners simply by virtue of being human, and where sinners, without forgiveness, are condemned to eternal damnation, there was one very obvious thing that everyone so desperately feared being denied that, with a little persuasion, there was every chance that they could be induced to pay for it.

In fact, the church had already started preparing the ground for this as early as the 4th century, when theologians such as Basil of Caesarea began encouraging priests to hear their parishioners’ confessions and grant absolution on the fulfilment of a suitable penance: a practice which was commended partly on the basis that it kept parishioners forever conscious of their sinfulness, but also because it gave priests the power to bestow that which had once been the sole preserve of Christ, himself, but which, through the practice of confession, penance and absolution, the church now appropriated to itself in a way that almost went unnoticed.

This newly acquired power was then further enhanced when, realising that the forgiveness of sins is most vital at or around the time of death, the church began to develop the dual procedures of Extreme Unction and the Viaticum, or Last Rites, both of which served the function of allowing the dying to meet their maker with their sins expunged, thereby ensuring that they would be allowed to enter heaven.

The problem with all of these practices, however – from confession to the Viaticum – was that while they gave the church enormous power over its congregations, they could not, in themselves, be used to generate an income. After all, demanding money from a dying man or his family before a priest agreed to administer the Last Rites would have seemed a little too much like extortion, even for the medieval church. What it needed, therefore, was something for which its parishioners would willingly make provision prior to their deaths, or for which their families would willingly pay in the aftermath of a loved one’s demise: some sort of mass being the obvious choice. After all, prayers for the dead have been said in almost every culture throughout history – even in those with no specific concept of an afterlife – while in Zoroastrianism, for instance, prayers offered up to Vohu Manah, to persuade him that the deceased deserved to be welcomed into the House of Song rather than be cast down into the House of Lies, were seen as an absolutely essential part of the three days of mourning between the deceased’s death and his arrival at the Chinvat Bridge, where judgement upon him would take place.

So what if this gap between death and impending judgement could be extended a little, to allow the family enough time to properly mourn their departed loved ones by having a mass or, indeed, a series of masses performed for them? What if, too, it were generally believed that the purpose of this extended period was to purge the deceased of whatever sins may still have clung to his soul despite whatever absolution he may have been given by the church in life? And what if this purgation were not quite as simple as the ritual cleansing undergone by Jesus in the River Jordan? What if the cleansing medium, for instance, were not water but fire? And what if the deceased’s experience of this purgation were far from pleasant? What if it were absolute agony, in fact? And what if the masses performed for those in torment could actually shorten their agony? How much would a man or his family pay for such a service?

When exactly the church created purgatory is hard to say, not least because it was an idea that had been going around for some time. In ‘The City of God’, for instance, written between 413 and 426, Augustine distinguishes between the purgatorial fire which purifies a sinner and the everlasting fires of hell designed to punish him: a distinction which implies that even as early as the 5th century, the concept of purgatory had a certain level of currency. Throughout the years of plague, famine and war, during which numerous theologians pondered the relationship between suffering and atonement, the idea was then inevitably developed further. When purgatory actually became an official part of Christianity’s cosmology, however, is a little bit more difficult to pin down, not least, one suspects, because were its introduction to be given a precise date, then the church would either have to confess that, before this date, it didn’t know of purgatory’s existence, or admit that purgatory didn’t actually exist and that the church had invented it.

In order to determine when this might have occurred, therefore, we need to be a bit more imaginative. Instead of trying to locate the date of the event, itself, we need to look for its effects in other changes that occurred around the same time, one of the most obvious of which would have been the first official canonisation of a saint by a Pope. I state it in this way because the veneration of particularly holy members of local churches – especially martyrs – is something that had gone on almost from the beginning of the church’s history, and many of these local ‘saints’ were later canonised. The significance of this later ‘confirmation’, however, is often missed. For one of its principal purposes – and indeed the principal purpose of all canonisations – was not just to honour the person concerned, but to allow them to bypass purgatory and go straight to heaven. Knowing when the first official saint was canonised, therefore, gives us a fairly strong indication as to when purgatory started to have such a looming and dreadful presence in Christian consciousness that such a mechanism for avoiding it was deemed both necessary and appropriate in certain instances.

So when did the first canonisation take place? I could, of course, leave you to find out for yourselves, but to save you the trouble, it was on 31st January 993, when Pope John XV canonised Ulrich of Augsburg, thereby strongly suggesting that purgatory first became accepted as part of Christianity’s official cosmology sometime earlier in that same century or late in the century before.

Nor is it a coincidence that it was around this same time that a rift began to open up between the Pope and the bishops of the eastern church, which eventually led to the great schism of 1054, when the eastern Orthodox Church and the western Catholic Church finally went their separate ways. For while this rift is often presented as one based purely on principled, theological differences, it is no accident that one of these differences concerned the nature of purgatory, which the eastern bishops regarded as far too punitive, or that an even bigger bone of contention was the fact that, with the introduction of canonisation, the Pope had made the creation of saints a papal prerogative. For in reality, of course, the dispute was essentially about papal supremacy: a point made all the more clear when the papal legate sent to Constantinople in 1054 to resolve these matters demanded that the Patriarch of Constantinople recognise the Pope as the universal head of the Christian Church, thereby presenting the eastern bishops with a very stark and simple question. Did they want to retain their still largely collegial structure under the patronage of the Byzantine Emperor, or did they want to submit to an increasingly authoritarian Pope in Rome? It was a question to which, of course, there was only ever going to be one answer.

All this really did, however, was further consolidate the Pope’s position in the west, where the deal struck between Leo III and Charlemagne already meant that the western bishops were subordinate to the supreme pontiff’s authority, and where a succession of Popes now proceeded to extend their power even further by means of a series of papal edicts, known as Papal Bulls. The first of these – indeed, the first ever Papal Bull – was published in 1059 by Pope Nicholas II and decreed that only cardinal bishops, who were appointed exclusively by the Pope, could vote in papal elections, thereby ensuring that the papacy remained the sole preserve of a self-perpetuating elite who always chose one of their number to fill the office, while the recipient then appointed others of like mind to join this inner circle.

Given the power of the Pope to influence European politics, and the power of the cardinal bishops to elect the Pope – with one of their number being themselves elected – this then led numerous rich and powerful men, including most Italian dukes as well as the kings of both France and Spain, to have at least one of their sons ordained and appointed to the college of cardinals in short order, while the papacy, of course, was further enriched by the generous donations which these rich and powerful men made to ensure that this happened.

Meanwhile, the existence of purgatory had now given rise to another new practice that was significantly augmenting the church’s income. This was the sale of ‘indulgences’: a form of absolution, shortening the time a penitent had to spend in purgatory, based on charitable donations to the church rather than the performance of a penance. Ironically, like most corrupt practices, this had started innocently enough when, instead of the repetitive recitation of prayers – previously one of the most common forms of penance handed out by priests to their penitent flock, but one which didn’t actually benefit anyone – practically-minded priests started specifying charitable acts as the form of penance penitents had to undertake. Among the wealthy, however, these charitable acts very quickly turned into acts of charitable giving, initially to whichever charity the penitent chose, but then increasingly to the church itself, which, in turn, began to regard these charitable donations as the perfect way of raising money for any capital project it wished to undertake, particularly the building of new cathedrals.

In fact, so efficient was this source of income that in order to further facilitate it, the church started commissioning professional third parties called quaestores – better known as ‘pardoners’ – to go round the country offering pardons for such donations. Not only did these pardoners deduct a very handsome commission for this service, however, but much of the money reaching the church gradually began to be used for current rather than capital expenditure, thereby significantly raising the clergy’s standard of living.

To be fair, the church did try to curb some of the worst excesses arising from this practice. At the Fourth Lateran Council in 1215, it even spelt out how much indulgence could be given for any confessed sin and what the proceeds could be used for. By this point, however, not only had a good many churches become highly dependent on this form of income, but the clergy had become so accustomed to the standard of living it gave them that abuses were more or less impossible to eradicate.

Nor was this the only money-making machine to which the institution of purgatory gave rise. For there was also a huge amount of money to be made out of the saints whose entry into heaven the church had expedited as a result of canonisation and who now constituted a significant array of blessed souls in close proximity to God to whom ordinary people could appeal for aid: appeals which it was generally believed were more likely to be heard if the appellants journeyed to a location with which the saint had been associated in life, especially if that site also held some relic of the sainted individual, before which the appellant could therefore pray, after, of course, making a suitable donation to the keepers of the site in question.

By the 14th century, as a result of all of these activities, the Roman Catholic Church had almost certainly become the biggest money-making enterprise of its time, its conspicuous wealth so obvious for all to see that writers such as Geoffrey Chaucer took great delight in satirising its various stereotypical representatives. It had also become one of the most corrupt organisations of all time, second only perhaps to the Soviet Union, which was more or less based on the same model: one which, regardless of the particular historical context in which it is realised, always has the same three main components:

1.      A monopoly supply of a commodity which everyone wants or needs – in this case salvation – from which the organisation can extract a level of income disproportionate to its overall contribution to the economy, thereby making itself disproportionately richer than the average economic participant.

2.      A ruling elite which not only manipulates the rules of the organisation to keep itself in power but pre-empts any internal dissent by ensuring that the benefits of its corruption are enjoyed at every level within the hierarchy, thus not only making it an extremely attractive organisation for people to join, but rendering all of its members equally guilty, thereby preventing any of them from pointing the finger.

3.      A professed ideology or set of principles and beliefs which, despite the obvious disparity in their relative standards of living, leads the majority of the public to believe that the organisation is both well-intentioned and working for the common good, and that anyone who holds any contrary belief is not only a heretic, traitor and enemy of that common good but must consequently be punished in order to both protect the public and deter others from such heresy.

Most of all, however, such an organisation has to get all of its own members to believe, not just in its ideology, but in its integrity. For most people are not hypocrites and the majority of the medieval clergy were no different. All of them, therefore, would have believed absolutely in Christianity’s core teachings; most of them would have accepted the hierarchy of the church as simply part of the natural order; and only those who chose to make vows of poverty, such as the Franciscans, would have found anything amiss in the disparities of wealth between the church and the common people. With Popes keeping mistresses and fathering children, however, cardinals living like the princes they so often were, and monasteries largely dominating the agricultural economy, much to the disadvantage of the ordinary peasantry who had far fewer resources and still had to pay the tithe, it was only a matter of time before demands for reform began to be heard, with the sale of indulgences being high on the list of objectionable practices which would-be reformers wanted to see abolished.

In fact, it was this very issue which, on 31st October 1517, prompted Martin Luther to send a copy of his ‘Ninety-five Theses’ to Albrecht von Brandenburg, Archbishop of Mainz, after the latter had appointed Johann Tetzel – the man who is reputed to have coined the slogan: ‘As soon as the coin in the coffer rings, the soul from purgatory springs’ – to sell indulgences on behalf of the diocese in order to raise the money it was required to pay as its contribution to the rebuilding of St. Peter's Basilica in Rome: something to which, in itself, Luther strongly objected. ‘Why’, he consequently asked in Thesis 86, ‘does the Pope, whose wealth today is greater than that of Crassus, build the basilica of St. Peter with the money of poor believers rather than with his own money?’ What made Luther even more angry, however, was the fact that, in order to incentivise the Archbishop, the Pope had given him a special dispensation which allowed him to keep half of all the money raised for himself, to pay off his debts.

More fundamentally, however, what Luther really objected to was the church’s implicit downplaying of faith. For by selling forgiveness, which Luther believed it was for God alone to grant, and making it dependent on acts of penance and charitable giving, he believed that the church was undermining that simple act of surrender – of putting oneself entirely in God’s hands – which, for Luther, was not only at the heart of all Christian practice and belief but, as I argued in Part I of this essay, was what made Christianity so personally sustaining.

What he advocated, therefore, was a much simpler, more personal form of Christianity, in which the individual’s relationship to God was not dependent upon and did not require the intermediation of the church. And it was to this end that he set about producing his German translation of the bible, the first instalment of which – the New Testament – was published in September 1522 and which, far more than his Ninety-Five Theses, is what really made him anathema to the church. For it was precisely through its intermediation in the relationship between God and man, of course, that the church gained its power and its wealth. The last thing it wanted, therefore, was ordinary men and women reading the bible for themselves, especially given the fact that, as a result of the order of books chosen by Jerome for the Vulgate – an order which every succeeding translator has simply followed – one of the first stories these lay readers would have come across when they opened their bibles was that of Adam and Eve being evicted from paradise for the sin of being human: a sin which the average person would have seen all around them every day, not least within the church.

More to the point, this new protestant breed of Christian, having chosen to take responsibility for their own souls, took this responsibility very seriously. The passivity inherent in attending masses performed in a language they didn’t understand, while simply being instructed in how to behave by their parish priest, was not for them. They preferred the far more active approach of reading their bibles, discussing the sermons their preachers now gave in the vernacular, and making up their own minds about what God commanded of them. The result was a far more rigorous form of Christianity than had probably ever existed before, in which the faithful no longer needed any external discipline to abstain from sin, but imposed this discipline on themselves, along with a level of austerity which put the church to shame, wearing only the most sombre clothes, abjuring all forms of entertainment, and living as frugally as possible.

The problem, of course, was that those who made such choices did not just make them for themselves. For if one believes that entering a theatre is sinful, one must believe that it is sinful, not just for oneself, but for anyone to do so, thereby pointing an accusatory finger at anyone who does not share this view. This then either puts social pressure on others to conform or incites in them a vehement antagonism. Either way, the effect is polarising, with the result that as Puritan communities grew larger, they tended to divide societies as a whole, thereby forcing governments to eventually take sides, either by supporting Puritanism or attempting to suppress it.

By the middle of the 16th century, the result was that Puritanism was no longer just a religious movement but had become a political one as well, especially in those northern European countries such as England, the Netherlands and certain German states, where the governments were all too well aware that they had little to no influence over the papacy, and where breaking away from Rome would not only free their populations from some of the church’s more oppressive demands, but could potentially bring the governments, themselves, a greater degree of independence.

Not, of course, that this was ever going to be that easy, not least because, when Martin Luther published his Ninety-Five Theses in 1517, both the Netherlands and more or less the whole of Germany were still part of the Holy Roman Empire, which saw Protestantism as a threat not just to the church but to its own existence. As popular Protestant revolutions began to gain momentum throughout many parts of Germany, the wholly predictable result was that its various constituent states began to divide into two camps, with the Protestant states forming a short-lived confederation called the Schmalkaldic League, which, despite being defeated at what had seemed like the decisive battle of Mühlberg in 1547, was actually rather successful. For realising that the Holy Roman Empire would never be at peace unless he came to some kind of settlement with its Protestant constituents, the Emperor at the time, Charles V, offered the Schmalkaldic League a compromise which its members eventually accepted at a meeting in Augsburg in 1555, where it was agreed that all German states would be allowed to choose either Lutheranism or Roman Catholicism as their official confession as long as they remained under imperial rule.

No such deal, however, was offered to the Dutch, partly, one suspects, because, the Dutch being unable to band together with near neighbours for mutual defence, the Empire almost certainly thought that it could crush any Dutch resistance under the sheer weight of its forces. What it hadn’t counted on, however, was just how much the Dutch people were willing to sacrifice in order to rid themselves of Catholic rule. With neither side willing even to talk about negotiating, the country was thus plunged into a state of war which lasted more than eighty years and effectively divided it in two, with Imperial and Spanish troops occupying the Dutch-speaking but largely Catholic region of Flanders, while a fragile Protestant coalition fought year after year to hang on to their independent provinces in the north.

With so much blood being spilt on the continent, it was in England, however, that the polarising effect of Puritanism was arguably at its most divisive, not least because the divisions it created were entirely internal – at least for the first few decades – and because the Tudor dynasty could not decide which side of the divide it was on, vacillating between Protestantism and Catholicism with each change of monarch, thereby keeping the flame of division alight.

In part, this had its historical roots in the fact that when Henry VIII initially split from Rome in 1533, it had nothing to do with any religious conviction on his part and everything to do with the fact that, in order for England to avoid another dynastic civil war, Henry believed that he needed a male heir. By 1533, however, his wife of some twenty-four years, Catherine of Aragon, was forty-eight years old and almost certainly beyond having another child. To have any chance of producing a son, therefore, Henry had to marry again. And to accommodate this he needed the Pope to annul his existing marriage. The problem was that Catherine just so happened to be the aunt of Charles V, who was not only Holy Roman Emperor but the King of Spain as well, making him the most powerful man in Europe and rendering it quite impossible for the Pope to ever agree to Henry’s request. The only solution, therefore, was for Henry to create his own church – the Church of England – of which he would be the head, allowing him to annul his own marriage.

Not only was Henry’s Protestantism entirely pragmatic, therefore, but he was also far from being a Puritan. He loved life too much. He also quite liked the pomp and ceremony of the old church. While he was willing to allow certain reforms – especially those that made him money, such as the dissolution of the monasteries – to most outside observers, therefore, the new church remained very much like the old church, which satisfied neither side and succeeded only in creating a new kind of religious and political tension. This was then amplified still further by his two succeeding heirs, who swung the country from one side of the divide to the other, his son, Edward VI, embracing a more principled form of Protestantism, while his daughter Mary, the next in line, actually started her reign by burning Protestant heretics at the stake.

It was thus left to Elizabeth to attempt to bring her much divided kingdom back together again, which she initially sought to do by allowing her subjects as much religious freedom as her equally divided Privy Council would permit. This policy, however, was soon undermined when, in February 1570, Pope Pius V published a Papal Bull not only declaring Elizabeth illegitimate, but excommunicating her as well, thereby making it the duty of all Catholics to bring her rule to an end. From that point on, her reign was blighted by a series of plots, which not only elicited an ever more repressive response from her secret security service, but created an even greater anti-Catholic sentiment among the majority Protestant population, especially after the attempted Spanish invasion of 1588.

With the war still raging in the Netherlands and the streets of London awash with pamphlets describing the atrocities committed by Spanish and Imperial troops, the result was that, by the end of the century, the animosity, acrimony and outright hatred that existed between Catholics and Protestants was reaching a point of criticality, not just in England but all across northern Europe, where a single event was about to set light to a pan-European war. 

Usually referred to as the ‘Defenestration of Prague’, it came about in May 1618, when the Hapsburg Archduke Ferdinand, presumptive heir to the then Holy Roman Emperor, Matthias, was due to be crowned King of Bohemia. Based on his record within the imperial administration, however, many of the Protestant Bohemian nobility feared that the Jesuit-trained Ferdinand would not uphold the religious freedoms granted to them in the Peace of Augsburg and therefore invited an imperial delegation to visit them in Prague in order to give them certain assurances. Not only did these assurances fall some way short of adequate, however, but the imperial delegates were so arrogantly dismissive of the Bohemian nobles’ concerns that three of them were literally thrown out of a window of the castle in which the meeting was taking place. Worse still, the Bohemian nobles then decided that in response to such an insult they would offer the crown to the Protestant Elector of the Palatinate, Frederick V.

The result was that, almost overnight, Ferdinand had not only lost a kingdom but had suffered a serious setback to his imperial aspirations. For both the King of Bohemia and the Elector of the Palatinate were among the seven electors of the Holy Roman Emperor, and could therefore have swung the next election against him. Indeed, there was a distinct possibility that the next Holy Roman Emperor could even have been a Protestant, thereby finally ending the symbiotic relationship between the Empire and the papacy created by Leo and Charlemagne some eight hundred years earlier. That this didn’t happen was solely due to the fact that, before Frederick had time to travel to Prague to be crowned king, the then Emperor, Matthias, died, giving Ferdinand the opportunity to organise a hurried election in which all the remaining electors were present… except, of course, Frederick. Thus, as fate would have it, Ferdinand was duly elected Emperor and, with Spanish help, fairly quickly took both Bohemia and the Palatinate by force.

With other Protestant states naturally fearing that they would be next and the newly crowned Emperor eager to press his advantage, the conflict then quickly spread. Due to Spanish involvement, the Dutch, who had already been fighting Spain for more than fifty years, naturally sided with their German co-religionists, as did the Protestant King of Sweden, Gustavus Adolphus, whose territorial possessions in the Baltic were also potentially under threat. This brought in Italian troops from both the independent duchies of northern Italy and the Spanish-ruled Kingdom of Naples, which eventually led to France joining in as well. For noting that Spanish and Imperial troops were clearly becoming overextended, the French naturally saw this as an opportunity to end the Spanish and Imperial domination of Europe. Despite being itself a Catholic country, France therefore attacked Spanish troops in the Netherlands, forming a temporary alliance with the Dutch.

The result was a pan-European war which lasted thirty years and which, relative to the size of the populations involved, was actually more destructive of life and property than the First World War, with somewhere between 4.5 and 8 million people being killed in the fighting alone. What made this war, of all wars, even more devastating, however, was the fact that it coincided with the onset of yet another period of global cooling, generally known as the Little Ice Age, which lasted throughout the 17th century and again saw global temperatures fall by around 2.0°C, reducing crop yields, inducing famine and causing food price inflation.

And if all this wasn’t bad enough, the plague returned once again, sweeping across Europe in two waves starting in 1629 and 1656 respectively, the latter wave reaching London in 1665, where it is thought to have killed around a quarter of the city’s population. As a result of war, famine and disease, it is estimated that Europe as a whole lost around 30% of its population, with some German states losing up to 60%.

To Puritans throughout Europe, it all seemed like the ultimate vindication of their convictions. For how else were these afflictions to be interpreted except as God’s punishment for our sinfulness, or, more specifically, for the sinfulness of two groups which, in Puritan minds, were closely related: the Roman Catholic Church and Europe’s hereditary aristocracy, both of which continued to live in a state of relative luxury, enjoying the finest wines and foods, while the continent’s poor – more and more of whose increasingly marginal farms failed every year – were being squeezed into its squalid and overcrowded cities looking for work, only to find poverty, disease and death.

The result was a wholly new form of conflict to add even further torment to an already war-ravaged continent: a form of civil war not based on dynastic rivalries – as most civil wars in Europe had been prior to this point – but on social discord, the most notable examples being the Fronde in France and the English Civil War, which, having been fought between an ancient ruling aristocracy and an emerging middle class, can seem primarily political in nature. And, to the extent that one of its antecedents or preconditions was clearly England’s increased trade with the New World, which not only brought increased wealth to that rising middle class, but led to the expansion of manufacturing in and around the urban centres to which the rural poor were now fleeing in their thousands, especially London, it clearly had a political dimension.

To view the Civil War purely in these terms, however, is to make a mistake very similar to that made by Charles I, himself, when he not only tried to foist on his subjects a Book of Common Prayer which many regarded as ‘papist’, but actually contemplated raising a Catholic army in Ireland to put down the resulting Presbyterian rebellion in Scotland. For what he failed to understand was not just the mood of his people, as is often suggested, but the fact that, due to all the terrible afflictions which Europe was suffering, Christianity, itself, had undergone yet another transformation, combining the Puritanism of the Reformation with the Apocalyptic Christianity of the Dark Ages to produce what can only be described as Apocalyptic Puritanism, whose opposition to Catholicism could no longer be characterised as a mere disagreement over the proper form of worship – if it ever could – or even a moral dispute about how one should live one’s life, but something far more fundamental and, indeed, darker. For while this new form of Puritanism still primarily saw what was happening in the world as God’s punishment for our sins – thereby drawing on the Judaic part of Christianity’s hybrid cosmology – in the unprecedented levels of death and destruction Puritans saw all around them, they now also increasingly saw the hand of the devil: that inverted Bounteous Immortal or fallen Archangel who, in his Zoroastrian incarnation, had been created by Ahura Mazda to give human beings the freedom of moral choice, but who, as depicted in Milton’s ‘Paradise Lost’, was now seen as actively working to undermine God’s cosmic order, not least through the malicious attacks on the faithful mounted by his earthly minions: witches.

The result was that during the first half of the 17th century, tens of thousands of men and women – though mostly women – were executed for witchcraft all across northern Europe. In England, the persecution reached its height in the 1640s, during the Civil War itself, when the putative links between the Royalist cause and Catholicism, and Catholicism and sorcery, led one Essex Puritan, Matthew Hopkins, to declare himself Witchfinder General and lead a band of followers from village to village throughout eastern England, trying and hanging witches.

Nor were such manifestations of this new Apocalyptic Puritanism merely the hysterical reactions of an ignorant and fearful peasantry. For while many of those baying for blood and making accusations against their neighbours may have been incited to do so by visions of horror intentionally designed to play upon their fears, those leading these witch hunts were almost invariably well-educated and otherwise completely rational. For as alien as this whole mind-set is to us today, this is the way in which intelligent and educated people in the first half of the 17th century actually thought. They truly believed that God was punishing us for our sinfulness and that the devil was walking among us, driving us to ever greater depths of depravity while sowing the seed of further chaos, making it entirely rational, therefore, for them to follow the evidence of this chaos wherever it took them, track down the culprits, and put them to death whenever the traditional forms of examination revealed them to be guilty.

More to the point, given the sequence of historical changes in the nature of Christianity I have so far outlined and the historical circumstances in which it now found itself, this whole Puritan mind-set can itself be seen as the entirely logical next step. For starting from that worst of all possible moral universes in which, without forgiveness, human beings are condemned to eternal damnation simply by virtue of being human, Christianity was only made endurable if Christians believed that forgiveness could be obtained by faith alone: by putting themselves entirely in God’s hands and trusting to His mercy. By selling forgiveness in order to meet the papacy’s financial needs, however, the Catholic church had implicitly told its followers that God’s mercy could not be trusted and that, if they wanted forgiveness, they had to pay for it, either with coin in this life or with an eternity of suffering in the next. As Martin Luther had advocated, what Protestants needed to do, therefore, was return to that older, purer form of Christianity in which faith was still central. And, initially, this is what Lutherans did. As the apocalyptic events of the late 16th and early 17th centuries steadily worsened, however, and the apparent determination of God to punish the world for its sinfulness gained ever greater credibility, Puritans, like Catholics, found it increasingly difficult to have faith in God’s mercy alone. The only difference – apart from the fact that Puritans literally believed that the Pope was the Antichrist – was that, while Catholics continued to have faith in the institution of the Catholic Church to save them, Puritans were now even more resolutely certain that they had to take responsibility for their own salvation, either by purging their sins through punishment or through the avoidance of sin in the first place.

The good news was that they were not left entirely on their own to determine how this avoidance was to be achieved. For not only had God sacrificed His only son to provide them with an example as to how they should live their lives, He had also provided them with a written prescription in the form of the bible. The bad news was that following this prescription not only entailed a life of disciplined austerity – which, given the nature of sin, was only to be expected – but a radical change in the nature of what Puritans now called ‘faith’, which, in a way wholly consistent with their self-reliant approach to salvation, had been subtly transformed from a faith in God’s forgiveness to a faith in the bible’s prescriptive power to protect them from the temptations with which the devil constantly assailed them and with which, they believed, he was actually testing their faith.

The problem was, of course, that it wasn’t really their faith that was being tested, but their willpower and self-discipline: an almost wilful misunderstanding of what was actually going on, which inevitably plunged them into a state of almost perpetual war with themselves, as they fought against every sinful urge and inclination in order to demonstrate their faith, thereby not only creating a kind of purgatory for themselves here on earth, but setting themselves up for either a shameful fall or a self-deceitful and hence dishonest triumph. For while they may have always taken every care to maintain that it was their faith in the bible, rather than any personal attribute, that protected them – hence the wilful misunderstanding – to those who came through this war against sin unscathed, simple logic dictated that, if it was their faith that had been their shield, then their faith had to be stronger than that of those who succumbed to temptation: a fact which naturally led many a self-satisfied Puritan to wear their ‘faith’ like a badge of honour, while looking down with undisguised disdain on those whose faith had failed them.

Nor was this the only consequence inimical to human happiness and well-being to which the twisted logic of the Puritan mind-set gave rise. For as I outlined in Part I, the concept of original sin had numerous other adverse implications for those with a mind to draw them, especially with respect to property. For if one believes – as most Puritans did – that wealth enables or facilitates sin, allowing us to indulge our most sinful desires, then although ownership of wealth may not, in itself, be sinful, it should clearly be eschewed as leading to temptation. The only problem, of course, is that a certain level of wealth – in the form of food, clothing and shelter – is actually necessary for survival. Given the initial premise – that a surfeit of personal wealth facilitates sin – the obvious solution, therefore, was to hold all wealth in common, so that everyone could be allocated what they need to live, while making it impossible for any one individual to accumulate enough wealth to imperil their soul: an argument so inescapable that in almost every Puritan revolution that took place in the 16th and 17th centuries, there was at least one group which advocated some form of communal living in which all property was shared.

In Germany, for instance, this radical approach had not only been proposed but actually put into practice by various Anabaptist sects even before the widespread social upheavals of the Thirty Years War. Unfortunately, the prevailing belief at that time, that women were property and were also therefore to be shared, tended to undermine the harmony of most of these early communes, the jealousies generated almost invariably resulting in power struggles and additional violence.

In England, where there were far fewer examples of this kind of experimental society actually being implemented, there were nevertheless at least three groups with similar ideas, the most well-known of which were the Levellers, who drew most of their support from the army and who were thus very prominent during the Putney Debates, held during the autumn of 1647, in which some of their ideas were actually put forward in the context of a future English constitution. Being so close to power and having every reason to believe that their proposals would be given proper consideration, they were always very careful, however, to lay out their ideas as moderately as possible, arguing that while all private property should be taken into public ownership, this should only happen with the consent of the owners, their view being that this could be obtained through rational argument.

This was in marked contrast to the Diggers, for instance, who, in several locations throughout southern England, including Weybridge in Surrey, Wellingborough in Northamptonshire and Iver in Buckinghamshire, simply took over privately owned land and started farming it communally, while the Ranters, who were a pantheistic sect believing that God is in every living being, rejected the concept of private property entirely, as they did the institution of marriage and the practice – weather permitting – of wearing clothes.

Not, I should say, that any of these ideas ever had any real chance of being widely accepted. For while those fighting for parliament may have objected to being taxed by an unaccountable king, most parliamentarians, including Sir Thomas Fairfax and Oliver Cromwell, the army’s two most senior generals, were members of the landed gentry and had no desire to overturn the existing social and economic order. In fact, Cromwell even watered down the proposals put forward by the Levellers at the Putney Debates, convincing their leaders, including the charismatic Colonel Rainsborough, to accept his own compromise proposal. It wasn’t so much the possibility that one or more of these various factions might actually bring down the existing social order that frightened people, therefore, as their very existence. And it was this that ultimately brought the Puritan era to an end – if only temporarily. For to those who simply wanted a return to stability after a decade of civil war, groups like the Diggers and the Ranters represented a world in which all reason seemed to have been lost. It was more or less inevitable, therefore, that when Cromwell died, to be replaced by his ineffectual son, Richard, creating a political vacuum in which a chaotic free-for-all of competing ideas now vied for power, those who simply wanted the chaos to end should choose to restore the monarchy, along with the order, stability and normality which it represented.

Not, of course, that a complete return to the past was either possible or desirable. For the world had moved on since the execution of Charles I, and no one actually wanted to go back to the confrontational politics of those bygone days, least of all Charles II, who had no desire to follow in his father’s footsteps. Almost of necessity, therefore – and certainly by popular demand – the restoration was characterised by three main attributes, the first of which was a determined if not always fulsome spirit of reconciliation. The result was that, apart from an initial period in which those who had actually signed the old king’s death warrant were hunted down and brutally executed, the new regime did everything it could to bring both sides together, appointing men of moderate views to positions in government regardless of their previous allegiances, while eschewing all forms of extremism. Famously, it was also both liberal and tolerant, not only reopening the theatres and permitting public entertainments, but encouraging public celebrations on high days and holidays. More than anything else, however, it promoted what can only be described as a new rationalism – the basing of belief on demonstrable facts rather than notional constructs – which was given its clearest and most symbolic expression in November 1660 – just seven months after the king’s return – by the founding of the Royal Society.

Intended as an antidote to the religious hysteria and superstitious zealotry of the previous hundred years, it was this one act, along with Charles’ lifelong championing, not just of science, but of engineering and architecture, which, in just a few decades, transformed Britain from a country which still believed in witches into one which saw the publication of Robert Boyle’s ‘The Sceptical Chymist’ (1661), Robert Hooke’s ‘Micrographia’ (1665), and Sir Isaac Newton’s ‘Philosophiæ Naturalis Principia Mathematica’ (1687). Even more visibly transformative to most of the country’s still illiterate citizens, it also saw the rebuilding of St. Paul’s Cathedral and much of the rest of London after the Great Fire in Sir Christopher Wren’s geometric neoclassical style, none of which would have been conceivable just a few decades earlier.

Nor was this new age of reasoned argument, based on empirical evidence, confined merely to the physical sciences. After the political turmoil of the first half of the century, it also now informed political debate, finding its clearest expression in John Locke’s ‘Two Treatises of Government’, in which Locke not only argued that the only legitimate basis for government was consent but developed the concept of a social contract between government and the governed: concepts which went on to influence the ‘Bill of Rights’ drawn up by parliament in 1689, thereby helping to shape England’s constitutional monarchy.

Even more significantly, this bloodless but nevertheless revolutionary political development, which ended in England the kind of old absolutism that still ruled in France and made government accountable to the people, gave the English a sense of personal freedom which few people had ever experienced before and which was probably unique prior to the founding of the United States, whose constitution John Locke also heavily influenced. Combined with a raft of new technologies to which the ever-developing physical sciences were constantly giving rise, this in turn led to a sense of almost boundless opportunity, whether in manufacturing or in trade, both of which now gave enterprising, freeborn Englishmen the chance to make their fortunes and thus make Britain the world’s manufacturing and trading powerhouse for the next two centuries.

To say that all this happened purely as a result of the backlash against Apocalyptic Puritanism would, of course, be a gross oversimplification. There were almost certainly dozens of other important factors which helped transform Restoration Britain in the second half of the 17th century. What one can say with a fair degree of plausibility, however, is that without this backlash, the English enlightenment, British scientific development and the advent of the industrial revolution would not have proceeded at anything like the pace they did, as can be seen if one compares what happened in Europe, and more especially Britain, with what happened in other parts of the world that were subject to the same climatological disruption as Europe but did not share the same religious heritage.

Take China, for instance, where the marginalisation of agriculture north of the Great Wall led to successive waves of migratory invasions by Manchurians, which eventually led to the collapse of the Ming dynasty. In none of China’s 17th century wars, however, was there ever any religious or ideological component. No one blamed agricultural failures on the sinfulness of the other side, and no one committed atrocities as a result. Indeed, apart from imposing on Ming Chinese men the shaved forehead and long plaited pigtail traditionally worn by the Manchus – a measure which the new Qing dynasty regarded as a mark of submission to its rule – the Manchurians actually allowed themselves to be assimilated into the existing Chinese culture almost entirely, giving up Manchurian traditions that had existed for centuries. Without any cultural change, however, there was no acceleration in scientific and technological development, no industrial revolution and no worldwide Chinese trading empire. And although there were probably numerous other cultural differences which contributed to the divergence in the way in which China and Europe subsequently developed, the pivotal role played by Christianity clearly stands out, not least because, where it played a transformational role in changing the culture, as in Britain, it also underwent another transformation itself.

In England, in particular, the backlash against Puritanism created a softer, gentler form of Christianity, in which its terrifying cosmology of heaven and hell was not jettisoned exactly – for that would have meant repudiating Christianity altogether – but was quietly downplayed in favour of what might be called pastoral guidance. Instead of threatening sinners with damnation and eternal torment, the Church of England now sought to instruct its flock on how to live good and productive lives so as to earn their places in heaven. Instead of preaching absolute abstention from anything even remotely sinful, as the Puritans had done, it sought rather to instil a leaning towards moderation and self-control. While urging its congregations to practice charity and dutifulness in all their dealings with others, it also taught fortitude or ‘stoicism’ in the face of adversity. In fact, there were many 18th century Church of England sermons which could probably have been lifted straight out of the pages of Marcus Aurelius’ ‘Meditations’, a copy of which would have adorned the bookshelves of most churchmen of the period, its author being regarded almost as an honorary Christian.

That’s not to say, of course, that ‘hell-fire’ preachers disappeared altogether. They still dominated many nonconformist churches and flourished in other parts of Europe, especially Scandinavia and northern Germany. Even here, however, the repugnance which many people felt towards the excesses of the first half of the 17th century exerted considerable pressure on some of the more extremist sects either to conform to more conventional social norms or simply to follow in the footsteps of the Pilgrim Fathers and numerous Anabaptist sects, including the Amish, by making their way to America.

The problem for the more moderate Protestant churches of northern Europe, however, including the Church of England, was that this gentler form of Christianity almost inevitably created the conditions for their own decline. For by downplaying Christian cosmology and embracing – or at least accommodating – a secular culture which increasingly sought to explain the universe in purely scientific terms, they not only placed themselves in the position of having to maintain two contradictory sets of beliefs, but set themselves on a path down which their own particular set of beliefs would eventually be eroded altogether. In discovering that storms, for instance, are caused by rising currents of warm air being twisted into vortices by the rotation of the earth, it wasn’t just that we no longer saw this particular natural phenomenon as being under divine control, as it had been for our ancestors, who, as described in Part I, made offerings to their gods to induce them not to blow their houses down; it was rather that, by incrementally coming to see all such natural phenomena as explicable in scientific terms, we eventually removed God from nature altogether, making Him at best the creator of the natural laws by which the universe is governed.

The result was that, while people may still have continued attending church as part of the fabric of their daily lives, for all the sense of community and shared values which these churches may have provided, by accepting empiricism as an antidote to Puritan extremism they effectively hollowed themselves out, undermining their core beliefs in a way that made their decline inevitable. And this was happening at precisely the time when the additional wealth being created by the industrial revolution was giving rise to new social and political problems, not only crowding ever more people into Europe’s still insalubrious cities but giving them just enough education to question the justice of the unequal distribution of the wealth which they themselves were creating.

This led to two distinct but subtly related developments. The first was a revival of Puritanism in the form of new non-conformist churches such as the Methodists, who not only railed against injustice and inequality from the pulpit, blaming them once again on man’s avarice and lack of Christian charity, but fostered a new political movement aimed at improving the lives of Britain’s working class, principally through education and self-improvement. The second was an attempt to remove injustice and inequality altogether by restructuring society, not on the basis of any supposedly natural order determined by God, but on the basis of justice and equality themselves.

Reviving many of the ideas of the Levellers – which, in truth, had never really gone away – political thinkers from Jean-Jacques Rousseau in the mid-18th century to Karl Marx in the mid-19th consequently began not only to develop theories as to how the old order had come about, but also to put forward their own visions of a fairer, more just society, inspiring political movements which, in many ways, became more important than the objectives they were formed to pursue.

I say this because, while these objectives – the realisation of a fairer society – were obviously important, to those who actively participated in the struggle to attain them – in the toil and sacrifice involved, as well as the camaraderie of being part of a collective enterprise directed at this common good – the journey to the Promised Land was often of greater importance than the arrival, imbuing their lives, as it did, with a purpose and meaning they might not otherwise have had, and effectively constituting a fourth strategy for dealing with death and life’s consequent apparent meaninglessness, which, for want of a more readily available term, I shall call ‘collectivist sublimation’.

Not, of course, that there was anything new in this. For like most species of pack animals, which have almost universally evolved a capacity for individual self-sacrifice as a way of ensuring the survival of the pack as a whole, human beings have been sacrificing themselves for their collective throughout history, with most societies actually building self-sacrifice into their culture, deeming it one of their highest virtues. This is only possible, however, because devoting one’s life to a cause greater than oneself – even to the point of sacrificing one’s life for it – can be as rewarding to the individual as it is beneficial to society, conferring a value and significance on the individual’s life which might not have been achievable in any other way.

The Romans recognised this, for instance, when they encouraged the cult of Mithras in their army. For before Mithras was recruited into Zoroaster’s reorganised pantheon as the god of promises and contracts – thereby giving us the Mithraic right-handed handshake to signify the sealing of a deal – he had actually been a calendar god representing the seasons of the year: the dying back of nature in the autumn and its rebirth in the spring. It is why one of his symbols was the deciduous fig tree, which, like the Roman legion, continued to live on despite the annual loss of some of its constituent parts, these parts being replaced each year by regrowth or the mustering of new recruits. The point was that, like the leaves on the fig tree, individual legionaries were part of something greater than themselves – something potentially immortal – of which they could not only be proud but to which they could devote and even sacrifice their lives, thereby giving their lives meaning.

While the principle of collectivist sublimation has thus had its place in just about every civilization in history, and has been bred into our very DNA, the new forms which emerged in the 18th and 19th centuries had two serious problems. The first was that, unlike the Roman legion, the purpose of which was both permanent and constant, the objective of building a better, more equal society – as already noted – actually defeats its purpose as a form of collectivist sublimation if it is ever actually achieved, thereby requiring all collectivist revolutions to be essentially never-ending. It was the second problem, however, which, while being less immediately obvious, was even more destructive of these movements’ goals. For it flowed from the very dubious assumption, inherent in all such utopian exercises, that a system as complex as a human society, which ordinarily emerges as the result of the multifarious interactions of its autonomous members based on their individual needs and interests, could be designed and engineered without giving rise to unintended consequences: an outcome made all the more unlikely given that the underlying perception of the human beings making up this intended society was still fundamentally Christian.

I say this because, while most forms of utopian socialism nominally repudiated Christianity, and hence the concept of original sin – preferring, instead, to believe that human beings were fundamentally good – they were nevertheless based on what was essentially a retelling of the story of The Fall, in which they were consequently obliged to explain our less than perfect behaviour as the result of our seduction, not by the Devil, of course – because he was simply the product of a repressive order which used superstition to control the masses – but by his latter-day replacement, Capitalism, the remedy for which was socialist re-education.

Worse still was the assumption, also inherited from Christianity, that private wealth was the root of all evil and that all private property therefore had to be taken into public ownership, to be allocated to people by the State on the basis of their need. Not only did this make the State the monopoly supplier of absolutely everything, it also assumed that the fundamentally good, socially re-educated agents of the State would allocate resources fairly and without regard for personal gain: an outcome made all the more fanciful by the fact that, because the means of production had been taken out of private hands, the decision as to what should be produced, and in what quantities, was no longer determined by market demand but given over to central planners. Not only did this result in over-production in some areas and under-production in others, but the intricacies of supply chains often meant that even when a targeted level of production was appropriate, it could not be realised due to a shortage of components.

This was then further exacerbated by the opprobrium and punishments that were regularly heaped on production managers who failed to meet their quotas, driving the most able and competent people out of the productive sectors of the economy, where there was nothing to be gained and everything to lose, and into the unproductive administrative bureaucracy, where chronic shortages of just about everything enabled those allocating resources to do other influential people favours and hence receive favours in return. Worse still, the inherent corruption of the system and the ever deteriorating economic circumstances of those outside it meant that those who controlled it did everything in their power to preserve both the system and their positions within it, silencing anyone who spoke out against it by labelling them enemies of the state and sending them to re-education camps in places like Siberia, where they died of hypothermia, malnutrition or at each other’s hands as they fought over what little food and shelter was available.

Apart from not burning dissidents at the stake as heretics, in fact, the only significant difference between the Soviet Union and the Medieval Catholic Church was that the Church, while entirely parasitic upon the wider economy, did not try to control that economy. As a result, it lasted more than seven hundred years: from Christmas Day in the year 800, when Leo III crowned Charlemagne Holy Roman Emperor, to October 1517, when Martin Luther published his 95 Theses. By taking control of the economy and eventually causing it to collapse, the Soviet Union, in contrast, lasted barely a tenth of that time.

What this meant, however, was that by Christmas Day 1991, when the Soviet Union finally collapsed, European civilization had been failed by two of the four main strategies human beings have standardly employed to deal with the inevitability of their demise and the apparent meaninglessness of their existence. And while both Christianity and Communism still have their adherents, it is doubtful whether either will ever again achieve the kind of mass following they previously enjoyed. With stoicism generally regarded as an ancient, anachronistic and hence irrelevant moral philosophy, too difficult for most people to understand let alone practice, this has meant that – apart from a number of other forms of collectivist sublimation which we will come to shortly, one of them old and currently under threat, the rest distinctly new and highly pernicious – for the last thirty years or so, all we have had left to ward off existential angst and mitigate our growing nihilism have been various forms of hedonism.

In fact, most western democracies have been using hedonism, in the form of consumerism, to gain popularity with their electorates ever since the second world war, the idea being that people will always vote for governments which increase their standard of living. The problem with this whole approach to politics, however, is that there is a natural limit to how much people can consume.

In the immediate post-war years, this was partly solved by the rebuilding work that needed to be done, especially in Europe. It was also considerably aided by getting women, many of whom had worked in factories during the war, back into the home, both to allow demobilised soldiers to be absorbed into the workforce and to encourage couples to start having children again, the needs of growing families then boosting demand and economic growth. By the end of the 1950s, however, with both construction and the baby-boom beginning to tail off, this dual approach was beginning to lose its effect, with the result that both governments and big business needed a new strategy to keep the economy expanding.

This they did in two ways. The first was by building obsolescence into manufactured goods, especially things like electrical appliances, which could not only be designed to have a limited operating life but could then be replaced by improved models, thus enticing people to constantly buy new ones even when the old ones were still working: something which no one would have done before the war.

The second part of the strategy, however, was even more important and had two interconnected aspects. The first was a reversal of the 1940s policy of returning women to the home: it now involved getting them back into paid employment, luring them into factories with the promise of greater financial independence from their husbands and more income for their families. There was, however, another side to this coin. For being in paid employment, it was now both possible and necessary for women, as the main purchasers for their families, to buy a whole range of goods and services which they had previously produced for themselves without financial remuneration: a shift in the nature of production which had advantages for both business and government. For getting women to make food and clothing in factories rather than in the home, and paying them for their labour while charging them for the food and clothing which they now had to buy readymade, not only meant that business was able to profit from this economic activity for the first time, but that governments were able to measure it, present it as economic growth, and tax it to pay for the increased public services which women had also previously provided outside the monetarised economy, such as care for their elderly parents.

Apart from being one of the biggest confidence tricks of all time, the real problem with this, however, as I explained in ‘Women’s Liberation and the Monetarisation of the Economy’, was that, although there were some increases in productivity as a result of mass production, the natural limit on how much anyone can consume meant that overall production did not increase by very much. It had simply been shifted from the unmonetarised part of the economy to the monetarised part. Monetarising previously unmonetarised economic activity, however, inevitably increased the money supply. And increasing the money supply without increasing production inevitably increased inflation, much of it hidden, in that the additional cost of replacing a homemade item with a shop-bought one did not show up in the Retail Price Index.
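
The underlying arithmetic here is simply the old equation of exchange,

\[ MV = PQ, \]

where $M$ is the money supply, $V$ the velocity at which it circulates, $P$ the general price level and $Q$ real output: if monetarisation increases $M$ while total production $Q$ – monetised or not – stays much the same, and $V$ is broadly stable, then $P$ has nowhere to go but up.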

The result was that while families with three, four or even five children in the 1950s could live quite happily on the income of just one wage-earner, by the middle of the 1980s families with only one or two children needed both parents working to make ends meet. This, in turn, not only caused demographic decline, but put intolerable pressure on families, pressure which many could not withstand, increasing divorce rates and hence the number of one-parent families living in poverty or close to it. Even more importantly, it removed the one thing people had left which could give their lives meaning. For raising a family – investing time and effort in one’s children and making sacrifices for them – is, of course, a small-scale but extremely widespread form of collectivist sublimation, giving parents something to live and strive for that is more important than themselves and enabling them to feel that they have done something worthwhile with their lives. Take this away and you take away from many people the only possibility of a meaningful life they ever had.

It is somewhat ironic, therefore, that it was yet another unintended consequence of hedonistic consumerism that now came to our rescue. For this consumerist paradise we had built for ourselves was, and still is, immensely wasteful. Manufacturing goods only to throw them away after just a few years has probably consumed more energy and raw materials during the last fifty years than in all our prior history. It has also produced more waste products and pollution than ever before, not least in the mountains of rubbish we pile into landfill sites. Even by the 1960s, therefore, this had allowed some people to find a new sense of purpose in a new form of Puritanism: one which saw our excessive consumption and pollution as simply another expression of the sin for which we were originally expelled from the Garden of Eden, and which was now implicitly understood as the cause of our heedless destruction of Mother Earth. For while we may have abandoned Christianity, Christian cosmology – including the story of our ejection from paradise for the sin of being human, a sin which duly condemned us to the indignity of our bodily functions, including, of course, eating and defecating, which is to say consuming and polluting – is still deeply embedded in our culture. Indeed, so fundamental is it to what Carl Jung called our collective unconscious that when, in 1969, Joni Mitchell sang ‘We are stardust, we are golden | And we’ve got to get ourselves back to the Garden’, no one had any doubt which ‘garden’ she meant, or what she meant by it.

The problem for those who committed themselves to environmentalism as their preferred form of collectivist sublimation back in the 1960s and 70s, however, was that the solutions to most of the perceived environmental problems of the time were not only unobjectionable but eminently achievable. And, for the most part, we achieved them. We cleaned up our rivers, stopped using DDT as a pesticide, and removed chlorofluorocarbons from our refrigerators and aerosol sprays. This meant, however, that the environmentalist cause did not exactly require a great deal of struggle or sacrifice, rendering it one which was mainly the preserve of those stereotypically characterised as sandal-wearing liberals, and not one that was ever going to rock the world. Then, in June 1988, James Hansen, director of the NASA Goddard Institute for Space Studies, gave a presentation to the United States Senate Committee on Energy and Natural Resources warning that manmade increases in atmospheric CO2 and other greenhouse gases were causing the earth’s climate to warm at a rate which could have catastrophic implications, thereby bringing a new form of environmentalism into being: apocalyptic environmentalism.

Whether there is any scientific basis either for linking increases in atmospheric CO2 to increases in atmospheric temperature, or for supposing that the current warm period is going to be any more catastrophic than the Roman and Medieval warm periods – both of which were slightly warmer than today – is, of course, one of the key questions of our age. For myself, as I have explained elsewhere, I am extremely sceptical. For while, as described by the Swedish scientist Svante Arrhenius in 1896, there are some gases, including water vapour, CO2 and methane, which, at certain specific wavelengths, absorb infrared radiation emitted by the surface of the earth after it has been warmed by the sun, as another Swedish physicist, Knut Angstrom, demonstrated five years later, all of this infrared radiation is already absorbed by the greenhouse gases present in the atmosphere at their current concentrations. Adding more CO2 to the atmosphere, therefore, has absolutely no effect on atmospheric temperature. What’s more, anyone with any knowledge of history will know that the climate has regularly warmed and cooled in the past without any input from human beings, and that it is actually the cold periods, such as those of the 6th and 17th centuries, that we have to worry about.
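
Angstrom’s saturation argument can be stated more formally using the Beer–Lambert law, at least in its simplest single-slab form:

\[ I_{\mathrm{transmitted}} = I_0\, e^{-\tau}, \qquad \tau = k\,c\,L, \]

where $I_0$ is the infrared radiation entering a column of air of length $L$, $c$ is the concentration of the absorbing gas, $k$ its absorption coefficient at the wavelength in question, and $\tau$ the resulting optical depth. Once $\tau \gg 1$, virtually nothing is transmitted at that wavelength, and further increases in $c$ make almost no measurable difference: this is the ‘saturation’ Angstrom claimed to have demonstrated.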

The problem is that, to the heirs of a culture rooted in centuries of Christian cosmology, such purely empirical considerations are of absolutely no consequence when weighed against the magnitude of this new apparent twist upon one of our most deeply rooted beliefs: one, moreover, which places the blame squarely on the shoulders of us Europeans. For not only are we, as the heirs to this ancient system of beliefs, the only people who could possibly think in these terms, but it was our development of science and technology – ironically, as an antidote to this way of thinking – that gave rise to the industrial revolution, which has not only been responsible for the increase in greenhouse gas emissions that is now supposedly pushing us towards global destruction, but also enabled us to commit the further sin of conquering and enslaving nearly all the other peoples of the earth, for which we must consequently be made to suffer the punishment of deindustrialisation.

For this new form of collectivist sublimation, of course, is not just about ‘Saving the Planet’ – as paramount and overriding as this cause is presented as being – it is also about the purging of our sin. And it is these two things together – our saving of the world as an act of redemption, especially if this requires us to make the ultimate sacrifice of our own existence – that make the church of anthropogenic global warming so powerful, not just giving purpose to its followers’ lives but meaning to their deaths.

What’s more, to those who accept its premises and inhabit this particular mind-set, it all seems to make perfect sense, just as apocalyptic Puritanism did in the 17th century. It is only when one steps outside this mind-set and asks whether any of the premises on which it is predicated have any empirical foundation that, just like the hanging of witches, it all starts to look more like a form of collective madness.

The problem is, of course, that very few of us are actually capable of taking this backward step. For while we all now realise that the beliefs of 17th century Puritans had no foundation in reality and that they therefore constituted a kind of mass insanity, applying this same historical or transcendent perspective to our own way of thinking is far more difficult, not least because, like most people throughout history, we don’t really believe that our way of thinking is historically or culturally determined. What’s more, we are encouraged in this belief by the further conceit that, if we did in fact have a culturally or historically determined mind-set, it would be one based on that which freed us from the superstitions of the 17th century some three hundred and fifty years ago, i.e. science.

In this, however, we are again simply deceiving ourselves. For not only do very few of us have any deep knowledge of science, relying on experts to explain it to us – usually in a drastically over-simplified form – but very few of us actually ‘think’ scientifically, basing most of our beliefs on the prejudices and false assumptions with which we have grown up, along with a fair amount of sheer wishful thinking. Even more crucially, as I explained in my essay ‘Problems in the Culture of Modern Science’, science today has become so compromised by public funding, and hence by the political priorities of the funding bodies, that many of the findings and opinions of the experts on whom we rely can no longer be trusted to be scientifically objective.

The result, especially in the case of climate science, has been a vicious circle in which the unfounded but widespread belief that human beings are responsible for global warming now ensures that all the public money being channelled into climate research – which, today, runs into the tens of billions each year – has to go to scientists who support the Anthropogenic Global Warming (AGW) theory, thereby further reinforcing popular opinion and enabling a highly vocal group of fervent climate change campaigners to put even greater pressure on politicians to support this now dominant narrative.

However, this is not the only generally accepted theory currently animating the western world that is based less on empirical evidence than on the need to make life meaningful. In fact, there is a whole plethora of interrelated forms of collectivist sublimation to which our woke culture has given rise, which both give purpose to their activists’ lives and allow them to force penance on the rest of us for our sin, anti-racism currently being one of the most popular. For while this whole Puritan movement – which now extends far beyond environmentalism – may give the appearance of being something new, it is fundamentally just a repudiation of everything that came about as a result of our rejection of Puritanism in the second half of the 17th century, and which brought into being an industrial civilization which 21st century Puritans now see as the root of all evil, including the patriarchy, capitalism, sexism and racism.

Being thus essentially opposed to the kind of male-dominated, imperialist mercantilism which characterised so much of the 18th and 19th centuries, this movement is also closely allied to the kind of internationalism which has dominated the west’s attitude to the rest of the world since the end of the second world war, and by means of which it has sought to find some kind of redemption for our prior narrow nationalism – along with the wars to which it so often gave rise – by creating a fairer, more just world for all mankind. However, just as our attempts to ‘save the planet’ by reducing our CO2 emissions have already started to reduce the world’s energy supply and hence increase its cost, so the principal consequence of our attempts to create a fairer world through free trade and the free movement of capital has been our own economic decline relative to the developing world. For regardless of what the champions of globalisation have constantly maintained – that free trade means more trade and thus higher standards of living for all – the far more fundamental economic principle, that without protectionist measures to prevent it capital and economic activity will always flow from high-wage, high-tax environments to those with lower labour costs and lower tax regimes, has always meant that the fully monetarised economies of the developed world, with their extensive social security systems, were bound to suffer from this globalist agenda. It is just that, in our guilt-ridden desire to make amends for our imperialist past, we chose to tell ourselves something different.

For a while, our relative decline was admittedly stemmed by the growth in personal computing and the advent of the internet, which gave the west a new lease of economic life during the 1980s and 90s, and led many western governments to talk themselves into believing that, while they may have lost their manufacturing industries, they could build their economies around technological innovation and content creation instead, thereby allowing them to maintain their liberal, internationalist stance towards the rest of the world while telling their own citizens that this was also the best option for them. The only problem was, of course, that it didn’t take long for the developing world to catch up in both technological innovation and content creation, while its inherently lower labour costs meant that the manufacture of the hardware platform upon which the information age was built was also very quickly offshored.

It was during this period of relative stability, however, that, in one last desperate attempt to square our almost suicidal relocation of wealth creation to the developing world with the maintenance of a high standard of living at home, the west came up with the idea which, eventually, will almost certainly lead to our ruin. For at some point, we talked ourselves into believing that, even if we could not stop the transfer of wealth creation from the developed to the developing world, we could nevertheless participate in the profit it generated by helping to finance it.

The result was that, throughout the late 1980s and 1990s, the west totally reorganised its financial services industry, taking dozens of inward-looking financial institutions, each addressing a single sector of the market – pensions, mortgages, business loans, etc. – and turning them into broad-spectrum conglomerates, offering every financial service possible and operating on a global scale. And it was very successful. In both the United States and the UK, the financial sector grew from just 3% of GDP to over 9% over a fifteen-year period, much of the increase coming from overseas earnings. Domestically, however, the reorganisation had two consequences which were probably unforeseen and almost certainly unintended, but which were absolutely disastrous.

The first of these came about because allowing single institutions to offer multiple financial services in different market sectors enabled them to cross-finance those sectors, such that short-term deposits, for instance, which had previously been used only to finance short-term loans such as overdrafts, could now be used to fund mortgages, which had previously been financed only by long-term savings, thereby making far more mortgages available but also making them significantly more risky for the lending institutions. Riskier still, the industry also took the quite extraordinary decision that, as long as it charged a high enough interest rate to deter reckless spending and cover any resulting defaults, it could safely give people of even very modest means unsecured loans through the issuance of credit cards.
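
To see why this maturity mismatch matters, consider a deliberately simplified calculation: a 25-year mortgage funded with three-month deposits has to be refinanced roughly a hundred times over its life. If, purely for illustration, each rollover has a 99.5% chance of succeeding, the chance of surviving all one hundred is $0.995^{100} \approx 0.61$ – a roughly 39% chance of a funding failure at some point – whereas a mortgage funded by matching long-term savings faces no rollover risk at all.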

Not, of course, that the consequences of either of these measures were immediately obvious, not least because it took some time for both the lenders and the borrowers to test the limits of their folly. Nor was it understood what the consequences of these measures might be in combination. Given how much money the lenders were making on credit cards, however, and how much fun the borrowers were having making full use of them, the inevitable result was that eventually just about everyone who wanted one had a wallet full of plastic, on which many people ran up multiple debts, while mortgage brokers, acting on behalf of banks but taking none of the risks, were providing low-income earners with 100% mortgages based on multiples of their annual salaries – mortgages which only liberal recourse to credit cards for all other areas of expenditure made possible.

The result was a consumer boom like none before it, in which, for the next two decades, the west was actually consuming more than it was producing, making up the deficit through imports while talking itself into believing that the mounting debt which this was creating was being offset by increases in house prices resulting from the easy availability of mortgages. The problem, of course, was that it was only a matter of time before the ever more reckless practice of lending money to people who could not even afford the interest payments began to converge with the even more reckless practice of banks borrowing short-term money in order to fund long-term investments – particularly mortgage-backed derivatives – which they were then unable to refinance once it was discovered how shaky many of these investments were, especially when mortgagees started foreclosing on delinquent borrowers, thereby collapsing the over-inflated housing market along with much of the rest of the economy.

Wholly predictable as this was, what is truly remarkable about the whole episode is that most people still do not understand it. For, as a society, we never really tried to get to the bottom of what actually happened, preferring instead to simply blame it on the ‘bankers’. As a result, not only have we never had to face the fact that it was our own refusal to face economic reality in the 1980s that was really to blame, but we have still not learned anything from it. In particular, we have still not learned that creating money is not the same as creating wealth, and that, in order to create wealth, one actually has to produce products and services which somebody else wants to buy. And so even after the 2008 crash, instead of concentrating on production and supply, we continued to concentrate on demand and consumption, reducing interest rates to stimulate more borrowing, while printing additional money to ensure that banks had sufficient liquidity to lend, not just to businesses and consumers, but even more especially to governments, which had to increase their borrowing in order to continue providing the benefits and public services which their electorates demanded but which were far beyond what their economies could now actually afford.

Except for the printing of additional money, however, this was exactly the same policy that had caused the problem in the first place, which rather raises the question as to how anyone in their right mind could consider it a possible solution.

The answer, however, is something called ‘Modern Monetary Theory’ (MMT), which holds, among other things, that the issuance of fiat currencies need only be limited by inflation, that governments can issue fiat currencies to pay for goods and services without first collecting taxes, and that such issuance can and should be used to stimulate the economy until all the resources within that economy, including labour, are fully utilised. While MMT clearly owes much to Keynesianism, the main difference is that, today, it is rendered in precise mathematical formulae which are then embedded in the computer models which central banks use to model their economies: models which predict that reducing interest rates to stimulate borrowing, while increasing the issuance of money to provide additional liquidity, will stimulate economic growth.
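
To give a flavour of the arithmetic involved: the one equation in the MMT canon that is unarguably true – because it is an accounting identity rather than a prediction – is the sectoral-balances identity on which much of the theory leans,

\[ (S - I) = (G - T) + (X - M), \]

where $S$ is private saving, $I$ private investment, $G$ government spending, $T$ taxation, $X$ exports and $M$ imports. A government deficit ($G > T$) must, by definition, be matched by a private-sector or foreign surplus. The trouble, as with all identities, is that it says nothing about inflation, or about whether the real goods and services behind the numbers actually exist.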

The only trouble is that, after more than a decade in which nearly all the world’s central banks have been following this prescription – keeping their interest rates low while intermittently or, in some cases, continuously printing money and pumping it into their financial systems through the process of quantitative easing – it clearly doesn’t work. In fact, there is about as much empirical evidence to support MMT as there is to support the AGW theory. Which is to say, absolutely none. Indeed, all the empirical evidence suggests that it has precisely the opposite effect to that intended. For by buying up financial assets (mostly treasury bonds) – which is how QE is implemented – while at the same time suppressing interest rates, central banks have been inflating the price of these assets while dissuading other financial institutions from holding cash deposits. This has meant that many financial institutions, such as pension funds, which have traditionally kept around a third of their members’ funds in cash, have also had to invest in other financial assets, thereby driving up asset prices still further. Worse still, many large corporations have been taking advantage of the low interest rates, not to invest in new plant and equipment, as the central banks had hoped, but to buy back their own shares, thereby reducing the amount of equity in these corporations while increasing that equity’s unit price, thus also driving up asset prices.
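
The buy-back arithmetic is worth spelling out with some purely illustrative numbers. A firm with 100 million shares and annual earnings of \$100 million earns \$1 per share. If it borrows to repurchase 10 million of those shares, the same earnings are now spread over 90 million shares, so earnings per share rise to $100/90 \approx \$1.11$. At an unchanged price-to-earnings multiple, the share price therefore rises by about 11 per cent, even though the firm has produced nothing new and is now carrying more debt.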

The overall result has been that, while the owners of financial assets have all made spectacular amounts of money over the last ten years, very little of this money has actually reached the real economy, which, with the exception of those sectors that are heavily subsidised, such as renewable energy and electric vehicles, has remained in a state of almost total stagnation. In fact, the real effect of MMT has never been more clearly demonstrated than during the current Covid pandemic, during which the lockdown measures imposed on the vast majority of economies have resulted in a worldwide recession, while the attempt to alleviate this by pumping even more money into the financial system through QE has resulted in record highs on all financial markets: the clearest possible indication that MMT simply doesn’t work.

So why do central banks persist with it?

Part of the answer is almost certainly that, having become the dominant orthodoxy within the central banking system, it has become more or less self-perpetuating, in that no one is recruited to work in these institutions unless they believe in it.

This, however, is a fairly superficial answer, in that it only addresses the current rut. A far more interesting question, therefore, is why this particular orthodoxy should have become so dominant in the first place. And here one cannot help but suspect that the answer has something to do with the fact that MMT, or some variant thereof, is the only macro-economic theory that allows central banks to play the kind of active role in the economy which central bankers themselves believe they ought to play, and which we, the public, have come to expect of them. For while there may be a number of intermediate or compromise theories which subtly differ from MMT in degree, there is only one theory – that advocated by economists such as Ludwig von Mises and Friedrich Hayek of the Austrian school – that takes an entirely different and opposing view as to how governments ought to run their economies. And according to this theory, it is questionable whether central banks should exist at all, and absolutely certain that they shouldn’t use interest rates as economic levers.

Indeed, according to both Hayek and von Mises, interest rates should be set by the commercial banks themselves, purely on the basis of the demand for money and its available supply, as, indeed, was the case before central banks existed, when commercial banks simply responded to market conditions. Thus, when the economy was flourishing, with many companies wanting to borrow money to invest in their businesses, banks quite naturally put up interest rates, both to attract additional deposits from savers and to weed out those commercial projects which were only marginally viable. When the economy was flat, on the other hand, with very few companies wanting to borrow money, banks naturally reduced interest rates, both to discourage deposits for which they would otherwise have to find suitable investments, and to encourage businesses with marginal but still potentially viable projects to come forward. In this way, the system was self-balancing, matching the demand for money with its supply. And, for the most part, it worked perfectly well.
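
By way of illustration only – the schedules and numbers below are invented, and no real banking system is this simple – the self-balancing mechanism just described can be sketched in a few lines of code:

def loan_demand(rate, activity):
    # Businesses borrow less as the rate rises, more when activity is brisk.
    return max(activity - 2000.0 * rate, 0.0)

def deposit_supply(rate):
    # Savers deposit more as the rate rises.
    return 200.0 + 2000.0 * rate

def clearing_rate(activity, rate=0.03, step=0.0001):
    # Banks nudge the rate up when loan demand outstrips deposits,
    # and down when it falls short, until the two roughly balance.
    for _ in range(100_000):
        gap = loan_demand(rate, activity) - deposit_supply(rate)
        if abs(gap) < 1.0:
            break
        rate += step if gap > 0 else -step
    return rate

print(f"brisk economy: {clearing_rate(activity=400.0):.1%}")  # settles near 5.0%
print(f"flat economy:  {clearing_rate(activity=280.0):.1%}")  # settles near 2.0%

A brisk economy clears at a higher rate than a flat one, with no one setting the rate from above: exactly the feedback loop the Austrian school describes.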

The only thing missing was some kind of back-stop or fail-safe system if, for some reason, a bank got into trouble. And it was for this reason that most central banks were set up: both to provide some level of regulation, so that banks did not get into trouble, and to provide a level of emergency funding if they did, along with a degree of general financial buffering. In this latter regard, however, they were intended to operate purely as a bankers’ bank, on the same basis as commercial banks. When business was brisk, they put up their interest rates, both to discourage commercial banks from borrowing too much from them, thereby depleting their reserves, and to attract deposits from banks that were less commercially active. When business was flat, they reduced interest rates, partly to encourage their client banks to borrow more and thus be more active in their lending, thereby stimulating the economy, but mostly to discourage deposits and avoid bloating their reserves: something which would not only have starved the economy of cash, but would have meant that they were borrowing more than they were lending, thereby undermining their own commercial viability.

What it is important to note here, however, is that while it was clearly recognised that a central bank’s interest rates and level of lending would affect the money supply and hence the overall economy, it was never intended that this potential effect should be used for the purposes of economic manipulation. For while it was understood that a central bank setting its deposit rate too high would attract too many deposits and thus starve the economy of cash, it was also recognised that a central bank setting its interest rates too low would force banks to invest their excess deposits in increasingly marginal projects, thereby encouraging the misallocation of capital and a rise in bad debts: precisely what central banks were set up to prevent. All in all, therefore, it was accepted that it was better for commercial banks to set interest rates in response to the market than for central banks to set interest rates in an attempt to manipulate the market. And it is this that the Austrian school still advocates.

Equally, according to Hayek and von Mises, it should be the economy rather than central banks which determines how much money is in circulation: something it does quite naturally, without any human intervention, by increasing or decreasing the speed at which money changes hands (in terms of the equation of exchange quoted earlier, it is V that does most of the adjusting). When business is brisk, for instance, with lots of buying and selling, money quite naturally circulates faster, giving the appearance that there is more of it, which, according to some measures of the money supply, is actually the case. When business is flat, on the other hand, it sits in people’s wallets or bank accounts and flows more slowly, thus reducing its apparent volume. Because the velocity of money is driven by economic activity, causing the effective money supply to increase or decrease as required, any attempt by central banks to increase economic activity by printing money and pumping it into the financial system is, according to the Austrian school, not only to confuse cause and effect but to risk destabilising this highly sensitive self-balancing system, as, indeed, we have seen with respect to financial asset prices over the last decade.

What all this means, therefore, is that, according to the Austrian school, not only do central banks have no real levers with which to affect the economy, but their attempts to create and use such levers are counterproductive. To most central bankers, however, this is absolute anathema. For it is an article of faith to them that they both can and should intervene in the economy to keep it growing and on track. To do otherwise would be to leave the public at the mercy of self-propagating economic downturns. More to the point, it would be an admission that a crucial part of our lives – one which is at the root of so many of our existential anxieties, as we worry about keeping a roof over our heads and food on the table – is beyond governmental control: something to which no politician or central banker in our post-Christian world will ever admit.

For while Christianity may have bequeathed us a deeply embedded belief in our own sinfulness, for which we must continually punish ourselves, the decline of Christianity has left us bereft of any compensatory beliefs – such as that in a merciful God to whom we might pray for help in adversity – leaving us with only a secular authority to which to address our demands for protection. Thus, instead of praying to God for divine intervention, we now demand that the government does something about it – whatever ‘it’ is – giving government a role in our lives it never had before, but which it has been more than happy to accept. For our pleas for it to protect us and solve all our problems have allowed government to progressively increase its powers, especially during periods of emergency, as has been classically demonstrated during the recent Covid pandemic, when people have happily given up many of their basic freedoms and accepted a level of authoritarian rule which would have John Locke turning in his grave, all in order to be kept ‘safe’.

Having thus taken on the role of God, and garnered more and more powers to itself accordingly, government now finds that the one thing it cannot do is admit that there is any problem it cannot solve – that it is, indeed, impotent – especially if any such admission were implicitly to entail that any intervention it could make might do more harm than good.

Which brings us to the second reason why even those central bankers who know that MMT doesn’t work cannot now abandon it. For they also know that, if they were to discontinue the practice of printing money and buying up financial assets, the price of these assets would rapidly decline towards something approaching their true value as calculated in the old-fashioned way, based on the net present value of discounted cash-flows: a correction which would very probably wipe out all the additional money which central banks have created ex nihilo over the last decade but which does not represent any additional products or services which anyone would want to buy.
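
For reference, that old-fashioned calculation is simply

\[ V = \sum_{t=1}^{T} \frac{CF_t}{(1+r)^t}, \]

where $CF_t$ is the cash the asset is expected to generate in year $t$ and $r$ the discount rate. Notice that $r$ sits in the denominator: the same formula explains both why a decade of artificially suppressed interest rates has inflated asset values, and why any return of $r$ to historical norms would deflate them.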

And if this were the only consequence, one might not think it such a bad thing. Unfortunately, a drop in the price of almost any financial asset produces an increase in its relative yield, especially in the case of bonds, the nominal yield of which is fixed at the point of issuance. As the price drops, therefore, the relative yield rises. What this means, especially in the case of treasury bonds, is that any new bonds issued have to offer a nominal yield at least as high as the relative yield on existing bonds, or no one would buy them, thus increasing the cost of borrowing for national governments, which are already heavily indebted, having borrowed so much since the 2008 crash.
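
The arithmetic, with illustrative numbers, is straightforward. A bond issued at a face value of 100 with a fixed coupon of 2 pays its holder 2 a year, a running yield of $2/100 = 2\%$. If its market price falls to 80, that same coupon now represents a running yield of $2/80 = 2.5\%$, and any newly issued bond must offer at least as much to find a buyer.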

To talk about the cost of borrowing, however, is to assume that, following such a correction, there would still be functioning financial institutions able to buy their governments’ bonds. But even at this relatively early stage in any new crash, this would be by no means certain. For while all financial institutions insure against the loss of value on their balance sheets, mostly by purchasing Credit Default Swaps (CDSs), many of the banks which specialise in the issuance of CDSs, such as Deutsche Bank and Citibank, could well be so overwhelmed by claims that they would be the first to fail, leaving other financial institutions to simply write off the losses, thereby, in many cases, rendering themselves technically insolvent.

Meanwhile, the rise in relative yields on the bond markets would also affect all those corporations which borrowed money to buy back their own shares, effectively exchanging equity, which does not need to be repaid, for debt, which does – or which, at the very least, has to be periodically refinanced. Not only would any such refinancing now be more expensive – in some cases making the businesses concerned non-viable – but the finance might simply not be available, with the banks or their administrators calling in the debts as they fall due, thereby forcing the corporations in question to significantly downsize or close altogether, creating mass unemployment which governments, unable to borrow from the rapidly collapsing banking system, would not be able to support with unemployment benefit.

In fact, a cascade failure on this scale, were it to occur, could easily eclipse the 2008 crash by an order of magnitude, leading to the worst depression in world history, not least because, due to all the money printing and borrowing we have been doing over the last decade, governments and central banks, unlike thirteen years ago, have now more or less used up all the ammunition they previously had at their disposal for preventing such a chain reaction. And it is this that is the root cause of the looming carnage. For instead of facing economic reality back in 2008 – or, indeed, back in the 1980s – and bringing our consumption back into line with production, we talked ourselves into believing that we could go on consuming more than we produced indefinitely, without ever having to face a day of reckoning: a mass delusion which is arguably even more insane than some of those which brought such torment to the 17th century. The result of this insanity, however, is that total sovereign, corporate and household debt around the world has now reached, according to the Institute of International Finance (IIF), $296 trillion, or 378% of global GDP: an almost inconceivable sum which far exceeds any level of debt we could ever possibly repay, and which means that, regardless of all the efforts of central banks to keep the financial bubble inflated, eventually it is going to burst. It is just a matter of when.

What is even worse, however, is that there are no real grounds to hope that such a catastrophe might actually bring us back to our senses, as happened in the 17th century. For while the 17th century had science to save it, not only have the computer models of science’s corrupted offspring played a large part in creating the fantasy world in which we all now live, but both the history of the last thousand years and our current mind-set point to a far less rational reaction. Indeed, given the two main strategies for dealing with the existential problems of life and death to which we still cling, and which economic ruin will make us even more desperate to hold on to, our response is very likely to have two contradictory components. Our collectivist sublimation, based on apocalyptic environmentalism, will almost certainly blame the disaster on our consumerist, capitalist and colonialist sinfulness and demand that we be made to atone, while our need to avoid a now ever-worsening reality through hedonistic consumerism will continue to demand that we be provided with the means to consume more than we produce.

After a period of considerable chaos, therefore, during which emergency powers are likely to be assumed by most of the world’s governments, the result will almost certainly be an attempt to create some new form of ‘socialism’, very possibly of the kind already being advocated by people like Klaus Schwab, Executive Chairman of the World Economic Forum, whose website offers us a vision of the future in which the world will be powered by limitless green energy, where nearly all manual labour will be carried out by robots under the control of intelligent computers, and where human beings will be able to enjoy a veritable Garden of Eden, in which both property and money – the cause of so many of our former problems – will have been abolished, and in which we shall ‘own nothing but be happy’, a bit like the Eloi in ‘The Time Machine’ by H. G. Wells.

And, indeed, one suspects that many people will find this vision of a life without want or care, in which all our needs are met by a benevolent artificial intelligence, rather appealing. If it were achievable, this state of permanent infantilisation might even be considered a fifth possible strategy for dealing with the existential problems of life. The trouble is, of course, that, once again, it is a complete and utter fantasy. None of it is based on reality: neither the promise that we might one day obtain endless free electricity from renewable technologies – which actually consume more energy in their manufacture, installation, operation and eventual decommissioning than they ever produce – nor the idea that some all-seeing, omniscient computer system could run our lives better than we can ourselves. For no matter how big a computer you build, reality is bigger and more complicated, with more factors to be taken into account than any computer model could ever encompass. It is why the central planners of the Soviet Union constantly failed. It is why MMT has failed. And it is why the most prosperous and successful societies that have ever existed have been those in which people have been allowed to live freely, taking responsibility for their own lives and making their own choices and decisions.

More to the point, it is only through taking responsibility for our lives and striving to make the best life possible for ourselves and our families that life has meaning. For as the stoics understood, a meaningful life is not about either pleasure or longevity; it is about what we do with it. And first and foremost, that means not hiding from reality. It means recognising our propensity for self-deception and wishful thinking, and having the mindfulness, strength of character and self-discipline to overcome these tendencies. For only then can we accurately assess our true place in the world and determine how best to make a positive contribution to it. And it is only when a significant number of people within a society do this, thereby providing leadership by example, that any society can flourish. Until we understand this, it is not just that we will continue to make the same mistakes. For after nearly four thousand years of history – from the birth of Zoroaster to the present day – the three main alternative strategies for dealing with our finitude and the consequent apparent meaninglessness of our existence, which I have been attempting to explore throughout this essay – hedonism, religion and collectivist sublimation – have all now failed, leaving us with only our deluded beliefs in such fabrications as the AGW theory and MMT to paper over the cracks of our crumbling civilization. Sooner or later, therefore, we are going to have to face reality like grown-ups, without illusions, or reality will simply leave us no choice, plunging us into a new darkness which, without the transcendent perspective required to understand how we got there and how to live with the consequences, is likely to be far more unpleasant than it need be.