Tuesday, 11 November 2025

Why ‘The West’ Is Broken

 

1.    The Making of ‘The West’

In order to understand why ‘The West’ is broken, we first need to understand what we actually mean by ‘The West’ when we capitalise it between inverted commas in this way. For the one thing it is not, of course, is a geographical location. Nor is this simply because, in common parlance, both ‘east’ and ‘west’ are primarily used to indicate relative rather than absolute position, such that, were I to travel ten miles to the east of my current location, for instance, a point five miles to the east of where I am now would become a point five miles to the west of where I would then be. That is to say that ‘east’ and ‘west’ are primarily expressions of one’s own position or point of view. What’s more, this applies as much to one’s cultural perspective as it does to one’s geographical location. For while ‘The West’, as we commonly understand it, may now extend far beyond the borders of the continent which gave it birth, most Westerners still see themselves as culturally European and therefore define themselves in contradistinction to cultures they regard as foreign, most notably the cultures of Asia, which we commonly refer to as ‘The East’.

This European cultural identity, however, is not drawn from all of Europe. This is because ‘The West’ also has an ideological dimension, which is rooted firmly in the Reformation. This is most clearly illustrated by the fact that, while North America, which was initially settled by immigrants from Protestant Britain, Germany and Scandinavia, is definitely part of ‘The West’, the Catholic countries of Central and South America are not.

This is because the Roman Catholic Church has always been a fundamentally authoritarian institution, having been intentionally created that way by the very manner in which it was brought into being. As I have explained elsewhere, this happened on Christmas Day in the year 800, when Pope Leo III crowned Charlemagne as the first Holy Roman Emperor in what was essentially an act of mutual recognition. I say this because, in crowning Charlemagne Holy Roman Emperor, Leo recognised Charlemagne as the legitimate heir to the Roman Emperors of the past, while, in allowing himself to be crowned by Leo, Charlemagne recognised Leo’s authority as head of the Church to so crown him, with the result that the Holy Roman Empire, the Roman Catholic Church and the Papacy itself were all effectively created by this single act.

Of course, it will be objected that Popes existed long before Leo III crowned Charlemagne Holy Roman Emperor, each one being the heir to St. Peter. This widespread misconception, however, is the result of a deliberate rewriting of history by the Roman Catholic Church in order to legitimise the role and status of the Papacy. I say this because, prior to Charlemagne’s recognition of Leo as head of the Church, there had been no overall leadership or hierarchical structure within the Church as a whole. Each diocese was essentially independent, its bishop chosen entirely by its congregation. This meant that no bishop had any more institutional authority over the wider Church than any other and that those who rose to eminence did so purely on the basis of their erudition, depth of theological understanding or holiness.

In the 4th century, for instance, the bishop who was accorded the greatest authority in the wider Church was Athanasius, Archbishop of Alexandria, who not only wrote an open letter to the Church each Easter, which was copied and distributed to every diocese in Europe, the Middle East and North Africa, but who used one such letter to nominate the books that were to be included in the New Testament. In the 5th century, this position of primacy then fell to Augustine, Bishop of Hippo, who not only hosted the synod which finally ratified Athanasius’ selection of the New Testament texts but was regarded as the greatest theologian of his generation, writing several books of his own, including ‘The City of God’, ‘On Christian Doctrine’, and ‘Confessions’, all of which are still read today. To suppose that either of these giants of the early Church was in any sense subordinate to contemporaries who just happened to be Bishops of Rome but made no impression on the Church beyond their own diocese is therefore simply absurd.

Yes, after the fall of the western Roman Empire, during what are now called the Dark Ages, when bubonic plague wiped out much of the population of Europe, it is true that the Bishops of Rome did take on more secular power and were rewarded by their congregations with the affectionate title of ‘Il Papa’. In a city whose population had fallen from over a million to just 10,000, however, this was largely due to the lack of anyone else with sufficient authority to fulfil this role and was entirely confined to Rome itself and the surrounding region, both northern and southern Italy having been conquered by a coalition of the Lombards and other northern tribes and reverted to paganism. In fact, Rome was almost completely isolated from the rest of the Christian world during most of the 7th and 8th centuries, making it quite impossible for any Bishop of Rome to have exercised authority over the wider Church even if that authority had existed. It was only when the Lombards were defeated by Charlemagne’s father, Pepin the Short, in fact, that Rome was finally reunited with the rest of Christendom, and it was only when Charlemagne and Leo hatched their master plan for world domination that any Bishop of Rome assumed the role of Christianity’s supreme pontiff.

There were, however, two major flaws in this plan. The first was that, initially, Leo was in no position to assert authority over the wider Church on his own. In fact, he needed Charlemagne to more or less do it for him. Because Charlemagne needed Leo to be recognised as head of the Church in order to legitimise his own position as Holy Roman Emperor, he was, of course, happy to do this, at least within his own domains. The problem was that, while his domains were extensive, covering all of France, most of Germany and parts of both Spain and Italy, they didn’t extend to all of Christendom. In particular, they didn’t extend to ‘The East’, where there was already a long established Roman Emperor based in Constantinople. What’s more, this emperor could trace his lineage back to the Emperor Constantine and didn’t therefore need a bishop to give him legitimacy. More to the point, he certainly didn’t want his own bishops being made answerable to a Bishop of Rome who was nothing more than the puppet of a rival Emperor.  

From the very moment the Roman Catholic Church and the Holy Roman Empire were simultaneously brought into being by Charlemagne and Leo’s act of mutual recognition, therefore, a rift between the Western and Eastern Churches was more or less inevitable. It didn’t happen immediately, of course, not least because it took some time for Leo and his successors to develop the Papacy into a fully functioning organisation with both the ability and confidence to assert its authority without the support of the Holy Roman Empire. Once it reached that point, however, there was only ever going to be one outcome.

This point of inevitable sundering was finally reached in 1054, when, in an attempt to exercise the authority over the eastern Church which the Papacy had always pretended it had, Pope Leo IX sent a papal nuncio to Constantinople, ostensibly to resolve two matters of doctrine which the Roman Catholic Church had introduced but to which the eastern bishops strongly objected. These concerned the nature of purgatory, which the eastern bishops regarded as far too punitive, and the canonisation of saints, which the Pope claimed to be his sole prerogative. While this made it seem as if the dispute was purely theological, in reality the demand that the eastern bishops accept doctrines imposed on them by the Church of Rome without even being consulted was nothing less than a demand that they recognise the Pope as the head of a universal Christian Church and submit to his authority. With the support of their Emperor, this was something they quite predictably declined to do, initiating what became known as the Great Schism between the Roman Catholic Church and the Eastern Orthodox Church. The schism was to have enormous consequences for Europe throughout the centuries that followed, the most significant of which was the failure of the Christian world to come together to defend Europe against Islamic encroachment, leading to the fall of Constantinople in 1453 and the subsequent Muslim invasion of the Balkans, creating fault lines in Europe which are still a source of friction and conflict today.

If the first major flaw in Charlemagne and Leo’s plan led to the Great Schism, the second major flaw had even more significant consequences and arose from the fact that it didn’t just take time for successive Popes to develop the Papacy into a fully functioning operation, it also took a great deal of money. A body of senior clerics had to be developed to help shape policy and doctrine, a secretariat had to be employed to copy documents and have them distributed to every diocese in Europe, courts had to be established to resolve disputes, hear appeals and punish offenders, and a cadre of ambassadors or papal nuncios had to be recruited, not just to keep subordinate bishops in line, but to represent the Church’s interests throughout Europe’s royal courts. It was a bit like setting up and running a multinational corporation, the costs of which continually expanded as the influence and ambition of the Church grew.

Not, of course, that Leo could have realistically foreseen this as a problem when he first embarked upon this course, not least because the Emperor Constantine had left the Lateran Palace to the Roman diocese in his will, which meant that, initially, the Papacy had ample accommodation and office space. What’s more, most of its daily running costs could be met by donations from wealthy parishioners and by rents and taxes levied on the Papal States, which had been gifted to the Church by Charlemagne for this purpose. As the Church continued to grow, however, the unwelcome truth would have gradually crept up on whoever ran the Papal Treasury that these revenues were largely fixed and did not, as a consequence, grow in line with costs. What it needed, therefore, was not just other forms of income, but other forms of income that were based, not on side-lines such as rents on properties, but on its core activities, the most central of which, of course, was spreading the gospel and guiding people towards salvation.

The good news was that the foundations for monetising this service, if one may call it that, had already been laid. As early as the 4th century, for instance, theologians such as Basil of Caesarea had begun encouraging priests to hear their parishioners’ confessions and grant absolution on the fulfilment of a suitable penance: a practice which was commended partly on the basis that it kept parishioners forever conscious of their sinfulness, but also because it gave priests the power to bestow that which had once been the sole preserve of Christ himself: forgiveness.

This power was then further enhanced when it began to be realised that absolution was most vital at or around the time of death, to which the Church consequently responded by developing the dual procedures of the Viaticum and Extreme Unction, jointly known as the Last Rites, which enabled the dying to meet their maker with their sins expunged, thereby ensuring that they would be allowed to enter heaven. The problem was, of course, that demanding money from a dying man or his family before a priest agreed to administer the Last Rites was not likely to go down well with most Christians and could not, in itself, therefore, solve the Church’s financial problems. A possible solution, however, became fairly obvious once people started to think about what actually happened to those of the faithful who were unfortunate enough to die without being given the Last Rites and whose sins therefore remained unexpunged. This was a question which had already been fairly persuasively answered by Augustine in his book ‘The City of God’, in which he suggested that heaven might have a kind of antechamber in which the unexpiated sins of those who had died unabsolved might be purged in some other way: specifically, through fire.

Importantly, Augustine’s answer to what he saw as a purely theological question was not entirely original to him: it had actually been discussed for at least three centuries before he took it up. His reputation, however, ensured that, when it was put forward again sometime during the 10th century, not only was it not rejected out of hand as being too dreadful to contemplate, but someone in the Lateran Palace recognised it as the perfect solution to all the Church’s financial difficulties. As, indeed, turned out to be the case. For as soon as the existence of purgatory was officially adopted as Church doctrine, people were so terrified of dying in a state of sin that they were not only prepared to confess to every possible sin they may have committed, so as not to leave any out, but were willing to make cash payments to the Church in expiation instead of undergoing one of the more traditional forms of penance.

That we are unable to say when, exactly, this occurred is, of course, slightly unsatisfactory, not least because it has led to a great deal of confusion on the subject. In researching this essay, for instance, I came across one article which suggested that it was still under discussion by the Church as late as the 12th century. We know that this cannot be correct, however, because we know that the punitive nature of purgatory was one of the reasons for the Great Schism in 1054. Not only must the doctrine have been introduced before then, therefore, but there is every reason to suspect that its introduction must also have occurred before 31st January 993, when Pope John XV canonised the first saint, a Prince-Bishop of the Holy Roman Empire called Ulrich of Augsburg.

Again, it will be objected that there were saints before 993, going all the way back to the apostles, in fact. While these saints may have lived before 993, however, they weren’t canonised until after 993. For unlike beatification, which generally happened spontaneously by popular acclaim within the local diocese of the person being beatified, formal canonisation, which was created specifically to allow saints to bypass purgatory and go straight to heaven, only became necessary once the Church’s doctrine on purgatory had, itself, been adopted, which we can therefore assume must have happened sometime before 993. Indeed, it was for this reason that the two issues of purgatory and canonisation were raised together by the eastern bishops in 1054. For if they were ever going to accept the Church’s doctrine on purgatory, they certainly didn’t want the Bishop of Rome to be the only person on earth with the power to circumvent it, almost certainly wanting their own churches to have some say in the matter.

If we do not know when, exactly, the existence of purgatory became official Church doctrine, what we do know is that, by the Fourth Lateran Council in 1215, the sale of ‘indulgences’, reducing the amount of time people had to spend in purgatory, had become what was probably the biggest extortion racket in history. In fact, so big had it become that, by then, the Church had already accepted the need to sub-contract the entire business out to professional agents called quaestores – better known as ‘pardoners’ – who went from diocese to diocese, parish to parish, terrorising people into parting with their money by describing the agonies they would suffer in purgatory if they did not pay up. The problem was that, by 1215, the Church had become so dependent on this source of income that all the Lateran Council could really do was regulate how the business operated, curbing some of the pardoners’ worst excesses by limiting how much they could charge for each category of sin and specifying what percentage of the proceeds should go to the local diocese and how much should be sent back to Rome.

The result was that, by the early 16th century, the Roman Catholic Church had not only become the wealthiest organisation in the world but, in some places, also the most detested. For the lavish lifestyles and conspicuous consumption of the Prince-Bishops of the Holy Roman Empire, in particular, were inevitably paid for by those who could least afford it, engendering a level of outrage which eventually drove one man, Martin Luther, to send a copy of his ‘Ninety-five Theses’ to Albrecht von Brandenburg, the Archbishop of Mainz. Albrecht had recently appointed the infamous pardoner, Johann Tetzel, to sell indulgences on behalf of his diocese, both to raise the money the diocese needed to pay its required contribution to the rebuilding of St. Peter's Basilica in Rome and to pay off the Archbishop’s own debts – an arrangement to which the Pope had actually agreed.

Apart from the shamelessness demonstrated by this blatant act of corruption, what Luther really objected to, however, was what he believed to be the Church’s implicit downplaying of faith. For by selling forgiveness, which Luther believed it was for God alone to grant, and making it dependent on acts of penance and donations to the Church, the Church was, he believed, undermining the simple act of surrender – of putting oneself entirely in God’s hands – which, for Luther, was at the very heart of Christian faith. What he advocated, instead, was a much simpler, more personal form of Christianity, in which the individual’s relationship to God was not dependent upon the intermediation of the Church. And it was to this end that he set about producing his own German translation of the bible, the New Testament portion of which was published in September 1522 and which, far more than his Ninety-Five Theses, is what really made him anathema to the Church. For it was precisely through its intermediation in the relationship between God and man, of course, that the Church gained its power and wealth.

Luther’s real crime, therefore, was not in questioning the Church’s authority, as is sometimes said, but in questioning its necessity: by telling people that they didn’t need it, that they could read the bible for themselves and make up their own minds as to its correct interpretation. And it was this that truly frightened, not just the Church, but everyone with a position of authority they wanted to keep. Indeed, it’s why the religious wars of the 17th century, most notably the Thirty Years War, from 1618 to 1648, and the English Civil War, from 1642 to 1651, were as much about politics as they were about religion. For they were about the sovereign rights and freedoms of the individual in the face of an authoritarian Church and State.

What Martin Luther really set in motion, therefore, was not a reformation but a revolution: one which had two main strands, the political and the intellectual, both of which stemmed from the act of reading the bible in one’s own language, which was both a political assertion of one’s right to form one’s own opinions and not be told what to believe and an intellectual commitment to thinking for oneself. More than that, it also sparked a broader cultural revolution. For as printed translations of the bible proliferated throughout Europe, they caused a dramatic increase in literacy, a skill which, once acquired, is not confined to the reading of just one book. Other ancient texts, from Aristotle to Euclid, were also translated into Europe’s modern languages, to be joined by the output of contemporary writers, from courtly poets and political pamphleteers to the pioneers of the emerging new sciences. This in turn prompted the most rapid period of scientific development in history, culminating in the foundation of the Royal Society in London in 1660, the motto of which, as I have mentioned before, is ‘Nullius in Verba’ or ‘by nobody else’s word’, which declared to the world that the work and findings of its members would be based solely on evidence and reason, not on ‘authority’.

This anti-authoritarian stance was then further reinforced by the development of a new political philosophy which rejected the old world order in which everyone had a preordained station in life above which one could not rise. Instead, philosophers like John Locke asserted the existence of a natural right or freedom to make of one’s life what one would. Importantly, this did not mean that one could do anything one liked, as is sometimes suggested. For in a civilized society, Locke argued, one’s own personal liberty had to be bound by a mutual respect for the rights and liberties of others. It also meant that one had to take responsibility for oneself economically. For if one is economically dependent on another, that other may reasonably feel entitled to exert some control over how one lives one’s life, especially with respect to the level of expenses one incurs. All of this, however – both the natural rights Locke championed and the responsibilities they entailed – appealed enormously to the growing middle class, which this new political philosophy had itself brought about and which was the driving force behind the many new manufacturing industries which emerged during the 18th and 19th centuries as a result of both the technological developments to which the new sciences had given rise and the hard work and entrepreneurial spirit of the new middle class itself.

These new manufacturing industries then gave a further boost to foreign trade, as merchants from industrialised Europe were able to sell advanced manufactured goods to less technologically advanced countries in exchange for the commodities and raw materials which were used in Europe’s factories to produce even more advanced manufactured goods. It also led to the expansion of competing trading empires which eventually found their way into almost every corner of the world, with the result that, by the middle of the 19th century, European civilization – or what, with the addition of the United States, we now call Western Civilization – more or less dominated the entire planet and could justifiably claim to be the greatest civilization in history.

2.    The Breaking of the West (Part 1): Economics

The question that such a level of dominance raises, of course, is ‘What went wrong?’ For while this may not be obvious to the politicians and bureaucrats in London, Brussels and Washington, who still cling to the increasingly outdated belief that their liberal world order runs the planet, to most people in our increasingly multipolar world it is fairly clear that ‘The West’ is in decline, and is so largely because we have abandoned the very values that made us so successful. Not only have we mostly given up on self-reliance as the price that must be paid for independence, but in much of the West we have actually embraced a culture of dependency. Even more baffling is the fact that we have not only stopped thinking for ourselves, believing almost anything those in positions of authority tell us, but have accepted a level of authoritarianism which, in many countries, including the UK, has more or less criminalised free speech and the airing of alternative views.

That we have a problem, not just in understanding this strange regression, but in even recognising it, is not only due to the fact that we are still living through it and cannot therefore see it as clearly as our historical past, but also because, unlike the logical progression that took us from Martin Luther’s rejection of Roman Catholic authoritarianism to the scientific discoveries of the 17th century, our regression has had two distinct phases with completely different causal origins.

What makes this even more confusing is the fact that our current rejection of self-reliance and independence actually has its roots in exactly the same melting pot of ideas as John Locke’s classical liberalism. For at almost the same time that Locke was arguing that, if one wanted to exercise personal freedom, one had to take responsibility for oneself economically, there were others who regarded such aspirations as no better than the avarice and greed of the ruling class that had previously oppressed them. In Germany, for instance, the Anabaptists believed that owning property was actually the source of almost every recognised sin, especially greed and envy, and so held all property in common, including, in some cases, women! During the English Civil War, the Diggers held much the same view, especially with respect to land, which they frequently seized from the rightful owners and farmed communally.

Of course, none of these cults or political movements lasted very long. Once the religious wars of the 17th century were over and order had been restored, property rights were once again enforced. After all, Oliver Cromwell, Lord Protector of the short-lived English Commonwealth, was himself a moderately wealthy landowner and had no interest in sharing his property with anyone. More to the point, it was successful middle-class businessmen, who, unlike the old aristocracy, had worked hard to amass their wealth, who now dominated much of civic life, thereby creating a new class division between the working and middle classes in which the political philosophy of people like the Levellers became embedded within the working-class identity. This then hardened over time such that, by the beginning of the 19th century, the glaring inequalities between the workers who produced the wealth and the businessmen who owned it had brought about a state of such moral outrage and enmity within the working class that it was only a matter of time before some national calamity, such as losing a world war, would trigger another revolution: not, this time, one that would set people free by teaching them to read and think for themselves, but one that would trap them in a prison of tyrannical dependency of precisely the kind that John Locke described.

Nor is it hard to see why this should be so. For as the Anabaptists and Diggers clearly understood, the most obvious solution to the problem of inequality is to bring all property into public ownership. By holding property in common, however, it is the collective rather than the individual that decides where resources are to be allocated: a decision to which the individual is simply obliged to submit. The problem, however, is that the power of a collective, whether it be an Anabaptist commune in 17th century Germany or a collective farm in 20th century Russia, is inevitably vested in actual people.

In the case of an Anabaptist commune, for instance, power almost always lay in the hands of the commune’s spiritual leader, a man whose stern judgment and implacable will very often made him something of a cult figure, whose followers obeyed him both out of fear of eternal damnation and in order to eat. Indeed, it was the cult-like nature of many Anabaptist sects which, in addition to their unorthodox views on property, caused them to be so persecuted throughout the 16th and 17th centuries, both by Roman Catholics and by other Protestants, particularly Lutherans. For, as in the case of cults today, people were frightened that their children would be drawn into them and thus lost to their families. And it was this that ultimately drove the Anabaptists out of most of Germany. For in order to escape both the ostracism and persecution their beliefs brought down upon them, they were more or less forced to emigrate, most of them going to America, where Anabaptist sects such as the Amish still survive today.

Not, of course, that the cult-like status of its leaders was a problem for the Soviet Union, not least because very few of the local party functionaries who came in contact with the general public had the kind of charisma that gives rise to this rather unusual effect. Indeed, it was precisely this that was the real source of the Soviet Union’s most serious problem. For in an economy in which all property was held in common, it raised the question as to how local managers were to get higher productivity out of workers who could neither be incentivised by the promise of greater personal gain nor inspired to make greater sacrifices by the power of their local manager’s personality. Those in charge could always threaten them, of course, and even have them sent to a Gulag – the secular equivalent of eternal damnation – but all this usually did was teach people to keep their heads down and avoid being noticed. Indeed, it actually deterred people from taking responsibility. For who would want to be a factory manager in a system in which the manager had responsibility for achieving a set quota but none of the commercial leverage required to see that components of sufficient quality actually reached his factory in time to meet that quota?

The result was that, throughout much of the history of the Soviet Union, productivity actually fell, leading to shortages of just about everything. Indeed, I have actually seen this first hand. For while working in Finland during the 1970s, my girlfriend and I made frequent visits to St. Petersburg (then still Leningrad), where we were constantly struck by the sight of people queuing outside shops, making us wonder what exactly they were queuing for. One day my girlfriend, who spoke a little Russian, consequently decided to go up to someone in the queue and ask them, and was told that they were queuing for oranges, a delivery of which had recently arrived in the city for the first time in six months. On another occasion we actually discovered that most of the people in the queue didn’t even know what they were queuing for; they just saw a queue and joined it on the off chance that they might be able to buy some of whatever was on sale before it was all sold out.

Another oddity I remember from our first trip to St. Petersburg is the fact that none of the taxis had proper windscreen wiper blades, which, given the fact that it was snowing most of the time, was more than a little disconcerting. For most of the drivers had been forced to improvise windscreen wiper blades out of bits of cardboard, which didn’t work very well to say the least.

The point, however, is that this is the kind of Alice-in-Wonderland world one creates if, instead of allowing thousands of independent businesses to compete for customers by striving to be the best at meeting those customers’ needs, one attempts to implement a centrally planned economy in which no one has a personal stake. For not only is an entire economy just too complicated to be controlled by a group of central planners, but its operation needs the constant monitoring and adjustment of people who have a vested interest in it and therefore care about making it work. Without this kind of attention, what one gets is the kind of chaos in which there are occasional gluts of goods that no one wants but chronic shortages of everything else: a state of affairs which makes the development of a black market – one from which its operators stand to make a gain and which therefore functions properly – more or less inevitable.

In fact, like many tourists visiting the Soviet Union during the 1970s, my girlfriend and I were among the many beneficiaries of this illegal trade, visiting St. Petersburg not only to take in the sights of one of the most beautiful cities in the world, but to enjoy long weekends consuming large amounts of vodka, caviar and excellent Georgian champagne, all paid for by selling stuff on the black market.

This also brought with it a certain frisson of fear and excitement. The first time we participated in this great Finnish tradition – now sadly defunct, in fact – I had visions of us being arrested by the KGB and thrown into some dark, damp cellar, where we would be forced to listen to the tortured screams of our fellow inmates. By our second trip, however, I started to suspect that, although the Soviet authorities put on a great show of cracking down on the black market, searching our luggage at the border and sternly impressing on my girlfriend the fact that, while in the Soviet Union, it was illegal for her to sell any of the dozen or so pairs of brand new lacy knickers she had brought with her, in reality they more or less turned a blind eye to the whole business.

Nor was this simply because so many people were in on it. Indeed, they had to be for it to be so well organised. Within minutes of checking into our hotel, for instance, a chamber maid would knock on our door to ask us what we had to sell. As soon as we took a seat in the hotel’s bar, a waiter would come up to us, not just to take our drinks order, but to tell us the black market exchange rate for whatever hard currency we had, which was always substantially higher than the official exchange rate. The operation was so slick, in fact, that it was fairly obvious that the young men and women who worked the hotels were not working alone. They were buying so much merchandise and currency that they had to be part of a larger organisation, the principal activity of which, of course, was selling all this stuff to wealthy Russians who could afford to buy it at what one imagines was a fairly healthy profit. This therefore suggested that the whole operation was almost certainly run by an organised criminal gang, which meant that the police and other Soviet officials also had to be getting their cut.

The more I thought about it, however, the more I began to suspect that a few criminal gangs and one or two corrupt local officials weren’t the only ones benefitting from this trade and that the state itself was also, very probably, a knowing beneficiary. For what the Soviet Union was effectively doing was importing western goods of a type or quality which it did not manufacture itself, and paying for them, not in western currencies, but in rubles, which the recipients then spent on vodka, caviar and Georgian champagne, thereby boosting the economy in three ways. Firstly, it increased the supply of manufactured goods which Soviet consumers could buy without the state shipping money abroad. Secondly, far from expending hard currency on these goods, the black market trade in hard currencies actually increased their availability to the Soviet Union itself. For while these hard currencies initially went to the criminal gangs, they then had to be laundered in some way, which meant that, one way or another, they entered the Soviet financial system, where they boosted the Soviet Union’s foreign reserves.

Thirdly, and even more importantly, the fact that Finnish tourists were getting long weekend holidays for little more than the coach fare meant that, every Friday afternoon, dozens of Finnish coaches disembarked their passengers onto the forecourts of St. Petersburg’s bustling hotels, greatly boosting the city’s tourist, catering and hospitality industries, from which hundreds, if not thousands, of people benefitted. What’s more, it was fairly obvious that the Soviet Union also gained politically from this. For by giving Soviet consumers access to goods they would otherwise have been denied, it was effectively using the black market to make up for the deficiencies in its own centrally planned economy. In fact, without the black market filling people’s pockets and supplying them with luxury goods they couldn’t buy in their own shops, it is highly likely that there would have been a lot more political disaffection than was actually the case and that a combination of shortages and political dissent might even have brought the system down, as, of course, it eventually did.

The problem we had in the West, however, is that we didn’t understand any of this. We thought that the Soviet Union collapsed because of its tyrannical government, endemic corruption and unrestrained organised crime. We didn’t realise that all three of these problems were merely by-products of a faulty economic system. When the Soviet economy finally collapsed, therefore, we didn’t learn anything from it. Even after the Soviet Union broke up, in fact, there were Labour politicians in Britain who still called for more parts of the British economy to be taken into public ownership. Worse still, huge swathes of the population actually agreed with them, partly because many people still harboured some vestige of the Puritan belief that private wealth is immoral, but also, one suspects, for fear of the alternative. For taking responsibility for oneself and standing on one’s own two feet can be scary. Starting one’s own business, for instance, knowing that there will be no one there to catch one if it all goes wrong, requires a great deal of courage. It is only human, therefore, to want there to be someone who will ultimately look out for us. Beneath this perfectly human desire, however, there is a hidden snare. For in an age in which few of us now believe in God and in which families have become far more fragmented, the only institution left to catch us when we fall is, of course, the state.

In the aftermath of the second world war, when people throughout Europe were thinking about what kind of world they wanted to return to, most countries therefore opted for what was effectively a compromise between classical liberalism and socialism which we labelled the ‘mixed economy’, in which most businesses, especially small businesses, remained in private ownership, while most essential services, including health and education, along with certain strategic industries, such as electricity generation and the railways, were nationalised. The problem with this, however, was not just that these state run services and industries eventually became as inefficient, wasteful and dysfunctional as their Soviet counterparts but that, because public services were free at the point of use, people were more than happy to avail themselves of them, with the result that the expansion and improvement of these services was the most common and popular electoral promise made by politicians of nearly every political affiliation.

The problem, of course, is that these services are not actually free; they are paid for out of taxation, which, whether the taxes are levied on income or expenditure, takes money out of people’s pockets, leaving them with less to spend elsewhere. This then has two main effects. The first and most obvious is that workers demand higher wages at a time when increased taxation is already reducing demand. Without being able to increase sales, this means that, in order to pay these high wages, employers have to increase their prices, thereby increasing inflation.

Of course, it will be argued that, if taxation is used to pay public sector workers, then it doesn’t actually take money out of the economy and should not therefore reduce overall demand. Not only does it most definitely reduce the demand generated by those being taxed, however – who have to be in the majority in order to make the public sector supportable – but most public sector workers do not produce anything which anyone actually buys. That is to say that, when governments redirect resources away from commercial production and towards publicly funded services, their economies not only produce more of these services than would be in demand if they were not free at the point of use but, without a compensatory increase in productivity, the supply of purchasable goods is reduced, thereby increasing their cost and hence reducing demand.

This then leads to the second negative effect of raising taxes to pay for public services. For by increasing the cost of producing goods and services in other parts of the economy, domestic industries find it increasingly difficult to compete with foreign imports from countries which spend less on public services. This then leads to industrial decline, making it no accident, for instance, that the West has lost most of its manufacturing industry to China and other countries in the East.

Western governments, of course, have tried to cover up this industrial decline by pointing to the creation of new jobs in the service sector. Not only are many of these new service jobs actually in the public sector, however, but many that are not are in some way dependent on government. Take, for instance, the huge number of management consultants the government constantly employs to improve the performance of the NHS. These may work for private firms but the taxpayer still pays for them.

This, however, has another unfortunate knock-on effect. For while industrial and economic decline in most western countries has effectively reduced the tax base, the increase in public services jobs and jobs dependent on government has further increased public expenditure. In 2023-24, for instance, the UK government actually accounted for 44.7% of all expenditure in Britain. In France, this year, the figure is expected to rise to 57%, the highest in the world. The real problem, however, is that there is an upper limit to the amount any government can tax an economy before raising taxation rates ceases to increase tax revenues. This is because raising taxation rates, whether it be on income or expenditure, changes people’s behaviour. If one raises income tax above a certain level, for instance, people do less overtime, judging that the financial rewards of doing extra work are not commensurate with the loss of free time. If, alternatively, the government increases sales tax, then people either buy things on credit and get into debt, which causes them to reduce expenditure even more sharply later on, or they adjust their expenditure to what they can afford, either by doing without certain items altogether or by finding cheaper alternatives.
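To make this behavioural mechanism concrete, here is a minimal sketch in Python of a stylised ‘tax rate versus revenue’ curve. The functional form and the elasticity value are invented purely for illustration; they are not drawn from any real tax system.

```python
# Toy illustration of the claim that raising tax rates eventually stops
# raising revenue, because the taxed activity shrinks in response.
# The functional form and elasticity are hypothetical, for illustration only.

def taxable_base(rate: float, base_at_zero: float = 100.0, elasticity: float = 2.0) -> float:
    """Taxable activity shrinks as the rate rises (people work and spend less)."""
    return base_at_zero * (1.0 - rate) ** elasticity

def revenue(rate: float) -> float:
    """Revenue = rate x remaining taxable base."""
    return rate * taxable_base(rate)

for pct in range(0, 101, 10):
    r = pct / 100.0
    print(f"rate {pct:3d}%  revenue {revenue(r):6.1f}")
# Revenue rises at first, peaks (here at a rate of 1/(1+elasticity), i.e. ~33%),
# then falls back towards zero as the taxable base collapses.
```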

No matter how governments try to increase revenues by raising taxes, in fact, eventually they hit a ceiling, which, in most countries, is around 38% of GDP. If the governments of those countries are spending anywhere between 44.7% and 57% of GDP, therefore, they can only do so by borrowing the difference, which most western countries have been doing for the last twenty years or so, with the result that the British government’s debt currently stands at around £3 trillion or 97% of GDP, while the US federal government’s debt stands at a staggering $38 trillion or 120% of GDP. Worse still, the GDP figures of most countries are themselves greatly inflated by the fact that they actually include government expenditure. If one measured GDP purely in terms of the income generated by productive industries, government debt-to-GDP ratios would be much higher.
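As a rough back-of-the-envelope illustration of the adjustment suggested in that last sentence, the following sketch recomputes the UK ratio using only the figures quoted above (the £3 trillion of debt, the 97% ratio and the 44.7% government spending share); it is an illustrative calculation, not an official statistic.

```python
# Back-of-the-envelope check using the figures quoted in the text:
# UK government debt ~ GBP 3tn, ~97% of GDP; government spending ~44.7% of GDP.
# The 'adjusted' ratio below follows the text's suggestion of measuring debt
# against the non-government part of GDP only.

uk_debt_tn = 3.0             # GBP trillions (figure quoted in the text)
uk_debt_to_gdp = 0.97        # ratio quoted in the text
uk_gov_share_of_gdp = 0.447  # 2023-24 figure quoted in the text

uk_gdp_tn = uk_debt_tn / uk_debt_to_gdp                 # implied GDP ~ GBP 3.1tn
non_gov_gdp_tn = uk_gdp_tn * (1 - uk_gov_share_of_gdp)  # non-government output
adjusted_ratio = uk_debt_tn / non_gov_gdp_tn            # ~1.75, i.e. ~175%

print(f"Implied UK GDP:            ~GBP {uk_gdp_tn:.2f}tn")
print(f"Non-government share only: ~GBP {non_gov_gdp_tn:.2f}tn")
print(f"Debt as % of that share:   ~{adjusted_ratio:.0%}")
```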

Needless to say this is unsustainable. At present, however, there is very little sign that any government in the West is seriously trying to cut expenditure, the one part of the fiscal equation over which they have any real control. As increasing tax rates has already become counterproductive in most western countries, this means that they are simply forced to go on borrowing. By some estimates, for instance, British government debt is increasing by around £5,000 per second, while annual interest on the accumulated debt is now over £100 billion. What’s more, bond markets throughout the West are well aware of the dangers inherent in this continued trend, leading to bond prices falling almost everywhere, causing yields to rise and forcing governments to issue new bonds with even higher yields in order to sell them. It is only a matter of time, therefore, before falling demand for government bonds reaches a point at which at least one western government will find itself unable to sell enough new bonds to cover the cost of redeeming those falling due and will therefore be forced to default, unleashing a wave of panic across bond markets which will cause other western governments to fall like dominoes.
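Two quick sanity checks on the figures in this paragraph, offered as a sketch only: the first converts the quoted £5,000-per-second figure into an annual total, the second uses a purely hypothetical bond to show why falling prices mean rising yields.

```python
# 1) The quoted ~GBP 5,000/second of new borrowing, expressed per year.
per_second = 5_000
seconds_per_year = 60 * 60 * 24 * 365
print(f"~GBP {per_second * seconds_per_year / 1e9:.0f}bn of new debt per year")  # ~158bn

# 2) Why falling bond prices mean rising yields (hypothetical bond, not real data):
coupon = 4.0  # annual coupon on a notional face value of 100
for price in (100.0, 95.0, 90.0):
    current_yield = coupon / price
    print(f"price {price:5.1f} -> current yield {current_yield:.2%}")
# As the price falls from 100 to 90, the current yield rises from 4.0% to ~4.4%,
# so newly issued bonds must offer more in order to find buyers at par.
```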

Of course, it may be thought that this won’t happen because the IMF will step in to bail out any government on the brink of default. The problem with this argument, however, is that the IMF is largely funded by those very same western governments which are themselves close to bankruptcy. More to the point, the scale of the debt in many cases is well beyond that which can ever be repaid, reducing the efficacy of any bailout to that of a sticking plaster on a wound that is simply too large to heal. All of which rather raises the question as to how we didn’t see this coming. After all, there is nothing in the above explanation of how we got here that is particularly difficult to understand or that wasn’t entirely predictable. So why didn’t we do anything about it while there was still time? How could we have been so stupid as to just let it happen? For mass stupidity, I think, is the only possible explanation.

3.    The Breaking of the West (Part 2): Stupidity

As I have explained elsewhere, there are, in fact, many different types of stupidity, probably the most common of which results from the fact that, for much of the time, we just don’t think. Based on some common misconception or false assumption, which, if we thought about it for even a moment, we would quite clearly see was false, we just blunder ahead and are then surprised when the outcome is not quite what we intended.  

Another fairly common form of stupidity is based on wishful thinking, which is very often combined with self-deception. We want so desperately for something to be the case that, even though it may be intrinsically unlikely, we place all our hopes in it and are often quite distraught when the hoped for outcome does not materialise, not least because we realise how stupid we were to bank so much on something that was so extremely improbable.

Both of these examples of stupidity, however, are largely characteristic of individuals rather than groups. It may, of course, be argued that groups can also suffer from wishful thinking. After all, governments throughout the West have spent the last twenty-odd years telling themselves that they can go on indefinitely spending money they don’t have without it leading to financial collapse. Governments, however, including both politicians and bureaucrats, tend to be closed groups which defend themselves by preventing group members from asking awkward questions. As such, the members submit to a groupthink which they deceive themselves into believing is both rational and sound. If a group is large and open enough, however, it is far more difficult to completely suppress critical thinking and therefore prevent the members from realising that they are actually fooling themselves.

There is, however, one form of stupidity which, even though it is most commonly induced by an individual or group of individuals, is essentially collective, being the stupidity of the herd. Typically, it will start when a population is repeatedly told something that is not true by those they not only regard as being in authority but do not regard as having any malign intent towards them. This then leads most open-minded people to at least entertain the possibility that what they are being told might be true, with the result that, if it is repeated often enough, they eventually come to believe it. Once this happens, moreover, it then spreads like a contagion. For we falsely believe that, if everybody else believes something, then it must be true. This then leads us to believe it and so spread the contagion further.
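The mechanism described here (repeated assertion by a trusted authority pushing enough people over a threshold, after which belief spreads simply because others already believe) can be illustrated with a minimal toy simulation along the lines of a standard threshold-contagion model. All the numbers below (population size, thresholds, the authority’s reach) are invented for illustration; this is a sketch of the mechanism, not a model of any real population.

```python
import random

# Toy model of herd belief: each agent adopts the belief either after being
# reached by repeated 'authority' messaging, or once enough of the wider
# population already believes. All parameters are invented for illustration.

random.seed(1)
N = 1000
thresholds = [random.uniform(0.05, 0.5) for _ in range(N)]  # peer-pressure thresholds
believes = [False] * N
authority_reach = 0.10  # fraction directly persuaded by repetition each round

for week in range(1, 21):
    share = sum(believes) / N
    for i in range(N):
        if believes[i]:
            continue
        if random.random() < authority_reach:  # persuaded by repeated messaging
            believes[i] = True
        elif share >= thresholds[i]:           # persuaded because others believe
            believes[i] = True
    print(f"week {week:2d}: {sum(believes)/N:.0%} believe")
# Belief creeps up slowly at first, then snowballs once the share of believers
# crosses most agents' thresholds.
```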

A good example of this is the mediaeval belief in purgatory. Sometime during the 10th century, those in positions of authority in the Lateran Palace decided to adopt Augustine’s terrifying vision as official Church doctrine and did whatever was necessary to disseminate and enforce a belief in purgatory’s existence. Once the population had attained the required threshold of belief, however, the Church didn’t have to do anything more to convince people that purgatory was real: everyone now believed it because everyone else did.

Of course, you may say that this kind of herd-like behaviour no longer applies to us. But human nature hasn’t changed over the last thousand years; we are just as gullible today as we were then. A good example is our collective belief that human beings are causing the earth to warm by burning fossil fuels which releases carbon dioxide into the atmosphere. ‘Ah’, you say, ‘but that is true!’ The only reason you believe it to be true, however, is because everybody else does. What’s more, I know with absolute certainty that this is the only reason you believe it. Because I also know that you have no independent evidence to support this belief. And how do I know this? Because no such evidence exists.

Yes, carbon dioxide is a greenhouse gas which absorbs infrared radiation emitted by the sun-warmed surface of the earth, thereby warming the atmosphere. But it is a very minor greenhouse gas, accounting for only a very small percentage of the total atmospheric warming due to greenhouse gases. More than 90% of that warming, in fact, is caused by water vapour, which is thirty times more abundant in the atmosphere than carbon dioxide and contributes more than thirty times as much to the overall warming effect. What’s more, most of the wavelengths of infrared radiation absorbed by carbon dioxide at 2.7 and 4.3 microns are also absorbed by water vapour. In fact, all the infrared radiation emitted by the earth’s surface at these wavelengths is already absorbed by a combination of water vapour and carbon dioxide, which clearly means that, at these wavelengths, adding more carbon dioxide to the atmosphere cannot lead to the absorption of any more radiation.

The only wavelength at which carbon dioxide uniquely absorbs infrared radiation, in fact, is 15 microns. This, however, is a very long wavelength, which means that the objects whose thermal emission peaks there are not actually very warm. In fact, they wouldn’t show up on most night vision goggles. For an object’s thermal emission to peak at 15 microns, it has to be at around -80°C.
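Reading this as a claim about the temperature at which an object’s thermal emission peaks at 15 microns, it can be checked against Wien’s displacement law, a standard result applied here only as a rough check:

\[
T \;=\; \frac{b}{\lambda_{\max}} \;=\; \frac{2898\ \mu\mathrm{m}\cdot\mathrm{K}}{15\ \mu\mathrm{m}} \;\approx\; 193\ \mathrm{K} \;\approx\; -80\,^{\circ}\mathrm{C}
\]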

Admittedly, molecules of nitrogen and oxygen at this temperature are to be found in the upper atmosphere. But there are two reasons why this should not worry us. The first is that carbon dioxide is 50% heavier than air and is therefore concentrated closer to the ground, its concentration decreasing by ten parts per million for every thousand-metre increase in altitude. This means that the only carbon dioxide to be found above 40,000 metres is that which is produced there by cosmic radiation striking the nuclei of nitrogen atoms, knocking out single protons and turning what were nitrogen atoms into carbon atoms, which then bond with oxygen atoms to form carbon dioxide. More to the point, the amount of energy stored in objects at such low temperatures is so small that it could not appreciably warm the earth’s atmosphere no matter how much more carbon dioxide was added to the atmosphere to absorb it.

Not, of course, that most people are aware of any of this. All they know is what the ‘experts’ tell them: that human emissions of carbon dioxide cause global warming. Not really knowing anything about climate science themselves, moreover, they have no basis upon which to suspect that these experts might be lying to them. After all, why would they?

The answer to this question, however, is exactly the same as the answer to the question as to why the mediaeval Church told people that there was this place called purgatory in which they would have to spend years, decades or possibly even centuries in a constant state of torment unless they handed over substantial quantities of their hard-earned cash. The only difference between this mediaeval extortion racket and today’s climate change scam is that, during the 1980s, the institution which most desperately needed a new source of income was NASA, which, after the termination of the Apollo programme and the disastrous failure of the space shuttle, needed to repurpose itself in order to secure its continued funding. After more than a decade in the wilderness, it finally did this in 1989, when James Hansen, Director of NASA’s Goddard Institute for Space Studies, convinced Congress that, unless it continued giving NASA billions of dollars each year to monitor the situation, by the end of the century much of America’s Atlantic coastline, including Washington DC, would be under water due to anthropogenic global warming (AGW).

The real question we need to ask, therefore, is not why climate scientists in the 1980s should have lied to us about the dangers of global warming – I think that is fairly obvious – but why, if there is as little foundation to the climate scare as I contend there is, other scientists should have so meekly gone along with it. What happened to ‘Nullius in Verba’? For when it comes to fields of science not their own, it would appear that most scientists today do just accept the word of those they regard as experts in those other fields.

One of the main reasons for this, of course, is that science has changed considerably over the last 365 years, not least in the way it has become ever more specialised. When the Royal Society was founded in 1660, most scientists, in fact, were generalists, studying whatever piqued their curiosity in a way that would be more or less impossible in a modern university or research institute. Robert Hooke’s initial interest in optics, for instance, enabled him to develop a microscope, which he then used to discover and subsequently study microscopic lifeforms, effectively creating the field of science we now call microbiology. Not only did his lack of any disciplinary straitjacket thus greatly accelerate the growth of science in the 17th century, but it also meant that most Royal Society members, like Robert Hooke, felt themselves both able and obliged to critically address the work of other scientists, whatever their field of study, something which most scientists today would find very difficult to do.

This is because specialisation also causes fragmentation, with each specialism developing its own concepts and terminology in a way that can make it more or less impenetrable to scientists in other fields. How many microbiologists today, for instance, would even be able to read a paper on laser optics, let alone offer their views on it? Not only does this lead to a situation in which most scientists today do not even try to communicate their work to scientists in other fields, however, but it also leads to the creation of closed worlds in which those who feel they have proprietary rights over a particular discipline use their established positions within it to influence both funding bodies and publishers to see off interlopers who might take their science in a different direction.

Indeed, this is very similar to the closed groups comprising most governments which I described earlier, where membership depends on submission to a groupthink which effectively prevents the members from questioning the group’s principal beliefs. The main difference is that, in most fields of science, the only negative consequence of this is that it tends to make the field moribund, in that no new ideas are permitted. Because most scientific research is publicly funded, however, particularly in Britain, some fields of science have a political dimension and an influence outside of themselves, which, if they have become closed, can effectively mean that the outside world becomes closed as well.

Again, climate science provides the perfect example of this, not least because, for some reason, the idea that human beings might constitute a significant threat to the planet resonates very strongly with the general public, chiming closely with the widespread belief that we are a major source of pollution from which the earth consequently needs to be saved. As a result, governments did not need a lot of convincing before they started funding research into climate change on a major scale, which not only increased the number of scientists with a vested interest in ensuring that the climate change bandwagon continued, but made it more or less impossible for any scientist who questioned the AGW theory to obtain any form of government funding.

Some organisations such as the BBC even banned ‘climate change deniers’ from appearing on their programmes, which meant that audiences only ever heard one side of the debate, thus further reinforcing most people’s belief that the threat from climate change was both real and serious. The strongest boost to this belief, however, came when governments started adopting ‘Net Zero’ policies. For the mere fact that they were prepared to go to so much trouble and expense to replace perfectly functional coal-fired power stations with intrinsically unreliable wind and solar farms clearly implied that the governments in question thought such steps were necessary. After all, governments would have been insane to give up low-cost fossil fuels in favour of highly subsidised renewables if the scientific basis of the AGW theory had not been incontestable. What most people failed to realise, however, is that government ministers not only have no more knowledge of climate science than we do, relying on scientific advisers who are paid to tell them exactly what they want to hear, but that, just like the rest of us, their main reason for believing in climate change is that everyone else does.

What we have created, in fact, is a classic vicious circle in which reciprocal feedback loops constantly reinforce each other. This pattern of reciprocal reinforcement, however, is not just to be found in such classic examples as the mediaeval belief in purgatory or our modern belief in climate change, but in almost every instance in which erroneous beliefs and their herd-like adoption infect entire populations. Most significantly, it is to be found in the mass stupidity which has destroyed the West’s economy.

The first loop in this particular vicious circle starts, of course, with the natural human desire to access essential public services, free at the point of use, whenever we need them and to be provided with an adequate welfare safety net should we run into difficulties. The problem is that, while voters consistently say that this is what they want, they don’t like paying the taxes required to support such a system. This therefore places governments in something of a dilemma. For the optimum solution, of course, would be to try to find a balance between these two requirements by providing the best possible public services one can afford while keeping taxes as low as possible. The trouble with this kind of compromise, however, is that it usually pleases no one. Prioritising their desire for re-election above all other considerations, therefore, governments throughout the West have increasingly resorted to hiring obliging economic advisors to show them how to effectively square the circle of improving and expanding public services without raising taxes.

This, of course, is impossible: one simply cannot have high quality public services, free at the point of use, without raising sufficient taxes to pay for them. Given that there is a limit to how high one can raise taxes before it becomes counterproductive, this also means, therefore, that there is a limit to how much one can expand and improve public services. No paid economic advisor is ever going to tell their finance minister this, however, for the simple reason that, if they did, they wouldn’t remain a paid economic advisor for very long. So, instead, they advise their ministers to instruct the governors of their only nominally independent central banks to lower interest rates in order to make borrowing cheaper for both the government and consumers, the rationale being that this will stimulate the economy, which, in itself, has two effects. The first is that it increases GDP, which means that increased government borrowing will not result in an increased debt-to-GDP ratio. The second is that a higher GDP not only results in higher tax revenues without the government having to raise taxation rates, but allows both governments and consumers to then pay off their debts out of the increased wealth created.

The only problem with this is that, like the AGW theory, Modern Monetary Theory (MMT), as it is called, is just nonsense, only gaining what little credibility it has by drawing a false analogy between government borrowing and business borrowing, where businesses borrow money to invest in new plant and equipment in order to increase productive capacity and hence revenues, out of which they will then be able to repay their loans. Neither governments nor consumers, however, borrow money to invest in productive capacity from which future revenues flow; they both largely borrow money for current consumption. Yes, there is a resultant increase in GDP. But most of this will be accounted for by increased public expenditure and retail sales. What’s more, most western countries are currently running a trade deficit, especially in manufactured goods, which means that most of the increased amount of money spent on retail goods is spent on imports and ends up going abroad rather than to local businesses. Indeed, apart from increasing government and consumer debt, all MMT does is increase the balance of payments deficit while papering over the cracks in a systemically faulty economy.
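The disagreement between the advisers’ story and the argument above can be made concrete with a toy calculation: a sketch under invented assumptions, not a forecast. It starts from the roughly 97% debt-to-GDP ratio quoted earlier and compares what happens to that ratio when an extra round of borrowing feeds through to GDP strongly (the optimistic, investment-like assumption) versus weakly (consumption plus import leakage, as argued above).

```python
# Toy comparison: does borrowing an extra 5% of GDP raise or lower the
# debt-to-GDP ratio? That depends entirely on how much of the borrowing shows
# up as extra GDP (the 'multiplier'). The starting ratio is taken from the text
# (~97%); the borrowing size and the two multipliers are invented for illustration.

debt_ratio = 0.97  # debt as a share of GDP (figure quoted in the text)
borrowing = 0.05   # extra borrowing, as a share of current GDP (assumption)

for label, multiplier in [("optimistic (investment-like)", 1.6),
                          ("pessimistic (consumption + import leakage)", 0.6)]:
    new_debt = debt_ratio + borrowing
    new_gdp = 1.0 + multiplier * borrowing
    print(f"{label:45s} new ratio: {new_debt / new_gdp:.1%}")
# With a high multiplier the ratio edges down (~94%); with a low one it rises
# (~99%), which is the outcome the paragraph above argues is the realistic case.
```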

So why have so many western governments gone down this path? The answer, however, is simple. For most government ministers don’t know any more about economics than they know about climate science; they simply accept what their expert advisors tell them, especially when their expert advisors tell them what they want to hear, which, of course, they always do.

The more important question, therefore, is why we go along with it. The answer to this question, however, is also very simple. For we no longer understand economics the way we did when people learned about economics, not from economics textbooks, but from hard earned experience, which taught us that we can’t spend what we don’t have and that getting into debt only makes the situation worse. The problem was that we thought we could put an end to this fear-laden economic reality by putting ourselves in the hands of those who told us they had a magical solution to the problem, but whom we now discover are just self-serving charlatans who have no idea what they are doing and, like mediaeval pardoners, don’t actually give a damn about us.
