Thursday 21 July 2022

Rebuilding The World After The Next Crash

1.    The Pernicious Consequences of Good Intentions

All government tends towards authoritarianism even when this is not what is intended and when the intentions of those in government are essentially benign. Indeed, it is the good intentions of those in office which, more than anything else, foster this tendency. For while one should not underestimate the number of those who seek power purely for their own ends, it would be the height of cynicism to suppose that most people who go into politics do so for anything other than entirely laudable reasons, the most common of which, of course, is to change something about the world they think is wrong. The problem with changing the world in government, however, is that it nearly always entails enacting new laws, which, in turn, require administration and enforcement, thereby expanding the functions and institutions of government in ways which mean they seldom if ever shrink again. Indeed, it is a very rare government that actually gets smaller. For in our imperfect world, there are always more iniquities and injustices to be rectified, more people who need government help, and more changes which therefore need to be made.

Building a better world is thus an essentially endless task, for which one suspects that most politicians are secretly grateful. For as long as they remain in office, not only does it give them indefinite employment, but it almost certainly endows them with a sense of purpose which they would be hard pressed to find in any other walk of life. For the rest of us, however, providing politicians with the opportunity to do something worthwhile with their lives comes at a very high price, with at least three extremely pernicious consequences.

The first of these is the curtailment of our liberty. For nearly every law and regulation that is put in place either stops someone from doing something they want to do or requires them to do something they don’t. Sometimes, of course, this is perfectly justified. If someone is doing something that is harming someone else, or which might potentially harm someone else, then it is only right and proper that they be required to desist. Other times, however, the only person who might possibly be harmed by allowing someone to do what they want to do (riding a bicycle without a helmet, for instance, or driving a car without a seatbelt) is that person themselves, the justification for the new law or regulation therefore being that it is for our own good, which it very often is. In fact, it’s why we generally go along with it. However, it is also why many such rules and regulations are wholly unnecessary and redundant, in that they enjoin us to behave in ways we see as entirely reasonable and in which we would have behaved anyway.

In fact, viewed from this perspective, many such rules and regulations can also be seen as slightly insulting. For they seem to imply that without such regulatory guidelines, we would all behave irresponsibly. And although there is always a small minority of people who, with or without such external guidance, tend to act foolishly, to apply rules which can only be justified in the case of the few to everyone is not only to treat everyone as if they cannot be trusted to exercise their own judgement, but is to deny them the right to do so and treat them as children.

I say this because whether or not someone has the capacity and hence the right to make their own decisions is one of the criteria we apply in deciding whether someone is an adult or a child. It is even recognised as such in most courts of law. To deny someone this right, therefore, is to deny them one of our most fundamental rights as autonomous and therefore sovereign human beings. Worse still, the denial of this right has, in itself, a pernicious effect. For in many areas of our lives, our ability to make sound judgements is at least partly the result of experience, which, in turn, very often comes from either making errors of judgement ourselves or witnessing others do so. By imposing rules upon ourselves which limit the mistakes we are able to make, we may therefore be retarding our ability to make just the kind of judgements that would render such rules redundant.

Of course, it may be argued that imposing such rules upon ourselves may still be in our own best interests. After all, many of the mistakes we make have serious consequences, both for ourselves and others. If they didn’t, we wouldn’t regard them as mistakes and wouldn’t learn anything from them. Indeed, it is the negative consequences of our poor judgements that lead us to make better judgements in future. What this means, however, is that while the practice of imposing restrictive rules upon ourselves to help us avoid the negative consequences of our poor judgements may seem like a good idea, it may actually do more harm than good.

More to the point, reducing our ability to make mistakes does not mean that all such mistakes are eliminated from our lives. For no matter how many rules we put in place, unless we want to hand all our decision-making over to some omniscient being who always gets it right, poor decisions will still be made. Indeed, it’s why our well-intentioned politicians are not content with simply passing laws to protect us from ourselves but also try to mitigate some of the negative consequences of the less than optimal choices they know we are still going to make by providing a safety net of help and support for those who, whether from misfortune or errors of judgement, still find themselves in difficulty.

In as far as this is intended to eliminate moral hazard, however, this too is fraught with pitfalls, not least because if the recipient of the help is still to learn from his mistake, he must still feel as if he’s been exposed to some sort of risk. Ideally, in fact, he should feel relief that the negative consequences of his poor decisions have not turned out to be as bad as they might have been. This would not be the case, however, if the provider of the help were under an obligation to provide it. For under these circumstances, the recipient may well feel that he is entitled to the help and always was, which would mean that there was never any real chance of the potentially negative consequences of his actions being realised. Because there was never any real moral hazard, therefore, there is no real or meaningful lesson for the recipient of the help to learn from his behaviour, leaving him perfectly free, therefore, to do exactly the same thing again with impunity.

If the help given to those in need should thus be given freely, it should also be unconditional. This is because conditional help always takes away the recipient’s freedom to make his own choices, at least in those areas of his life to which the conditions apply. And if he is not free to make his own choices, he cannot be held responsible for them. If one wants him to take responsibility for his life, therefore, one must give him the freedom to choose how he lives it. In fact, looked at in this way, freedom and responsibility can be seen as two sides of the same coin. For if we, as a society, want people to act responsibly, then we have to allow them the freedom to make their own choices, even bad ones, while if we, personally, want the freedom to live our lives the way we choose, then we have to take responsibility for the choices we make.

Relieving the individual of this responsibility through the imposition of rules, on the other hand, has quite the opposite effect. Worse still, by erecting a safety net of help and support to protect ourselves from the negative consequences of our poor decisions, we not only encourage greater irresponsibility but thereby justify the imposition of more restrictive rules to prevent us from making further poor decisions. Whereas the encouragement of freedom and responsibility generates an essentially benign circle, therefore, in which the need to take responsibility for our lives leads us to make better choices, the removal of this responsibility and the imposition of rules instead creates an entirely vicious circle in which both our ability and our need to make wise decisions are diminished.

Indeed, it is this vicious circle which, under certain circumstances, can rapidly accelerate our spiralling descent into authoritarianism, as our failure to reduce bad decision making through ever more restrictive regulation leads to ever more arbitrary and punitive responses, all of which, of course, are represented as being for our own good: a phrase which has been used to justify just about every form of tyranny in the modern era. Take Nazi Germany, for instance, where each incremental curtailment of liberty was announced to the population as being Für Ihre Sicherheit, ‘For your safety’. The mistake we so often make when hearing this, however, is to assume that, because this was said by the Nazis, it was always said cynically. The fact is, however, that there have been very few people in history who have knowingly and deliberately acted in ways they thought were evil. Did Hitler think that what he was doing was bad? Of course not. After what he regarded as the betrayal of Germany by certain sections of German society at the end of the first world war, he thought he was rebuilding a better Germany for the benefit of ordinary Germans. And the majority of ordinary Germans, especially women, agreed with him. How else do you think he got elected?

Nor is this unusual. For at times of instability and uncertainty, when people are scared and struggling to make ends meet, they will vote for, or simply go along with, any leader, party or government that promises to end their woes and presents a credible vision for doing so. Indeed, we saw something of this during the Covid pandemic, when, having decided that the government’s measures were for our collective benefit and safety, people not only went along with them but, in many cases, did so with alarming enthusiasm, snitching on their neighbours for holding family get-togethers during lockdown, applauding the decision to close down schools, pubs and a large portion of the economy, and even demanding that the government suspend one of our most fundamental freedoms (that of deciding, ourselves, what gets put into our bodies) by forcing or otherwise coercing reluctant recipients into being injected with untested, experimental vaccines which have since been found to have numerous adverse side effects.

Thus, while the good intentions of those who think they know what’s best for us may constitute the seeds from which all authoritarian regimes grow, it is the fertile soil of our own acquiescence and desire to be kept safe that actually makes them possible.

2.    The Pernicious Consequences of Government Interventions in the Economy

If most human beings thus seemingly prefer authoritarian security to freedom and responsibility, and are actually willing to give up their freedom in order to have responsibility lifted from their shoulders, you may well ask, therefore, who I am to force freedom and responsibility upon them. For if one advocates, as I do, that as long as one is not hurting anyone else, one should be allowed to live one’s life however one chooses, then it would be somewhat hypocritical of me to object to humanity’s choice once it has been so clearly made. The problem, however, is that the choice of authoritarianism as a solution to life’s vicissitudes not only imperils the freedom of others to choose differently (authoritarian regimes being particularly intolerant of those who espouse liberty), but the expansion of government, the curtailing of freedom and the discouraging of responsibility have one further pernicious consequence: their cost.

In the UK, for instance, the state now consumes more than 50% of GDP, the highest proportion of national income spent by any government in the developed world, although the fact that public services such as health and education are included in this figure has led some people to argue that it is slightly misleading, especially when compared to countries in which privately based health and education services constitute a significantly higher proportion of the total. What this fails to take into account, however, is the fact that the way in which such services are funded has a direct effect upon the demand for them, with publicly funded services being far more in demand than services people have to pay for themselves, thus increasing their total cost beyond that of a privately based system.

Take, for instance, education. If families had to pay directly for their children’s education, it is virtually certain that, except for those for whom the cost is not an issue, most families would buy less of it. This is not because they don’t love their children or want what’s best for them, but because they would also want value for money and would therefore be unwilling to pay for something which did not contribute, not just to their children’s future employment prospects, but to whatever aspirations parents may have for their children. These, of course, vary quite considerably, not just with the aptitudes and abilities of the children, but with the attitudes of the parents, many parents, for instance, wanting a better education for their offspring than they had themselves. Indeed, it’s why one suspects that the majority of parents would not willingly give up state funded education. For in the absence of state funding, it is economic reality, of course, that would ultimately determine how much any family spent on the education of their children, with the result that, no matter how great the parents’ aspirations may be, the one thing of which one can be fairly certain is that no family would ever spend more on a child’s education than it could actually afford.

That, in the absence of state funding, affordability would thus always be one of the main criteria any family would apply in deciding what type and level of education to purchase for their children is something which many people would say, of course, is, in itself, an argument in favour of state education. For if parents, themselves, always had to pay for their children’s education, it would inevitably mean that not all children would be given the same opportunities. My point here, however, is not about fairness and hence politics but about economics and the effect that the state funding of education has on the economy as a whole. For if it is the case that no family would ever purchase more education than it could afford, then it follows that no society, comprised of such families, would ever purchase more education than it could afford. 

This, however, is not the case when the state funds education, especially when the guiding principle is ‘equality of opportunity’. For while the aptitude and ability of each individual child may still determine, to some extent, the level and type of education he or she receives, the need to provide every child with the same educational opportunities and the absence of any economic considerations in determining which opportunities should be taken advantage of makes it statistically inevitable that a certain proportion of all children will be educated beyond the point at which further education, especially of the type generally on offer, has any appreciable effect upon their prospects, with the result that the state not only ends up spending more on education than the families of the children would have done, themselves, but that much of the increased expenditure is actually wasted.

To make matters worse, this can also have negative consequences for the recipients of the education. For without the economic pressure on families to purchase only such education as would actually enhance their children’s prospects, and faced with a wide variety of seemingly free options, it is also more or less inevitable that less well thought through educational choices are made, including the very common one today of students taking degrees in subjects which do not qualify them for any subsequent employment at all.

It is, however, the possibility that, through state funding, a society can purchase more education, or, indeed, more of any public service than it can collectively afford that is the more serious problem. After all, choosing the wrong degree course may be the cause of some later regret, but it is seldom ruinous. For a society as a whole, however, spending more money than it can afford on public services can and frequently does lead to economic disaster, not least because a society may not realise that this is what it is doing until it is too late. And even then, it may refuse to accept that this was the cause of its downfall.

This is partly, of course, because government is departmentalised, with the result that, even if some of those involved were prepared to admit that aggregate government spending was out of control, no individual minister would ever admit that overspending in his or her department was the cause of a national catastrophe, especially not in a department as small as education, the output of which, moreover, is generally recognised as being universally beneficial. More to the point, it takes a long time, under normal circumstances, for government over-expenditure to have a deleterious effect upon the economy as a whole. All that may be noticed, indeed, is a gentle decline, the cause of which may be attributed to multiple different factors. What’s more, the expenditure side of a government’s profit and loss account is only half the story. And while it may be what drives an economy towards the edge of the cliff, it could not drive it off the cliff unless there were also some seriously reckless and misguided judgements made with respect to how the government financed its expenditure, the least pernicious option (the one least likely to lead to disaster) being to fund all public spending entirely out of taxation.

I say this because there is a limit to how much tax a government can extract from an economy before further increases in taxation become counter-productive, producing less revenue than would otherwise have been collected had the rate of taxation been lower. This is due to the very obvious fact that the only parts of an economy that can be taxed, whether in the case of individuals or organisations, are those in which income exceeds expenditure, and which can thus be characterised as producing surpluses or profits, which, when not taxed, are not only available to be spent on whatever the individual or organisation sees fit to spend them on, but can also be reinvested in the economy to produce more surpluses or profits. If more of these surpluses are therefore taken in taxation to pay for public services, which do not, themselves, produce taxable surpluses, then there are fewer profits available for reinvestment. As taxation rises, investment therefore falls, until a point is reached at which the economy goes into decline, production falls and there are fewer surpluses and profits to be taxed.
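The shape of this argument can be made concrete with a toy model. The sketch below is purely illustrative: the assumption that the taxable base shrinks in proportion to what is left untaxed is hypothetical, chosen only to show why revenue rises, peaks and then falls as the rate climbs.

```python
# Toy model of the argument above: the taxable base (surpluses and
# profits) shrinks as the tax rate rises, because less is left over
# to be reinvested. The shrinkage function is hypothetical.

def tax_revenue(rate: float, base: float = 100.0) -> float:
    """Revenue collected at a given tax rate between 0 and 1."""
    taxable_base = base * (1 - rate)   # surpluses lost to forgone reinvestment
    return rate * taxable_base

for rate in (0.2, 0.4, 0.5, 0.6, 0.8):
    print(f"rate {rate:.0%}: revenue {tax_revenue(rate):.1f}")

# rate 20%: revenue 16.0
# rate 40%: revenue 24.0
# rate 50%: revenue 25.0   <- the peak: beyond it, higher rates collect less
# rate 60%: revenue 24.0
# rate 80%: revenue 16.0
```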

To make matters worse, a fall in production without a commensurate contraction in the money supply also produces inflation: something which many governments in the developed world discovered to their cost in the 1960s and 70s, when their plans to build a better, fairer world after the end of the second world war ran into the intractable obstacle of economic reality.

The problem was that, even then, few governments actually cut their spending plans. After all, they were building a better world. Having discovered that raising taxes beyond a certain level killed the goose which laid the golden eggs, they therefore decided to cover the resulting deficits by borrowing, justifying this by redefining current expenditure as investment and comparing themselves to businesses. The trouble was, of course, that businesses produce surpluses out of which borrowings can be serviced and repaid; governments don’t. Of course, they argued that ‘investing’ in education, for instance, produced a better educated population, which was then able to generate greater wealth from which additional tax revenues could be gathered, thus enabling the government to pay back its borrowing. Not being able to demonstrate how any of this was true, however, more recent governments have tended to confine their ‘investment’ strategy to things like infrastructure, arguing, somewhat more credibly, for instance, that a better transport system increases productivity, thereby boosting overall wealth creation.

The problem with this argument, however, is that, if something is genuinely needed, people are usually willing to pay for it. And if people are willing to pay for it, someone is usually willing to build it without government getting involved, one of the best examples of this being the UK’s railway network, most of which was financed entirely through the sale of equity in competing railway companies during the 19th century. In fact, investors were so eager to buy shares in these companies that, during the 1840s, there was actually a railways ‘bubble’, which collapsed in 1846 when the Bank of England put up interest rates, making UK Treasury bonds a more attractive investment. Even so, railway stocks were among the best performers on the London Stock Exchange throughout the 19th century with the companies themselves generating consistent profits, not just from the sale of passenger tickets, but, far more significantly, from the carrying of freight, including agricultural produce, which could be brought to London overnight from almost anywhere in Great Britain and sold fresh in Covent Garden the following morning, thereby opening up new markets for producers all across the country and demonstrating the benefits of such infrastructure projects to the economy as a whole.

While this would therefore seem to support a governmental policy of encouraging infrastructure development, however, it has also led to a contradiction at the heart of just about every government’s infrastructure strategy. For if the general principle is true, that if something is genuinely needed, people are usually willing to pay for it, with the corollary that, even without government involvement, someone will usually be willing to build it, then it follows that, if an infrastructure project is viable, government involvement should not be necessary. In fact, it more or less follows that if government involvement is necessary, then government should stay well away from it. For if no one will invest in something without government participation, it is probably for a good reason, that good reason usually being that the project is unlikely to make a profit, usually because people are unlikely to pay for the resulting service, usually because there is no real need for it.

What has happened, however, is that governments have turned the logic of this argument entirely on its head, arguing, instead, that if a project is commercially viable, then government need not get involved because the private sector will take care of it whereas, if a project could not proceed without government funding, then this is precisely the kind of project it should fund, one of the best examples of this being the UK motorway network.

Of course, seeing the UK’s crowded motorways today, most people would say that, far from being unnecessary, they are absolutely essential to the UK’s economy. At the time when Britain’s first motorway, the M1, was opened in 1959, however, it was a very long way from being essential, with hardly any users. In fact, I remember watching a documentary at the time in which the commentator eulogised about the freedom of the open road and how much easier the motorway made driving, not least because, throughout the entire documentary, the camera did not once alight upon a single other vehicle.

So why was it built? One possible answer has to do with the fact that by the end of the 1950s there were already two other motorway networks in existence, the autobahn in Germany and America’s interstate highways, both of which were originally built to facilitate military transports, the former by the Nazis in the 1930s, the latter by President Eisenhower during the cold war, when the interstate highways were designed to link up all of America’s main military bases. While their primary purpose was thus the rapid deployment of troops and equipment during wartime, the fact remained, however, that, at all other times, they were available to be used by civilian traffic, which may well have been seen in Britain as giving both Germany and America a competitive advantage. Either that or it was simply a matter of national pride, motorways being seen as modern, efficient symbols of technological progress, whereas railways were just so 19th century.

The problem, of course, was that there was no business model for the operation of motorways that would guarantee private investors a return on their investment. In fact, the only realistic model available was to charge a toll. This, however, would not only have added to the cost of building and operating motorways and caused delays, as vehicles queued at toll booths, but would have deterred drivers from using motorways if there were alternative routes available, as, of course, there always were. In fact, from a purely commercial point of view, motorways were more or less guaranteed to make a loss. And so successive governments throughout the 60s and 70s took it upon themselves, or rather their taxpayers, to shoulder the burden, thereby somewhat ironically creating a demand for motorways that wasn’t previously there.

I say this because, from a freight haulier’s point of view, the motorway network could be seen as having three advantages over the railway network. Firstly, it was door to door rather than hub to hub, thereby requiring two fewer loadings and unloadings of the freight being carried as it was transferred from lorry to train and train to lorry again at each end of the journey. Secondly, it was more convenient in that journeys could be made whenever it suited the haulier, rather than as dictated by a railway timetable. More than anything else, however, it was perceived to be free. For being paid for out of taxation, users did not think of themselves as paying for it directly in the way they would have done had they been charged a toll. Indeed, in this respect, the way in which we now regard our use of ‘free’ motorways is very similar to the way in which we regard ‘free’ state funded education.

As more and more freight was transferred from rail to road, however, this had a disastrous effect on the profitability of the railways, with many branch networks, particularly in rural areas, now being closed down in order to cut costs, thereby reducing rail freight traffic even further. The result was that while bulk users, such as ICI on Teesside, continued to receive a specialised service, many smaller users, such as market gardeners in the Tamar Valley in Devon, who used to send trays of strawberries to Covent Garden overnight, now had their services cut altogether, forcing even more traffic onto the roads and further accelerating the spiralling decline of the railways. By the 1960s, as a result, taxpayers were not only paying for the building and operating of motorways, they were also now subsidising a once profitable rail network.

Nor did the costs of this revolution end there. For on leaving the motorway network, or in order to get on to it, heavy goods vehicles often had to pass through small towns and villages, thereby leading to local calls for bypasses to be built. This then meant that thousands more acres of the British countryside were covered in tarmac. All this really did, however, was shift the problem into the cities, in that the urban destinations of the lorries now became the pinch points, leading to more and more industrial parks being built along the urban fringes so as to keep traffic out of city centres. In fact, the use of motor vehicles as our predominant form of transport has probably done more to reshape our landscape than any other technology in history. It has also meant that, globally, we have burnt through a substantial proportion of the earth’s reserves of oil in little more than fifty years, leading many governments throughout the West to now not only demonise the internal combustion engine, but make the ownership and use of private cars so prohibitively expensive that soon only the very rich and, of course, government ministers will be able to afford them, thereby very possibly making motorways redundant and their building one of the worst investment decisions ever made.

3.    The Inevitability of the Coming Crash

The real tragedy in all this, however, is the fact that, if western governments really did invest in all this unnecessary infrastructure in order to keep their economies competitive, they manifestly failed. For all this additional cost, not just on infrastructure, but on our ‘free’ public health and education services, along with our generous welfare safety net, has made the productive part of our economy (that which produces taxable surpluses) so uncompetitive that much of it has either been closed down or offshored to lower cost locations, thereby not only adding to global transport costs and further reducing our fossil fuel reserves, but overextending complex and vulnerable supply chains such that, when the next crash occurs, they will almost certainly collapse, making the effects of the crash that much worse.

The actual cause of the crash, however, when it eventually comes, should not only be seen as the result of government over-expenditure, caused by well-intentioned but misguided attempts to make the world a better place, but as a consequence of the fact that this post-war model of how to run the world has seemingly become the only one that we can now imagine, the idea of people taking responsibility for their lives and making their own choices having seemingly become not just inconceivable but something, in itself, to be feared, and from which government must therefore protect us. I say this because I see no other way to explain why, when western governments saw the productive parts of their economies being hollowed out and their productive capacity being shipped abroad, they did not try to rectify this by cutting expenditure and reducing the economy’s tax burden so as to unleash the entrepreneurial spirit of their people, but instead merely tried to mitigate it through a process of financialisation, in which, instead of producing real goods and services, we now simply produce money in the form of credit.

As I have explained elsewhere, this process started with the deregulation of the financial system in the 1980s, which allowed different types of financial institution (types which had previously been kept separate) to combine in ways which allowed them to indulge in some very profitable but also very risky practices, one of the most reckless of which was the financing of long-term loans, such as mortgages and mortgage backed derivatives, with short-term interbank borrowing, thereby maximising the spread between the lending and borrowing rates: mortgages bearing a high rate of interest; overnight interbank borrowing being one of the cheapest sources of funding available.
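A minimal sketch of the trade being described, with hypothetical rates and book sizes, may make both the attraction and the danger clearer: the carry is large so long as overnight funding stays cheap, but the funding has to be rolled over daily against loans that run for decades.

```python
# Hypothetical figures illustrating maturity transformation: long-term
# mortgage assets funded with overnight interbank borrowing.

book = 1_000_000_000       # mortgage book (£), locked in for ~25 years
mortgage_rate = 0.060      # rate earned on the book
overnight_rate = 0.015     # rate paid on overnight funding

print(f"annual carry: £{book * (mortgage_rate - overnight_rate):,.0f}")
# annual carry: £45,000,000

# The catch: the funding must be renewed every day. If interbank
# lending seizes up, or the overnight rate spikes, the carry turns
# negative while the mortgages cannot be called in.
stressed_rate = 0.070
print(f"carry in a funding squeeze: £{book * (mortgage_rate - stressed_rate):,.0f}")
# carry in a funding squeeze: £-10,000,000
```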

This produced enormous profits for those financial institutions willing to take the risk and, for a while, almost made up for the decline in other parts of the economy. It also made far more money available for house purchases than had previously been the case when building societies, the traditional lenders to home buyers, had required their members to save with them for up to five years before they were extended a mortgage. With more money being lent to home buyers, however, this meant a greater demand for houses, which, without a commensurate increase in supply, led to a rapid rise in house prices. It also had the effect of allowing people to borrow in other ways, especially on credit cards, the accumulated outstanding balances on which could be periodically paid off by borrowers taking out second mortgages against the rising value of their homes.

This, however, had two unfortunate effects. The first was that it created an illusion of prosperity which disguised the decline in the real economy, as a lot of the things we bought with this avalanche of credit were imported. The second, of course, was that it created and depended upon an inflated housing market which, like all financial bubbles, could only be sustained by more buyers coming into the market and pushing up prices. This, in turn, required mortgage lenders to lend to increasingly less credit-worthy borrowers, which they could only do by bundling up thousands of mortgages into financial derivatives called Collateralised Debt Obligations (CDOs), in which they then sold shares to third parties in order to spread the risk. What they were really doing, however, was spreading a kind of contagion, one, moreover, which was largely hidden, with the result that when the bubble finally burst, as all bubbles do, no one knew which parts of the system had been infected, with the further consequence, therefore, that banks immediately stopped lending to each other, bringing the entire financial system to a grinding halt.

In fact, it was this precipitous interruption in the flow of credit, rather than the losses incurred by any individual bank, which was of most concern to most governments and central banks. Needing to get credit flowing again, particularly to governments, themselves, they failed to take note, therefore, not only of how vulnerable the new deregulated financial system had become to the creation of such bubbles and their subsequent collapse, but how dependent the underlying economies of the West were on the financial services industry and how weak they had therefore become. The result was that most governments in the developed world did almost nothing to solve these underlying problems. Instead of imposing more regulation on the banks in order to prevent something similar happening again, they merely papered over the cracks with loans and new capital injections. Worse still, instead of reshaping their underlying economies to make them more productive by reducing government expenditure and taxation, in many cases, including the UK, they actually increased public spending in order to stimulate economies which were now in recession.

Because their economies were in recession, however, and were not, therefore, producing the same level of tax revenues as before, the only way governments could spend more was by borrowing more. Which gave them another problem. For despite the initial tranche of new money lent to commercial banks by the central banks, because they still couldn’t borrow short term from each other, most of them still lacked liquidity. The only way that governments could therefore borrow more money was if the central banks actually printed more and used it to buy treasury bonds in a roundabout process known as quantitative easing (QE).

The biggest mistake governments and central banks made at this point, however, was to suppose that, because it did not show up in the CPI, all this extra money printing wasn’t causing inflation. It was. It just wasn’t causing it where they were looking. The reasons for this were twofold. The first was that, after years of thinking that the money train would never end, the general population had been taken aback by the crash, many losing jobs or businesses, with the result that despite the efforts of central banks to stimulate borrowing and spending by reducing interest rates to near zero, the instinctive response of most people was to take less risk, actually paying down credit cards and even saving money in deposit accounts. Indeed, it was this that eventually restored liquidity to the banking system. The second reason why all this money printing did not appear to cause inflation, however, was even more telling in that it demonstrated just how difficult it is to stimulate a stagnant economy using only money. For if consumers weren’t borrowing to spend, then businesses could not borrow to invest, with the result that, for all the newly printed money which the central banks were pumping into the financial system, the real economy remained obstinately flat and has done so for more than a decade.

So where has all the money gone? Well, the short answer is nowhere. For having first been used to purchase government bonds and then employed to finance government spending, via various routes, it then made its way into people’s bank accounts and hence back into the financial system, where it has not only remained but where it has wrought its inflationary mischief. For if the banks weren’t lending to consumers to consume or to businesses to invest, they had to do something with all this money that was swirling through their books. And so they used it to purchase financial assets: stocks, bonds, derivatives, cryptocurrencies, NFTs, you name it. In fact, the financial industry has had to work overtime to continually invent new asset classes to soak up all the money, with the result that, up until now, every single financial market in the world has continually hit new records, making the owners of these assets exceptionally rich while most ordinary people have seen their standard of living stagnate or even fall.

Even where businesses outside the financial system have actually borrowed money, it has seldom been to invest in new plant and equipment in order to increase the production of real goods and services and hence real wealth. More often than not, corporations have simply taken advantage of the current low interest rates to buy back their own shares. This has two advantages. Firstly, the interest corporations pay on the money they borrow is much less than the dividends they would have to pay to shareholders. Even more importantly, it reduces the number of shares in businesses which are still generating the same earnings. The value and hence price of the shares therefore goes up, which not only triggers bonus payments for the directors but adds further to the overall inflation of financial markets.
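The arithmetic of such a buy-back is easy to sketch. The figures below are hypothetical, but they show both effects at once: the interest on the loan is less than the dividends saved, and the same earnings spread over fewer shares push the price up.

```python
# Hypothetical buy-back arithmetic: borrow cheaply, retire shares.

earnings = 100_000_000     # annual earnings (£), unchanged by the buy-back
shares = 50_000_000        # shares outstanding before the buy-back
price = 40.0               # share price (£)
dividend = 1.60            # dividend per share (a 4% yield)
loan_rate = 0.02           # interest rate on the borrowed money

bought_back = 10_000_000
loan = bought_back * price                       # £400m borrowed

print(f"EPS before: £{earnings / shares:.2f}")                       # £2.00
print(f"EPS after:  £{earnings / (shares - bought_back):.2f}")       # £2.50
print(f"dividends saved each year: £{bought_back * dividend:,.0f}")  # £16,000,000
print(f"interest paid each year:   £{loan * loan_rate:,.0f}")        # £8,000,000
```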

This has consequently had three further effects, not just on governments and central banks but, in many ways, on the general public. For the record prices on financial markets have been portrayed and therefore seen as a sign, not of a financial bubble, but of a booming economy. As a result, governments and central banks have not only congratulated themselves on the apparent recovery they engineered, but are very reluctant to do anything that might bring markets down, including the raising of interest rates and the tapering of QE. Indeed, whenever they have tried to do either of these things, they have been met with howls of opposition from those with a vested interest in keeping the financial bubble inflated, along with most of the media, who do not realise that record financial markets are not necessarily an indication of a flourishing economy.

Worse still, because we still haven’t recognised the inflationary effects of QE on financial markets, after twelve years of money printing we have come to regard it as safe, with the result that, during the Covid pandemic, governments throughout the world did what no other governments in history have ever done, or would even have contemplated. They not only closed down large parts of the economy but they printed money in order to compensate people for a percentage of the losses they consequently incurred, thereby ensuring that they could go on spending money on those parts of the economy that were still open.

Importantly, this did not mean that people spent more. With bars, restaurants, cinemas and nightclubs closed, in most cases, in fact, people spent less. Due to the resulting decrease in the rate of monetary circulation, this also meant that there was little appreciable increase in the effective money supply. With so many people not working, however, there was a major drop in production, not just locally but throughout the world, thereby causing shortages in supply chains. With no diminution in the amount of money available, this has meant that, for the last two years, waves of inflation have been gradually working their way through the system, first in factory prices, then in wholesale prices and now, finally, in retail prices.

Of course, western politicians are now busily trying to deflect any responsibility for this by blaming it instead on the villainous Vladimir Putin, especially for soaring energy prices. As I pointed out in ‘The Folly of Renewables’, however, wholesale gas prices actually rose by 80% in the spring of 2021, not as a result of the war in Ukraine, which was still a year away, but as a result of western governments ending the burning of coal and oil to generate electricity while banning the exploitation of new gas fields in places like the North Sea. The result has been both an increase in the demand for gas and a decrease in supply. Compounding these follies, we have now also banned the importation of a whole range of commodities, from fertiliser to wheat, by imposing sanctions on Russia for a war which the west deliberately provoked by courting and arming Ukraine. By this coming winter, therefore, the result will almost certainly be a perfect inflationary storm, in which the underlying inflation caused by money printing, along with already soaring energy prices, will be further stoked by food and fuel shortages, all of which are now unstoppable.

I say this not only because each of these forces has already been unleashed, but because, due to the state of our inflated financial system, governments and central banks no longer have the weapons with which to fight their inflationary effects, the most important of which, of course, is the ability to raise interest rates, thereby reducing borrowing, slowing down the rate at which money circulates within the economy and effectively reducing the money supply. This is what both the Reagan and Thatcher governments did in the early 80s to eliminate the inflation which had been fuelling itself throughout the 1970s. In America, the then chairman of the Federal Reserve, Paul Volcker, actually raised interest rates to 15%. This, however, is simply not possible today. For not only would it make government borrowing prohibitively expensive, forcing governments to cut public expenditure (something which the Reagan and Thatcher governments also did), but it would greatly reduce the value of existing bonds with lower yields, thereby wiping out billions of dollars in assets.

Nor would the carnage stop there. Mortgage rates would go up, perhaps to over 20%, thus triggering another collapse in the housing market, with a similar effect on mortgage backed derivatives to that seen in 2008. As many of these long-term mortgage backed derivatives are still being financed by short-term interbank borrowing, particularly on the repo market, their regular refinancing would also become more expensive, if not impossible.

Nor would stock markets be immune. For with bond yields up in the high teens, the returns on inherently riskier stocks would also have to rise, causing their prices to drop, while those corporations which borrowed money to buy back their shares may find that refinancing their loans would drive them out of business. In fact, any significant raising of interest rates by central banks today might not only bring down the entire financial system, but the entire economy.

Thus while we may see central bankers put on a show of raising interest rates over the next few months, putting up rates by half a percent here and a quarter of a percent there, they are not going to go all in like Paul Volcker. Besides which, there is the suggestion that many governments would not mind a sustained period of inflation. For while it would wipe out the pensions and savings of their populations, if one thinks of inflation not as an increase in prices but as the devaluation of the currency, which it primarily is, it would also erode the massive mountains of debt which most governments have been building up over the last couple of decades.

The only problem with this strategy, of course, even if governments did indeed decide to adopt it, is that it wouldn’t actually work. For if interest rates remain low while inflation soars, people will stop investing in paper assets and turn instead to property and precious metals, such as gold and silver, which retain their inherent value. This in itself then places a downward pressure on the price of paper assets until the rate of inflation is factored in. If one buys a ten-year treasury bond with a nominal value of £1,000 and a nominal yield of 2.5%, for instance, the net present value of this bond, with inflation running at 10% per annum, is actually £495.23. This is made up of the principal, which, while it will eventually be redeemed at its nominal value, will only be worth £348.68 at today’s prices in ten years’ time, and the £25 annual coupon, which, discounted for inflation at the same rate, will have a total value over its ten yearly payments of just £146.55.
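These figures can be checked directly. The sketch below follows the essay’s convention of eroding each cash flow by 10% a year (a factor of 0.9 per year); conventional discounting at 1/1.1 per year would give somewhat higher values, but the conclusion is the same.

```python
# Reproducing the bond arithmetic above: £1,000 ten-year bond, £25
# annual coupon (2.5% nominal), inflation at 10% per annum.

nominal, coupon, years = 1000.0, 25.0, 10
erode = 1 - 0.10   # each year's money is worth 10% less than the last

principal_today = nominal * erode ** years
coupons_today = sum(coupon * erode ** t for t in range(1, years + 1))

print(f"principal in today's money: £{principal_today:.2f}")   # £348.68
print(f"coupons in today's money:   £{coupons_today:.2f}")     # £146.55
print(f"total:                      £{principal_today + coupons_today:.2f}")  # £495.23
```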

That’s not to say, of course, that the bond would ever actually be traded at £495.23. Bond valuations vary with yield, risk, proximity to maturity, and their relative value against other assets. During periods of high inflation, moreover, investment strategies often become defensive, with investors buying assets not because they will hold their value in real terms but because they may lose less value than some other assets. The point, however, is that during such periods, there is always a downward pressure on asset prices, with the result that there is always a corresponding upward pressure not just on effective or relative interest rates but on nominal rates as well.

Suppose, for instance, that at some point in its lifecycle, the above ten-year treasury bond was traded at £800. This would mean that its nominal yield of 2.5% on £1,000 was turned into an effective yield of 3.125% on the £800 actually invested. This would further mean, therefore, that anyone looking to buy UK treasury bonds would naturally choose the older bonds with an effective yield of 3.125% rather than a newly issued bond with a nominal yield of 2.5%. In order to sell such bonds at their nominal value, therefore, the treasury would have to raise their nominal yield to an equivalent 3.125%.
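Again, the arithmetic is simple enough to verify. The sketch below uses the same hypothetical figures: a £1,000 bond with a £25 coupon trading at £800.

```python
# The effective-yield arithmetic above.

face = 1000.0
coupon = face * 0.025        # £25, fixed when the bond was issued
market_price = 800.0

effective_yield = coupon / market_price
print(f"effective yield at £800: {effective_yield:.3%}")    # 3.125%

# To sell new bonds at face value, the treasury must match this:
print(f"coupon a new £1,000 bond must offer: £{effective_yield * face:.2f}")  # £31.25
```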

Of course, the central bank could intervene in the market to keep prices high and yields low by printing money and buying bonds in further tranches of QE. But all this would do, of course, is further stoke inflation, requiring even more QE to keep prices up. It is essentially a vicious circle, the result of which is that, no matter what central banks do, in the kind of high inflationary environment into which we have now entered, and which will only get worse, one way or another, interest rates will rise, with the further consequence that, given our very fragile financial system, another financial and economic crisis is on the way and cannot now be prevented.

4.    The Immediate Aftermath

The only question, therefore, is how severe this coming crisis is going to be: a question which, of course, it is almost impossible to answer. The only thing of which one can be fairly certain is that it will be worse than 2008. This is not only because, in 2008, we were still in a low inflationary environment, allowing central banks to print money without fanning the inflationary flames, but because government debt was still relatively low and serviceable, allowing governments to borrow money to nationalise financial institutions if necessary. Neither of these conditions applies today. In fact, government debt around the world is now so high that, when the crash comes, governments may not even be able to borrow enough to service their debts and may thus be among the defaulters.

That’s not to say, of course, that some countries won’t attempt a 2008-style solution. In a world in which supply chains may well have broken down, leading to shortages of just about everything, printing even more money may, however, risk turning inflation into hyperinflation, which von Mises describes as a state in which people no longer trust or wish to hold their nation’s currency and will therefore buy anything they can get their hands on, whether they want it or not, so as to have something with which to later barter. This, however, leads to people refusing to take money for goods, resulting in the total collapse of the currency, as we have seen in Zimbabwe and Venezuela in recent years.

It is for this reason that the World Economic Forum (WEF), for instance, has been making plans for a new global digital currency issued by the IMF as part of their ‘Great Reset’. The idea is that, were large numbers of fiat currencies to collapse, as is highly likely, the entire population of those countries wishing to be part of this new global order would be issued this new digital currency in the form of a universal basic income. Because all payments to recipients and all transactions using the currency would be recorded in a digital wallet, stored on one’s smart phone, to be used in shops and restaurants etc., governments would thus be able to ration and control what we buy.

Not only is this a very blatant and rather sinister form of authoritarianism, however, but the whole scheme depends upon a network of blockchain servers to manage the digital wallets, terminals at every point of sale to record the transactions, and a functioning electricity grid to power it all, which may not be available in any widespread and reliable manner once the crash has occurred.

If there is to be an international solution to the monetary chaos, therefore, it is far more likely to be based on the currency or currencies of a group of countries which are not themselves heavily indebted and which can back their currencies with a basket of commodities, such as oil, natural gas, wheat and copper, which everyone needs to buy and which will therefore keep the value of these currencies stable. An obvious contender for one such currency is, of course, the Russian Rouble, making it no coincidence, therefore, that Vladimir Putin actually spoke on this subject at the recent BRICS conference in China, suggesting that the BRICS nations as a group could provide a new reserve currency.

The problem, of course, is that while this might work for the BRICS countries themselves, without being able to generate endless streams of credit, it is difficult to see how Europe and America, with their probably now near-worthless currencies, could obtain the BRICS nations’ currencies in order to buy the BRICS nations’ commodities, let alone China’s manufactured goods, which could actually see China, itself, in as difficult a position as the west.

It is in the west, however, that the worst problems will arise. For without the money to pay for imports and without the productive capacity to produce enough food, clothing and other essential items ourselves, it is highly likely that both Europe and America will descend into a period of looting, rioting and other forms of social disorder, which, in turn, will force governments to declare states of emergency and impose extreme authoritarian measures on their populations, which most people will of course accept as being for their own safety. Worse still, instead of allowing or even encouraging people to solve their problems themselves (growing food to sell at farmers’ markets, for instance, or setting up small businesses to produce items in short supply), using their skills, talent and ingenuity to rebuild the economy organically from the bottom up, with or without the WEF and their global digital currency, most governments will almost certainly attempt to reorganise their economies centrally, imposing strict rationing on all essential items, licensing and hence controlling who is allowed to supply them, and even nationalising many essential industries without compensation: a measure which the public will again largely accept. For the root cause of the crisis will not be seen, of course, as the decades of government interventionism (which is what will have actually led us to this point), along with the profligacy, waste, cronyism and corruption which government intervention always brings with it, but as capitalism.

Just like in 2008, in fact, it will not be governments that are blamed for relaxing credit restrictions and encouraging people to borrow cheap money in order to cover up their own shortcomings in handling the economy; once again, it will simply be the bankers. And, not knowing any better, we, of course, will believe it, with the result that governments will just go on doing more of the same in the confident belief that their economies cannot actually function without their constant interference. Instead of springing back to life with renewed vigour in the hands of a new generation willing to embrace freedom and responsibility in order to rebuild the world anew, we will consequently cling to the same old beliefs and structures which haven’t worked for the last fifty years and which, whether or not they avert an immediate economic catastrophe, will nevertheless see the economy continue to decline until, like the old Soviet Union, it collapses under the weight of its own bureaucratic inefficiency and corruption.

5.    A Return to Minimal Government

Unlikely as it is, therefore, that we will ever learn enough from our mistakes to rebuild the world on a better model, having tried once again to explain what those mistakes have been, I would nevertheless like to end this essay by outlining a few of the fundamental principles upon which I believe that better model should be based.

The first of these is the simple idea that government should only be allowed to do what only government can do. That is to say that if one wants to prevent the continual expansion of government at the expense of the governed, then if something can be done by the people themselves, or by any institution other than government, then the people themselves or that other institution should do it, not government.

One of the main consequences of this is that no government would ever have more than four ministries or departments, the first of these being a Foreign Office. This is because no private individual can speak for the country in its dealings with other nations. Only a duly elected government can do this, which means that it has to be the government that does it.

In that it is also only government that can realistically organise the country as a whole to resist foreign aggression, thereby making it a duty of government to do so, the second essential ministry is thus a Ministry of Defence, which may or may not be subsumed under the Foreign Office, but which, as a constitutional imperative, should be entirely outward looking, protecting the nation from external threats rather than allowing itself to be directed against domestic opposition.

All internal matters, in fact, should rather be the province of a Home Office, the principal function of which is to enforce the law but which, given the power of government, must also have another, possibly more overriding responsibility. For in as far as the law is about protecting people from others, and in as much as one of the ‘others’ from whom people need to be protected is the government, itself, this means that, in addition to overseeing the police, the judiciary and the penal system, the Home Office must also undertake the very difficult and unenviable task of policing itself along with all other branches of government, not only with respect to the constitution, ensuring that people’s constitutional rights are protected, but also with respect to the individual and collective conduct of government officers, especially with regard to such issues as corruption. For without scrupulousness in government, all government not only tends towards authoritarianism but towards authoritarianism of the most self-serving kind.

This then brings us to the fourth and final ministry any government must have, which is, of course, the Treasury. This Treasury, however, is not the same as the one we have today. Its job would not be to run the economy, which can look after itself perfectly well without government interference, but simply to collect sufficient taxes to cover the operating costs of the Foreign Office, the Ministry of Defence, the Home Office and, of course, itself.

And that’s it. This constitutes the total set of functions which any central government should undertake because only central government can undertake them. Even where some form of governmental involvement would otherwise seem required, all else can and should be devolved down to a local level, preferably to the people themselves.

Take education, for instance, which I have been using for most of my examples throughout this essay and which, in all practical respects, can only be administered locally, and not just by local government. It can be organised just as well by people, themselves, either individually or collectively or in conjunction with local institutions such as the church, businesses and charities. Nor do such responsible agents necessarily require direction or guidance from above. Based on their own experience and whatever reading they deem necessary, school governors, including parents and others with an interest in local educational outcomes, are perfectly capable of deciding on the curriculum their children need, what teaching methods should be employed, the size of classes and even the length of the school day. That this will result in a great variety of schools, both in what they teach and how they are managed, is of course inevitable. But if the schools meet the requirements of the children, the parents and the local economy, why should uniformity be seen as preferable?

Similarly, without state funding, there will be a multiplicity of ways in which schools are funded. Most schools, of course, will be run on straightforwardly commercial lines. Even so, it will be very much in their interest to encourage charitable bequests, both to finance school buildings and equipment and to fund scholarships and bursaries. Another funding option might be the creation of educational mutual societies, a bit like building societies, into which parents would start paying when a child is born, earning compound interest which could cut the cost of education by up to half while spreading that cost over five years longer than the period of education itself. Indeed, with a little ingenuity, there is almost no limit to the ways in which education can be organised and financed. By not just allowing but requiring parents to get involved in their children’s education, moreover, it is very possible that both the parents and their children will have a much more satisfactory educational experience.
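To put some rough numbers on the mutual society idea above, here is a minimal sketch of the compounding arithmetic. Every figure in it, the 7% return, the fee level and the ages, is my own illustrative assumption, not the essay’s; the size of the saving depends entirely on the return the fund actually achieves.

```python
# A rough sketch of an educational mutual society's arithmetic.
# All figures are illustrative assumptions, not the essay's own numbers.

ANNUAL_RETURN = 0.07               # assumed compound return on the fund
ANNUAL_FEES = 10_000               # assumed school fees per year
SCHOOL_START, SCHOOL_END = 5, 18   # schooling from age 5 to 18 (13 years)
PAY_YEARS = 18                     # parents contribute from birth to age 18

def present_value(amount: float, years: float, rate: float) -> float:
    """Discount a future payment back to the child's birth."""
    return amount / (1 + rate) ** years

# Present value of all fee payments (one at the start of each school year).
pv_fees = sum(present_value(ANNUAL_FEES, t, ANNUAL_RETURN)
              for t in range(SCHOOL_START, SCHOOL_END))

# Present value of a level annual contribution of 1, paid from birth.
pv_unit_contributions = sum(present_value(1, t, ANNUAL_RETURN)
                            for t in range(PAY_YEARS))

# The level contribution that exactly funds the fees.
contribution = pv_fees / pv_unit_contributions

total_paid = contribution * PAY_YEARS
total_fees = ANNUAL_FEES * (SCHOOL_END - SCHOOL_START)

print(f"Annual contribution: {contribution:,.0f}")       # ~5,924
print(f"Total paid over {PAY_YEARS} years: {total_paid:,.0f}")
print(f"Total fees covered: {total_fees:,.0f}")
print(f"Saving from compounding: {1 - total_paid / total_fees:.0%}")
```

On these assumed figures the saving comes out at just under a fifth; sustained higher returns, or contributions begun further in advance of the fees, would push it further towards the halving suggested above.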

But what about higher education, you ask. Surely you can’t leave that to the parents. And I’m not suggesting that one should. The current model for higher education, however, in which around 50% of all young people now go to university, and in which universities have been specifically designed to process such numbers, is only a few decades old. Moreover, it is arguable that it is already broken, serving little purpose other than to lower unemployment by keeping around half of all young people out of the jobs market for three years while they rack up huge amounts of debt obtaining degrees from which many of them will never benefit. The more important point, however, is that before central government intervened to create these pointless degree factories, institutions of higher learning had developed entirely on their own for centuries without government direction, and could quite easily do so again.

Up until the 17th century, for instance, universities had existed almost entirely to train people for just two professions, the church and the law, teaching them Latin and Greek, the classics, theology and ethics. With the development of science in the 17th century, however, the rather old-fashioned, didactic method of teaching which these old universities employed, based on formal lectures and the authority of the classical authors taught, was no longer regarded as appropriate, and a new kind of institution, based on the model of the Royal Society, began to appear all across Europe. These academies, in fact, didn’t do any teaching at all, but simply provided forums in which their members could share and discuss ideas, principally by presenting papers which were then discussed and published in the academy’s proceedings. It was only in the 18th century, when universities also started to teach science, that these two different models for propagating advanced learning began to converge on something resembling the kind of university which people of my generation attended, with university teachers giving fewer formal lectures while providing guidance and direction to their students’ studies through more informal tutorials and weekly assignments.

Not that I am suggesting that universities should return to any of these older models. My point is rather that, once the dust has settled and the world discovers that it can no longer afford the wasteful and largely unnecessary system of higher education we currently have, individual institutions should be allowed to find their own way again, whatever that might be, without the interference of government central planners, who would undoubtedly get it wrong again. And the same goes for just about every other area of our lives, from industry to health care, energy generation to agriculture. Let people work out how to do it themselves and they’ll do just fine.

6.    The Abolition of Central Banks and Fractional Reserve Banking

If, in order to rebuild a more stable and better functioning world, we need to keep government out of it as much as possible, there is one area, in particular, where government involvement must be strictly limited: the financial system. In particular, governments must be prevented from manipulating interest rates and the money supply in order to spend money they don’t have. And the easiest way to do this is to abolish central banks, most of which were originally established to bring stability to the banking system, but which have largely achieved the exact opposite as a result of being turned into instruments of government policy.

In order to understand this, however, one must first understand how central banks work, starting with the little-known fact that, even in cases such as the Bank of England, which was nationalised after the second world war, most central banks were originally set up as purely commercial enterprises. Even when they are no longer required to make a profit, moreover, they still have to operate commercially, which is to say that they have to make enough money on their banking activities to cover their operating costs. And they have to do this in the same way as other banks: by charging more interest on the money they lend to borrowers than the interest they pay on the money deposited with them. The only difference is that a central bank’s clients are other banks.
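In case the spread arithmetic is unfamiliar, here is a minimal sketch of it; every figure below is invented purely for illustration.

```python
# The spread arithmetic common to all banks, central or commercial.
# Every figure below is invented purely for illustration.

loans_outstanding = 500e9   # lent to client banks
deposits_held     = 450e9   # deposited by client banks
lending_rate      = 0.030   # interest charged on loans
deposit_rate      = 0.025   # interest paid on deposits
operating_costs   = 1.5e9   # staff, buildings, administration

interest_earned = loans_outstanding * lending_rate   # 15.00 bn
interest_paid   = deposits_held * deposit_rate       # 11.25 bn

net_income = interest_earned - interest_paid - operating_costs
print(f"Net income: {net_income / 1e9:+.2f} bn")     # +2.25 bn: costs covered
```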

It is also important to note that, for most central banks, making enough money to cover their operating costs was originally the most important factor in setting interest rates. In fact, the basic principles of banking dictate that it is still an important factor. For if banks deposit too much money with a central bank while borrowing too little from it, the central bank in question, like any other bank, will lose money. Like any other bank in this situation, therefore, it will have no choice but to reduce interest rates to discourage deposits and encourage borrowing.

The problem, however, is a little more complicated when the situation is reversed, when a central bank finds that its client banks are borrowing too much while depositing too little. For unlike other banks, it doesn’t have the option of borrowing more itself to make up the shortfall. For if its client banks are not lending it enough in the form of deposits, from what other banks is it going to borrow? However, it still has a choice. For if its client banks are borrowing too much while depositing too little, thus giving rise to the possibility that it might actually run out of money, it can simply print more.

This, however, is where the problems arise. For as I have already explained, money printing is the ultimate cause of inflation. To avoid this and still rebalance its loan and deposit accounts, therefore, a central bank should, whenever possible, employ the alternative strategy of increasing interest rates so as to encourage deposits and discourage borrowing. The problem with this, however, is that its client banks would also have to raise interest rates, thereby discouraging borrowing by businesses, reducing business investment and slowing overall economic growth. It would also force up mortgage rates and lower house prices, all of which would be unpopular with voters. Even when the economy is seriously overheating, therefore, with excess borrowing due to money being too cheap, most governments will prefer their central banks to print more money rather than raise interest rates. And although most central banks are nominally independent of government, it is usually the government that appoints the chairman or governor, who is not going to keep his job for long, or be reappointed when his term is up, if he continually says no to what is in the government’s interests.
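Reduced to a schematic decision rule, the position described in the last three paragraphs looks something like the sketch below. It is a deliberate caricature: the step size, the figures and the willing_to_raise flag are all my own assumptions, there to make the choice explicit rather than to model any real central bank.

```python
# A caricature of a central bank rebalancing its books. The step size,
# figures and 'willing_to_raise' flag are illustrative assumptions.

def rebalance(deposits: float, borrowing: float, rate: float,
              willing_to_raise: bool) -> tuple[float, float, str]:
    """Return (new_rate, money_printed, action) for one adjustment."""
    if deposits > borrowing:
        # Losing money on the spread: cut rates to discourage deposits
        # and encourage borrowing, as any commercial bank would.
        return rate - 0.005, 0.0, "cut rate"
    if borrowing > deposits:
        shortfall = borrowing - deposits
        if willing_to_raise:
            # The non-inflationary option: dearer money curbs borrowing.
            return rate + 0.005, 0.0, "raise rate"
        # The politically easier option: cover the gap with new money.
        return rate, shortfall, "print money"
    return rate, 0.0, "hold"

# Client banks borrowing 100bn more than they deposit:
print(rebalance(400e9, 500e9, 0.02, willing_to_raise=False))
# (0.02, 100000000000.0, 'print money')
```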

There is thus a fundamental flaw in the system. And it is this that is the root cause of so much that has gone wrong in western economies over the last three or four decades. And the only sure way to cure it is to abolish central banks.

But who will then set interest rates, you ask. This question, however, is based on the widespread assumption that some central authority is required to guide and direct the economy, and betrays an equally widespread failure to understand not just how banks work but how free market economics works. For if one allows market forces to operate without manipulation, commercial banks, themselves, are quite capable of setting their own interest rates. This is because, like central banks, commercial banks also have to balance their deposit and loan accounts. If their customers are depositing more than they are borrowing, they have no choice, therefore, but to reduce interest rates so as to encourage borrowing and discourage deposits. Similarly, if their customers are demanding to borrow more than they are depositing, then they will put up interest rates so as to encourage deposits and discourage borrowing. It is simply market forces at work.
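As a sketch of that self-correction at work, here is a toy model in which the supply of deposits rises with the interest rate, the demand for loans falls with it, and a bank simply nudges its rate towards whichever side is short. The linear curves and the starting rate are invented for illustration; the point is only that the rate settles where deposits and borrowing balance, with no central authority involved.

```python
# A toy model of a commercial bank finding its own interest rate.
# The linear supply and demand curves are invented for illustration.

def deposit_supply(rate: float) -> float:
    """Savers deposit more as the rate rises (in billions)."""
    return 100 + 4000 * rate

def loan_demand(rate: float) -> float:
    """Borrowers want less as the rate rises (in billions)."""
    return 300 - 4000 * rate

rate = 0.01
for _ in range(1000):
    imbalance = loan_demand(rate) - deposit_supply(rate)
    if abs(imbalance) < 0.01:
        break                      # deposits and borrowing now balance
    # More borrowing than deposits: nudge the rate up; the reverse: down.
    rate += 0.0005 if imbalance > 0 else -0.0005

print(f"Market-clearing rate: {rate:.2%}")   # settles at 2.50%
```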

But what about the other functions which central banks perform, such as providing a safety net for banks which find themselves in trouble, as happened in 2008? It is the provision of this safety net, however, that actually creates greater instability within the banking system. For if a bank knows that it is always going to be bailed out if things go wrong, if there are no negative consequences for bad decisions, no moral hazard, then it is likely to take far greater risks than if the consequence of a poor lending decision, or indeed of a whole culture of poor lending decisions, was bank failure.

But what about the depositors in a failed bank, who may lose all their money? Fair question. As in my earlier discussion of moral hazard, however, the reason why there have to be negative consequences to poor decisions is not to make people suffer, but to ensure that fewer poor decisions are made. In this case, however, there is also a better solution. This is the abolition of fractional reserve banking: a practice wherein banks cover only a fraction of their deposits with their reserves, these reserves being assets which are not themselves at risk. Typically, this is around 10%, but during the 2008 crash, some of the banks which had to be bailed out had as little as 2% cover in their reserves, which meant that they only had to lose around 2% of their loan book to become technically insolvent.
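The arithmetic behind that 2% figure can be checked with a deliberately simplified balance sheet, in the essay’s own terms: the reserve is treated as the bank’s only buffer, everything not held in reserve is lent out, and the bank is technically insolvent once loan losses exceed the buffer. Real balance sheets are more complicated, so this is a sketch of the principle only.

```python
# A deliberately simplified balance sheet: the reserve is the only
# buffer, everything else is lent out, and the bank fails once loan
# losses eat through that buffer. A sketch of the principle only.

def solvent(deposits: float, reserve_ratio: float,
            loan_loss_ratio: float) -> bool:
    reserves = deposits * reserve_ratio    # the 'safe' assets
    loans = deposits - reserves            # everything else is at risk
    losses = loans * loan_loss_ratio
    return losses <= reserves

for reserve_ratio in (0.10, 0.02):
    for loss in (0.01, 0.03, 0.05):
        state = "solvent" if solvent(100, reserve_ratio, loss) else "insolvent"
        print(f"reserves {reserve_ratio:.0%}, loan losses {loss:.0%}: {state}")
```

On these numbers, a bank holding 10% in reserve shrugs off even a 5% loss on its loans, while one holding 2% is wiped out by a 3% loss.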

This was recklessness bordering on negligence and explains why, in the wake of the crash, most of the reforms discussed by regulators centred on changing the rules to make banks hold larger reserves. The problem with this, however, is that it is difficult to determine how large a bank’s reserves should be, not least because this depends in part on the level of risk to which the bank is exposed. Moreover, holding more reserves invariably increases a bank’s confidence that it will be able to cover any losses it may incur, which, in turn, may well lead it to take more risks. Worse still, the more of its money it places in reserve, the less it has to lend. The fewer loans it makes, the higher the yield it therefore has to obtain on each one in order to maintain its profit margin. In a competitive market, however, higher yields only come from riskier investments.
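The yield pressure here is simple arithmetic: a fixed profit target divided by a shrinking loan book means a rising required return on each loan. A minimal sketch, with invented figures and with funding and operating costs ignored for simplicity:

```python
# Required yield rises as the loan book shrinks. Invented figures:
# 100 of deposits and a fixed profit target of 3; funding and
# operating costs are ignored for simplicity.

deposits = 100.0
target_profit = 3.0

for reserve_ratio in (0.02, 0.10, 0.25, 0.50):
    loan_book = deposits * (1 - reserve_ratio)
    required_yield = target_profit / loan_book
    print(f"reserves {reserve_ratio:.0%}: loan book {loan_book:.0f}, "
          f"required yield {required_yield:.2%}")
# reserves 2%: yield 3.06%; 10%: 3.33%; 25%: 4.00%; 50%: 6.00%
```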

Initially, therefore, increasing a bank’s reserve requirement can actually be counterproductive. There is a point, however, at which the psychology of this inverts, when increasing a bank’s reserve requirement does not lead to more risky lending but to more risk aversion. This is because there comes a point when, although a bank may be able to cover any losses out of its reserves, its profit margin cannot stand any significant defaults. Even though banks have to compete with each other on the interest rates they offer to borrowers, the inevitable result of forcing banks to hold larger reserves, therefore, is that, collectively, they start to put up interest rates, not just on risky loans, which they now no longer make, but on high quality loans. In fact, given the reduced amount of money available for lending within the overall system, businesses now have to compete for loan funding, with only the best, least risky and potentially most lucrative projects, those which can afford higher interest rates because their potential return is also so high, being funded. Weaker projects, those which might potentially default, are thus weeded out of the system, leading to fewer defaults and higher profits for the banks, making the maintenance of high bank reserves commercially viable.

So what is this inversion point when an increase in a bank’s reserve requirement leads to the bank taking less rather than more risk? Because it is a psychological inversion or even a change in banking philosophy, rather than a purely mathematical tipping point, one cannot, of course, answer this question with any certainty. However, we do know of a figure, which, because it is at the top of the range, will certainly achieve this objective. And that figure, of course, is 100%. That is to say that, under this regime, banks would be required to hold sufficient reserves to cover a full 100% of their deposits, which may sound a little extreme but is actually quite achievable and makes it absolutely impossible for a bank to fail.
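In the simplified balance sheet sketched earlier, this is just the case reserve_ratio = 1.0: nothing taken in as deposits is lent out at all, so no loan loss can touch depositors, and solvent(100, 1.0, loss) holds for any loss whatever.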

Nor is this some idle pipe dream. In fact, it has actually been suggested on a number of occasions, the most notable being during the great depression, when a group of Chicago-based economists proposed it to President Roosevelt. Known as the Chicago Plan, it is something I actually wrote about in this blog in the wake of the 2008 crash. The problem, of course, is that it doesn’t go down very well with bankers, not least because the assets comprising a bank’s reserve have to be ‘real’ assets, such as gold and silver, rather than paper assets, such as mortgage-backed derivatives, for instance, which, during a financial crash, may themselves, of course, become worthless. This means that a lot of a bank’s capital would be tied up in assets, like gold and silver, which do not produce a revenue stream, which no banker likes to see.

There is, however, at least one class of asset which, while it is as real and enduring as gold, can be made to produce a steady source of income. This is property. A bank with a large and well managed portfolio of property as part of its reserves would therefore not only meet the reserve requirements of the Chicago Plan but would also produce a sufficient return on capital to overcome most bankers’ objections.

Of course, it would still leave banks with less money to lend to businesses than under the fractional reserve system. It would also, therefore, reduce the rate of economic growth, which is probably why Roosevelt declined to adopt it, preferring instead the Keynesian solution of trying to stimulate growth by reducing interest rates and increasing the money supply, even though we all now know where this leads. Worse still for the Chicago Plan, reducing the rate of economic growth is not the only seemingly adverse effect its implementation would have. For if there is less money available for borrowing, and most of what is available is inevitably allocated to businesses with sound business plans, then there would be even less money available for unsecured loans, which would very probably mean the end of credit cards. Not only would this then slow down growth even further, but it would also change people’s spending habits. For if, beyond regular daily expenditure, people had to save for the things they wanted to buy, rather than buying them now on credit and paying for them later, it is likely that they would want whatever they had purchased to last at least as long as it would take them to save the money to buy a replacement. This would therefore lead to purchasers having a natural preference for higher quality, longer lasting goods rather than what is merely fashionable.

This is not to say, of course, that fashion would disappear. It is likely, however, that people would start to prefer more classical designs which did not so quickly betray their age. Instead of throwing things away, it is also likely that we would start to have things repaired. In fact, this transformation of our economy, from one based on credit to one based on savings, would not only force us to become more financially responsible but also more thrifty and less wasteful. Abolishing central banks and fractional reserve banking, therefore, would not just amount to a small technical change within the financial system but, just like the deregulation of the banking system in the 1980s, would actually change the way we lived.

It would also change us or, perhaps more accurately, the things about ourselves and others we most valued. For if we had to work for all the things we wanted and, indeed, needed, one of our most important attributes might well be considered industriousness, to which one might also add ingenuity, resourcefulness and perseverance. Without the welfare state to fall back on, and with only our friends and family on whom to rely, other highly regarded attributes would therefore likely be hospitableness and generosity, along with all those attributes, such as respect, courtesy and mindfulness, which enable people to rub along together in a harmonious fashion. Above all else, however, that which we would most value in others and seek to foster in ourselves might best be described as a respect for reality. For in a world in which we could only rely upon ourselves and those close to us, and in which imprudent choices and thoughtless actions could therefore have seriously negative consequences, a responsible, considered and thoughtful approach to life would surely be the most valuable character trait of all.