Friday 31 May 2019

The End of an Era (Part II)


The overarching objective of the Bretton Woods Conference, which met at the Mount Washington Hotel in New Hampshire in July 1944, was to create a blueprint for a new world order that would ensure – or so it was hoped – that such world wars would never happen again. To this end it laid the foundation for a new financial and monetary system designed to prevent the kind of economic depression the world had experienced in the 1920s and 30s, and which had provided such fertile soil for extremist politics. The problem, as I pointed out in Part I of this essay (which you can find here), was that, being based on gold, this system stored up monetary and financial problems of its own. The first of these – the tide of inflation that swept around the world when the system finally collapsed – took more than a decade to cure, while the second – the need for the USA to maintain the reserve status of the dollar at any cost – has still, I believe, to reach its inevitable dénouement, when it too will surely collapse under the accumulated weight of US Treasury debt, which currently stands at over $21 trillion and is growing at a rate of around $1 trillion per year, much of it used to fund the vast military and intelligence machine needed to defend what, in time, its very cost will ultimately destroy.

However, these were not the only problems which the Bretton Woods Conference set in train. Of potentially even greater destructive force was the political path upon which the conference set the world: a path which, at the time, would probably have been called ‘internationalism’ but which we now refer to as ‘globalism’, and which, from its inception, had two main aspects.

The first, defined by what it was against, was what one might call anti-nationalism. Because both World War I and World War II were perceived as having been caused either by national rivalries or by the need to assert or reassert national pride, nationalism, both as a political ideology and as a state of mind, was now seen as a force to be curbed. To achieve this, it was therefore thought that both the role and the power of nation states had to be curtailed or counter-balanced by new international institutions – principally the UN – which would attempt to control not only how nation states behaved on the international stage but how they behaved domestically as well.

That this was not only inherently very difficult but essentially at odds with the even more important political principle that nation states should be both self-governing and democratic – thereby providing a bastion against authoritarianism – will be the subject of the final part of this essay, ‘The End of an Era (Part III)’, in which I shall discuss the consequences of this contradiction in the light not just of Brexit but of the numerous other nationalist movements throughout Europe which are currently attempting to wrest back sovereignty from an undemocratic and increasingly authoritarian EU. In Part II, however, I want first to concentrate on what, on the face of it, might be seen as the more positive aspects of this new internationalism, especially the efforts made, both at Bretton Woods and in the years that followed, to eliminate the kind of protectionism that had precipitated a fall of around 65% in world trade during the great depression: efforts which led to the signing of the first ever General Agreement on Tariffs and Trade (GATT), a comprehensive template for future world trade which came into force on 1st January 1948 and which had three main objectives:

1. To make it illegal for one country to ban or restrict imports from another except under a small number of clearly defined conditions;
2. To prevent countries from discriminating against some trading partners by providing preferential tariffs to others except under approved, formal trade agreements;
3. To progressively reduce or eliminate tariffs altogether on as many goods and commodities as possible.

Even more importantly, the GATT changed the nature of the relationship between the developed nations of the world and the many developing economies which had previously been held back by asymmetrical tariff structures. Gone were the days, for instance, in which colonial or former colonial powers like Great Britain could trade manufactured goods for commodities without their own manufacturers coming under pressure from manufactured goods flowing in the opposite direction. Indeed, under the GATT, developing nations could now impose higher tariffs on imports from Europe and America in order to protect their own fledgling industries: a provision which effectively forced first world manufacturers wanting to sell to developing countries to set up factories in those countries themselves, thereby bringing into being the first truly multinational corporations, with US firms like Ford and Coca-Cola, in particular, setting up production plants all around the world, not just in order to service the local markets but to make use of low cost labour in order to export more cheaply elsewhere.

In fact, it didn’t take long for US multinationals to start offshoring production for importation back into the USA: a development which also very quickly revealed some of the negative consequences of this new globalist pattern of trade. For while the offshoring of manufacturing might have been good for both multinational corporations and consumers – providing the latter with cheap imported goods – it was far less benign with respect to first world employment, and became even less so after 1979 when, under the leadership of Deng Xiaoping, China finally opened itself up to the world, undertaking a number of economic reforms which ultimately led to a US/China accord that was greatly to China’s advantage. For in order to be able to buy the western technology that China needed for its widespread programme of modernisation, the US agreed to aid China’s own exports to the west by allowing the Chinese to peg their currency – the renminbi, or yuan – to the dollar at an exchange rate set solely by the Chinese themselves.

As mistakes go, I believe that this will one day be seen as one of the worst in post-war history, exceeded in its short-sightedness only by the decision at Bretton Woods to tie the rest of the world’s currencies, via the dollar, to the price of gold. For the inevitable consequence of giving China this licence was a period of almost three decades – up until 2006, in fact – during which the yuan traded at anywhere between 30% and 40% below its true value, making Chinese exports to the rest of the world so cheap that even the EU’s high import tariffs couldn’t protect European industry from the unfair competition.

For this wasn’t the kind of free trade championed by the likes of Margaret Thatcher and others at that time. This was price manipulation by a communist government which suppressed the wages of its workers in order to produce consumer goods for export which its own people could not themselves afford. It could almost be described as a form of economic warfare, designed to destroy the industrial foundations of an enemy. Yet neither the multinational corporations which benefited from moving their production to such a low cost environment, nor the guardians of our new internationalist world order, were inclined to do anything about it: not least, in the case of the latter, because they clearly regarded the political integration of China into the new global economy as being of far greater importance than the continued economic health of the West, which, to be fair, they probably didn’t even realise was under threat. For like everybody else, they told themselves The Big Lie: that growth in China meant greater prosperity for the entire world, and that any loss of low-grade manufacturing jobs in Europe and America would be more than compensated for by growth in higher-grade jobs in the newly emerging technology sectors and in the service industries – especially the financial services industry, which duly took this opportunity to plead its case for deregulation, thereby preparing the ground for the financial crash of 2008 and the perilous state in which we still find ourselves today.

I say this because, although banking deregulation is seldom cited as one of the causes of – or necessary preconditions for – what happened in 2008, there was a very good reason why, up until the 1980s, the banking industry had been so closely controlled, with its operations strictly separated into five main types of financial institution, which can be listed as follows:

  1. Commercial or High Street Banks, which provided short term loans, mostly in the form of overdrafts, to both retail and business customers, funded by short term deposits in the form of current or checking accounts.
  2. Building Societies (in the UK) or Savings and Loan companies (in the US), which provided long term loans in the form of mortgages to retail customers, funded by long term deposits in the form of personal savings accounts. In the case of many UK Building Societies, in fact, customers were often required to have saved with them for a period of up to five years before they were eligible for a mortgage, thus not only ensuring that these borrowers were reliable but also providing the long term funding which this kind of lending required.
  3. Merchant or Investment Banks, which provided medium term loans to businesses, mostly in the form of medium term debentures, funded by the capital of the banks themselves: a high-risk form of lending which naturally tended to make these banks extremely cautious with respect to whom they lent their money.
  4. Pension Funds which provided long-term income streams to long term savers, funded by long term investments in equities and bonds, the very nature of these long term liabilities making these funds also very conservative in the kind of investments they made.
  5. Stockbrokers who made medium to long term investments in equities and bonds on behalf of individual clients whose own attitudes to risk largely determined the type of investments made.

Note that – with the exception of stockbrokers, who never incurred any real risk to themselves except with regard to their reputations – the very structure of most of these institutions made them extremely risk-averse, with the result that getting a loan or investment out of any one of them required a very good business case and a lot of persuasion. The real importance of these divisions, however, was that they matched different forms of lending to different forms of funding, thereby avoiding the most serious mistake any banker can ever make, which is to borrow short to lend long. Once deregulation had taken place, however, nearly all the new financial institutions that were formed – whether by organic expansion or by mergers and acquisitions – covered most or all of the above functions, especially the first four, with the result that borrowing short to lend long not only became possible but – due to other, parallel developments which I shall outline below – became more or less normal.

When Bear Stearns collapsed in 2008, for instance, it is reported that almost 20% of its $365 billion in assets were in the form of highly illiquid mortgage-backed derivatives, while nearly 40% of its funding was in the form of overnight loans which had to be renewed every day and which vastly exceeded its own capital of just $11.1 billion. When its short term lenders began to worry about whether the bank’s long term mortgage-backed derivatives were really worth as much as they were supposed to be, and consequently refused to renew the overnight loans, it thus became instantly insolvent.
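
To make the arithmetic of that mismatch concrete, here is a minimal sketch in Python – purely illustrative, using only the rounded figures quoted above:

    # Reported (rounded) figures for Bear Stearns at its collapse in 2008.
    total_assets = 365e9                     # $365 billion in total assets
    illiquid_mbs = 0.20 * total_assets       # ~20% in mortgage-backed derivatives
    overnight_funding = 0.40 * total_assets  # ~40% funded by overnight loans
    capital = 11.1e9                         # $11.1 billion of its own capital

    print(f"Illiquid assets:   ${illiquid_mbs / 1e9:,.0f}bn")       # ~$73bn
    print(f"Overnight funding: ${overnight_funding / 1e9:,.0f}bn")  # ~$146bn
    print(f"Capital cushion:   ${capital / 1e9:.1f}bn")

    # If the overnight lenders refuse to roll their loans, some $146bn must
    # be repaid at once. The bank's capital covers less than a tenth of that,
    # and its most doubtful assets (~$73bn) cannot be sold quickly except at
    # fire-sale prices - hence the instant insolvency described above.
    print(f"Shortfall: ${(overnight_funding - capital) / 1e9:,.0f}bn")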

Another consequence of deregulation was the increased issuance of credit cards, with every former building society and other new financial institution eager to get in on this highly lucrative business. The trouble was that this was lending of a completely different type from any that had gone before. For prior to deregulation, nearly all lending had been for investment, whether in property or a business. Credit card lending, in contrast, is almost entirely for consumption. Indeed, it is this that justifies such high credit card interest rates. For whereas a borrower who borrows to invest hopefully ends up in possession of an asset which he can then sell, if necessary, to repay the debt, the borrower who borrows to consume ends up with nothing: the meal having been eaten, the holiday having become a distant memory. Thus while borrowing to invest is still intrinsically risky – in that the investment may not turn out to be worth as much as one might have hoped – borrowing to consume is sheer reckless irresponsibility. And prior to deregulation, no bank manager would ever have allowed a customer to act in this way, both for the customer’s sake and the bank’s. Indeed, he would have regarded it not only as bad banking practice but as morally reprehensible. So lucrative was this business, however, with interest rates on many credit cards above 30%, that moral scruples were simply brushed aside, allowing yet another formerly abjured practice to become the norm.

Still, none of this would have led to the perfect financial storm to which it eventually gave rise without one further ingredient. Unfortunately, this was duly supplied in 1987 when the newly appointed chairman of the US Federal Reserve, Alan Greenspan, in effect announced to the world the end of ‘Boom and Bust’ – a phrase which the UK Chancellor of the Exchequer, Gordon Brown, was to use repeatedly a decade later to describe the intended consequences of his own economic strategy. The idea was to use monetary and fiscal policy to iron out the peaks and troughs of what is called the ‘business cycle’: a natural sequence of ups and downs in economic activity brought about by the fact that all businesses tend to expand until they saturate their existing market or markets. The problem is that they do not usually know where the saturation point is until they have actually reached it, their first indication very often being a build-up of unsold stock in their warehouses. Their immediate reaction, therefore, is to cut production and perhaps even lay off staff, which consequently has a knock-on effect on other businesses, both in their own supply chain and in the local economy at large. As a result, these ‘business cycle’ downturns tend to be synchronised right across the economy and only come to an end when a new point of equilibrium is reached, thereby allowing the whole cycle to start again.
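
To see the mechanism in action, here is a toy simulation in Python. Every figure in it is invented purely for illustration; the point is only the shape of the behaviour – steady expansion, an overshoot discovered in the warehouse, then a cut in output that persists until the surplus stock has been worked off:

    # A toy inventory cycle: a firm expands until its market saturates,
    # but only discovers the saturation point through unsold stock.
    MARKET_SIZE = 100.0   # the saturation level, unknown to the firm
    TARGET_STOCK = 15.0   # the warehouse level the firm is happy to carry

    production, demand, stock = 60.0, 60.0, 0.0
    for quarter in range(1, 13):
        production *= 1.08                         # the firm plans on continued growth
        demand = min(demand * 1.08, MARKET_SIZE)   # ...but the market saturates
        stock += max(production - demand, -stock)  # unsold goods pile up (or drain)
        if stock > TARGET_STOCK:                   # the first sign of saturation...
            production *= 0.75                     # ...so cut output and lay off staff
        print(f"Q{quarter:2d}: production {production:6.1f}, "
              f"demand {demand:6.1f}, stock {stock:6.1f}")

Run it and you will see production overshoot demand at around the seventh quarter, stock pile up, and output then being cut back below demand – the down-phase of the cycle – before expansion can resume.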

The new policy, advocated on both sides of the Atlantic, was therefore to intervene, both fiscally and monetarily, whenever the first signs of a general downturn were detected: fiscally, by increasing public expenditure to compensate for the decrease in business spending; and monetarily, by reducing interest rates so as to encourage continued consumer spending. And initially it worked: so well, in fact, that during the Clinton administration and beyond, Alan Greenspan was thought to be a genius.

There were, however, a number of problems with this whole policy. On the fiscal side, the main problem was that increasing public expenditure very quickly began to seem like a general panacea for dealing with any economic downturn and therefore soon became a habit, leading to chronic deficit spending: the very thing that the Bretton Woods system had been set up to prevent. On the monetary side, however, the problems were more acute. For reducing interest rates to maintain consumer spending effectively brings consumption forward: people buy today what they might otherwise have bought and consumed tomorrow, which, in many cases, means that they won’t now be buying the item in question again for some time. After all, if you buy a new car this year, you are unlikely to buy another new car next year, especially if you have had to borrow the money for the purchase. For apart from anything else, you’ve now got the debt.

As a general policy, therefore, reducing interest rates to boost consumer spending in order to avoid an economic downturn has diminishing returns. The first one or two times it is used, it works well. But there is only so much consumption you can bring forward, and only so much debt you can load onto the consumer’s shoulders, before the tactic loses its effectiveness. For people just won’t borrow more money if they are already over-indebted.

Worse still, this policy works like a downward ratchet on interest rates. For while central bankers are quick to reduce interest rates whenever an economic downturn is detected, they are generally much slower to put them back up again when the economy recovers, citing worries that a premature increase might ‘choke off the recovery’ as justification for the delay. Throughout the 90s and 00s, as a result, interest rates fell steadily all around the world, making it cheaper and cheaper to borrow money – especially for banks, which found themselves able to borrow cheap short term money from other banks at 2 or 3% while lending it out at 30% on credit cards or, more importantly, at 11 or 12% on mortgages.

I say ‘more importantly’ because, while by then banks were providing 100% mortgages at up to six times annual salary to anyone who could sign their name – regardless of whether they had sufficient income to service the interest, let alone repay the principal – property-backed mortgages were still inherently less risky than unsecured credit cards, and were made even less so by their massively increased availability. For their very cheapness, and the fact that they were now available to many more people than had traditionally been the case, meant that demand for property was also greatly increased, which, in turn, increased house prices. This meant that even if an over-extended borrower were to default, the mortgage could simply be foreclosed, the house sold and the money recovered without loss.

As a further consequence, this also meant that lending on credit cards to anyone who owned a property was less risky too. For if their credit card debt became unpayable – or even unserviceable – they could simply take out another mortgage on their rapidly appreciating property and start afresh. Indeed, for many, it actually took the worry out of credit card spending, allowing them to hit the High Street with gay abandon, running up debts they would never otherwise have dreamed of incurring.

But surely, you say, someone must have known that this was going to end badly. The government, for instance? Couldn’t it see where this was headed? Almost certainly, yes. The problem, however, was that while, on the surface, the economy (in the UK at least) may have looked rosy – largely as a consequence of rising house prices and all this consumer spending – underneath, things were rather different. This can be seen from the GDP figures for the seven years leading up to the financial crash. For while, according to the Office for National Statistics (ONS), overall GDP in the UK rose by an average of 3.01% per annum, more than half of this was due to growth in just two sectors:


  1. The financial sector itself, which grew at an average rate of 9.08% per annum, coming to exceed 9% of the entire economy by 2008; and
  2. Public expenditure, which grew at an average annual rate of 6.41% during the same period.


In contrast, the real economy – that which produced the goods and services people were actually willing to pay for, and which employed 85% of the working population – grew at an average rate of just 1.49% per annum. Hollowed out by having exported its manufacturing core to the far east, the part of the economy which created the country’s real wealth was thus in a state of near stagnation.

From the point of view of the government, this posed a number of problems. For there was no way that it could afford to rein in the financial sector. Not only was that sector providing a large part of the country’s economic growth, but it was also supplying a large slice of the taxes that were paying for the increases in public expenditure which made up the other significant portion of what appeared to be a booming economy. On top of that, it was also UK banks that were financing most of the government’s annual deficit, which, by 2007, had already reached around £50 billion.

Worse still, the population at large, unaware of the growing financial bubble upon which their apparently affluent life-style was based, were partying like never before. With house prices having doubled in less than a decade and seemingly set to go on rising, people felt as if they’d won the lottery. And there was no way that any politician was going to pour cold water on that.

But surely the banks themselves saw the danger? Of course they did. But they weren’t going to curtail their lending; they were making too much money for that. Instead, making use of the fact that financial institutions which sold mortgages could now also create and sell such exotic financial instruments as mortgage-backed derivatives – something which building societies could never have done – they decided simply to spread the risk by packaging up the mortgages they had sold into blocks and then selling shares in these blocks to other financial institutions.

And for a while, indeed, these Collateralised Debt Obligations, or CDOs as they were called, were actually very popular. For being based on a percentage of the mortgage interest coming into the original lending bank, not only did they produce a good annual rate of return, they also allowed secondary investors to share in this lucrative business without having to sell mortgages themselves. Even more importantly, with property prices continuing to rise, investors seemed certain to get their money back upon maturity. Indeed, so safe did these long term mortgage-backed derivatives seem that banks like Bear Stearns were even prepared to borrow short to invest in them, making a significant profit on the interest rate differential.

The problem, of course, was that, as with any rising market in which buyers have to borrow in order to purchase what is sold, there comes a point at which the price exceeds what borrowers can practically manage: a problem exacerbated by the fact that, as with all such tipping points, its exact location wasn’t known until it had actually been passed and borrowers had started to default. Worse still, the defaults meant that lenders started tightening their lending policies, so that there were now fewer buyers to purchase the foreclosed properties, leading to a drop in prices in order to ensure disposal. This, in turn, led to problems for borrowers who had been sold mortgages with graduated payments, in which the bulk of the burden only kicked in when the borrower was supposed to have accumulated enough equity in the property to dispose of it at a profit if required. The result was an increase in fire-sales and a crash in property prices right across both America and Europe, followed by a cascade of defaults, which now brought into question the real value of all the CDOs that had been issued and the solvency of those banks that had borrowed to buy them.

In fact, so bad was the contagion which now flowed through the banking system that many people will be surprised to discover just how little money the banking industry as a whole actually lost. In the four years from 2008 to 2011, for instance, the four largest UK banks – HSBC, Barclays, Lloyds HBOS and RBS – lost a total of just £47.34 billion, with RBS responsible for around three quarters of this amount. Of course, this figure, which is merely an aggregate of the losses stated in the four banks’ P&L accounts, doesn’t take into account lost profits. But even projecting profits forward from earlier years, the total figure would still not have exceeded £100 billion. The real contagion, therefore, was not one of actual losses, knocking over banks one at a time like a row of dominoes, but one of fear, and hence paralysis. For not knowing which financial institutions were solvent and which were not, banks around the world did not know whom they could safely lend to, and so simply stopped lending to one another altogether, freezing the circulation of money in a way that caused the collective balance sheets of those same top four UK banks to shrink by a massive £1.8 trillion.

If, like me, you find ‘money’ a bit of a mystery, you may of course wonder how this is possible. If the banks didn’t ‘lose’ the money, where did the £1.8 trillion go? What you have to remember, however, is that under normal circumstances only around 3% of the money supply is ‘base’ money issued by the central bank. The other 97% is largely created by the banking system itself, as I shall endeavour to explain.

Suppose, for instance, that Mr. Brown wants to buy a house from Mr. Green. He therefore goes to his bank (Bank A) and borrows £250,000. Bank A doesn’t actually have £250,000, but once the deeds to the house have been exchanged, it simultaneously borrows the money from Mr. Green’s bank (Bank B) and transfers it back again so that it can be credited to Mr. Green’s account. We now have a situation in which Mr. Brown owes Bank A £250,000 (in the form of a mortgage); Bank A owes Bank B £250,000 (in the form of an inter-bank loan); and Bank B owes Mr. Green £250,000 (in the form of a deposit which, in principle, he may withdraw at any time but, in practice, probably won’t). The upshot is that, depending on how you look at it, either £250,000 or £500,000, which didn’t exist before, has now been created out of thin air, expanding the balance sheets of both banks involved, with each bank gaining an extra £250,000 in both assets and liabilities.
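
For those who find such things easier to follow in code, here is the same example as a minimal double-entry sketch in Python. The banks, names and figures are the hypothetical ones above; each debt is simply recorded as an asset of the creditor and a liability of the debtor:

    from collections import defaultdict

    # Each party's balance sheet: named assets and liabilities, in pounds.
    books = defaultdict(lambda: {"assets": {}, "liabilities": {}})

    def lend(creditor, debtor, label, amount):
        """Record a debt as an asset of the creditor and a liability of the debtor."""
        books[creditor]["assets"][label] = amount
        books[debtor]["liabilities"][label] = amount

    lend("Bank A", "Mr. Brown", "mortgage", 250_000)      # Brown owes Bank A
    lend("Bank B", "Bank A", "inter-bank loan", 250_000)  # Bank A owes Bank B
    lend("Mr. Green", "Bank B", "deposit", 250_000)       # Bank B owes Green

    for bank in ("Bank A", "Bank B"):
        assets = sum(books[bank]["assets"].values())
        liabilities = sum(books[bank]["liabilities"].values())
        print(f"{bank}: +£{assets:,} assets, +£{liabilities:,} liabilities")

    # Both balance sheets have expanded by £250,000 on each side, and Mr.
    # Green now holds a spendable £250,000 deposit that nobody saved up
    # beforehand: it was created by the act of lending. Reverse the three
    # entries - repay the loans - and the money vanishes again.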

Thus it is that, while banks keep lending to each other, the money supply expands. When they stop – and more especially when they start to unwind their respective positions, calling in debts in order to repay debts of their own – it contracts. And that is precisely what happened here. Unable to borrow from other banks but with liabilities falling due, each bank had to call in money lent, not just to other banks but to commercial customers as well, in many cases reducing or even removing overdraft facilities from businesses, thereby not only shrinking the money supply but adding even further to the economic downturn, which, in Britain at least, was to become the deepest recession since the early 1920s, with a fall in GDP of over 6%.

What this revealed more than anything else, however, was not just the anarchic dysfunctionality of our deregulated financial system but the inherent weakness of the underlying economy, which, as a result of two decades of accumulated debt, lacked the kind of robustness necessary to bring about an early recovery. Worse still, with interest rates already at rock bottom and government spending already at record highs, it also very quickly became apparent that the previously relied-upon remedies for dealing with such economic downturns – reducing interest rates and increasing public expenditure – had all been used up.

Not, of course, that this prevented governments all around the world from attempting to apply these policies anyway. In a coordinated effort, central banks worldwide duly cut interest rates once again, in many cases reducing them to below zero in an attempt to deter commercial banks from placing money on deposit with them, thereby forcing them – or so it was hoped – to lend to consumers and businesses instead. The problem was, of course, that with consumers already up to their eyes in debt and actually starting to pay down their credit cards, and businesses disinclined to invest in the middle of a recession in which demand was still declining, there just weren’t that many takers. And even though politicians still attempted to shame banks morally for not lending enough – especially after all the public support they themselves had received – the fact is that you can’t force people to borrow money if they don’t want to.

So that left increasing public expenditure as the only other weapon available. And, as in many other countries, that is what many people in the UK duly called for. The problem was that, due to the recession, UK tax revenues had fallen massively, tripling the annual deficit from £50 billion to £150 billion in just one year. Any additional borrowing to finance increased public expenditure would therefore have risked lender resistance, very probably forcing the Treasury either to increase the interest paid on its bonds or to risk the failure of a bond issue altogether.

In fact, with the exception of the USA, most countries that tried to borrow and spend their way out of the recession – most of them in Europe – met opposition in the bond markets and, in many cases, had to turn for support to what became known as the troika: the combination of the IMF, the ECB and the European Commission, which duly forced supplicant nations to cut public expenditure as a condition of its help.

With an annual deficit of £150 billion – around 9% of GDP at the time – this might also have been the fate of the UK, even without any increase in borrowing to finance increased public expenditure, had it not been for the early introduction of a programme of quantitative easing (QE) by the Bank of England, which duly printed additional base money with which to purchase UK Treasury bonds from subscribing commercial banks, thereby not only ensuring that these banks had enough liquidity to subscribe to the next Treasury issue, but also guaranteeing that the price of the bonds stayed high while their corresponding yield remained low.

Not, of course, that it was ever admitted that this was the programme’s primary purpose: an admission which, had it been made, would have severely dented confidence and would therefore have been self-defeating. Instead, the programme was usually presented as simply another tool in the BoE’s armoury for increasing general liquidity within the banking system and thus for helping to stimulate economic growth. The fact remains, however, that without the BoE’s purchase of £325 billion worth of UK Treasury bonds from UK commercial banks between 2008 and 2011, it is very doubtful whether those banks, on their own, would have been able to purchase even half of the £520 billion worth of bonds issued by the Treasury during the same period. Even more tellingly, all this money printing and bond purchasing had absolutely no effect on the real economy, which remained stubbornly stagnant throughout the entire three and a half years in which the programme was in operation.

Indeed, as a tool for stimulating economic growth, QE has proven itself spectacularly ineffective almost every time it has been used. Since 2015, for instance, the European Central Bank (ECB) has been pouring between €60 and €90 billion a month into the Eurozone financial system and has, to date, purchased more than €2.4 trillion in financial assets, including corporate bonds as well as Eurozone treasury bonds. And yet economic growth in the Eurozone has not managed to climb above 2% during this entire period, with some countries, like Italy, remaining in almost perpetual recession. For the problem in Europe, as in most of the developed world, is not actually one of liquidity. As the example above shows, the banking system is itself quite capable of producing the necessary finance when there is a requirement for it; it does not need a central bank to print more base money in order to do so. What it needs is a demand for finance from the real economy. And this is what is lacking. For with stagnant real incomes, over-indebted consumers provide very little incentive for businesses to invest in increased production or improved productivity. And without growth in either of these areas, real incomes, in turn, remain suppressed. This is the vicious circle we are in. And printing money and pumping it into the financial system does nothing to break it.

In fact, in many ways, it makes things worse. For once the money has been pumped into the financial system, it has to be invested somewhere. And since it is not being invested in the real economy, it ends up being invested in either property or financial assets. As a result, property prices have risen once again, in many places to levels at which those living on suppressed median incomes simply cannot afford to buy a house or apartment of their own, and are therefore forced to spend an even larger proportion of their income on rent, thereby depressing demand in other parts of the economy even more.

Worse still is the state of our financial markets. For with central banks buying up so many financial assets, and so much money swirling around within the financial system itself, the owners and managers of this money are desperate to find any financial asset in which they can profitably invest. The result is that just about all financial markets – whether for stocks, treasury bonds, corporate bonds or ever more exotic derivatives – are at peak highs. Nor is this helped by the fact that, unable to find profitable investments in the real economy, the only way for many businesses to secure increased shareholder value is through cash-based mergers and acquisitions and share buybacks, the technical simplicity of the latter making them particularly attractive to beleaguered CEOs who have nothing better to do with all the cheap money available to them.

To see why this is, suppose that one such CEO manages a company making £1 million per year in pre-tax profits, which, at an admittedly rather low but numerically helpful price/earnings ratio of ten to one, would give it an overall value of £10 million. Suppose, too, that the company has one million issued shares, which are therefore valued at £10 each. Now suppose that the CEO decides to buy back 20% of the shares. He goes to his bank and borrows £2.2 million – at a rate of interest significantly lower than he is currently paying out to his shareholders in annual dividends – and offers to buy 200,000 shares from his shareholders at £11 each: an offer which most of them jump at. The result is that the company now has only 800,000 shares but is still making £1 million per year in pre-tax profits, which, at a P/E of 10:1, still makes it worth £10 million. This means that each of the remaining shares is now worth £12.50 – an increase of 25% which so delights the shareholders (who have already made a 10% gain on the shares they sold) that it earns the CEO a hefty bonus, even though he has done nothing to improve the underlying value of the company or, indeed, the overall condition of the economy.
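
The arithmetic is easy enough to check. Here it is as a short Python sketch, using exactly the hypothetical figures from the example above:

    # The hypothetical buyback from the example above, step by step.
    pretax_profit = 1_000_000     # £1m per year
    pe_ratio = 10                 # price/earnings ratio of ten to one
    shares = 1_000_000

    market_cap = pretax_profit * pe_ratio  # £10,000,000
    price_before = market_cap / shares     # £10.00 per share

    bought_back = 200_000                  # 20% of the shares...
    offer_price = 11.00                    # ...at a 10% premium
    borrowed = bought_back * offer_price   # £2,200,000 of cheap debt

    remaining = shares - bought_back       # 800,000 shares left
    price_after = market_cap / remaining   # same profits, same P/E

    print(f"Borrowed for the buyback: £{borrowed:,.0f}")
    print(f"Price before: £{price_before:.2f}; after: £{price_after:.2f} "
          f"(+{price_after / price_before - 1:.0%})")
    # £12.50 a share: a 25% gain for the remaining holders, even though
    # profits and productivity are unchanged. Only the share count has
    # fallen - and the company now carries £2.2m of debt, serviceable, in
    # the example, only because interest costs less than the cancelled dividends.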

And the same thing occurs in the case of cash-based mergers and acquisitions. Twenty years ago, most purchases of one company by another were made in shares: the acquiring company issued more of its own shares and exchanged them for shares in the acquired company, so that the total equity in the new combined corporation was not reduced. Today, however, with money so cheap to borrow, most acquisitions are executed partly or entirely in cash, thereby wiping out some or all of the equity of the purchased company. When the German pharmaceuticals giant Bayer bought Monsanto in June 2018, for instance, it paid $66 billion for the American GM specialist entirely in cash, saddling itself with an extra $66 billion of debt in the process.

Nor was this exceptional. Mergers and acquisitions have been increasing steadily in both number and value over the last six or seven years. In 2018, there were more than 49,000 of them worldwide, with a total value of $3.8 trillion, most of it paid in cash.

The problem is that this increased leverage makes companies vulnerable. For while equity costs more in annual dividends than the interest paid on bonds, it never has to be redeemed. Moreover, unlike interest, dividends can be cut or cancelled if the company gets into financial difficulties. Thus while funding acquisitions through debt may look attractive today, with interest rates very low, it may look rather different tomorrow if interest rates were to rise. Indeed, it is estimated that up to 25% of all listed corporations around the world would face some degree of difficulty if interest rates rose by any significant amount. Not only would they be unable to service their debts, but in many cases they would be unable to refinance themselves when the debts fell due, thereby earning for themselves the soubriquet of ‘Zombie Corporations’: corporations that are actually already dead but just don’t know it yet.
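
To see how quickly ‘attractive’ leverage can turn fatal, here is a small stress test in Python. The company and all its figures are invented; the point is simply how interest cover collapses as rates rise:

    # A hypothetical acquisition-laden company, stress-tested at rising rates.
    operating_profit = 10_000_000  # £10m a year (invented)
    debt = 120_000_000             # £120m of acquisition debt (invented)

    for rate in (0.02, 0.04, 0.06, 0.08):
        interest = debt * rate
        cover = operating_profit / interest  # interest cover ratio
        if cover >= 2:
            verdict = "servicing comfortably"
        elif cover >= 1:
            verdict = "zombie: profits barely cover the interest"
        else:
            verdict = "cannot even service its debt"
        print(f"rate {rate:.0%}: interest £{interest / 1e6:.1f}m, "
              f"cover {cover:.2f}x - {verdict}")

On these made-up numbers, a company that looks comfortably solvent at 2% is already a ‘zombie’ at 6% – and at 8% it can barely pay its interest, let alone redeem or refinance the principal.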

Very little of this ‘financial engineering’, therefore, has been to the benefit of the corporations themselves, and it has done nothing for the wider economy. Indeed, for the most part it has no effect upon the wider economy at all. Its only beneficiaries have been the guardians of this vast financial machine, who have made the ‘financialisation’ of the economy their business: the hedge fund managers and heads of investment banks who, along with their clients, have made billions out of gaming the system in this way, while the real economy, along with the incomes of all those who labour within it, has remained more or less stagnant.

Indeed, it is as if the economy itself has been divided in two: the real economy, which produces the goods and services we all need to live, but which is in slow decline; and the financial economy, where people are turned into billionaires overnight, and which seemingly continues to expand with endless ease. For creating money without having to create any extra real wealth is easy: all you have to do is press the ‘enter’ key on your keyboard and there it is – another million in someone’s bank account. The trouble is that creating money without creating any extra real wealth, as demonstrated by the Weimar Republic in the early 1920s and as the participants at the Bretton Woods Conference tried to teach us in 1944, is the ultimate recipe for disaster.

In April this year, as if in reminder of this fact, the Institute of International Finance reported that in 2018 total global debt – sovereign, corporate and household – reached a staggering $243 trillion, more than three times global GDP: an amount so vast it can never be repaid and must eventually bring down the world’s entire financial and monetary system. Indeed, so enormous has this problem now become that it is hard to understand how we could have got ourselves into such a position, especially as there must be those in power who saw it happening and yet did nothing about it. It almost seems like a repeat of the years leading up to the 2008 crash, when politicians turned a blind eye to all the excessive mortgage lending then taking place; and it speaks, therefore, to a kind of moral corruption: one which is apparently not confined to elected politicians desperate to keep their electorates sweet, but which also extends to the guardians of our new globalist order, who would seem to be just as determined to preserve the status quo as everyone else, even while it pushes the world ever closer to destruction.

And it is this, I believe, that people throughout the West are now beginning to sense: that our globalist masters – that small, unelected coterie of non-governmental technocrats, global financiers and heads of multinational corporations whom we watch descend upon Davos each year in their private jets – are not only not in control of what is happening but are either in denial, still believing us to be on course for the brave new world envisioned at Bretton Woods, or are cynically content to oversee an End of Days they know they cannot prevent but from which they are nevertheless determined to squeeze the very last drops of power and prestige. And it is this fecklessness, I believe, combined with an air of entitlement and a total lack of care or concern for those they have betrayed and on whom they have turned their backs, against which populist movements all across Europe are now beginning to rebel.

The only question that remains, therefore, is whether it is too late. And it is this question, along with that of how we allowed all this to happen, that I shall be attempting to answer in the final part of this essay: ‘The End of an Era (Part III)’.

Saturday 4 May 2019

The End of an Era (Part I)


To many of those seeking to preserve the economic and political structures of the last seventy years, the two seismic events of 2016 – the decision of the British people to leave the EU and the election of Donald Trump to the presidency of the United States – are held, as a matter of faith, to be aberrations: unaccountable departures from the norm which will be overturned or reversed in due course, thereby allowing the world to return to its natural and proper state. Even though the Mueller report failed to find President Trump guilty of any wrongdoing, the US establishment and media are thus still determined to find some way to impeach him, while the British government continues to delay Brexit in the hope that its supporters will eventually become exhausted and lose the will to fight, thereby allowing Article 50 to be revoked.

Whether or not either of these ‘reversals’ occurs, however, I believe that in the longer term the changes which the UK referendum and the 2016 US presidential election presaged will not only continue inexorably but will do so at an accelerating pace. For far from being the inexplicable anomalies which others believe them to be, I believe that they constitute the first clear indications that the post-war era fashioned at the Bretton Woods Conference in New Hampshire in the summer of 1944 is on the point of collapse, and that this is so due to two fundamental errors actually built into the Bretton Woods Agreements themselves.

The most glaring of these was a contradiction in the political consensus upon which such international institutions as the UN and, later, the EU were built: a contradiction which is at the heart of the Brexit debate and to which I shall be returning in Part III of this three-part exposition. In Parts I and II, however, I want to start by looking at what was possibly an even more catastrophic error: one which was built into the very foundations of the post-war monetary and financial system, which not only included the creation of the World Bank and the IMF but also saw the re-establishment of a gold standard in which the price of gold was (permanently) set at $35 an ounce, with all other currencies being pegged to the dollar at fixed, though not immutable, exchange rates.

The purpose of this was to impose fiscal discipline on governments around the world by preventing them from simply printing money to finance deficits, as had happened in Germany after the first world war, leading to a period of hyperinflation which helped pave the way for the economic collapse and extremist politics that allowed National Socialism to come to the fore. What it also did, however, was effectively make the dollar a reserve currency which other countries could hold in lieu of gold, thanks to the commitment made by the USA that it would convert dollars to gold on demand, thereby making the dollar ‘as good as gold’.

All very well, you might think. The trouble was that the US consequently had a responsibility to keep the price of gold fixed at $35 an ounce, if necessary by adjusting the domestic supply of dollars in order to maintain confidence in future gold convertibility. This meant that the US, more than any other country, had to maintain fiscal discipline, thereby greatly limiting the flexibility with which American presidents could manage the US economy to their own political advantage: a flexibility which only the most optimistic idealist could have imagined American presidents forgoing.

In fact, it is a wonder that this new monetary order lasted as long as it did: a miracle which was almost entirely due to the probity of President Eisenhower, probably the most fiscally disciplined president the US has ever known. Not only did he end the wasteful Korean War within six months of taking office but, even at the height of the Cold War, he then set about cutting military expenditure – the only post-war president ever to do so – such that by the end of his first year in office he had actually balanced the federal budget, which then remained broadly in balance for the remaining seven years of his presidency.

The result was probably the golden age of the American economy. With very little growth in base money, there was almost no inflation, while underlying economic growth remained virtually constant at between 3% and 4% per annum, meaning that, even taking into account the rise in population, the average standard of living of American citizens rose by around 35% during Eisenhower’s presidency.

Nor were these good times restricted to the USA. All across the developed world, economies were booming. In the UK, for instance, the then Prime Minister, Harold Macmillan, famously told the British people that they had ‘never had it so good’. And he was right. All of which suggests that the system itself worked, and would have gone on working had all US presidents been as honest and responsible as Ike. The sad truth, however, is that the participants at Bretton Woods were simply naïve to think that they would be.

Indeed, the problems started almost as soon as Eisenhower left the White House, when a new war in Southeast Asia, combined with the cost of the space race and additional welfare spending in response to the Civil Rights movement, forced the administrations of both John F. Kennedy and Lyndon B. Johnson to increase borrowing: something which they could only do by also increasing the base money supply, which they therefore did. The result was that even as early as 1966 – the earliest year for which I have managed to find figures – there were almost twice as many dollars in circulation as the US held in gold reserves. According to an IMF publication from 2014, in that year there was a total of $24 billion in dollars issued by the Federal Reserve in existence, $14 billion of which was held by foreign central banks as dollar reserves, while the US bullion repository at Fort Knox held only $13.2 billion worth of gold, priced at $35 an ounce.

What this also tells us is just how big the US trade deficit had become by this point. For apart from foreign aid and loans issued under the Marshall Plan, what this $14 billion held by foreign central banks represented, of course, was a massive imbalance of trade with those other countries, especially West Germany and Japan, whose economies and exports to the US were by then flourishing. In fact, so acute had the trade imbalance between the US and West Germany become that the US eventually demanded that West Germany increase the value of the Deutsche Mark in order to make its exports more expensive – something which the West German government naturally declined to do, thereby effectively leaving the Bretton Woods system.

Everywhere one looked, in fact, the system was now creaking, with the price of gold inexorably rising on all the metals exchanges on which it was traded, while the dollar itself was repeatedly brought under pressure by countries trying to unload their dollar reserves in the very genuine fear that they would eventually be devalued. Despite numerous international attempts to shore up the system – principally by creating a two-tier market for gold, with one tier for commercial traders and another for central banks, and by using currency swaps rather than dollar conversion in international settlements – it was fairly clear that it was only a matter of time before the system collapsed: an inevitability which duly came to pass in the summer of 1971 when, to the consternation of all those who knew what it meant and the utter obliviousness of all those who didn’t, President Nixon announced to the world that – as a ‘temporary’ measure which inevitably became permanent – the United States of America would no longer be exchanging dollars for gold.

In the years since, much has been written about this cataclysmic event, with much of the blame, quite unfairly I think, being attributed to Nixon himself. In his seminal work, ‘The Great Deformation: The Corruption of Capitalism in America’, David Stockman, for instance, argues that, with the Vietnam war still dragging on and Nixon needing to boost his popularity ahead of the 1972 presidential election – principally by doling out various forms of largesse to various sections of the public – his motives for acting how and, more especially, when he did were purely political and selfish. This is to ignore the fact, however, that by the summer of 1971 the situation had simply become untenable. The United States did not have enough gold to cover its international commitments, and refusing to face up to this reality was only adding to the pressure. More to the point, Nixon didn’t start the deficit spending and excess money-printing: John F. Kennedy and Lyndon B. Johnson were just as much to blame, if not more so. More than anyone else, however, the true culprits were the participants at Bretton Woods, who not only failed to see the inevitability of this outcome but also failed to grasp the magnitude of its truly catastrophic consequences. For if other countries around the world had little reason to continue holding dollars before the closing of the gold window, they had even less reason to hold them afterwards. Which meant that unless the US could find some other reason for countries to hold dollars, not only was a crash in the value of the dollar more or less inevitable, but a gradual repatriation of dollars to the US was also very likely, thereby fuelling an already high rate of inflation: something which Nixon just couldn’t allow.

His now famous solution, therefore, was to send Henry Kissinger to Saudi Arabia, the largest oil producer in the world, to convince the Saudis to continue selling their oil for dollars, promising in return that the USA would provide diplomatic and military protection to Saudi Arabia for as long as this arrangement was maintained. Thus the petro-dollar was born, along with an alliance which has seen America defend Saudi Arabia ever since, no matter how shockingly and brutally it behaves. Even more significantly, however, it also established what has probably become the most central plank of American foreign policy: the defence of the dollar’s reserve currency status at all costs. For if the consequences of losing that status in 1972 would have been disastrous – at a time when there were only a few tens of billions of dollars held in foreign central banks – they became even more unthinkable as the years passed and those billions turned into trillions.

Of course, it is very difficult to say how much this consideration has influenced individual American interventions around the world over the last forty-five years, especially as none of these interventions has ever been advertised as serving this purpose. When the preservation of the dollar’s status as a reserve currency is a consequence of one of these interventions, however, it is equally difficult to regard it as merely coincidental. Do we really believe it to have been a mere coincidence, for instance, that in October 2000, before the invasion of Iraq in 2003, Saddam Hussein had begun selling Iraqi oil in euros, thereby directly challenging the position of the dollar? Or that, prior to his overthrow and brutal murder in 2011, Colonel Gadhafi had been actively discussing the creation of an alternative reserve currency with fellow African leaders and had already amassed an estimated 150 tons of gold to this end? After all, both Muammar Gadhafi and Saddam Hussein had been tolerated and even accommodated by the USA for decades – up until they each started going down this route. Moreover, just as we now know that Saddam Hussein did not have weapons of mass destruction, so there is no real evidence that Colonel Gadhafi was actually planning a massacre of the people of Benghazi, as was then widely reported in justification of military action. The only wrongdoing of which we know for certain that the two men were guilty, therefore, is that they had both, in their different ways, threatened the reserve status of the dollar.

Which is something of an irony. For even if this posited motive for the removal of the two men is only partially true – if preserving the reserve status of the dollar was only one of the reasons for getting rid of them, the last straw perhaps – it is nevertheless the case that, for extended periods of their long careers, both of these men had actually contributed to the system that was ultimately to bring about their downfall.

I say this because the cost of maintaining sufficient military power to execute this kind of regime change whenever it is deemed necessary is astronomically high. At present, for instance, the combined budgets of the US military and its security and intelligence services amount to around $1 trillion per annum ($750 billion for the military, $250 billion for the security and intelligence services). To put this in perspective, current Russian military expenditure stands at just $69 billion. Nor is it a coincidence that in the current financial year the US federal government will also be running a budget deficit of around $1 trillion. For in order to fund this enormous military and intelligence machine, the US government invariably has to borrow at least some of the money it needs; in 2019, in fact, it is effectively borrowing all of it. And who will be lending it to them? Well, high on the list of lenders are, quite naturally, those countries which sell commodities denominated in dollars, especially oil. After all, what else are they going to do with all the dollars they receive for these commodities? They are not just going to put them in a vault where they can’t earn any interest. So they buy dollar-denominated assets, including US Treasury bonds, thus providing the funds with which to pay for the very military machine which then crushes anyone who tries to break out of this self-perpetuating vicious circle.

Nor was this the only monster which the petro-dollar unleashed upon the world. For back in 1972, military protection was only one of the conditions the Saudis laid down for continuing to sell their oil in dollars. Of far more immediate consequence was their demand for an increase in the price of the oil itself, which went from $3 a barrel in 1972 to $12 a barrel by the end of 1973: an increase of 300% which sent a tide of inflation rippling around an oil-dependent world.

That OPEC justified this price hike on the grounds that the West was supporting Israel, and therefore needed to be taught a lesson, made it seem, of course, that the US/Saudi accord had nothing to do with the increase, especially as the US was among the group of countries, including the UK, that were also singled out for an embargo. Anyone who actually believes that the US/Saudi pact and the oil price hike were not connected, however, or that the US did not sanction the latter, is more than a little naïve, especially as the price rise was itself in America’s interest. For not only did it further increase the need of other countries to hold dollars – and more of them – thereby strengthening America’s position on the world stage, it also gave the direct recipients of this windfall – Saudi Arabia, the Gulf States, Libya, Iraq and even Iran, then ruled by the CIA-installed Shah – even more dollars with which to buy US manufactured goods, including armaments, and US Treasury bonds, thereby providing the US with the funds it needed to maintain the military and intelligence establishment required to police this new world order.

The people who actually footed the bill for all this were, rather, the citizens of the non-oil-producing countries of Europe – especially the UK, which had yet to discover, or at least start to exploit, its own oil reserves in the North Sea, and which, after a decade of financial mismanagement, was saddled with what was probably the weakest economy in the developed world. Already, in 1967, the Wilson government had had to devalue the pound against the dollar from $2.80 to $2.40, almost entirely as a result of having indulged in the kind of fiscal and monetary policies which the Bretton Woods system was designed to prevent. Spending far more money on its welfare system and nationalised industries than it was able to raise in taxes, it had inevitably ended up printing far more money than growth in the underlying economy could justify. The result was soaring inflation, which meant that, at $2.80 to the pound, Britain’s exports had simply become uncompetitive, with the further consequence that the country was also running a trade deficit with the rest of the world, which in turn meant that it was haemorrhaging dollar reserves, thus rendering further money printing even more problematic.

When West Germany and others left the Bretton Woods system and allowed their currencies to float freely on foreign exchanges, this made the position even worse by opening the door to arbitrage: a method of making money which exploits any differential in the price of a particular commodity (in this case, the UK pound) on different markets. Just as in the early 1990s, when the pound was artificially pegged to a basket of European currencies under the European Exchange Rate Mechanism (ERM) and consequently came under predatory pressure from George Soros, so too, in the early 1970s, the Bank of England found itself repeatedly forced to support the official exchange rate by expending ever more of the country's gold and dollar reserves on buying pounds, until eventually, in June 1972, the then Chancellor of the Exchequer, Anthony Barber, finally conceded defeat and allowed the pound to float freely, at which point, being grossly overvalued, it duly sank like a stone.
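
For readers unfamiliar with the mechanics, here is a minimal sketch of the trade (in Python; the function and the exchange rates are invented purely for illustration). When a central bank stands ready to buy its currency at an official rate above the free-market price, anyone can buy cheap in the market and sell dear to the bank, draining its reserves with every round trip:

    def arbitrage_profit_usd(amount_gbp: float,
                             market_rate: float,     # free-market $ price of £1
                             official_rate: float):  # defended $ price of £1
        """Profit from buying pounds at the market rate and selling them
        to the defending central bank at the higher official rate."""
        cost = amount_gbp * market_rate
        proceeds = amount_gbp * official_rate
        return proceeds - cost

    # Illustrative rates only: suppose the pound trades at $2.50 abroad
    # while the Bank of England is still obliged to buy at $2.60.
    print(arbitrage_profit_usd(1_000_000, 2.50, 2.60))  # roughly $100,000, risk-free

Every such round trip transfers dollars from the Bank's reserves to the arbitrageur, which is why a defended peg bleeds reserves until the defence is abandoned.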

The result was a further increase in the price of imports, including oil, and hence a further increase in inflation. When the hike in the price of oil then took effect in late 1973, inflation simply spun out of control, peaking at almost 27% in 1975.

Speaking personally, it is a period in our history I remember very well. I was a student at the time, working three or four nights a week on one of the bars at my university in order to augment a small student grant which was set at the beginning of each year but which rapidly lost value as the year progressed, with the result that by the latter half of the final term of each year I had almost nothing left to live on.

The situation in the country at large, however, was even more dire. Requiring significant pay rises to keep pace with inflation, workers all over the country were going on strike, thereby fuelling inflation even further. One strike in particular, in the nationalised coal industry, denied coal to the coal-fired power stations, which meant that there were frequent power cuts. Many domestic customers only had electricity for a few hours each day, while most of industry was reduced to working just three days a week. It was as if the whole fabric of our society were falling apart. And it was then, in the midst of all this turmoil, that we joined the European Economic Community, or Common Market as we then called it, our entry into this economic utopia having been sold to us on the basis that it would solve all our economic woes. Which, of course, it didn't.

Indeed, in many ways it actually made them worse, the fate of the British car industry being a case in point. 

In 1968, under the auspices of the previous Labour government, most of the country's many small automotive manufacturers, such as Austin, Morris, MG and Rover, had been merged into a giant conglomerate called British Leyland. The idea had been to reduce costs, and hence become more competitive, by reducing the overall number of models produced, standardising tooling and unifying distribution. The effect, however, had been fairly disastrous. Not only was the new mega-corporation cumbersome and overly bureaucratic, but the elimination of competition between the rival makes led to a steady decline in both design and build quality. By the time the country entered the EEC, therefore, and removed tariffs from French, Italian and, above all, German imports, the British automotive industry simply couldn't compete. Compared to the modern, well-built Citroëns, Alfa Romeos and BMWs of our European neighbours, nobody wanted to buy the poorly designed and fault-riddled models of our own manufacture. Around the old Longbridge plant, to the southwest of Birmingham, where Austins used to be made, they actually rented fields in which to store all the unsold cars: acres and acres of them, all lined up and simply rusting away.

The inevitable result was that the company started to lose money, so much so that in 1975 a returning Labour government decided to nationalise it in order to save it, at which point the trade unions, particularly at Longbridge, went on strike for higher pay, knowing that the new owners had no choice but to comply. And so British Leyland continued its downward spiral, along with the entire British economy, with the further result that, in 1976, the then Chancellor of the Exchequer, Denis Healey, actually had to apply for a loan from the IMF: possibly one of the greatest humiliations the UK has ever suffered.

What it did, however, was finally convince the British people to try something different. And in May 1979 they duly elected a new Conservative government under Margaret Thatcher. 

As in the case of the Reagan administration in the US, such was the revolutionary effect of the Thatcher government on the economy that many people have seen it as some kind of radical new departure in economic thinking. In America, they even have a name for it: they call it Reaganomics. In reality, however, it was merely a return to the monetary and fiscal discipline of the 1950s.

To eliminate inflation, both Reagan and Thatcher first put up interest rates, thereby reducing the rate of increase in the secondary supply of money brought about by credit issuance. To avoid depressing the economy, they then cut direct taxation for both individuals and businesses and balanced this by reducing public expenditure. Margaret Thatcher also privatised or simply closed down loss-making nationalised industries like British Leyland; many of those privatised, such as British Telecom (BT), went on to be highly successful. In fact, the whole undertaking was a massive success. Although in the process Margaret Thatcher probably made herself the most hated British Prime Minister of all time, with those on the left quite naturally condemning her for cutting taxes for the rich while reducing benefits for the poor, she nevertheless turned the economy around, such that by the mid-1980s Britain was not just stable again but booming.
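
The logic of that first step deserves to be spelt out. Most new money enters the economy as bank credit, and the dearer credit is, the more slowly it is taken up. The toy projection below (in Python; the growth rates are stylised assumptions of mine, not figures from either government) simply compounds a money stock at two different rates of credit issuance to show why raising interest rates bears down on inflation:

    def money_supply_path(initial: float, credit_growth: float, years: int):
        """Compound a broad money stock at a constant annual rate of
        net credit issuance."""
        path = [initial]
        for _ in range(years):
            path.append(path[-1] * (1 + credit_growth))
        return path

    # Stylised rates only: cheap credit versus credit after a sharp rate rise.
    loose = money_supply_path(100.0, 0.15, 5)   # 15% p.a. while rates are low
    tight = money_supply_path(100.0, 0.05, 5)   # 5% p.a. once rates go up

    print(f"After 5 years: loose {loose[-1]:.0f}, tight {tight[-1]:.0f}")
    # With real output growing far more slowly than 15% a year, the gap
    # between the two paths is, roughly, the inflation the rate rise removes.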

And at that point it might have been hoped that the negative effects of Bretton Woods would have finally worked their way through the system. After all, both Britain and America had learnt their lessons, discovering that all that was required for economies to function properly was for governments to act responsibly: something which the gold standard was designed to ensure but which could just as easily be accomplished without the tie to gold, simply by politicians behaving in the best interests of their countries rather than in the best interests of their parties or themselves.

Unfortunately, not only are such lessons easily forgotten, especially when they are barely understood in the first place, but at this point another of the pillars of the Bretton Woods Agreements, their internationalism, started to come into play in a way that had further economic, financial and monetary consequences: consequences which not only led to the financial crash of 2008, but which are still in play today and which will almost certainly cause the whole financial and monetary system eventually to collapse, as I shall endeavour to explain in ‘The End of an Era (Part III)’.