The overarching objective of the Bretton Woods Conference,
which met at the Mount Washington Hotel in New Hampshire towards the end of
the Second World War, was to create a blueprint for a new world order that
would ensure – or
so it was hoped – that
such world wars would never happen again. To this end it laid the foundation for
a new financial and monetary system designed to prevent the kind of economic depression
the world had experienced in the 1920s and 30s and which had provided such
fertile soil for extremist politics. The problem, as I pointed out in Part I of
this essay (which you can find here), was
that, based on gold, this system itself stored up monetary and financial
problems of its own, the first of which –
the tide of inflation that swept around the world when the system finally
collapsed – took
more than a decade to cure, while the second – the need for the USA to maintain the reserve status
of the dollar at any cost –
has still, I believe, to reach its inevitable dénouement, when it too will
surely collapse under the accumulated weight of US Treasury debt, which
currently stands at over $21 trillion and is growing at a rate of around $1
trillion per year, much of it used to fund the vast military and intelligence
machine needed to defend what, in time, its very cost will ultimately destroy.
However, these were not the only problems which the Bretton
Woods Conference set in train. Of potentially even greater destructive force
was the political path upon which the conference set the world: a path which,
at the time, would probably have been called ‘internationalism’ but which we now
refer to as ‘globalism’, and which, from its inception, had two main aspects.
The first, defined by what it was against, was what one
might call anti-nationalism. Because both World War I and World War II were
perceived as having been caused by either national rivalries or the need to
assert or reassert national pride, nationalism, both as a political ideology
and as a state of mind, was now seen as a force to be curbed. To achieve this, it
was therefore thought that both the role and the power of nation states had
to be curtailed or counter-balanced by new international institutions – principally the UN – which would attempt
to control not only how nation states behaved on the international stage but also how they
behaved domestically.
That this was not only inherently very difficult but
essentially at odds with the even more important political principle that
nation states should be both self-governing and democratic – thereby providing a bastion
against authoritarianism –
will be the subject of the final part of this essay – The End of an Era (Part III) – in which I shall
discuss the consequences of this contradiction in the light not just of Brexit but
of the numerous other nationalist movements throughout Europe which are
currently attempting to wrest back sovereignty from an undemocratic and
increasingly authoritarian EU. In Part II, however, I want first to concentrate
on what, on the face of it, might be seen as the more positive aspects of this
new internationalism, especially the efforts made, both at Bretton Woods and in
the years that followed, to eliminate the kind of protectionism that had precipitated
a fall of 65% in world trade during the Great Depression, and which led to the
signing of the General Agreement on Tariffs and Trade (GATT): a
comprehensive template for future world trade which came into force on 1st
January 1948 and which had three main objectives:
1. To make it illegal for one country to ban or restrict imports from another except under a small number of clearly defined conditions;
2. To prevent countries from discriminating against some trading partners by providing preferential tariffs to others except under approved, formal trade agreements;
3. To progressively reduce or eliminate tariffs altogether on as many goods and commodities as possible.
Even more importantly, it changed the nature of the
relationship between the developed nations of the world and many developing economies
which had previously been held back by asymmetrical tariff structures. Gone
were the days, for instance, in which colonial or former colonial powers like
Great Britain could trade manufactured goods for commodities without their own
manufacturers coming under pressure from manufactured goods flowing in the
opposite direction. Indeed, under the GATT, developing nations could now impose
higher tariffs on imports from Europe and America in order to protect their own
fledgling industries: a provision which effectively forced first world manufacturers
wanting to sell to developing countries to physically set up factories in those
countries themselves, thereby bringing into being the first truly multinational
corporations, with US firms like Ford and Coca-Cola, in particular, setting up
production plants all around the world, not just in order to service the local
markets but to make use of low cost labour in order to export more cheaply
elsewhere.
In fact, it didn’t take long for US multinationals to start
offshoring production for importation back into the USA: a development which also
very quickly revealed some of the negative consequences of this new globalist
pattern of trade. For while the offshoring of manufacturing might have been
good for both multinational corporations and consumers – providing the latter with cheap imported goods – it was far less benign with respect to first world
employment and became even less so after 1979 when, under the leadership of Deng
Xiaoping, China finally opened itself up to the world, undertaking a number of
economic reforms which ultimately led to a US/China accord that was greatly to
China’s advantage. For in order to be able to buy the western technology that China
needed for its widespread programme of modernisation, the US agreed to aid
China’s own exports to the west by allowing the Chinese to peg their currency – the renminbi or yuan – to the dollar at an
exchange rate set solely by the Chinese themselves.
As mistakes go, I believe that this will one day be seen as
one of the worst in post-war history, exceeded in its short-sightedness only by
the decision at Bretton Woods to tie the rest of the world’s currencies, via the
dollar, to the price of gold. For the inevitable consequence of giving China this licence was
a period of almost three decades –
up until 2006, in fact –
during which the yuan was traded at anywhere between 30% and 40% below its true
value, making Chinese exports to the rest of the world so cheap that even the EU’s
high import tariffs couldn’t protect European industry from the unfair
competition.
For this wasn’t the kind of free trade championed by the
likes of Margaret Thatcher and others at that time. This was price manipulation
by a communist government which suppressed the wages of its workers in order to
produce consumer goods for export which its own people could not themselves afford.
It could almost be described as a form of economic warfare, designed to destroy
the industrial foundations of an enemy. Yet neither the multinational
corporations which benefited from moving their production to such a low cost
environment, nor the guardians of our new internationalist world order were
inclined to do anything about it, not least, in the case of the latter, because
they clearly regarded the political integration of China into the new global
economy as being of far greater importance than the continued economic health
of the West, which, to be fair, they probably didn’t even realise was under
threat. For like everybody else, they told themselves The Big Lie: that growth
in China meant greater prosperity for the entire world and that any loss in low-grade
manufacturing jobs in Europe and America would be more than compensated for by
the growth in higher-grade jobs in the new technology sectors that were then
emerging and in the service industries. The financial services
industry, in particular, duly took this opportunity to plead its case for deregulation,
thereby preparing the ground for the financial crash of 2008 and the perilous
state in which we still find ourselves even today.
I say this because although banking deregulation is seldom cited
as one of the causes or necessary preconditions for what happened in 2008, there
was a very good reason why, up until the 1980s, the banking industry had been so
closely controlled, with its operations strictly separated into five main types
of financial institution which can be listed as follows:
- Commercial or High Street Banks which provided short term loans, mostly in the form of overdrafts, to both retail and business customers, funded by short term deposits in the form of current or chequing accounts.
- Building Societies (in the UK) or Savings and Loan companies (in the US) which provided long term loans in the form of mortgages to retail customers, funded by long term deposits in the form of personal savings accounts. In the case of many UK Building Societies, in fact, customers were often required to have saved with them for a period of up to five years before they were eligible for a mortgage, thus not only ensuring that these borrowers were reliable but providing the long term financing which this kind of banking required.
- Merchant or Investment Banks which provided medium term loans to businesses, mostly in the form of medium term debentures, funded by the capital of the banks, themselves: a high-risk form of lending which naturally tended to make these banks extremely cautious with respect to whom they lent their money.
- Pension Funds which provided long-term income streams to long term savers, funded by long term investments in equities and bonds, the very nature of these long term liabilities making these funds also very conservative in the kind of investments they made.
- Stockbrokers who made medium to long term investments in equities and bonds on behalf of individual clients whose own attitudes to risk largely determined the type of investments made.
Note that with the exception of stockbrokers – who never incurred any real
risk to themselves except with regard to their reputations – the very structure of
most of these institutions made them extremely risk-averse, with the result
that getting a loan or investment out of any one of them required a very good business
case and a lot of persuasion. The real importance of these divisions, however,
was that they matched different forms of lending to different forms of funding,
thereby avoiding the most serious mistake any banker can ever make, which is to
borrow short to lend long. Once deregulation had taken place, however, nearly
all the new financial institutions that were formed – whether by organic expansion or mergers and
acquisitions – now
covered most or all of the above functions, especially the first four, with the
result that borrowing short to lend long not only became possible but, due to
other, parallel developments –
which I shall outline below –
became more or less normal.
When Bear Stearns collapsed in 2008, for instance, it is
reported that while almost 20% of its $365 billion in assets were in the form
of highly illiquid mortgage-backed derivatives, nearly 40% of its funding was
in the form of overnight loans which had to be renewed every day and which also
vastly exceeded its own capital of just $11.1 billion. When the short term
lenders began to worry about whether the bank’s long term mortgage-backed
derivatives were really worth as much as they were supposed to be, and consequently
refused to renew the overnight loans, it thus became instantly insolvent.
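To get a feel for the scale of that mismatch, here is a rough back-of-the-envelope calculation using only the approximate figures quoted above (and treating total funding as roughly equal to total assets, which is an assumption on my part, not a reported balance-sheet figure):

```python
# Rough illustration of the Bear Stearns funding mismatch, using the
# approximate figures quoted above (all values in billions of dollars).
total_assets = 365.0          # reported total assets
illiquid_share = 0.20         # ~20% held as illiquid mortgage-backed derivatives
overnight_share = 0.40        # ~40% of funding rolled over every night (assumes funding ≈ assets)
capital = 11.1                # the bank's own capital

illiquid_assets = total_assets * illiquid_share      # ~ $73 billion
overnight_funding = total_assets * overnight_share   # ~ $146 billion

print(f"Illiquid assets:   ${illiquid_assets:.0f} bn")
print(f"Overnight funding: ${overnight_funding:.0f} bn")
print(f"Capital:           ${capital:.1f} bn")
print(f"Overnight funding is ~{overnight_funding / capital:.0f}x capital")
```

On these numbers, roughly $146 billion had to be refinanced every single day against just $11.1 billion of capital, so the moment the overnight lenders hesitated, the game was up.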
Another consequence of deregulation was the increased
issuance of credit cards, with every former building society and other new financial
institution eager to get in on this highly lucrative business. The trouble was
that this was lending of a completely different type to any that had gone
before. For prior to deregulation, nearly all lending had been for investment,
whether in property or a business. Credit card lending, in contrast, is almost
entirely for consumption. Indeed, it is this that justifies such high credit
card interest rates. For whereas a borrower who borrows money to invest hopefully
ends up in possession of an asset which he can then sell if necessary to repay
the debt, the borrower who borrows to consume ends up with nothing: the meal
having been eaten; the holiday having become a distant memory. Thus while
borrowing to invest is still intrinsically risky – in that the investment may not turn out to be worth
as much as one might have hoped –
borrowing to consume is sheer reckless irresponsibility. And prior to
deregulation, no bank manager would have ever allowed a customer to act in this
way, both for the customer’s sake and the bank’s. Indeed, he would not only have
regarded it as bad banking practice but as morally reprehensible. So lucrative
was this business, however, with interest rates on many credit cards above 30%,
that moral scruples were simply brushed aside, allowing yet another formerly
abjured practice to become the norm.
Still, none of this would have led to the perfect financial
storm to which it eventually gave rise without one further ingredient. Unfortunately,
this was duly supplied in 1987 when the newly appointed chairman of the US
Federal Reserve, Alan Greenspan, announced to the world the end of ‘Boom and
Bust’, a phrase which the UK Chancellor of the Exchequer, Gordon Brown, was to repeatedly
use a decade later to describe the intended consequences of his own economic
strategy. The idea was to use monetary and fiscal policy to iron out the peaks
and troughs in what is called the ‘business cycle’: a natural sequence of ups
and downs in economic activity brought about by the fact that all businesses
tend to expand until they saturate their existing market or markets. The
problem is that they do not usually know what the saturation point is until they
have actually reached it, their first indication
of this very often being a build-up of unsold stock in their warehouses. Their
immediate reaction, therefore, is to cut production and perhaps even lay off staff,
which consequently has a knock-on effect on other businesses, both in their own
supply chain and in the local economy at large. As a result, these ‘business
cycle’ down-turns tend to be synchronised right across the economy and only
come to an end when a new point of equilibrium is reached, thereby allowing the
whole cycle to start again.
The new policy advocated on both sides of the Atlantic was
therefore to intervene, both fiscally and monetarily, whenever the first signs
of a general down-turn were detected: fiscally by increasing public expenditure
to compensate for the decrease in business spending; and monetarily by reducing
interest rates so as to encourage continued consumer spending. And initially it
worked: so well, in fact, that during the Clinton administration and beyond,
Alan Greenspan was thought to be a genius.
There were, however, a number of problems with this whole
policy. On the fiscal side, the main problem was that increasing public
expenditure very quickly began to seem like a general panacea for dealing with
any economic down-turn and therefore soon became a habit, leading to chronic
deficit spending: the very thing that the Bretton Woods Conference was set up
to prevent. On the monetary side, however, the problems were more acute. For
reducing interest rates to maintain consumer spending effectively brings consumption
forward. People buy today to consume today what they might otherwise have bought
and consumed tomorrow. Which, in many cases, means that they won’t now be
buying the item in question again for some time. After all, if you buy a new
car this year, you are unlikely to buy a new car next year, especially if you
have had to borrow the money for the purchase. For apart from anything else,
you’ve now got this debt.
As a general policy, therefore, reducing interest rates to
boost consumer spending in order to avoid an economic down-turn has diminishing
returns. The first one or two times it is used, it works well. But there is
only so much consumption you can bring forward and only so much debt you can
load on to the consumer’s shoulders before it loses its attraction. For people
just won’t borrow more money if they are already over-indebted.
Worse still, this policy works like a downward ratchet on
interest rates. For while central bankers are quick to reduce interest rates
whenever an economic down-turn is detected, they are generally much slower to
put them back up again when the economy recovers, citing worries over whether a
premature increase might ‘choke off the recovery’ as justification for the
delay. Throughout the 90s and 00s, as a result, interest rates fell steadily
all around the world, making it cheaper and cheaper to borrow money, especially
for banks who found themselves able to borrow cheap short term money from other
banks at 2 or 3%, while lending it at 30% on credit cards or, more importantly,
at 11 or 12% on mortgages.
I say ‘more importantly’ because while, by then, banks were
providing 100% mortgages at up to six times annual salary to anyone who could
sign their name –
regardless of whether they had sufficient income to service the interest on
them, let alone repay the principal –
property backed mortgages were still inherently less risky than unsecured
credit cards, and were made even less so by their massively increased
availability. For their very cheapness and the fact that they were now
available to many more people than was traditionally the case meant that demand
for property was also greatly increased, which, in turn, increased house
prices. This meant that even if an over-extended borrower were to default, the
mortgage could simply be foreclosed, the house sold and the money recovered
without loss.
As a further consequence, this also meant that lending on
credit cards to anyone who owned a property was also less risky. For if their
credit card debt became unpayable –
or even unserviceable –
they could simply take out another mortgage on their rapidly appreciating
property and start afresh. Indeed, for many, it actually took the worry out of
credit card spending, allowing them to hit the High Street with gay abandon,
running up debts they would never have otherwise dreamed of incurring.
But surely, you say, someone had to know that this was going
to end badly. The government for instance? Couldn’t they see where this was headed?
Almost certainly yes. The problem, however, was that, while on the surface, the
economy (in the UK at least) may have looked rosy – largely as a consequence of rising house prices and
all this consumer spending –
underneath things were slightly different. This can be seen from the GDP
figures for the seven years leading up to the financial crash. For while,
according to the Office for National Statistics (ONS), overall GDP in the UK rose
by an average of 3.01% per annum, more than half of this was due to growth in
just two sectors:
- The financial sector, itself, which grew at an average rate of 9.08% per annum, exceeding 9% of the entire economy by 2008, and
- Public expenditure, which grew at an average annual rate of 6.41% during the same period.
In contrast, the real
economy – that
which produced the goods and services that people were actually willing to pay
for and which employed 85% of the working population – grew at an average rate of just 1.49% per annum.
Hollowed out by having exported its manufacturing core to the Far East, the
part of the economy which created the country’s real wealth was thus in a state
of near stagnation.
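For readers who like to see the arithmetic, the sketch below shows how a headline figure of around 3% can be dominated by two fast-growing sectors. The growth rates are the ones quoted above; the GDP shares, however, are round numbers I have assumed purely for illustration, not official ONS weights.

```python
# Illustrative only: the growth rates are those quoted above; the GDP
# shares are assumed round numbers, NOT official ONS weights.
sectors = {
    #  name                (assumed share of GDP, average annual growth)
    "financial sector":    (0.08, 0.0908),
    "public expenditure":  (0.17, 0.0641),
    "real economy":        (0.75, 0.0149),
}

contributions = {name: share * growth for name, (share, growth) in sectors.items()}
headline = sum(contributions.values())

for name, contrib in contributions.items():
    print(f"{name:<18} contributes {contrib:.2%} of GDP growth")
print(f"implied headline growth: {headline:.2%}")

two_sectors = contributions["financial sector"] + contributions["public expenditure"]
print(f"share of growth from finance + public spending: {two_sectors / headline:.0%}")
```

On those assumed shares the headline comes out at a little under 3% a year, with roughly 60% of it supplied by finance and public spending, which is broadly consistent with the figures above.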
From the point of view of the government, this therefore
posed a number of problems. For there was no way that it could afford to rein
in the financial sector. Not only was it providing a large part of the
country’s economic growth, but it was also supplying a large slice of the taxes
that were paying for the increases in public expenditure which comprised the other
significant portion of what appeared to be a booming economy. On top of that, it
was also UK banks that were financing most of the government’s annual deficit
which, by 2007, had already reached around £50 billion.
Worse still, the population at large, unaware of the growing
financial bubble upon which their apparently affluent life-style was based,
were partying like never before. With house prices having doubled in less than
a decade and set to go on rising, people felt as if they’d won the lottery. And
there was no way that any politician was going to pour cold water on that.
But surely the banks, themselves, saw the danger? Of course,
they did. But they weren’t going to curtail their lending. They were making too
much money for that. Making use of the fact that financial institutions which
sold mortgages could now also create and sell such exotic financial instruments
as mortgage-backed derivatives –
something which building societies could never have done – they decided instead to
simply spread the risk by packaging up their sold mortgages into blocks and then
selling shares in these blocks to other financial institutions.
And for a while, indeed, these Collateralised Debt
Obligations, or CDOs as they were called, were actually very popular. For based
on a percentage of the mortgage interest coming into the original lending bank,
not only did they produce a good annual rate of return, but they also allowed secondary
investors to share in this lucrative business without having to sell mortgages
themselves. Even more importantly, with property prices continuing to rise, investors
seemed certain to get their money back upon maturity. Indeed, so safe did these
long term mortgage-backed derivatives seem that banks like Bear Stearns were
even prepared to borrow short to invest in them, making a significant profit on
the differential interest rate.
The problem, of course, was that, as with any rising market
in which buyers have to borrow to purchase what is sold, there comes a point at
which the price exceeds that which borrowers can practically manage: a problem
which was further exacerbated by the fact that, as with all such tipping
points, its exact location wasn’t known until it was actually passed and
borrowers started to default. Worse still, defaults meant that lenders started
tightening their lending policies, such that there were now fewer buyers to
purchase the foreclosed properties, leading to a drop in prices in order to
ensure disposal. This, in turn, then led to problems for borrowers who had been
sold mortgages with graduated payments, in which the bulk of the burden only
kicked in when the borrower was supposed already to have built up enough equity in
the property to dispose of it at a profit if required. The result was an
increase in fire-sales and a crash in property prices right across both America
and Europe, followed by a cascade of defaults, which now brought into question
the real value of all the CDOs that had been issued and the solvency of those
banks that had borrowed to buy them.
In fact, so bad was the contagion which now flowed through
the banking system that many people will be surprised to discover just how
little money the banking industry as a whole actually lost. In the four years
from 2008 to 2011, for instance, the four largest UK banks – HSBC, Barclays, Lloyds
HBOS and RBS – lost
a total of just £47.34 billion, with RBS responsible for around three quarters
of this amount. Of course, this figure, which is merely an aggregate of the
losses stated on the four banks’ P&L accounts, doesn’t take into account
lost profits. But even projecting profits forward from earlier years, the total
figure would still not have exceeded £100 billion. The real contagion, therefore,
was not one of actual losses, knocking over banks one at a time like a row of
dominoes, but one of fear and hence paralysis. For not knowing which financial
institutions were solvent and which were not, banks around the world did not
know who they could lend to and who they could not, and so simply stopped
lending to one another altogether, freezing the circulation of money in a way that
caused the collective balance sheets of the same top four UK banks to shrink by
a massive £1.8 trillion.
If, like me, you find ‘money’ a bit of a mystery, you may of
course wonder how this is possible. If the banks didn’t ‘lose’ the money, where
did the £1.8 trillion go? What you have to remember, however, is that under
normal circumstances only around 3% of the money supply is ‘base’ money issued
by the central bank. The other 97% is largely created by the banking system
itself, as I shall endeavour to explain.
Suppose, for instance, that Mr. Brown wants to buy a house
from Mr. Green. He therefore goes to his bank (Bank A) and borrows £250,000. Bank
A doesn’t actually have £250,000, but once the deeds to the house have been
exchanged, it simultaneously borrows the money from Mr. Green’s bank (Bank B)
and transfers it back again so that it can be credited to Mr. Green’s account.
We now have a situation in which Mr. Brown owes Bank A £250,000 (in the form of
a mortgage); Bank A owes Bank B £250,000 (in the form of an inter-bank loan);
and Bank B owes Mr. Green £250,000 (in the form of an account from which, in
principle, he may withdraw the money at any time but, in practice, probably won’t).
The upshot is that, depending upon which way you look at it, either £250,000 or
£500,000, which didn’t exist before, has now been created out of thin air,
expanding the balance sheets of both banks involved by this amount, with each bank
gaining an extra £250,000 in both liabilities and assets.
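For anyone who finds this easier to follow as double-entry bookkeeping, here is a minimal sketch of the same transaction, using the figures from the example above. It is only an illustration of the accounting, not a model of how inter-bank settlement actually works.

```python
# Minimal double-entry sketch of the Brown/Green example above.
# Each bank's balance sheet is a pair of running totals: assets and liabilities.

class Bank:
    def __init__(self, name):
        self.name = name
        self.assets = 0.0       # loans made (money owed TO the bank)
        self.liabilities = 0.0  # deposits and inter-bank borrowing (money owed BY the bank)

    def __repr__(self):
        return f"{self.name}: assets £{self.assets:,.0f}, liabilities £{self.liabilities:,.0f}"

bank_a = Bank("Bank A")
bank_b = Bank("Bank B")
loan = 250_000

# 1. Bank A lends Mr. Brown £250,000 (the mortgage): an asset of Bank A.
bank_a.assets += loan
# 2. Bank A funds the payment by borrowing £250,000 from Bank B:
#    a liability of Bank A and an asset of Bank B.
bank_a.liabilities += loan
bank_b.assets += loan
# 3. Bank B credits Mr. Green's account with £250,000: a liability of Bank B.
bank_b.liabilities += loan

print(bank_a)   # Bank A: assets £250,000, liabilities £250,000
print(bank_b)   # Bank B: assets £250,000, liabilities £250,000
```

Run the three steps in reverse – Bank B calling in its inter-bank loan, Bank A calling in the mortgage – and both balance sheets shrink again, which is exactly the contraction described in the next paragraph.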
Thus it is that, while banks keep lending to each other, the
money supply expands. When they stop –
and more especially when they start to unwind their respective positions,
calling in debts in order to repay debts of their own – it contracts. And that is precisely what happened
here. Unable to borrow money from other banks but with liabilities falling due,
each bank had to call in money lent, not just from other banks, but also from
commercial customers as well, in many cases reducing or even removing overdraft
facilities from businesses, thereby not only shrinking the money supply but adding
even further to the economic downturn which, in Britain at least, was to become
the deepest recession since the early 1920s, with a fall in GDP of over 6%.
What this revealed more than anything else, however, was not
just the anarchic dysfunctionality of our deregulated financial system but the inherent
weakness in the underlying economy, which, as a result of two decades of
accumulated debt, lacked the kind of robustness necessary to bring about an
early recovery. Worse still, with
interest rates already at rock bottom and government spending already at record
highs, it also very quickly became apparent that the previously
relied-upon policies for dealing with such economic downturns – reducing interest rates
and increasing public expenditure –
had all been used up.
Not, of course, that this prevented governments all around
the world from attempting to apply these policies anyway. In a coordinated
effort, central banks worldwide duly cut interest rates once again, in many
cases reducing them to below zero in an attempt to deter commercial banks from
placing money on deposit with them, thereby forcing them – or so it was hoped – to lend to consumers
and businesses instead. The problem was, of course, that with consumers already
up to their eyes in debt –
and actually starting to pay down their credit cards – and businesses disinclined to invest in the middle
of a recession in which demand was still declining, there just weren’t that
many takers. And even though politicians still attempted to morally shame banks
for not lending enough –
especially after all the public support they, themselves, had received – the fact is that you
can’t force people to borrow money if they don’t want to.
So that left increasing public expenditure as the only other
weapon available. And, as in many other countries, that’s what many people in
the UK also called for. The problem was that, due to the recession, UK tax
revenues had fallen massively, tripling the annual deficit from £50 billion to
£150 billion in just one year. Any additional borrowing to finance increased
public expenditure, therefore, would have risked lender resistance, very
probably forcing the Treasury to increase the interest paid on its bonds or
risk the failure of a bond issue altogether.
In fact, with the exception of the USA, most countries that
tried to borrow and spend their way out of the recession – most of them in Europe – found opposition in the
bond markets and, in many cases, had to turn for support to what became known
as the troika: a combination of the IMF, the ECB and the European Commission, which duly forced
supplicant nations to cut public expenditure as a condition of their help.
With an annual deficit of £150 billion – around 9% of GDP at the
time – this might
also have been the fate of the UK, even without an increase in borrowing to
finance increased public expenditure, had it not been for the early
introduction of a programme of quantitative easing (QE) by the Bank of England,
which duly printed additional base money with which to purchase UK Treasury
bonds from subscribing commercial banks, thereby not only ensuring that these
banks had enough liquidity to subscribe to the next Treasury issue, but guaranteeing
that the price of the bonds stayed high while their corresponding yield
remained low.
Not, of course, that it was ever admitted that this was the programme’s
primary purpose: an admission which, had it occurred, would have severely
dented confidence and would therefore have been self-defeating. Instead, the
programme was usually presented as simply another tool in the BoE’s armoury for
increasing general liquidity within the banking system and therefore for helping
to stimulate economic growth. The fact remains, however, that without the BoE’s
purchase of £325 billion worth of UK Treasury bonds from UK commercial banks between
2008 and 2011, it is very doubtful whether UK banks, on their own, would have
been able to purchase even half of the £520 billion worth of bonds issued by
the Treasury during that period. Even more tellingly, all this money printing
and purchasing of bonds had absolutely no effect on the real economy which remained
stubbornly stagnant throughout the entire three and a half years in which the
programme was in operation.
Indeed, as a tool for stimulating economic growth, QE has proven
itself to be spectacularly ineffective almost every time it has been used. Since
2015, for instance, the European Central Bank (ECB) has been pouring between €60 and €90 billion a month into
the Eurozone financial system and, to date, has purchased more than €2.4 trillion in financial
assets, including corporate and covered bonds as well as Eurozone government
bonds. And yet economic growth in the Eurozone has not managed to climb above
2% during this entire period, with some countries, like Italy, remaining in
almost perpetual recession. For the problem in Europe, as in most of the developed
world, is not actually one of liquidity. As shown in the above example, the
banking system, itself, is quite capable of producing the necessary finance if
there is a requirement. It doesn’t need a central bank to print more base money
to do this. What it needs is a demand for finance from the real economy. And
this is what is lacking. For with stagnant real incomes, over-indebted
consumers provide very little incentive for businesses to invest in increased
production or improved productivity. And without growth in either of these
areas, real incomes, in turn, remain suppressed. This is the vicious circle we
are in. And printing money and pumping it into the financial system does
nothing to solve this problem.
In fact, in many ways, it makes it worse. For once the money
has been pumped into the financial system, it has to be invested somewhere. And
as it is not being invested in the real economy, it ends up being invested in
either property or financial assets. As a result, property prices have risen
once again, in many places to levels at which those living on suppressed median
incomes simply cannot afford to buy a house or apartment of their own and are therefore
forced to spend an even larger proportion of their income on rent, thereby
depressing demand in other parts of the economy even more.
Worse still is the state of our financial markets. For with
central banks buying up so many financial assets, and so much money swirling
around within the financial system itself, owners or managers of this money are
desperate to find any financial asset in which they can profitably invest. The
result is that just about all financial markets – whether they be for stocks, treasury bonds,
corporate bonds or ever more exotic derivatives – are at or near record highs. Nor is this helped by the fact
that, unable to find profitable investments in the real economy, the only way for
many businesses to secure increased shareholder value is through cash-based
mergers and acquisitions and share buybacks, the technical simplicity of the
latter making them particularly attractive to beleaguered CEOs who have nothing
better to do with all the cheap money that is available to them.
To see why this is, suppose that one such CEO is the manager
of a company making £1 million per year in pre-tax profits, which, at an
admittedly rather low but numerically helpful price/earnings ratio of ten to
one, would give it an overall value of £10 million. Suppose, too, that the
company has one million issued shares which are therefore valued at £10 each. Now
suppose that the CEO decides to buy back 20% of the shares. So he goes to his
bank and borrows £2.2 million at a rate of interest significantly lower than the
dividend yield he is currently paying his shareholders, and offers to
buy 200,000 shares from his shareholders at £11 each, an offer which most of
them jump at. The result is that the company now only has 800,000 shares but is
still making £1 million per year in pre-tax profits, which, at a P/E of 10:1
still makes it worth £10 million. This means that each of the remaining shares
is now worth £12.50, an increase of 25% which so delights the shareholders – who have already made a
10% gain on the shares they sold –
that it earns the CEO a hefty bonus even though he has done nothing to improve
the underlying value of the company or, indeed, the overall condition of the economy.
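The arithmetic can be checked in a few lines. The figures below are exactly those of the hypothetical company in the example, with the price/earnings ratio held constant and, as in the example, the new £2.2 million of debt and its interest ignored:

```python
# The hypothetical buyback example above, worked through step by step.
pretax_profit = 1_000_000      # £1m per year
pe_ratio = 10                  # price/earnings ratio, held constant as in the example
shares_before = 1_000_000

company_value = pretax_profit * pe_ratio          # £10m
price_before = company_value / shares_before      # £10.00 per share

buyback_shares = 200_000
offer_price = 11.00                               # 10% premium to tempt shareholders
cost_of_buyback = buyback_shares * offer_price    # £2.2m, borrowed from the bank

shares_after = shares_before - buyback_shares     # 800,000
price_after = company_value / shares_after        # £12.50 per share

print(f"Price before buyback: £{price_before:.2f}")
print(f"Cost of buyback:      £{cost_of_buyback:,.0f}")
print(f"Price after buyback:  £{price_after:.2f} "
      f"({price_after / price_before - 1:.0%} gain for remaining shareholders)")
```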
And the same thing occurs in the case of cash-based mergers
and acquisitions. Twenty years ago, most purchases of one company by another
were made in shares. The acquiring company issued more of its own shares and exchanged
them for shares in the acquired company. In this way the total equity in the
new combined corporation was not reduced. Today, however, with money so cheap
to borrow, most acquisitions are executed partly or entirely in cash, thereby
wiping out some or all of the equity of the purchased company. When the German
pharmaceuticals giant Bayer bought Monsanto in June 2018, for instance, it
paid $66 billion for the American GM specialist entirely in cash, increasing
its own share price astronomically but saddling itself with an extra $66 billion
in debt as a consequence.
Nor was this exceptional. Mergers and acquisitions have been
steadily increasing in both number and value over the last six or seven years.
In 2018, there were more than 49,000 mergers and acquisitions worldwide, at a
total value of $3.8 trillion, most of it paid in cash.
The problem is that this increased leverage makes companies
vulnerable. For while costing more in annual dividends than the interest paid
on bonds, equity never has to be redeemed. Moreover, unlike interest, dividends
can be cut or cancelled if the company gets into financial difficulties. Thus
while funding acquisitions through debt may look attractive today, with
interest rates very low, it may seem rather different tomorrow if interest
rates were to rise. Indeed, it is estimated that up to 25% of all listed
corporations around the world would face some degree of difficulty if interest
rates rose by any significant amount. Not only would they not be able to service
their debts, but in many cases they would not be able to refinance themselves
when the debts fell due, thereby earning for themselves the soubriquet of
‘Zombie Corporations’, or corporations that are actually already dead but just
don’t know it yet.
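To see how quickly a rate rise can turn a leveraged company into one of these zombies, consider a purely hypothetical illustration; the figures below are invented solely to show the mechanism and do not describe any particular company:

```python
# Purely hypothetical illustration of interest-rate vulnerability for a
# company that has financed an acquisition with cheap debt.
operating_profit = 10_000_000      # £10m per year before interest
debt = 200_000_000                 # £200m of acquisition debt

for rate in (0.02, 0.04, 0.06):    # interest rate payable on the debt
    interest = debt * rate
    coverage = operating_profit / interest
    status = ("comfortable" if coverage >= 2
              else "strained" if coverage >= 1
              else "cannot service debt")
    print(f"rate {rate:.0%}: interest £{interest/1e6:.0f}m, "
          f"coverage {coverage:.1f}x -> {status}")
```

At 2% the debt is easily affordable; double the rate and the margin is wafer thin; at 6% the company can no longer cover its interest out of operating profit, let alone repay or refinance the principal when it falls due.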
Not only has very little of this ‘financial engineering’
therefore been to the benefit of the corporations, themselves, it has also done
nothing for the wider economy. Indeed, for the most part it has no effect upon
the wider economy at all. Its only benefit has been to the guardians of this
vast financial machine who have made the ‘financialisation’ of the economy
their business: the hedge fund managers and heads of investment banks who,
along with their clients, have made billions out of gaming the system in this
way, while the real economy, along with the incomes of all those who labour
within it, has remained more or less stagnant.
Indeed, it is as if the economy, itself, has been divided
into two: the real economy, which produces the goods and services which we all need
to live, but which is in slow decline; and the financial economy, where people
are turned into billionaires overnight, and which seemingly continues to expand
with endless ease. For creating money without having to create any extra real
wealth is easy. All you have to do is press the ‘enter’ button on your keyboard
and there it is: another million in someone’s bank account. The trouble is that
creating money without creating any extra real wealth, as demonstrated by the
Weimar Republic in the early 1920s and as the participants at the Bretton Woods
Conference tried to teach us in 1944, is the ultimate recipe for disaster.
In April this year, as if in reminder of this fact, the
Institute of International Finance reported that in 2018, total global debt – sovereign, corporate
and household –
reached a staggering $243 trillion, more than three times global GDP: an amount so
vast it can never be repaid and must eventually bring down the world’s entire
financial and monetary system. Indeed, so enormous has this problem now become
that it is hard to understand how we could have got ourselves into such a position,
especially as there must be those in power who saw this happening and yet did
nothing about it. It almost seems like a repeat of the years leading up to the
2008 crash, when politicians turned a blind eye to all the excessive mortgage
lending that was then taking place. It speaks, therefore, to a kind of
moral corruption: one which is apparently not just confined to elected
politicians desperate to keep their electorates sweet, but which also extends
to the guardians of our new globalist order, who would seem to be just as determined
to preserve the status quo as everyone else, even while it pushes the world ever
closer to destruction.
And it is this, I believe, that people throughout the West
are now beginning to sense: that our globalist masters – that small, unelected coterie of non-governmental
technocrats, global financiers and heads of multinational corporations, whom we
watch descend upon Davos each year in their private jets – are not only not in control of what is happening but
are either in denial –
still believing us to be on course for the brave new world envisioned at
Bretton Woods – or
are cynically content to oversee an End of Days they know they cannot prevent
but from which they are nevertheless determined to squeeze the very last drops
of power and prestige. And it is this fecklessness, I believe, combined with an
air of entitlement and a total lack of care or concern for those they have
betrayed and on whom they have turned their backs, against which populist
movements all across Europe are now beginning to rebel.
The only question that remains, therefore, is whether it’s
too late. And it is this question, along with how we allowed it to happen, that
I shall therefore be attempting to answer in the final chapter of this essay:
‘The End of an Era (Part III)’.