Wednesday, 26 February 2014

Milankovitch and the ‘Earth History’ Perspective on Climate Change, or ‘Why We Should All Stop Worrying About Global Warming’



It is, I believe, a little-known fact that we are currently living in an ice age. It is called the Pliocene-Quaternary Glaciation and it started about 2.58 million years ago.

That this may come as a bit of a surprise to some people is probably because it is generally believed – at least by people of my own generation – that the last ice age ended around 10,000 years ago. This, however, now turns out to have been a misinterpretation of the then available data by 19th-century geologists, who had neither the evidence nor the theoretical basis to distinguish between completed ice ages and the oscillating phases within an ice age, during which temperatures are seldom if ever constant.

In fact, the current ice age can probably best be characterised as one in which relatively long periods of glacial advance, resulting from global cooling, are punctuated by shorter periods of glacial retreat, resulting from global warming. What geologists in the 19th century took to be the end of an ice age was therefore only the end of the most recent period of glaciation, which lasted around 100,000 years, and the beginning of the current interglacial period, which has so far lasted around 10,000 years.
Over the last 2.58 million years there have therefore been many such cycles of glacial advance and interglacial retreat. With respect to true ice ages, however, there is evidence of only five in earth’s history, as shown in Figure 1 – although there may have been others for which there is no geological evidence, or for which the geological evidence has not yet been found. 

Name                   Period
Huronian               2.4 to 2.1 billion years ago
Cryogenian             850 to 630 million years ago
Andean-Saharan         460 to 420 million years ago
Karoo                  360 to 260 million years ago
Pliocene-Quaternary    2.58 million years ago to present

Figure 1: Known Ice Ages

What is truly remarkable about these five ice ages, however, is how very little we seem to know about any of them – at least outside scientific and academic circles. If you look up each one on Google, for instance, you will find out:
  •  that evidence of the existence of the Huronian ice age, which formed during the Proterozoic eon, comes from unique rock formations in an area north of Lake Huron.
  • that the Cryogenian ice age, which, as its name suggests, was probably the most severe of the known ice ages, covered the entire planet – turning it into a snowball in space – and that it was probably brought to an end by tectonic or volcanic activity, which threw enough greenhouse gases into the atmosphere to start a period of global warming – though this, of course, is largely conjecture.
  • that of the Andean-Saharan ice age, nothing appears to be known whatsoever.
  • that, beginning towards the close of the Devonian period, when land-based plants were spreading across the continents, the Karoo ice age was very possibly caused by this new vegetation extracting carbon dioxide from the atmosphere, thus reducing the greenhouse effect and starting a period of global cooling – which, again, is pure speculation.
  • that of the Pliocene-Quaternary glaciation, the only thing that is known for certain is that it is still going on.
That we know this – when the 19th-century geologists did not – is because we now have ice-core samples from Antarctica and the Greenland ice-sheet, supplemented by even longer records from deep-ocean sediment cores. These show that, around 2.58 million years ago, average global temperatures began to decline below the Vostok mean (named after the research station in Antarctica where this benchmark was first established), and that, despite the increasing amplitude of the fluctuations, they have continued to decline ever since, as shown in Figure 2.


Figure 2: Ice Core Record of the Pliocene-Quaternary Glaciation
 
Significantly, what the increase in the amplitude of the oscillation tells us is that, throughout the ice age so far, the periods of glaciation have been getting not only colder but longer, with the average length of a cycle having increased from 41,000 years during the first half of the ice age – as calculated to date – to around 100,000 years in the second half. What we don't know, of course, is whether this trend is still continuing, or whether it has already reached the bottom. What we do know, however, is that, whether or not global temperatures are now on the upturn, further oscillations will occur. Without very significant external intervention, trends like this don't simply come to an end.

Because the current interglacial period has already lasted for around 10,000 years, we also know that we are due to enter a new period of glaciation sometime soon – though 'soon' is intended here in climatological terms, and could mean anything from one year to five thousand.

In fact, it was the imminence of the next cycle of glaciation which sparked much of the current research into climate change. When, in the 1980s – as a result of work on ice-core samples – it was first discovered that we are still in an ice age and that a return to a period of glaciation was due, this was very much everyone's principal fear. And it was then that a number of climatologists came forward with the idea that the emission of man-made greenhouse gases might delay this outcome. As a result, politicians throughout the developed world channelled funds into research to discover whether this was, indeed, the case, only to be told by the researchers concerned that man-made greenhouse gases, and the global warming to which they give rise, might themselves prove to be a problem.

Since then, of course, nearly all our attention has been focused on addressing this issue. Billions of dollars have been spent, both on fundamental research and on the technologies and changes to our economy that would be required to rein back man-made global warming. Far less attention has been given, however, to the more fundamental issue that for the last 2.58 million years the earth has been going through repeated cycles of global warming and cooling without any help from ourselves.

Indeed, for those who are not already aware of it, it is important to note that the whole of our civilisation, from the development of agriculture around 10,000 years ago to the creation of all the technological marvels we see around us today, has taken place entirely within the current interglacial period – a timespan so short that, if you look for it on the graph in Figure 2, it is so hard up against the right-hand vertical axis that it is lost within the width of the printed line. Far from having any causal effect on the cyclical pattern of climate change so clearly discernible in the graph, our civilisation would not have been possible at all without this brief interglacial respite between the much larger cycles of global freezing.

Of even greater importance, however, is the fact that, being cyclical and entirely natural in origin, the pattern of climate change we see in Figure 2 is almost certainly driven by something which, in addition to being immensely powerful, is also fundamentally cyclical in nature. Given that there is only one object in our solar system which meets both these requirements – exhibiting both immense power and cyclical fluctuations – it is almost certain that this underlying pattern has something to do with our sun – or, more precisely, with our planet's orbit around it.

To understand how this could be possible, however, one first has to appreciate that the earth's solar orbit is not uniform, being subject to cyclical variations known as the Milankovitch Cycles, named after the Serbian astronomer and mathematician Milutin Milankovitch, who first postulated and demonstrated their existence while held in an Austrian POW camp during the First World War.

The Milankovitch Cycles are the result of three fundamental, though probably little known facts about the earth’s orbit:
  1. The fact that the solar or tropical year, on which our calendar year is based, is slightly shorter than the earth's orbital or sidereal year, thus giving rise to the phenomenon known as the precession of the equinoxes.
  2. The fact that the tilt of the earth's axis – the tilt which, as the planet orbits the sun, gives us our seasons – is itself subject to cyclical fluctuations in its angle.
  3. The fact that, due to the gravitational pull of the other planets, particularly Jupiter and Saturn, the eccentricity of the earth's elliptical orbit – the degree to which it diverges from the perfectly circular – is also subject to cyclical variations.
I shall explain each of these in turn, starting with the first, which has the largest effect upon our climate, and which the other factors therefore either augment or partially nullify.

So let’s start with the basics.

If asked to define the term 'year', I suspect that most people would say that it is the amount of time it takes for the earth to complete one whole orbit of the sun. This, however, is what is known as the sidereal year. Our calendar year, in contrast, is based on the tropical year, which is measured from equinox to equinox: the equinoxes being seasonal phenomena which result from the tilt of the earth's axis – an axis which not only stays tilted as the planet orbits the sun, giving us our seasons, but which also slowly wobbles, sweeping out a cone like that of a spinning top. What is probably less well known, however, is the fact that these two 'years' are not quite the same.

If, for instance, we were to select a point in the earth's orbit at which we could say that both years began – a point which, for convenience, we might choose to have coincide with one of the earth's two equinoxes – then by the time our wobbling planet next reached that same equinox, one tropical year later, it would, in terms of its sidereal journey around the sun, still not quite have returned to its original starting point. Admittedly, it only falls short by about 20 minutes. But cumulatively these annual discrepancies add up, the equinox slipping further and further back around the orbital path, until eventually – 25,772 years later – it slips all the way back to meet itself one lap behind, at which point the earth has notched up 25,772 tropical years but completed only 25,771 actual orbits.
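For those who like to check the arithmetic, here is a quick back-of-envelope verification – a rough sketch using round figures, so the result is approximate rather than exact:

```python
# Rough check of the precession arithmetic, using round figures:
# the tropical year falls short of the sidereal year by about 20.4 minutes.

MINUTES_PER_DAY = 24 * 60
SIDEREAL_YEAR_DAYS = 365.256      # one complete orbit, in days
SHORTFALL_MINUTES = 20.4          # annual shortfall of the tropical year

# Years needed for the annual shortfall to accumulate into one whole orbit:
years_to_slip_one_lap = SIDEREAL_YEAR_DAYS * MINUTES_PER_DAY / SHORTFALL_MINUTES
print(round(years_to_slip_one_lap))   # ~25,783 -- close to the 25,772 quoted

# Dividing the full cycle into the four climatic quarters discussed below:
print(25_772 / 4)                     # 6443.0 years per quarter
```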

But what possible effect could this have on our climate, you ask. Well, if our planet's orbit were perfectly circular, it wouldn't have any effect at all. But, as we know, the earth's orbit is elliptical. This means that there are times when the earth is further away from the sun than at others. And should those times coincide with the times when, due to its tilt, one hemisphere or the other is pointing away from the sun, the effect is magnified, as shown in Figure 3.


Figure 3: Solstices at Extremes of Ellipse

Here we see a situation in which the summer and winter solstices fall at the two extremes of the ellipse: one at perihelion, the earth's closest approach to the sun, the other at aphelion, its furthest remove, with the spring and autumn equinoxes falling at intermediate distances. For the hemisphere whose winter solstice coincides with aphelion, winters are likely to be the coldest in the entire cycle: the earth is furthest from the sun and that hemisphere is tilted away from it, so less light reaches it than at any other time. Its summers, conversely, occurring at perihelion, are likely to be the hottest in the cycle. This, in other words, is the configuration of seasonal extremes, with the harshness of the winters, in particular, standing out – while for the opposite hemisphere the same geometry works the other way round, moderating both its winters and its summers.

This is in marked contrast to the opposite configuration, shown in Figure 4, in which it is the equinoxes that fall at the extremes of the ellipse, with both solstices occurring at intermediate distances from the sun. In this case, neither winters nor summers are either reinforced or moderated by the earth's varying distance, and the seasons in both hemispheres are likely to be at their mildest and least differentiated, merging almost imperceptibly into one another.


Figure 4: Equinoxes at Extremes of Ellipse

Because it rests on the certainties of physical laws, geometry and mathematics, unlike many other factors affecting our climate this is an ever-constant heartbeat, something one can absolutely depend on, and it almost certainly constitutes our climate's most fundamental cyclical base. If there were no other factors involved, in fact, this slow drift of the seasons around the ellipse would divide the earth's climate into four periods of 6,443 years each, in two of which a given hemisphere would experience overall cooling, while in the other two it would experience overall warming.

As already stated, however, Milankovitch identified two further factors which affect this underlying pattern. The first of these concerns variations in axial obliquity – the angle at which the earth's axis is tilted – which varies between 22.1° and 24.5° over a period of 41,000 years. This means that during some precession cycles, and for quite long periods at a stretch, the hemisphere in winter is tilted further away from the sun than during other cycles, making those winters even colder. What is also quite interesting here is that this cycle of 41,000 years just happens to be the same as the cycle of glaciation and retreat in the first half of the current ice age, as shown in Figure 2.

This is then further complicated by a third factor, which concerns the shape of the earth's orbit itself. The orbit is pulled in different directions, becoming more or less elliptical, depending on the alignment of the other planets in the solar system. These variations are again entirely determined by physical laws, geometry and mathematics, and are therefore wholly predictable, operating within an overall cycle of 413,000 years – although, within this, there are smaller cycles of around 100,000 years, depending on planetary alignment.

To produce an overall model of our planet's underlying climatic cycle, therefore, all we now need to do, it would seem, is work out how these factors interact. For depending upon the way in which their individual peaks and troughs coincide – or not, as the case may be – there are clearly going to be times when their effects reinforce and magnify each other, and times when they counteract and perhaps even nullify each other. The problem, however, is that although the Milankovitch cycles clearly constitute a fundamental part of this overall pattern, no one has yet come up with a model based on them that fully accounts for all the available climatological data, even though climatologists today include in their calculations two additional orbital factors – apsidal precession and orbital inclination – which were not taken into account by Milankovitch himself.
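To make the idea of interacting cycles a little more concrete, here is a minimal toy sketch – emphatically not a climate model – which simply superimposes sine waves with the periods quoted above. The relative amplitudes are arbitrary guesses, chosen purely for illustration; real reconstructions weight each cycle by its effect on summer sunlight at high latitudes:

```python
import numpy as np

# Toy superposition of the orbital cycles described in the text.
# Periods are those quoted above; amplitudes are arbitrary and chosen
# only to show how the cycles reinforce or cancel one another.

t = np.arange(0, 1_000_000, 500)   # years, sampled on a 500-year grid

precession   = 1.0 * np.sin(2 * np.pi * t / 25_772)    # equinoctial precession
obliquity    = 0.7 * np.sin(2 * np.pi * t / 41_000)    # axial-tilt cycle
eccentricity = (0.5 * np.sin(2 * np.pi * t / 100_000)  # orbital-shape cycles
                + 0.3 * np.sin(2 * np.pi * t / 413_000))

forcing = precession + obliquity + eccentricity

# Peaks mark epochs where the cycles reinforce one another;
# troughs mark epochs where they partially cancel.
print(forcing.max(), forcing.min())
```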

What this means, therefore, is that there have to be additional factors affecting the cycle which go beyond the pure mathematics of moving bodies in space. And two of these, it is generally believed, are feedback loops – one positive and one negative – resulting from glaciation itself.

The positive feedback loop, known as increasing albedo, results from the fact that snow and ice reflect more of the sun's energy back out into space than grey ocean and brown earth. The further the polar ice caps extend, therefore, the more of the sun's energy is lost in this way, and the cooler the planet becomes – raising the question, in fact, of how, once a period of glaciation has started, the earth ever manages to escape its grip. Indeed, increasing albedo is quite possibly one of the reasons why periods of glaciation gradually get longer and deeper over time, each cycle starting from a lower base.

What stops this downward slide going on indefinitely, however, is believed to be a counter-acting or negative feedback loop which results from the fact that, as the ice sheet extends over more and more of the planet, this reduces the amount of both land-based vegetation and oceanic algae which would otherwise absorb carbon dioxide. As a result, more and more greenhouse gases build up in the atmosphere; the planet starts to warm; and the ice sheet retreats. 

What this amounts to, therefore, is another natural cycle of cooling and warming which, in a way, sits on top of and further augments the Milankovitch cycles, making the overall picture even more complicated. Even taking these feedback loops into account, however, climatologists have still not been able to produce a model which completely satisfies all the empirical data. Other factors – some of them possibly still to be identified – have to be involved. What makes climatologists so concerned about the current situation, however, is that, according to the model as I have so far presented it, greenhouse gases should only build up in the atmosphere towards the end of a period of glaciation; they should actually be at their lowest towards the end of an interglacial period. Our own contribution to greenhouse gas emissions is relatively low – accounting for less than 3% of the 793 billion metric tons of carbon dioxide deposited in the atmosphere each year, according to the Intergovernmental Panel on Climate Change – and most of that total (782 billion metric tons) is then reabsorbed by the world's forests and oceans in yet another purely natural cycle. What this suggests, nevertheless, is that, through deforestation and the burning of fossil fuels, we ourselves have become another factor in the equation.
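On the figures just quoted – and taking the 'less than 3%' at face value – the bookkeeping runs as follows:

```python
# Carbon bookkeeping using only the figures quoted in the text above.

total_emitted    = 793.0   # billion metric tons of CO2 entering the atmosphere per year
total_reabsorbed = 782.0   # billion metric tons reabsorbed by forests and oceans

net_buildup = total_emitted - total_reabsorbed   # 11 billion tons per year
human_gross = 0.03 * total_emitted               # just under 24 billion tons per year (an upper bound)

print(net_buildup)   # 11.0
print(human_gross)   # 23.79

# The human share of gross emissions is small, but on these figures it is
# roughly twice the net annual build-up -- which is how a 'relatively low'
# contribution can nonetheless tip the balance of an otherwise closed cycle.
```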

So what’s new, you may ask. Climatologists have been telling us this for the last twenty-odd years. And so they have. What they haven’t done, however, is provide us with the kind of climatological background and historical perspective that I have been trying to provide here.  By omitting such factors as the Milankovitch cycles from their presentations, and by failing to mention that we are still actually living in an ice age, intentionally or otherwise, they have therefore given the impression that without the greenhouse gases which we ourselves are generating, our climate would be more or less stable. More to the point, by ignoring the fact that without these anthropogenic greenhouse gases we would almost certainly be heading into another cycle of glaciation, they make it seem as if man-made global warming is entirely a ‘bad thing’. But consider the alternative. For, make no mistake, if we were to prevent man-made global warming from bringing an end to the cycles of warming and cooling shown in Figure 2, then sooner or later we would enter another period of global cooling. 

As a result, the ice cap at the north pole would gradually extend further south again until, eventually, if geological evidence is anything to go by, it would reach a latitude of around 40° N, making much of Northern Europe and North America uninhabitable. Ice hundreds of metres thick would cover all of Canada and the northern states of the USA, including all of New England, New York, Michigan, Wisconsin, North Dakota and Montana. In Europe, it could easily extend as far south as the Alps and the Rhone valley – which was glaciated during the last cycle of global cooling – and would certainly cover much of northern Germany, Poland and Russia.

Admittedly, the alternative – if we allow global warming to continue – isn't much better. In this case, the polar ice caps would continue to recede until, eventually, they disappeared altogether, returning the planet to something like its normal condition. I say this because, in the 2.1 billion years since the end of the Huronian Ice Age, ice has actually covered the earth's poles for only 17% of the time. What this also means, however, is that sea levels would continue to rise, with some whole countries disappearing beneath the waves. Indeed, without massive new sea defences, some of the world's largest cities, such as London and New York, would also be at risk.

The fact is, however, that of these two possible futures, global warming offers the better prospect for survival. It may be difficult and cost a great deal of money. For as we have seen during the floods in Britain recently, defending oneself against the forces of nature is a mammoth task. It is a lot easier, however, than trying to maintain agricultural production in a country covered by millions of tons of ice.

This being the case, the question one has to ask, therefore, is why we are so exercised by the second of these two possible outcomes but seem not to care about the first, especially given the fact that if, by some miracle, we were able to prevent global warming from bringing the Pliocene-Quaternary Glaciation to a permanent end, we would actually be responsible for plunging ourselves into another period of global freezing. 

Part of the answer, of course, is simply one of timescale. For, as already mentioned, another cycle of glaciation might not happen for another thousand years or more, and even then it would take some time for the ice sheet to reach its maximum depth and coverage. In the last cycle, for instance, it didn't reach its peak until 85,000 years in. Given that our entire civilisation is only 10,000 years old, this doesn't therefore fill one with immediate concern. The effects of global warming, in contrast – or so we are told – are already being felt. It makes sense, therefore, to concentrate on the problem at hand.

Eminently pragmatic though this strategy may seem, however, I have my doubts as to whether it actually plays any significant part in most people’s thinking. Indeed, most people, I’m fairly sure, have absolutely no idea that another cycle of glaciation is even an option. Another possibility, therefore, is that, sceptical of our ability to reverse the current trend, both climatologists and politicians alike have more or less concluded that a return to glaciation is no longer a threat: a conclusion which would certainly explain why, over the last thirty years, it has so completely slipped off the radar.

Not, of course, that one can blame politicians for doubting our collective will to combat climate change. For even if governments around the world were actually to take steps to reduce greenhouse gas emissions – rather than merely talking about them – this wouldn’t reduce the volume of such gases already in the atmosphere. In fact, unless emissions were reduced to a level lower than the rate of oceanic absorption, this residual volume would still continue to rise. If these greenhouse gases have the effect upon our climate which so many people claim they have, it is hard to see, therefore, how this effect could be reversed without a drastic reduction in economic activity: something which would almost certainly result in the deaths of millions, if not billions of people.

The question which this naturally prompts, therefore, is why, if curtailing the emission of man-made greenhouse gases is so politically out of the question, politicians throughout the democratic world still largely support a green agenda, trotting out all the right words and phrases on camera, even if no real action is being taken behind closed doors. A very large part of the answer has to do with the way in which the campaign against man-made global warming was initially set in motion. For if one remembers back to the early years of this century, prior to the financial collapse of 2008, when venerable presenters such as David Attenborough stood in front of collapsing glaciers and bemoaned the fate of the polar bear, the issue was presented not, primarily, as a matter of scientific concern, but as one of moral urgency. It may not have been the intention of Sir David or his producers to deliberately play upon our feelings for furry animals, but that, of course, is what they did, and with entirely predictable results. For not only did we, too, start to bemoan the way in which we – or, more specifically, right-of-centre governments and big business – were raping and destroying the planet, we also made it impossible for our politicians to adopt any other stance.

And to make matters worse, this popular media campaign to enlist our support in a world-wide movement to reverse global warming didn’t stop there. For having engaged our sympathy and provoked our moral outrage, it also triggered what I can only describe as our predisposition to mythologise. For having lifted global warming out of its historical context and attributed it purely to the activities of man, the barely disguised subtext with which we were then presented was inescapable: mankind had been given a green and pleasant land – a veritable Garden of Eden – which, in our infinite folly and wickedness, we were now in the process of turning into a watery grave. The appeal to the biblical – to the flood and Noah’s Ark, in fact – couldn’t have been more blatant. More importantly, however, it stirred in us that self-flagellating fascination with the eschatological which has haunted writers and artists for millennia, and which retains its place within our deepest fears even to this day.

The reasons for this dark obsession at the heart of our religious and artistic culture are, of course, fairly obvious. By linking our own deaths to the extinction of the species, we imbue our individual lives with a tragic significance they otherwise lack. This only works, however, if our extinction is our own fault. Being wiped out by a rogue asteroid, for instance, provides no such narrative fulfilment. Indeed, it only goes to show just how meaningless our lives are – or were. To be meaningful, our destruction has to follow from our own actions, and has to be seen as a just punishment. And the destruction of the planet as a result of man-made global warming fits this requirement perfectly.

In fact, in many ways, it is the almost perfect artistic expression of the many ambivalent feelings we have about our stewardship of the planet over the last 10,000 years, during which period we have:

  • Used up most of the world's non-renewable resources, squandering them on the manufacture of material possessions we don't need, don't use, and for which, in many cases, we don't even have enough room in our houses;
  • Driven countless other species to extinction, or to a marginal existence in which their numbers can be counted in thousands, if not hundreds;
  • So increased our own population that the earth is barely able to support the 7.2 billion people it currently has to feed, even without the adverse climatic conditions which global warming will almost certainly bring about.
So numerous are our crimes against the planet, indeed – at least in our own fevered, anthropocentric imagination – that it is hardly any wonder that there are those on the extreme wing of the green movement who have a tendency to regard human beings as something akin to a parasitic virus: one that is slowly consuming the body it has infested. Nor is it particularly surprising, given such attitudes, that advocates of radical action to combat global warming often sound more like religious zealots than political campaigners, vilifying those who express any scepticism about their beliefs as 'climate-change deniers' and heretics.

The problem this presents for politicians, however, is that, having created this monster – or having at least given it the oxygen it needed to flourish – they cannot now distance themselves from it without being labelled as apostates. At the same time, they have also become increasingly aware of how little they can actually do to address the underlying problem, especially in a world recovering from an economic recession, in which growth and the balancing of budgets quite naturally take priority.

To make matters worse, they are also now being told that global temperatures over the last fifteen years have not increased in the way that climatologists predicted – remaining more or less static – which makes them question whether the models on which these predictions were based were correct. In fact, for most politicians today, the whole issue of climate change has become a disaster zone they enter at their peril.

But global warming is happening, right? Probably. 

And human beings are at least partly responsible, yeah? Again, probably. 

So there’s no real change in the underlying position? Correct.

The fact remains, however, that our climate models are incomplete. We do not know what caused the five ice ages of which we are aware, or what brought them to an end. We don't even know what causes the cycles of warming and cooling within the current ice age. We know that the Milankovitch cycles are involved, as are the various feedback loops resulting from glaciation itself. But we still do not have a complete picture.

Indeed, I mentioned earlier that there are almost certainly other factors involved in climate change which may not yet have even been identified. One that certainly is involved is the sun itself – not just in the role it plays in determining the earth's orbit, but in the energy it radiates, which varies with solar activity in the form of sunspots and solar flares. Heightened activity produces waves of ionised particles, known as the solar wind, which wash across the earth's upper atmosphere and which, it has been suggested, strip out dust and other accumulated debris, allowing more light to penetrate to the surface, while at the same time interacting with the oxygen in the atmosphere to increase the amount of ozone – thus doubly contributing to global warming.

Unsurprisingly, this form of solar activity is again cyclical, though nobody really knows why. Nor is it quite as predictable as the Milankovitch cycles. Typically, solar cycles last around 11 years, with very little in the way of gaps between them. However, between Solar Cycle 23, which ended in early 2008, and Solar Cycle 24, which started in late 2009, there was a gap of nearly two years in which virtually no sunspots occurred, and in which global temperatures already began to fall, causing some climatologists to fear that this could actually have been the start of another Maunder Minimum.

This was a period, lasting from 1645 to 1715, in which sunspots became extremely rare, resulting in a spell of global cooling often referred to as the Little Ice Age, which is still commemorated on many of our more traditional Christmas cards. Famously, the Thames in London froze over in many of these winters, allowing for the construction of ice fairs, which became a recurring feature of the capital's economic and social life, providing both a market and various forms of entertainment from December through to March.

So what am I saying? That another Maunder Minimum is going to come along and save us from the effects of global warming? Of course not. Such events are totally impossible to predict. But that's the point. Given the incompleteness of our climate models, even our best guess at what is going to happen may be completely wrong. To spend our time and our energies worrying about global warming when (a) we can't do anything about it, (b) it may not actually happen, and (c) it may not be an entirely bad thing if it does, is therefore the height of irrationality, especially when it is elevated to the status of a religion.

So what then? Do we just sit here and let the waves roll over us? No, there are plenty of things we can do: things that are both rational and practical. Following the floods we have suffered in Britain this winter – which many people have attributed to global warming, though that, of course, is impossible to establish – we can start by improving our flood defences. Given that infrastructure projects are generally believed to be a good way to stimulate the economy, this would be a very good way to spend a few billion pounds. It won't stop global warming, of course, but it might just mitigate some of the effects – which is precisely the point I'm trying to make.

To finish, therefore, I’d like to leave you with one further thought: that it is not changes to the environment that cause species to become extinct, but the failure of a species to adapt to the said changes.

Recently, I caught a news item which reported that the US authorities are extremely worried about a species of Asian carp getting into the Great Lakes and displacing the indigenous species. They are therefore planning to spend billions of dollars to prevent this.

While watching the report, however, I couldn't help thinking about an alternative approach. Instead of stopping the carp entering the lakes, the authorities could stimulate a fishing industry to catch them and sell them to consumers. After all, carp are a very good source of protein and omega-3, and, if prepared correctly, are very palatable. In Poland, I believe, they are served at Christmas as a seasonal delicacy. If caught in sufficient numbers, this would stop them displacing other species and would create both food and employment at the same time. All it takes is a bit of lateral thinking.

The problem, it seems to me, is that, too often, we think that our only responsible attitude to the environment is one of conservation. Nothing must be allowed to change, including our climate, even though we know, from earth's history, that our climate is in constant flux. So we spend billions trying to prevent what cannot be prevented, rather than prepare for the changes we, ourselves, will need to make once the inevitable has happened. For, make no mistake, the carp will get into the Great Lakes. Sometimes, therefore, accepting change and adapting to it is the best policy. Sometimes, indeed, it is the difference between survival and extinction.

Thursday, 18 July 2013

Our Food Industry & How It’s Killing Us (Part III): Paying the Price



In my first two essays in this series – subtitled 'Obesity & the Incoherence of Much Current Dietary Advice' and 'The Rise of the High Sugar Diet' – I first took the reader through the scientific arguments of Dr Robert Lustig, Professor of Paediatrics at the University of California in San Francisco, who believes that it is sugar, rather than dietary fats – or simply eating too much – that is the main cause of the high rates of obesity, hypertension, Type 2 diabetes and cardiovascular disease (CVD) currently found in the USA and other parts of the western world. After examining – and largely discounting – the fairly widespread belief that it is one type of sugar in particular, High Fructose Corn Syrup (HFCS), that is the principal culprit, I then looked at a raft of statistics on per capita sugar consumption in different countries. I concluded that, although much of the data on this subject is somewhat less than reliable, the figures published by the US Department of Agriculture allow us to say with absolute certainty that in the USA, at least, there was a very significant increase in overall sugar consumption between 1985 and 2000, and that although the figures have fallen back a little over the last ten years – very possibly as a result of the debate over HFCS and the increased uptake of sugar-free versions of leading soft drinks – they are still considerably higher than in the 1960s, thereby adding further weight to Professor Lustig's thesis.

That I was unable to find equally reliable statistics for either the UK or the EU does, of course, render the evidence somewhat less than conclusive. I cannot say for certain, for instance, that the reason the UK is now officially the second most obese country in the world – after the USA – is that it too is on a rising curve when it comes to sugar consumption. What I can say, however, is that the UK diet is very similar to that in the USA, includes many of the same products from many of the same multinational food manufacturers, and that if Americans are consuming more sugar, I’d be very surprised if we in the UK were not.

The real question, therefore, is not whether the case against sugar has been made, but why – assuming that we are indeed eating more of it – we are doing so, especially as at no point over the last thirty years do I remember making this choice.

The easy answer, of course, is to blame Coca-Cola. In one of his lectures, Professor Lustig points out how much the size of Coca-Cola bottles has increased over the last thirty years and how much extra sugar we are consuming per year as a result. The question one has to ask, however, is whether the replacement of all such sugar-rich soft drinks with sugar-free alternatives would solve the problem. And judging by the overall consumption figures and the proportion attributable to soft drinks, the answer is almost certainly no. It would certainly help. But in and of itself, it would not put an end to our burgeoning waistlines, which are less the result of individual products – which we could simply choose to avoid – than of our diet as a whole, or, more especially, of the way in which it is now produced.

To properly understand this, however, one has to understand the way in which our food industry has developed in the modern era, not just over the last thirty years, but over the last sixty or seventy – since the Second World War, in fact. 

Crucially, before this historical watershed, the value-added sectors of the industry – particularly processing and manufacture – were very much smaller than they are today. Food manufacturers already existed, of course. In the UK, there are numerous famous brands that were established as long ago as the 19th century. One thinks of Cadbury's (chocolate) in Birmingham, for instance, or Colman's (mustard) in Norwich. But nearly all of these manufacturers were concerned with long shelf-life products, such as confectionery and condiments, rather than with what one might call the bread and butter of daily life, which, for the most part, was produced by much smaller enterprises, closer to the customer, and, in many cases, with an original view to preservation.

Butchers, for instance, cured and smoked pork, not just to sell us bacon and ham, but to stop it going off. Dairies turned milk into cheese, not just to give us something different to put in our sandwiches, but to give their raw material a longer shelf life. The same is true of herrings turned into kippers, and of fruit and vegetables turned into jams and marmalades, pickles and chutneys. True, butchers also made pies and sausages – and other forms of charcuterie – to use up the offal and off-cuts of meat they couldn't sell in any other form. But even though this may not have been 'food preservation' in the strictest sense, it was still about making use of everything they had and avoiding wastage.

It was the Second World War itself, and the need to supply soldiers in war-zones all over the world, that brought about the first major change: a revolution in canning! In order to store and distribute prepared rations over thousands of miles while keeping them fresh, everything that could be sealed in a tin – from corned beef to poached pears – was, thus establishing a large-scale canning industry which, after the war, then gave us baked beans, tomato soup and steamed treacle pudding – though the latter, I seem to remember, never came out very well. With the exception of canned soups and stews, however, most of the foods sold in this form were still elements of meals rather than meals in themselves, and the role they played in our diet was still quite marginal.

When I was growing up in the 1950s and 60s, for instance, at a time when most households could still manage on the income of a single wage-earner, the vast majority of meals were still cooked from scratch, using locally sourced produce, bought from locally owned butchers, bakers, fishmongers and greengrocers. Even in the early 70s, when I went to university, there still wasn’t very much in the way of ready prepared meals in the supermarkets. It’s why every student of my generation learnt to make at least two simple dishes – usually spaghetti bolognaise and some kind of curry – which, going on to form the basis of our culinary repertoire over the years that followed, now more or less define us as being of that age. It wasn’t until the late 70s, by which time inflation had made it more or less essential that every household have at least one and a half wage-earners – with most women therefore having to go out to work, either full-time or part-time, as well as doing most of the household cooking and cleaning – that ‘convenience’ foods, in the form of fully or partially prepared meals, began to take hold.

And it was at this point that the value-added food industry, as we know it today, really came into its own. For while people were still cooking food at home, using fresh ingredients, its scope for adding value was always strictly limited, being largely based on distribution. Moreover, fresh ingredients have a much shorter shelf life than processed food, leading to far greater wastage. Distribution therefore had to be fast and efficient, with only premium products travelling any significant distance. All this meant that small, local suppliers and retailers could still hold their own. With the increase in demand for convenience foods, however, all that changed. By producing ready prepared meals – at this stage mostly frozen – large manufacturers not only added value to fresh ingredients, they also added longevity. This, in turn, allowed them to lengthen the distribution chain, concentrating manufacture in major industrial centres, thereby achieving greater economies of scale.

Manufactured ready-meals also facilitated more extensive branding. Most long shelf-life items, such as biscuits and confectionery, may already have had well-established brands; but it is very hard to brand a chicken. Not so a coq-au-vin for two in its own little aluminium tray, meaning less washing-up.

More extensive branding also allowed for more extensive advertising and far more intensive product development. From celebrity endorsed brands of ‘cook-in’ sauce and salad dressing, to new types of breakfast cereal and yoghurt, throughout the 80s and 90s it seemed like hardly a week went by without something new arriving on our supermarket shelves and television screens. And as sales soared, more and more money was invested in the industry.

From a patchwork of local artisan producers and retailers, our value-added food industry became big business, and has now actually overtaken agriculture as the largest industry in the UK. In 2012, for instance,  the total value of domestic and imported agriculture, as shown in Figure 1, was £78.9 billion. In contrast, the total value of the value-added food industry, including processing, distribution and retail – if you add them all up – was £86.9 billion.

Figure 1: UK Value-added Supply Chain
(Source: Food Statistics Pocketbook 2012, DEFRA)

It is one of the great business success stories of our time. But it has also produced a number of far less desirable consequences.

The first of these is that most of the independent butchers, bakers, fishmongers and greengrocers on which we once relied have now disappeared, their prices undercut, their business model obsolete. Far worse, the restructuring of the industry has itself led to greater and greater consolidation, thereby reducing the number of participants. In the long shelf-life sector, for instance, most of the world's most recognisable brands are now owned by just five or six multinational conglomerates, including Kraft, Nestlé, Coca-Cola and PepsiCo. With respect to retail, in the UK we now buy 89% of all our groceries from just five large supermarket chains, with the largest of these, Tesco, having a 30% market share.

It is the very success of these mega-corporations, however, that has now exposed a contradiction at the very heart of the value-adding principle which made this success possible. For the purpose of adding value to basic ingredients, of course, is to be able to charge more for the resulting products and hence make greater profits. According to this principle, therefore, you shouldn't sell a man tomatoes to make a pasta sauce if you can actually sell him the pasta sauce. Similarly, you shouldn't sell him the pasta sauce to make a lasagne if you can sell him the lasagne. This only works, however, if you are able to make a lasagne at a price he can afford. If not, he'll go back to buying the tomatoes and making it himself.

Of course, there is some latitude in this. Given the convenience of a readymade lasagne, the customer may be prepared to pay a little bit more than it would cost him to make it from scratch. But ideally, it would be best if you could actually make the manufactured lasagne cheaper than homemade, thus not only providing the customer with convenience, but saving him money. The problem, of course, is that by doing all the work for him, the value-added supply chain adds costs at every stage; and the only way these costs can be offset is by reducing the cost of the basic ingredients.

To determine by how much the manufacturer has to cut his ingredients costs to meet this requirement, I therefore conducted a little experiment. At my local supermarket, the cheapest readymade lasagne I could find was £1.99 for a single portion. I therefore set about making a lasagne from scratch for £2.00 per head, based on the ingredients shown in Figure 2.

Figure 2: Cost Breakdown for a Homemade Lasagne for Six

That I could only produce a lasagne at £2.00 per head if I made enough for six is, of course, a bit of a problem. For one of the great advantages of a ready meal of any kind is the convenience of the ‘single serving’ portion. This is therefore something to which I shall have to return later.

First, however, I want to continue with the experiment, the next step in which is to determine the cost at the farm gate of both my ingredients and the equivalent ingredients purchased by the manufacturer. For I, of course, am buying mine at a supermarket, with the costs of sale and distribution already added, whereas the manufacturer is buying his wholesale. What we need to work out is the value of the ingredients in each case when they were sold by the farmer.

We do this by first calculating the average cost breakdown of all foods sold in a supermarket, using the figures for the value-added supply chain shown in Figure 1 and ignoring, of course, that part of the supply passing through the catering industry. The results are shown in Figure 3.

Figure 3: Average Cost Breakdown of Food Sold in Supermarkets

This is the cost breakdown for all food sold in a supermarket, regardless of the amount of processing the food has undergone. 30.27% is therefore the average cost of processing, with some items receiving more and some less. 

In fact, we can divide 'processing' into three broad categories: high, medium and low. Examples of low-level processing include the butchering and mincing of beef, the pasteurising and bottling of milk, and the grading and packaging of tomatoes. Medium-level processing includes the making of cheese and the manufacture of tomato puree, while high-level processing involves taking intermediate products such as minced beef, cheese and the said tomato puree, and turning them into something like a readymade lasagne – which probably represents a fairly average level of processing within this category.

The making of my own homemade lasagne, in contrast, involves some pre-processing – in the mincing of the beef, and the manufacture of cheese – but given that I am also using fresh vegetables and herbs, which have received little or no pre-processing, and am making my own pasta, the overall level is probably just above average for the lower band.

Having thus established that the readymade lasagne is somewhere in the highest category of food processing and my own homemade lasagne is somewhere in the lowest category, we are now, therefore, in a position to calculate the relative cost of the ingredients at the farm gate for both the readymade lasagne and my homemade version.

We can do this because, in addition to knowing the mean processing cost of all food – 30.27% – we also know both the minimum and the maximum. The minimum, of course, is 0%: no processing at all, as in the case of loose, unwrapped carrots and onions. The maximum we can work out if we first assume that the cost of sale and distribution is more or less constant across all items. Taking that cost out of the equation leaves 69.16%, which is the combined cost of processing and raw ingredients. If we then assume that there are some products for which the cost of ingredients is next to nothing – as in the case of some types of bottled water, for instance – it follows that the value-added component can, in principle, be the entire 69.16%, which therefore gives us our maximum.

Assuming, therefore, that the processing costs of all foods sold in a supermarket are situated somewhere in this range, between 0% and 69.16%, and knowing that the mean cost is 30.27%, we can therefore generate the graph shown in Figure 4.

Figure 4: Relative Ingredients Costs for Homemade & Readymade Lasagne

Importantly, this curve is not a representation of an actual distribution, as it would be if it were based on empirical data. To produce that, I would need to know the input costs and the output pricing for every product produced by every manufacturer, or at least a large enough sample to be sure that it was representative: a herculean task, to say the least. Figure 4 is, rather, an estimate or approximation. Assuming, however, that the actual data – if I had it – would follow a normal distribution, I'd be very surprised if what I have produced here were very far off the mark.
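For what it is worth, one way of generating a curve of this general shape – a guess, like the original, rather than a fit to real data – is to rescale a beta distribution so that it is bounded at 0% and 69.16% with its mean pinned at 30.27%. The shape parameter `a` below is an arbitrary choice:

```python
import numpy as np
from scipy import stats

UPPER = 69.16   # maximum possible processing cost (%)
MEAN  = 30.27   # mean processing cost (%)

# Choose beta shape parameters whose mean matches MEAN / UPPER.
m = MEAN / UPPER            # ~0.438 on the unit interval
a = 2.0                     # arbitrary 'peakedness'; an assumption, not a fit
b = a * (1 - m) / m         # ~2.57, which fixes the mean at MEAN

x = np.linspace(0, UPPER, 500)
density = stats.beta.pdf(x / UPPER, a, b) / UPPER   # share of products per 1%

# With a = 2 the density falls to zero at 0% processing; pushing `a`
# below 1 would instead pile a small share of products up at zero,
# closer to the roughly 0.5% figure mentioned in the next paragraph.
```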

On the horizontal axis we have the percentage processing costs for all food items, ranging from 0 to 69.16%. On the vertical axis, we have the percentage of products having each of these values. At the lowest point in the range, for instance, we can see that around 0.5% of all products have no processing costs at all. Given the piles of loose fruit and vegetables which greet us whenever we enter a supermarket, this may seem somewhat low. But what you have to remember is that the percentage processing cost attributed to each item is a percentage of its price at the checkout; and although supermarkets may sell a large volume of these items, they actually represent only a small fraction of total sales.

The question we now have to answer, therefore, is where the ingredients for my homemade lasagne and its readymade equivalent sit on this scale.

In the case of the former, I have gone for halfway between the minimum and the mean, at around 15%. In accordance with my earlier analysis, this is just above the middle of the lower range, which runs from 0 to 23%, and takes into account the inclusion of ingredients such as the butter and the two types of cheese, which fall into the medium band. In the case of the readymade lasagne, I have gone for halfway between the mean and the maximum, which is approximately 50%. This is actually at the lower end of the higher range, which runs from 46% to 69.16%, and consequently represents a very conservative estimate of how much processing goes into ready-meals of this type. Even so, the cost differential between the ingredients that go into a readymade and a homemade lasagne, as shown in Figure 5, is very substantial.

Figure 5: Relative Cost Breakdown for Readymade and Homemade Lasagne

In my homemade lasagne, as you can see, more than half of the cost is going to the farmer for the raw ingredients, whereas in the readymade version, the farmer is receiving less than a quarter. More importantly, given that I have taken a fairly conservative view of the cost of processing for ready-meals, a similar differential would hold for just about every highly processed product you might buy.
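The arithmetic behind this claim is straightforward, using the estimates above:

```python
# Farm-gate share of the checkout price, on the essay's own estimates.

AVAILABLE = 69.16   # % of price left once sale & distribution (30.84%) is removed

homemade_processing  = 15.0   # halfway between 0% and the 30.27% mean
readymade_processing = 50.0   # halfway between the mean and the 69.16% maximum

print(AVAILABLE - homemade_processing)    # 54.16 -- more than half to the farmer
print(AVAILABLE - readymade_processing)   # 19.16 -- less than a quarter
```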

So how do the supermarkets and the manufacturers do it? Well, part of the answer, of course, is that they buy so much that they are able to force the price down at the farm gate. Indeed, the pressure they put on their agricultural suppliers is, in itself, one of the less desirable consequences of their omnipotence, forcing farmers to adopt practices which are both inhumane and environmentally damaging, while still driving many of them out of business.

In the UK, for instance, dairy farmers, in particular, are almost an endangered species. Due to an unfavourable exchange rate between the pound and the euro, supermarket chains are able to buy milk from EU farmers at a price which is less than the cost of domestic production. As a result, the UK dairy industry is contracting, with many dairy farmers being forced to diversify, often by adding value to their own raw ingredient by becoming artisan cheese makers. As a result, there are now more than a thousand specialist cheeses in the UK, many of which have won international awards, but none of which appear on any supermarket shelves. For in order to do so, these new artisan cheese makers would have to both increase their production – by at least an order of magnitude – and reduce their prices, both of which measures would affect their quality, thereby effectively defeating the purpose of the exercise.

However, it is not just by driving down prices at the farm gate that the food industry solves its value-added cost conundrum. After all, the ingredients for my own homemade lasagne were also bought at a supermarket, and the price I paid therefore also benefitted from this same price-squeezing. In order to maintain the cost differential between the ingredients I purchased and the ingredients that go into an industrially produced lasagne, the industry has to take even tougher measures; and it does this by buying the lowest-quality ingredients it can get away with.

In the UK recently, there was a major scandal over the revelation that there was horsemeat in some industrially produced ready-meals, including lasagnes. For days, our newspapers and television screens were filled with nothing else, as discovery after discovery meant that more and more products had to be removed from the supermarket shelves. The fact is, however, that horsemeat is one of the least offensive ingredients in some of the pre-prepared foods we eat. Most of the meat in most low-cost lasagnes, for instance, is MRM (mechanically recovered meat), most of which is produced from what would otherwise be regarded as abattoir waste.

Indeed, if one looks at the list of ingredients for my homemade lasagne in Figure 2, it is fairly clear where the industrial manufacturer has to save money. For the two most expensive items are, of course, the minced beef and the cheese: the protein and the dairy fat. These are the ingredients that all manufacturers are therefore forced to cut back on, making it also very unlikely, as a consequence, that the cheese sauce in a manufactured lasagne is actually made from real cheese, a soya based substitute with cheese flavouring now being the preferred option.

Even the tomato sauce is likely to have been made from sugar, vinegar, emulsifier and tomato flavouring. Indeed, the only two ingredients in any of these products one can really trust to be what they purport to be are the sugar and the salt: the two low cost ingredients that are absolutely essential in making any industrially produced food palatable. And it is this, more than anything else, I believe, that explains the amount of sugar we are now all eating. 

From supposedly healthy cereals and yoghurts, to readymade chicken tikka masala and naan bread, it’s in almost every processed food we buy; and the tragedy is that the cheaper the product the more sugar it tends to contain. As a result, we are now very probably the first society in history in which obesity has become a disease of the poor.

The really sad fact, however, is that we like it: all this sugar-rich food. It is not quite that we are addicted to it, but we have certainly become accustomed to it. We’re like the person who always puts two sugars in his tea or coffee and grimaces in disgust if he accidentally takes a sip from an unsweetened cup. He doesn’t realise that if he drank it unsweetened for a week or two, he’d actually come to like it that way, and would then find the sweetened variety far too rich and sickly for his taste.

Not that, as a society, we’re likely to make this discovery any time soon, especially as we train our children to want and prefer sweet foods almost from birth.

In my first essay on this subject, I pointed out that we now have six-month-old babies suffering from obesity. But I didn’t explain why this was. The answer, however, is fairly simple. It’s baby formula.
Natural milk contains its own sugar: lactose. In baby formula, however, the manufacturers take this out and replace it with either sucrose or HFCS. Lots of it. Which makes it very yummy. As I also pointed out in Part I, however, the fructose in the sucrose or HFCS, as well as being turned into fat in the baby's liver, also produces a substance which is a leptin inhibitor – leptin being the hormone which tells our brains when we have eaten enough. This means that when the baby has his feed, he will probably drink the whole bottle, and will enjoy it very much, but will still not feel full. Half an hour later, as a result, he will start wailing for more, showing all the signs of being hungry, to which his mother – not knowing what else to do, and very probably at the end of her tether – will probably respond by preparing another bottle. In no time at all, therefore, we have an obese baby, who will probably grow up to be an obese child and then an obese adult, turning to sweet comfort food as his only solace in a world that has played such a mean trick on him.

In the UK, it is now estimated that the National Health Service spends £5 billion per annum treating obesity and the diseases that can be directly attributed to it. It is further estimated that over the next twenty years this figure will more than double, making it extremely unlikely that, with an ageing population and the escalating cost of new drugs, the NHS will be able to go on treating patients free at the point of use indefinitely. This is not, therefore, just a health issue; it is also a financial issue.

Part of the problem, of course, is that the food industry itself can do nothing about it. It has followed a certain business logic, a game-plan that made perfect sense in business terms, and probably still does to those who are unable to look beyond this framework. The idea that it might now go back to selling people healthy raw ingredients to cook at home – effectively dismantling the value-added supply chain it has built up over the last fifty years and reducing its business to a quarter of its current size – is therefore beyond fanciful. When faced with criticism, the industry's strategy, as already revealed in the USA, will be to deny that sugar is a problem, point out that all products are clearly labelled, and argue that the consumer has a choice as to what they eat. And if that doesn't work, it will bring out the big guns, funding scientific studies to produce evidence supportive of its position and lobbying governments to ensure that no legislation is passed to make the slow poisoning of people with fructose illegal. Just like the tobacco industry over the last fifty years, in fact, we can expect the food industry to use every tactic available to it to maintain its lucrative value-added business. After all, what's the alternative?

The good news for the industry is that even if more people decided to heed Professor Lustig’s warning, and wanted to start cooking again using raw ingredients, there are far fewer people now than thirty years ago who could actually do it. For despite all the celebrity chefs on our television screens and the hundreds of endlessly recycled cook-books sold each Christmas, the fact is that, for the most part, we have become a nation of only occasional cooks, with Christmas becoming the one exceptional occasion. Yes, there are still some very good home cooks out there. But most of them are either middle class enthusiasts, who like to think of themselves as living the ‘good life’ with Hugh Fearnley-Whittingstall, or they’re my age. Very few of them are the harassed and overworked parents of children whose tastes and ideas about what they want for supper are largely determined by TV advertising. 

More importantly still, in many homes today the number of times per year the family sits down to a meal together can be counted on the fingers of one hand. Different members of the family want different things at different times. After-school activities and teenagers wanting to go out to meet their friends mean that meal times are often staggered, and the take-away and the ready-meal are the ideal solution. My homemade lasagne, which I had to make for six in order to make it economical, simply no longer suits the way most families live. For in changing its business model since the Second World War, our food industry not only changed itself, it changed us. Families today are not like the families I knew when I was growing up. And for most people, the idea of going back to living that way is as unimaginable as the food industry actually dismantling itself.

‘But what about the government?’ I hear you say. ‘If the effect on our health of all this sugary food is as serious as you say it is, shouldn’t they be doing something about it?’ What, and alienate an industry that contributes so much to their campaign funds! And where are the votes in it? We like our diet just the way it is. If we didn’t, we wouldn’t eat it. To many people, therefore, any government which tried to change it would be seen as just one more example of the ‘nanny state’. And after long campaigns against smoking, alcohol and dietary fats, to most politicians a campaign against sugar would seem a campaign too far.

Then there is the strategic issue. Not that I would ever suspect politicians of thinking strategically. But it’s always a handy excuse for inaction. And the fact is that the world’s population is currently 7.16 billion, and is rising by over 100,000 every day. By 2050, it is estimated that it will have reached around 9.6 billion – and perhaps 10.9 billion by the end of the century – which is a lot of people to feed. Too many, if you want to feed them fresh meat, fish and vegetables. Already in the UK, as a result, there are people building farms to breed insects, from which animal protein can be extracted, which can then be processed to (very probably) taste like chicken.

High value-added processing, based on low-value ingredients, is our future. The only good news is that, with life expectancy in some countries now dropping as a result of our highly processed, sugar-rich diet, we won’t have to endure it for very long.