Saturday 18 May 2024

The Pernicious Creep of Online Technology & Bureaucratic Control

 

1.    Technology for Technology’s Sake

In the small seaside town where I live, there used to be a very simple and almost infallible system for obtaining a repeat prescription from one’s doctor. One went to the surgery, filled in a slip of paper with one’s name and the name of the requested medication and then inserted it into a closed box a bit like a post box. A couple of days later, one would then return to the surgery, pick up the prescription from the receptionist and take it to the pharmacy, where one would be asked whether one wanted to wait or come back later. I usually opted to do a bit of shopping and told the pharmacist I would be back in twenty minutes when the medication would usually be ready for me.

This system was very simple and very cheap in that all it required was some pre-printed slips of paper, a wooden box and a ballpoint pen on a piece of string. It was also more or less infallible in that the person with the greatest interest in ensuring that it worked – the person requesting the repeat prescription – was in control of most of the process and interacted personally with the two other people involved: the surgery receptionist and the pharmacist.

Then, one day, someone decided to modernise the system in such a way that one could now download an app to one’s mobile phone, set up an account on a server somewhere and request a repeat prescription online. Whether this was a local, regional or national initiative I am not sure but, fortunately for me, my own doctor’s surgery decided to still keep the old post box and slips of paper for those who wanted to continue using them, of whom I was one. This was not only because, being a member of the older generation, I am fairly useless at using my phone for anything other than making phone calls, but also because I reasoned that, if I adopted the new system, I’d not only have to remember yet another user name and password but would have to download the app again every time I got a new phone.

Unfortunately, however, the modernisation of the system didn’t end there. For instead of the patient picking up the prescription from the surgery, it was now sent to the pharmacy electronically, thereby taking control of this part of the process away from the person who has the most interest in ensuring that it works, which, at times, it does not. Even if one waits a week before going to the pharmacy to collect the requested medication, for instance, one may discover that the prescription has still not arrived, necessitating another visit to the surgery to find out why. In my own case, this has happened three times in the last six or seven months: once because my doctor wanted to run some tests before signing off on the prescription – though no one thought to inform me of this – and twice for no reason anyone could explain, although it was strongly suggested that, had I submitted the request electronically, it would not have happened in that it would have automatically been in the system.

Even if everything works properly and the prescription arrives at the pharmacy within the specified period of time, however, the problems do not end there. For instead of looking on the shelves to see whether the medication is ready for collection, the pharmacist’s assistant now has to check her own mobile phone to find out where the prescription is in the system. And because the prescription is no longer hand delivered by the person making the request, who could once say whether they were going to wait or come back later, all prescriptions are now dealt with in the order in which they are received, without reference to when the patients might be picking them up, making it a matter of pure chance whether the medication is ready when the patient calls in for it.

This fundamental change in the nature of the system was further exacerbated by the fact that, alongside the new system being introduced, the doctors at my local practice stopped issuing prescriptions for two months’ supply of a medication and now only issue them for one month’s supply. This was almost certainly due to the UK National Health Service (NHS) wanting to reduce its expenditure on prescription medications, which a reduction in the quantity prescribed achieves in two ways. Firstly, it halves the amount of working capital tied up in pharmacological products sitting in people’s medicine cabinets, thereby reducing the NHS’s cash flow requirement. Even more importantly, however, it doubles the amount of income the NHS obtains from prescription charges. This is because the amount patients have to pay for each prescription is the same regardless of the quantity of the medication prescribed. By halving the prescribed quantity, therefore, and forcing patients to request another prescription every month rather than every two months, the NHS doubles its prescription income. The problem, however, is that, in addition to increasing the cost to patients, this has also increased the cost to pharmacies, which now have to fill out twice as many prescriptions as was previously the case, thereby requiring them to take on more staff.
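For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python. The flat charge of £9.90 per item is an assumption based on the current NHS England rate; any flat figure would make the same point:

    # How halving the quantity per prescription doubles the NHS's
    # annual income from prescription charges (illustrative figures).
    CHARGE_PER_ITEM = 9.90  # assumed flat charge in pounds per item

    def annual_charge_income(months_per_prescription: int) -> float:
        prescriptions_per_year = 12 / months_per_prescription
        return prescriptions_per_year * CHARGE_PER_ITEM

    print(annual_charge_income(2))  # two-month supply: 6 charges a year, £59.40
    print(annual_charge_income(1))  # one-month supply: 12 charges a year, £118.80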

At my local pharmacy, for instance, there used to be three members of staff working at any one time: the registered pharmacist and his assistant in the back room filling out prescriptions and one assistant on the counter dealing with customers. These days, in marked contrast, I have counted up to five members of staff in the back room filling out prescriptions. Because there is still only one assistant working on the counter, however, and because there are now twice as many people coming into the pharmacy to collect their medication, this means that long queues tend to build up, which take even longer to service because the name of each patient now has to be entered into the assistant’s mobile phone to find out where their prescription is in the system. If it has been filled out, she then has to go into the back room to find the actual item or items on the now overflowing shelves. If, on the other hand, it has arrived at the pharmacy but has not yet been filled out – which is often the case – she then has little choice but to advise the patient to come back the following day, which means that the patient has wasted fifteen minutes in the queue to no end.

Of course, most of these problems are due to the NHS reducing the amount of medication supplied on each prescription rather than to the introduction of the new online system. But they are both part of a greater malaise. For it is fairly clear that no proper systems analysis was carried out before either aspect of the overall system was changed. Somewhere in the NHS bureaucracy, it was simply decided that, in order to save money, doctors should prescribe smaller quantities of medication without anyone taking into account what this would do to the system’s throughput. What’s more, it’s greatly to be doubted whether they even realised that the additional cost to pharmacies would ultimately flow back to the NHS.

Similarly, it seems likely that someone somewhere simply thought that they could solve the throughput problem by introducing a new online system. Not only did they not take into account the fact that the more complicated a system is, the more likely it is to fail, but they do not seem to have even been aware that when one takes control of a system out of the hands of those who have the greatest interest in making it work, it very often doesn’t.

The question this clearly raises, therefore, is how senior managers in the NHS, who are presumably paid quite handsomely to make such decisions, could get them so wrong. The answer, however, is a little more complicated than one might initially suppose. For it almost certainly has to do with the widespread but fallacious belief that a more advanced technology is always better than a less advanced technology, the fallacy of which is obscured by the fact that, while a superior technology is always going to be more advanced than any technology it supersedes, it supersedes the less advanced technology, not because it is more advanced, but because it is superior.

In 1764, for instance, the Lancashire weaver James Hargreaves invented the spinning jenny, which, over the next few decades, gradually replaced the traditional spinning wheel, not because it was more advanced, but because it enabled a single textile worker to spin many threads at once – later models carried as many as 120 spindles – thereby revolutionising the British textile industry and helping to make Britain the largest manufacturer and exporter of woven fabrics in the world.

That’s not to say, of course, that there isn’t a perfectly understandable reason why we consistently conflate these two concepts. For, throughout most of our history, the attributes of being more advanced and being better did, indeed, tend to go together, with more advanced technologies generally being an improvement on what preceded them. This is because technological development is a lot like evolution in that, under normal economic circumstances, only those technological innovations which appreciably improve efficiency and productivity tend to survive. Technologies which do not work so well – which are slow, cumbersome or prone to failure – are either judiciously abandoned by their inventors or end up bankrupting anyone foolish enough to persevere in trying to make them work.

The problem today, however, is that state funded public services such as the NHS simply cannot go bankrupt. No matter how long they persist in using poorly designed systems, they never have to pay the ultimate price for doing so, with the result that poor technological choices are not weeded out in the way they are in other parts of the economy.

This is partly due, of course, to the fact that, without economic consequences, those responsible for these poor choices are seldom held to account. But it is even more profoundly due to the fact that, if one conflates the two attributes of being more advanced with being better and believes, as a consequence, that a more technologically advanced system is always better than a less technologically advanced system, not only is one more likely to choose a more technologically advanced system as a matter of principle, but one is less likely to feel the need to carry out any real technological assessment to determine whether the more advanced technology really is better. After all, how could a temperature controlled room full of servers with blue flickering lights not be better than a wooden box, some pre-printed slips of paper and a ballpoint pen on a piece of string?

To make matters worse, at some point in the development of our current way of thinking, the conflation of these two concepts – that of being more advanced and that of being better – then gave rise to a new moral category: that of ‘progress’, which is always, of course, progress towards something better, even if we do not know what that is. What’s more, being a moral category, the requirement to effect progress always takes precedence over other considerations, especially economic considerations, and can thus be used to justify the expense of a more advanced technology without having to take into account such issues as cost effectiveness or value for money. Indeed, many of those who work in our public services – who often see themselves as having a kind of vocation and want nothing more than to make the world a better place – often regard such economic considerations as a gross impertinence imposed on them by soulless ‘bean-counters’.

The problem, however, is that, without the hard-headed realism provided by both rigorous technological assessment and meticulous cost-benefit analysis, such naïve idealism is not only unlikely to achieve its goals but very often makes things worse, not just in the sense of foisting on people systems which actually make their lives more difficult, but in a wider socio-economic context.

2.    Prevention or Cure

This is particularly the case with respect to health services, which primarily exist to help people who are ill and are therefore essentially reactive rather than proactive in nature. That is to say that, traditionally, doctors only tended to treat people who came to them seeking a remedy for some ailment; they did not, for the most part, wander around the streets of a city looking for sick people. While the world of an individual patient whose illness may have been cured by the ministrations of a doctor could thus be said to have been made substantially better by such treatment, traditional medical practices did not, for the most part, therefore, set out to make the world a better place per se. In order to do that, modern health services consequently have to go beyond merely treating the illnesses of individual patients and focus more of their resources on proactively preventing illness in the population in general.

And this, of course, is what most modern health services now do. And they do it, not just by giving people vaccinations and advising them on diet and healthy living but, primarily, by the early detection of disease through screening and testing. The problem with this, however, is that unlike the task of curing people, which is strictly delimited by the number of people coming forward to be cured, the task of catching diseases early through screening and testing is completely open-ended. In fact, there is no limit to how much screening and testing one can do or, indeed, should do if it saves lives. Not only can this be very expensive, however, it can also become obsessive, irrational and authoritarian.

I mentioned earlier, for instance, that one of the occasions on which my small town’s repeat prescription service failed was due to the fact that my doctor wanted me to undergo some tests. This, it turned out, was because I hadn’t actually seen him for over seven years. I also suspect that the NHS was on another drive to save money by cracking down on doctors who issued repeat prescriptions without regularly checking on patients to see whether the medication was still needed. All well and good, you might say, except for the fact that, in my case, the medication for which I now have to submit monthly requests is allopurinol, the drug standardly prescribed to prevent kidney stones and gout, both of which are caused by the liver producing more uric acid than one’s kidneys can cope with. The result is that the uric acid either crystallises in the kidneys to form kidney stones, which, under certain circumstances, can then pass into the ureter – the narrow tube connecting the kidney to the bladder – causing it to spasm very painfully as it tries to push the stone through, or it remains in the blood stream, where it accumulates at the body’s extremities – principally the fingers and toes – where it again forms crystals which then grind away at the joints, which is again very painful.

The most important thing to know about gout and kidney stones, however, is that the underlying condition – that which causes the excess uric acid – is genetic and cannot therefore be cured. In my case, in fact, I inherited the ‘gouty gene’, as it is sometimes called, from both sides of my family: my father had gout while my mother’s sister suffered from kidney stones. I accordingly suffer from both. I passed my first kidney stone when I was nineteen and suffered my first attack of gout when I was around forty. Without allopurinol, therefore, my life would be almost unbearable. For although uric acid is a by-product of metabolising purines in one’s diet and, in theory, therefore, can be controlled by cutting out purine rich foods such as offal and shellfish, in practice it is a little more difficult, in that most cereals also contain purines, while sugar – particularly fructose – raises uric acid levels by another metabolic route, with the result that a lot of manufactured foods, from bread to muesli to tomato ketchup and marmalade, are also problematic.

For those of us who have the gouty gene, therefore, the development of allopurinol was a godsend, not least because it is very effective. In the thirty-odd years I have been taking it, I have not had a single attack of gout and, although I have passed three more kidney stones during this period, these were almost certainly formed before I started taking the medication. Being long out of patent, allopurinol is also very cheap: were it sold over the counter, it would cost little more than aspirin.

I make this point because, although I have to take it every day and will have to go on doing so for the rest of my life, I am not, therefore, a significant burden on the NHS. Indeed, the fact that I am now in my seventies and yet, until last year, hadn’t seen a doctor for more than seven years would rather suggest that I am just the kind of patient the NHS should want.

All this raises the question, therefore, as to why my doctor should insist on me undergoing tests before he would write another prescription for a drug that costs virtually nothing and without which I would probably be incapacitated within six months. When I pressed him on this matter, moreover, not only did he dissimulate, making vague comments about his responsibility, he even lied to me. He said that he had to make sure that the dosage I was taking – which is the standard dosage of 300mg a day – was right for me, which we both knew to be untrue. I say this because seven years ago, when I passed my last kidney stone, I distinctly remember drawing his attention to a trial carried out by Dr. Robert Lustig, a paediatric endocrinologist at the University of California, San Francisco, who used large doses of allopurinol to treat obesity in children, thereby effectively demonstrating that one cannot overdose on allopurinol at any dosage that any reasonable person would ever take. Given that, if the dosage were too low, I’d be getting gout, it follows, therefore, that the standard dosage is the standard dosage for a reason, being the minimum dosage that generally works.

The only conclusion I can draw from all this, therefore, is that, after thirty-odd years of taking a low cost medication for an incurable condition based purely on my history of suffering from the effects of this condition, NHS guidelines now dictate that I have to be tested, not just now and again, but every year in order to determine whether I still need the medication in question, even though the cost of a year’s supply is almost certainly less than the cost of the tests themselves. This is not, therefore, just an irrational application of inflexible rules, but a complete waste of time and money.

Of course, it could be argued that I may actually benefit from this and that it is not therefore totally pointless. After all, the tests may reveal something unexpected and catch an illness in its early stages. But this is precisely my point: that instead of a patient with a problem going to see his doctor to find out what is wrong, and the doctor responding by running tests to arrive at a diagnosis, the health service is now forcing doctors to test patients to find out whether a problem exists, which is not only extremely expensive but actually impairs the quality of service offered. For assuming that increased testing results in increased detection rates, unless these increased detection rates are matched by commensurate increases in resources devoted to treatment, all this does is create yet another throughput problem, resulting in longer waiting lists.

In fact, one can see this quite clearly if one compares the Department of Health’s own figures for NHS expenditure and patient waiting lists over the last few years. When the current Conservative government came to power in 2010, for instance, total NHS expenditure was £131.8 billion. This has steadily increased over the last thirteen years with the result that in 2022/23, with the worst of the Covid pandemic behind us, the figure stood at £181.7 billion, an increase of 38%. Even taking inflation into account, moreover, the increase is still around 23% in real terms. Waiting lists, however, have increased by considerably more. In May 2010, when David Cameron became Prime Minister, the number of patients waiting for consultant-led elective care was 2.6 million; in May 2023, in contrast, it was 7.57 million, an increase of around 191%.
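For anyone who wants to check the percentages, the arithmetic is simple enough (a quick sketch using only the figures just cited):

    # Percentage increases implied by the figures quoted above
    spend_2010, spend_2023 = 131.8, 181.7   # total NHS expenditure, £ billion
    wait_2010, wait_2023 = 2.60, 7.57       # consultant-led elective waiting list, millions

    print(f"Spending up {spend_2023 / spend_2010 - 1:.0%}")      # roughly 38%
    print(f"Waiting list up {wait_2023 / wait_2010 - 1:.0%}")    # roughly 191%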

Of course, it will be argued that Covid delayed a lot of treatments, creating a backlog which still hasn’t been cleared. Even by January 2020, however, before the pandemic started, the number of patients waiting for treatment had already nearly doubled to 4.57 million. What’s more, the backlog has continued to grow since the pandemic ended, strongly suggesting, therefore, that there is an underlying trend at work here that has nothing to do with Covid. In fact, every way one looks at what has happened over the last twelve to thirteen years, it has all the characteristics of a classic throughput problem, with more illnesses being detected and hence more patients entering the queue for treatment than there are resources available to treat them.

What the numbers do not tell us, however, is the effect this is having on patients. In fact, one can only guess at what it must be like to be told that one has a condition that needs hospital treatment and then have to wait for months or even years for the problem to be resolved, especially if one is asymptomatic. For these are months one could have been enjoying life in blissful ignorance. Instead, one is forced to live in a state of low level but constant anxiety which, in itself, will have an effect upon one’s health. In fact, this level of cruelty, inflicted on so many people, would be a national scandal if it were openly discussed. The fact that it is not is almost certainly due, therefore, to the fact that politicians on both sides of the House know that there is nothing they can do about it. For in order to bring resourcing levels up to those required to meet the level of demand for treatment in 2022/23, the NHS’s budget for that year would have had to have been £384.8 billion, more than double its actual budget of £181.7 billion. The only realistic solution, therefore, is for doctors to go back to merely responding to the problems presented to them, which, as long as we remain trapped in the progressive mind-set, of course, is never going to happen.

3.    How Online Technology Can Disguise Pointless Exercises

If throughput problems are one of the principal ways in which reality continually thwarts the needs of politicians and bureaucrats to constantly improve things, online systems, on the other hand, are one of the principal ways in which those same politicians and bureaucrats are then able to fool themselves into thinking that, despite reality’s obduracy, they are still actually achieving something.

As I approached my seventieth birthday, for instance, I was told that I had to apply for a new driving licence, which I initially assumed would entail me having to take another driving test in order to prove that I was still competent to drive: a requirement which irritated me almost as much as having to undergo annual tests to determine whether I still need to take allopurinol for my gouty gene. For not only do I resent being judged on my age when I’m perfectly fit and well, but it struck me that the whole business was entirely unnecessary.

I say this because if older drivers really do pose an increased risk to both themselves and other road users, this would surely be reflected in the actuarial tables of insurance companies, who would ramp up their premiums for older drivers accordingly. This, in turn, would make driving more expensive for the elderly, who would therefore be forced to gradually give up their cars. In this way, in fact, the insurance industry would – and presumably does – police road safety far more effectively than bureaucratic control ever could, making state intervention more or less otiose.

One might also pose the question as to who in their right mind would go on driving if they thought they were too physically or cognitively impaired to do so safely, thereby endangering their own lives as well as the lives of others? State intervention is surely only required, therefore, in the case of those who are not in their right minds, being too cognitively impaired to know that they are cognitively impaired, which then raises the question as to how they are still able to drive a car at all. Even assuming that such people exist, moreover, one has to wonder how they are able to function in general without family, friends or carers to look after them: people who would surely tell them that they shouldn’t still be driving and who would take the keys away from them if necessary, making government intervention again seem somewhat heavy-handed.

Despite all these very good arguments as to why reapplying for a driving licence should not have been necessary, the fact was, however, that I was legally required to do so before my seventieth birthday or be banned from driving, leaving me little choice, therefore, but to accept what I also assumed was a fact: that at some point I would have to take another driving test. When the time came to make the application, it was with some considerable relief, therefore, and no small amount of astonishment, that I discovered that all I actually had to do was go online and answer a few questions about my health, most of which I was able to answer more or less truthfully without setting off any alarm bells at the Ministry of Transport. So I filled in the questionnaire as requested and, one week later, my new driving licence duly arrived in the post.

‘Well, that was easy,’ I thought. ‘And totally pointless!’ For even if I’d had some serious disability that should have disqualified me from driving, had I been so minded, I could simply have lied about it. After all, no one checked. So what was the point of making me fill in the questionnaire? In fact, the more I thought about it, the more pointless it seemed, until it suddenly dawned on me that getting me to fill in a questionnaire was not what those who had initiated the scheme had originally had in mind when they decided that something had to be done about elderly drivers. For what I’m fairly sure they originally wanted was, indeed, to make us take another test. At some point in the process of drafting the new regulations, however, someone must have realised that, given the UK’s current demographics, there were probably as many people reaching their seventieth birthday each year as there were eighteen-year-olds applying for a licence for the first time. This meant that, if the government forced all seventy-year-olds to retake the driving test, they were going to have to double the number of driving test examiners. What they were contemplating, in other words, was another massive throughput problem.

Not, I imagine, that they would have been willing to give up there. Their next plan would have almost certainly been to make all seventy-year-old drivers undergo a medical examination, which is actually what happens in some EU countries. The problem with this, however, is that the cost would have fallen on the NHS, to which the Ministry of Health, of course, would have objected strongly. So, eventually, someone must have come up with the idea of a kind of medical self-assessment, a bit like the kind of self-assessment people undertake when filling out their tax returns, except that, in this case, the questions would be medical in nature rather than financial. What would have made this idea even more attractive, moreover, is the fact that those needing to make this self-assessment could do it online, thereby making everyone at the Ministry of Transport extremely happy in that they had not only found a solution but a high tech one.

In fact, it would have been the availability of the technology that made the whole scheme possible. For if all the questionnaires had had to be administered manually, in paper form – an option that is still available for users who prefer it – it would have been very much more expensive and may not, therefore, have gained Treasury approval, especially as the scheme didn’t actually serve any purpose other than to allow the Minister of Transport to stand up in the House of Commons and announce it as one of the many initiatives his department was implementing to improve road safety.

And that, of course, is what this is really all about. Indeed, the sole purpose of many such regulatory changes is merely to give politicians the opportunity to demonstrate how they are continually making the world a safer and better place: something which online technology now makes far easier in that it facilitates the expansion of bureaucratic control in a way that would not have been economically or practically feasible in the past. It also allows politicians to disguise the fact that this is what is actually happening. For if we believe that a proposed set of new regulations will save lives – as we are usually told they will – and if we believe that we have the technology to implement these regulations without significant cost, then, without even considering whether these new regulations represent a significant expansion of government control – and whether this is actually desirable or even necessary – our default setting is simply to allow the government to get on with it. After all, making the world a better place is a moral duty and thus takes precedence over all other considerations, especially economic considerations, which, in any case, the use of online technology is supposed to render inconsequential.

Nor am I saying that this is not the case. For while neither the bureaucracy nor the technology required to issue all seventy-year-old drivers with new driving licences is entirely without cost, it is but a drop in the ocean compared with overall government expenditure. Even if one assumes that there are dozens, if not hundreds of other such totally unnecessary initiatives across all government departments, they would not, in themselves, be the cause of the kind of financial difficulties into which many western governments are now inexorably falling. The problem is rather one of the mind-set which our dual beliefs in the benignity of progressive government and the limitlessness of our technological ingenuity have instilled in us and which leads us to uncritically accept that if we can make the world a better place then, in the immortal words of Captain Picard, we should simply ‘make it so’.

The inevitable result, however, is an inexorable growth in both the size and cost of government which, in the UK, has long since passed the point at which it could be funded purely out of taxation, forcing the Treasury to resort to more and more borrowing which, in itself, has various unintended consequences, some of which may be quite surprising.

4.    The Transmogrification of the Banking System

One of these, which will probably surprise most people, is the loss of our high street banks, the surprise being how this could possibly have anything to do with UK government borrowing. To understand the connection, however, one only has to consider how banks traditionally operated and why they no longer operate in this way today.

In the traditional model, banks used to borrow money from their depositors – mostly private citizens with either current or savings accounts – and lend it to their borrowers, mostly businesses, which borrowed money, either in the form of overdrafts to even out highs and lows in their cash flow, or in the form of long term loans to finance new plant and equipment etc. What many people do not fully appreciate about this arrangement, however, is that, traditionally, it was only from their borrowers, who paid interest on their overdrafts and loans, that banks made their money. Their depositors, in contrast, actually cost them money, both in administrative costs and in interest paid on savings accounts. The only reason banks tolerated savers, therefore, was because they needed their savings to lend to their borrowers.

In the traditional banking model, therefore, the art of banking was all about balancing one’s depositors against one’s borrowers, which was done by setting interest rates so as to achieve two objectives. The first objective was to ensure that the difference between the rate at which the bank lent money and the rate at which it borrowed money was sufficient both to cover its administrative costs and make a healthy profit. The second was to set a high enough deposit rate to attract savers while not being forced to set the bank’s loan rate so high that it deterred borrowers. This resulted in a continuous process of rate adjustment. If a bank had too many borrowers and not enough savers, it put up rates to attract savers and deter borrowers, whereas if it had too many savers and not enough borrowers, it reduced rates to deter savers and attract borrowers.
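The balancing act can be pictured as a simple feedback loop. The sketch below is a toy model rather than a description of any real bank’s pricing; the starting rates, the step size and the fixed spread are all illustrative assumptions:

    # Toy model of the traditional balancing act: one round of rate
    # adjustment. The spread (loan rate minus deposit rate) is held
    # fixed here; it must cover the bank's administrative costs plus
    # a healthy profit margin.
    SPREAD = 3.0  # percentage points; an illustrative assumption

    def adjust(deposit_rate: float, savers: int, borrowers: int,
               step: float = 0.25) -> tuple[float, float]:
        if borrowers > savers:      # short of funds: raise rates to
            deposit_rate += step    # attract savers and deter borrowers
        elif savers > borrowers:    # surplus funds: cut rates to
            deposit_rate -= step    # deter savers and attract borrowers
        return deposit_rate, deposit_rate + SPREAD

    print(adjust(2.0, savers=800, borrowers=1200))  # -> (2.25, 5.25)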

At some point, however, all this changed, the biggest single factor effecting this change being the adoption of Modern Monetary Theory (MMT) by governments and central banks as one of their principal tools for managing the economy. This is done by the central bank lowering its interest rates when economic growth is weak, making it cheaper for both businesses and consumers to borrow money, and by raising them when increases in borrowing by businesses and consumers risk causing inflation. These adjustments in the central bank’s interest rates only have these effects, however, because commercial banks have accounts with the central bank, where they can deposit excess funds when they don’t have enough borrowers, and from which they can borrow money when they have a liquidity shortage, such as when depositors withdraw their savings en masse, thereby causing a ‘run’ on the bank.

The key to understanding how MMT works, therefore, is to understand how adjustments to the central bank’s interest rates affect commercial banks. And, in fact, it is very similar to the way in which adjustments to the commercial banks’ interest rates affect their customers. When the central bank cuts its deposit rate, for instance, commercial banks get less interest when depositing funds with it, which then forces them to cut their borrowing rate so as to attract more borrowers in order to take up these excess funds. When the central bank puts up its lending rate, in contrast, this deters commercial banks from borrowing from it, forcing them to put up their own interest rates, both to attract more savers and to deter borrowers.

The problem, however, is that, over the last thirty years, most western economies have been in decline. It may not have seemed this way because MMT helped disguise the decline by prompting central banks to reduce interest rates – in order to stimulate the economy – far more often than they increased rates to prevent inflation. This, however, had the effect of ratcheting interest rates lower and lower, a process which was then further accelerated by the financial crash of 2008 when, in order to stop the rot, central banks in many parts of the world reduced both their deposit and lending rates to less than zero, such that commercial banks had to pay to park money with them, while, in some cases, borrowers were actually paid to borrow. The result was that, for more than ten years, money became so cheap that commercial banks could borrow all the funds they needed, either from each other – which increased the money supply by increasing the velocity of its circulation – or from central banks, which, of course, could simply print the money. The overall effect of this, therefore, was that commercial banks no longer needed savers, who, with interest rates so low, were no longer being paid any appreciable interest on their savings and so weren’t leaving money in the bank anyway.

In fact, even by 2008, very few people were saving money at all, preferring instead to buy whatever they needed on their credit cards: a development which also suited the banks in that savers, who had once cost them money, were now transformed into borrowers from whom they could now make money. The only fly in the ointment was that commercial banks still had to administer the current accounts of these people and maintain an expensive bricks-and-mortar infrastructure to service their needs. Given that most transactions were now card based, however – and therefore essentially electronic in nature – it was a fairly simple step to move nearly all their retail banking operations online and close down most of their branches.

Not, of course, that this as yet explains or even demonstrates a connection between the disappearance of high street banks and government borrowing. All it really demonstrates is how the adoption of MMT by central banks caused first a gradual and then a dramatic reduction in interest rates. To suppose that this was unintended, however, is to accept at face value the generally touted view that the sole purpose of MMT is either to stimulate an economy or cool it down, and totally ignores the fact that the constant ratcheting down of interest rates, and the consequent transformation of western economies from savings based to debt based, amounts to a kind of economic suicide, which no economist would recommend unless they had some ulterior reason for doing so.

What this strongly suggests, therefore, is that the primary reason for allowing central bankers and other proponents of MMT to constantly ratchet down interest rates was not to stimulate the economy – which thirty years of empirical data would suggest it never does anyway – but rather to make it possible for national governments to hide economic decline behind a semblance of prosperity by allowing them to borrow money to spend on public services and subsidised living standards at ever-reducing interest rates.

In fact, we can see this in the contribution made by the public sector to UK GDP over the last twenty-three years. In the latest year for which I have figures, 2022/23, this contribution stood at 45.6%, which is substantially lower than its peak of 53.1% at the height of the Covid pandemic in 2020/21, when many businesses in the private sector actually shut down, and is also slightly lower than its previous high of 46.3% in 2009/10, after the financial crash of 2008. Otherwise, however, it is the highest it has been during this entire century and demonstrates a clear upward trend, suggesting that, as the non-public sector parts of the economy have declined, mostly as a result of deindustrialisation due to offshoring, successive governments have increased public spending in order to take up the slack, creating more jobs in the public sector by forcing all seventy-year-old drivers, for instance, to reapply for their driving licences.

The problem with this faux economy, however, is not just that it pads out GDP, making it seem as if we are still living in a prosperous country when we have probably been in a recession since long before Covid, but that it has also increased government borrowing to the point at which the national debt now stands at £2.98 trillion or 108% of GDP. What’s more, the recent higher rates of inflation caused by Covid lockdowns and Russian sanctions have forced the Bank of England to incrementally increase its base rate to its current level of 5.25%, which has meant that the UK Treasury has had to increase the yield on its bonds, with the result that we are now paying around £100 billion a year in interest on the accumulated debt, a figure which will only get worse as older bonds are redeemed and new ones issued.
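A back-of-the-envelope check shows why that figure can only rise. The implied average rate below is an inference from the two numbers just cited, not an official statistic:

    # Implied average interest rate on the accumulated national debt
    debt_bn = 2980       # national debt, £ billion (108% of GDP)
    interest_bn = 100    # approximate annual interest bill, £ billion
    print(f"{interest_bn / debt_bn:.1%}")  # roughly 3.4%, well below the 5.25%
                                           # base rate: as older, cheaper bonds
                                           # are redeemed and reissued at today's
                                           # yields, the annual bill goes up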

Eventually, therefore, this whole house of cards must collapse, which consequently raises the question as to why our politicians do nothing about it, to which I think there are two possible answers, both of which may be correct. The first is that there is nothing they can do about it, at least not politically. For if they were to reduce public expenditure to reduce borrowing, this would also reduce GDP, officially putting us into the recession in which we have actually been for years. Without the veneer of prosperity provided by debt-funded public spending, our economy would thus be revealed as a tired old carthorse which can barely drag itself along let alone bear the weight of an overfed state: a revelation which no government could survive.

The second and not completely incompatible answer, however, is that they just do not understand any of this. They do not understand that paying someone to design and maintain an online system for administering a totally pointless set of regulations is not a valid or meaningful economic activity. Indeed, were this to happen in a commercial enterprise, that enterprise would almost certainly go broke. It is only because governments wrongly think that they can’t go broke that they not only think that they can get away with it, but even seek to justify it in the name of progress.

And it is the same, of course, with the banks. They are not going to tell you that they are closing all their branches because, after twenty-odd years of artificially low interest rates, rigged to make successive governments look good, they have finally achieved their dream of saver-free banking. No, of course not. Closing down all their bricks-and-mortar branches and putting all their retail banking business online is simply progress, to which only stick-in-the-mud old luddites like me could object. Not only does this particular kind of progress create problems for a great many people, however, it actually represents a significant threat to just about all of us.

5.    The Risks Inherent in Our Dependence on Online Systems

To see this, ask yourself what you would do if you could not access your bank account online and did not have a bricks-and-mortar branch of that bank anywhere near your home?

I ask this question because this is something that actually happened to me quite recently. Partly, I have to admit, it was my own fault, because, as I explained earlier, I am very bad at using my mobile phone for anything other than making phone calls. I simply lack the necessary dexterity. In fact, I would say that I’m all thumbs, except that, in this case, that would be an advantage. The point, however, is that although there is a mobile phone app available for accessing my bank account online, I don’t use it. Instead, I use a much older system, which involves a hand held device into which one inserts one’s bank card and then enters one’s PIN. The device then generates an eight-digit code, which one types into the login page of the bank’s website. Unfortunately, after nearly twenty years of usage, this device recently developed a fault which resulted in some of the digits in the display being illegible. This meant that I couldn’t access my account online and couldn’t therefore use my online access to request a replacement device.
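For the curious: the real card readers use a proprietary protocol belonging to the card schemes, but the general principle – a shared secret turned into a short one-time code – is the same as in the openly documented HOTP scheme (RFC 4226). A minimal sketch, purely by way of illustration:

    # One-time code generation in the style of HOTP (RFC 4226).
    # Bank card readers actually use a different, proprietary protocol;
    # this only illustrates the general principle.
    import hashlib, hmac, struct

    def one_time_code(secret: bytes, counter: int, digits: int = 8) -> str:
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # dynamic truncation
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    print(one_time_code(b"shared-secret", counter=1))  # an eight-digit code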

In fact, this is a classic systems bind of which most systems designers are well aware and of which many more people will become aware in years to come. This is the problem of needing to access a system in order to solve a problem when the problem is precisely that of not being able to access the system.

A year ago, in my case, I could have solved this problem by simply walking down to my bank’s local branch and asking for a replacement card reader. Unfortunately, my bank closed its local branch about six months ago, with the result that there are now no branches of any bank at all in the small town in which I live or anywhere along this entire stretch of the North Sea coast, from Whitby in the south to Redcar in the north, a distance of around forty kilometres.

The only option I had left, therefore, was to try to phone the bank, which these days, of course, is nowhere near as easy as it sounds. Indeed, it took me a good ten to fifteen minutes to get through the ‘call waiting’ system, listening twice to the various call options available, none of which seemed to even remotely fit what I was calling about. Nor were my problems over when I finally got through to a human being. For we then spent somewhere between five and ten minutes going through the necessary security questions, a number of which I could not answer because I had set up the answers years ago and could not remember what they were. Then, when the operator finally decided that I was who I said I was and asked me what I wanted, he spent another five minutes trying to talk me into downloading the app for my mobile phone, which, he explained, I could use instead of the card reader, giving me the distinct impression that the card readers were being phased out.

I am, of course, slightly exaggerating this whole nightmare for comic effect. After all, it’s what we do to exorcise our terrors. On the other hand, I suspect that, in outline, it is an experience most people will recognise as one that is becoming ever more prevalent in the world today, making life more difficult, not just for those trapped in this kind of systems bind but for all of us collectively.

A few weeks ago, for instance – on a Saturday of all days – the one supermarket in my small town suffered a partial systems failure, which restricted the number of payment methods that could be used. Although one could still use a credit card – as long as one inserted the card and keyed in the pin – one could not use bank cards or mobile phone apps. One could, of course, still use cash; but with no banks in town and only one ATM, which ran out of cash very quickly, this only served to make people even more anxious.

The situation was then further exacerbated by the fact that there were fewer staff than usual working on that day. Ordinarily, I get the impression that the supermarket employs about seven members of staff on the morning shift: one on a ‘served’ checkout, one supervising the self-service checkouts, one or two in the bakery, one on the kiosk, selling cigarettes and lottery tickets, and two or three replenishing the shelves. That day, however, they appeared to be at least one person short, which meant that, although they could open a second ‘served’ checkout, that was the most they could do. The inevitable result, therefore, was that all the checkouts had long queues and all the staff were clearly stressed.

Being a regular customer, who always uses a ‘served’ checkout whenever possible, not least because I like chatting to the ladies who work there, I couldn’t help feeling for them. None of them, however, were feeling particularly chatty that day. Indeed, it is something I have also noticed at our pharmacy, where the women who used to work the counter were once all middle aged and were nearly always open, therefore, to exchanging a few pleasantries. These days, however, the counter staff are nearly all teenage girls who have the necessary dexterity to use the mobile phone app upon which the pharmacy’s system is now based and are always so busy they have no time to talk.

Indeed, this may be one of the worst ways in which the combination of online systems, increased throughput and reduced staffing are affecting us. It is not just that the world is becoming more dysfunctional, it is also becoming less human and, with it, less helpful. Last year, for instance, my car failed its Ministry of Transport safety test, which left me with a bit of a problem in that the specialist garage to which I always take it was unable to schedule the necessary repairs for over two months. When I asked the manager what I was supposed to do without a car for all that time, however, his singularly unhelpful reply was, ‘Don’t know, mate. Not my problem!’, something one would never have heard forty years ago, when the answer would have almost certainly been, ‘Don’t worry, mate. We’ll sort something out’. In fact, forty years ago, no garage or any small business ever worked to such a tight schedule that they could not fit in urgent jobs when necessary. It requires a computer system and, with it, a particular mind-set to be so unrelentingly inhuman.

If conforming to systems designed to make us more efficient – rather than to meet human needs – has made us less human, however, it has also made us less competent to operate within the oldest and most fundamental system of all, that of society itself, which requires us all to undergo years of social training in order to learn how to read each other’s moods, interpret tacit signals and defuse difficult situations with just the right gesture. The problem is that, because one has to have been around long enough to compare how things are today with how they were forty years ago, many of us simply do not realise how deficient we have become in this regard and how much less friendly the world has become as a consequence. Indeed, it’s doubtful whether many of us even suspect that the technological changes of which we are aware have been accompanied by social changes of which most of us are entirely oblivious. In fact, most people probably assume that whatever social changes have occurred over the last forty years have been for the better. After all, didn’t people use to be far more racist, sexist and homophobic back then? And aren’t we all now more enlightened and tolerant?

Enlightenment and tolerance, however, are not what makes the world a friendlier place, which is far more accurately measured in terms of the degree to which people are able to relax and not worry. It’s about knowing that if one gets into trouble, there are always people to whom one can turn without having to battle one’s way through a call waiting system only to be told that the earliest available appointment is in two weeks’ time. It’s about not being shut out of an essential service because one has entered the wrong password and cannot now rectify the situation because there are no human beings to whom one can talk in order to sort the problem out. It’s about not having to perform completely pointless tasks because that is what the rules dictate and is therefore how the system has been set up. More than anything else, however, it is about not becoming isolated by the very online technology which is supposed to bring us together but which, as I see it, is increasingly having the opposite effect.

Monday 8 April 2024

The Third Era in Relations Between Men & Women

 

1.    Why We All Have Twice As Many Female Ancestors As Male

It used to be called ‘The Battle of the Sexes’ and was thought to be due to fundamental differences in the way in which men and women think and in what they most value and want in life.

Today, of course, in this post-feminist age, this view is no longer fashionable. For if there are such fundamental differences between men and women, not only might they give rise to discrimination, but they might even be used to justify it. Even obvious physical differences, as a consequence, are now downplayed, while sex itself, in terms of both gender and sexuality, is regarded as a mere social construct.

The question this immediately raises, however, is what then caused the battle of the sexes, if it ever existed? For if there are no fundamental differences between men and women, why, on occasion, do we get so exasperated with each other? Nor is this question answered simply by blaming men for their old-fashioned, sexist attitudes towards women. For not only does this fail to address male exasperation, but the suggested solution, that men should amend their ways and become more like women, is more or less an admission that men and women are, indeed, different.

More to the point, by continuing to deny that this difference exists, we make no progress towards understanding it. And if we do not understand it, we can do very little to either avoid or resolve the problems to which it may give rise. Indeed, it could be argued that by continuing to deny its existence and simply blaming all the consequent problems on toxic masculinity – a clear admission, in itself, that fundamental differences between men and women do, in fact, exist – we may actually be making the problems worse, as is clearly demonstrated by the well-documented rise in misogynistic online trolling. Because we do not understand such behaviour, however – and cannot understand it as long as we persist in our denial – the only way in which we can respond to it is by labelling it a hate crime, as if this in any way explains what is actually going on.

There is, however, a fairly simple explanation as to why, under certain circumstances, men can, indeed, come to hate women, which, once we accept that there are fundamental differences between the sexes, does not require any detailed psychological analysis of the individuals concerned or, indeed, any theoretical underpinning at all. For it stems almost entirely from an asymmetry in the relations between men and women I first noted in ‘Why We All Have Twice As Many Female Ancestors As Male’, an essay I wrote a little under four years ago after coming across a couple of scientific studies carried out by completely separate research teams at the Max Planck Institute for Evolutionary Anthropology in Leipzig and the University of Arizona, both of which compared the genetic diversity of Y chromosomes, which pass only from father to son, with that of mitochondrial DNA, which is inherited only from the mother.

Although both studies thus used the same basic methodology, making the congruence of their findings slightly less compelling than it would have been if they had arrived at their conclusions from different directions, the fact that the studies were completely independent of each other nevertheless makes it more or less certain that what they each discovered is correct: that all human beings alive today, with some small regional variations, most notably in East Asia, do, in fact, have twice as many female ancestors as male.

The question to which this naturally gives rise, however, is why this should be so: a question I did not think was adequately answered by any of the explanations offered in the literature at the time, all of which either relied on anthropological assumptions which could not be verified or seemed to me intrinsically implausible. I therefore decided to explore an explanatory theory of my own, one which did not rely on unsupported assumptions for the simple reason that it was based on a single undeniable premise: that while it is to the advantage of any species that reproduces sexually to have as many of its females bear children as possible, thereby ensuring that the next generation is as well populated as it can be, there is no such advantage to be gained from having all the males reproduce. On the contrary, there is actually a distinct evolutionary benefit to be gained from restricting the number of males who become fathers so that only the strongest, fittest and best adapted pass on their genes, thereby using this otherwise largely superfluous male side of the mating equation to weed out genetic weaknesses.

I called this theory Male Accentuated Natural Selection (MANS) and went on to describe the two main mechanisms by which it is achieved. The first and most commonplace is simply to have the males of the species fight each other for the right to mate the females, a strategy which works particularly well in species in which the breeding males do not need to play any significant role in either rearing their offspring or in providing for their offspring’s mothers. Fairly obvious examples of this are grazing herbivores such as deer, cattle, buffalo and antelope: species in which the females suckle their young and where their own food is both plentiful and easily obtained.

In species in which the females do not suckle their young, in contrast, and where both parents are required to hunt for less easily obtained food – both for themselves and for their offspring – a slightly different mechanism has to be employed. The males still compete with each other for the right to mate, but in order to avoid the males being injured or driven off, the emphasis is placed on the females then choosing from among them. Good examples of this are to be found in most species of bird, where the males demonstrate their health and fitness by displaying their often elaborate plumage to the far less gaudy females, or by showing off their skills in nest building.

The biggest consequential difference between these two mechanisms is the fact that those species in which the males actually fight each other for the right to mate usually have far larger ratios between their female and male ancestors than those species in which the females select their mates. It wouldn’t surprise me, for instance, if the ratios in many species of deer weren’t in the hundreds, while many species of bird mate for life and have only one mate each. Although the MANS principle still applies in this case, with unfit males remaining unselected, each unselected male also results, of course, in a non-breeding female, thereby keeping the ratio of female to male ancestors down at 1:1.

There are, however, some species in which both of the MANS selection mechanisms can be found, either singly or in combination. This is especially true in the case of predators that hunt in packs, such as most canine species and some species of primate, including homo sapiens, where the males still, on occasion, fight each other for the right to mate with the females, but where female choice is the primary or dominant selection mechanism for the very good reason that, for the pack to be successful in hunting, unsuccessful males must still remain within it and cannot therefore be severely injured or driven off. As a result, the females of most successful species in this category are usually able to hold their own against the males, particularly against weaker males, enabling them to fight off the attentions of unwanted suitors while acquiescing in the attentions of the males they select. In this way, the successful males, while perhaps asserting their authority with the occasional growl, seldom have to actually fight off rivals and risk injury.

This, however, raises the question as to why the instantiation of the MANS principle through female choice in humans should lead to a ratio of 2:1 in the number of female to male ancestors rather than the ratio of 1:1 it leads to in birds. There are, however, a number of significant differences between birds and pack animals in other aspects of their lives. While, in the case of birds, the need for both parents to collect food for their offspring leads to pair-bonding, for instance, this is not quite the same in the case of pack animals, where the whole pack is required to work together in order to mount a successful hunt and where no special relationship is therefore required to bind together any two individuals. What’s more, all pack animals suckle their young, which means that the males do not need to play any part in rearing their offspring. Most important of all, however, is the fact that during the later stages of pregnancy and while they are nursing their young, the females of most pack animal species are unreceptive to sexual advances, with the result that the males which fathered their offspring now turn their attention towards other, more receptive females, who are willing to accept their advances for exactly the same reason as the female who has just given birth to their progeny: because they are fit and strong and highly attractive to females who want the best mate they can get. The result is that highly attractive, high status males end up fathering offspring on multiple females while lesser males are continually rejected.

It may, of course, be objected that this is not how things are in the case of human beings. After all, most of us do pair-bond and mate for life, just like birds. There are, however, a number of highly persuasive reasons for supposing that this is a relatively recent development in our history, one which only dates back to the end of the last ice age, around 11,000 years ago, when a warmer climate allowed us to switch from being nomadic hunter-gatherers to settled farmers, making the ownership of land critical to our whole way of life. Fathers who wanted to pass their land on to their sons therefore also wanted to be sure that their sons really were their own and not somebody else's: a requirement which could only be met through the institution of marriage, in which women were not only required to be virgins before they married but then had to stay faithful to their husbands for the rest of their lives. That severe punishments had to be meted out to any woman who strayed strongly suggests, moreover, that women, just like men, are not naturally disposed to being constrained in this way.

If the institution of marriage was only introduced when we started owning property, as seems highly likely, this then has the further implication that hunter-gatherer families or clans were almost certainly ‘matriarchal’ in structure, a term which is very often misunderstood. Many people assume, for instance, that it means that the ‘matriarch’ or women in general were somehow in charge: they weren’t. It was just that all familial relationships ran through the female line, with fathers not actually being recognised.

This does not mean, of course, that children did not have biological fathers or that the biological function was not understood. Without a rigorously enforced institution of marriage, however, one could never be absolutely sure who the father was, with the result that no familial relationship between a man and a child was ever assumed unless it was mediated or transmitted via a woman. While most children would have had brothers, for instance – male siblings born to the same mother – and maternal uncles – the brothers of their mother – they did not have paternal uncles – the brothers of their father – because their father simply wasn't recognised as such.

This then had the further implication that, because a child’s biological father was not recognised as having any familial relationship to the child, he also had no role or responsibilities with respect to looking after the child. That role, in fact, fell to the child’s maternal uncles who, in the absence of fathers, were responsible for providing both for their sisters and their sisters’ children: a familial responsibility which led to some very strange taboos, as anthropologists studying matriarchal societies in the late 19th and early 20th centuries discovered.

Even when a woman was in a sexual relationship with a man, for instance, she was not allowed to share meals with him. This was because, had she done so, the man's family would have been entitled to demand some form of payment from the woman's family for supporting her: something upon which her brothers and maternal uncles would not have looked kindly and which they therefore did everything possible to prevent. The result was that even when a woman became pregnant, she still remained within her matrilineal family, which meant that, in the later stages of pregnancy in particular, she very often lost touch with the child's father. By the time the child was weaned and she was ready to become sexually active again, therefore, the chances are that, even had she wanted to resume relations with the father, he, just like the males of other pack animal species, would already have moved on, especially as, having already fathered a child, he is likely to have been a male of high status and value.

I say this because, from puberty onwards, most women in matriarchal societies would have spent most of their lives either pregnant or nursing a small child. While most fit and able men would have been ready for a sexual relationship at almost any time, most women, therefore, would not. This meant that those women who were available had the pick of the men and could always choose the strongest, fittest and most attractive, even if they were only of average attractiveness themselves. The result was that the same high value, high status men were chosen every time, with the further consequence that most men never got to mate and thus pass on their genes, thereby explaining why, today, we all have twice as many female ancestors as male.

2.    The Pros & Cons of Marriage

In fact, 10,000 years ago, when the gradual introduction of monogamous marriage started to bring down the ratio, the proportion of females to males in our ancestry would probably have been even higher. When I first started researching this subject, I came across one article which estimated that the ratio could have been as high as 17:1, which would have had some seriously negative consequences for most hunter-gatherer clans.

I say this because while most women and a small number of men may have been perfectly happy with this situation, the vast majority of men, of course, would have been frustrated, angry and resentful, and would have almost certainly tried to force themselves on women whenever there was an opportunity. This is because, unlike female wolves, for instance, which are about the same size as their male counterparts and can therefore fight off unwanted male advances, human females are, on average, much smaller than men and are thus relatively easy for men to subdue.

This disparity in size stems from the fact that the evolutionary success of human beings is almost entirely due to our large brains. For women, however, this comes at a very high price. Not only is the gestation period of a human baby much longer than that of a wolf, but human babies are much larger at birth than a litter of cubs. This means that human females have to cope for much longer with a much larger distended belly than female wolves, making it almost impossible for them to join their men in hunting. What's more, human offspring are dependent on their mothers for much longer than the offspring of most other species, further curtailing what their mothers are able to do even after the physical restrictions of pregnancy are over. Thus, while there is an evolutionary advantage to be gained by human males being as large, strong and fast as possible – in that it makes them more capable hunters – there is no such advantage to be had from human females being large, strong and fast, in that, for most of their lives, they wouldn't be going hunting anyway.

In fact, there is actually an evolutionary advantage to be had from women being small. For the greater one's lean body mass, the higher one's metabolic rate: the amount of energy required to maintain the body in its current state. At a time when the amount of food available was never certain, there was a far greater chance of a woman and her unborn child surviving, therefore, if the woman was small and her own nutritional requirements commensurately low.
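
By way of illustration only: plugging plausible modern body sizes into the Mifflin-St Jeor equation – a present-day clinical estimator of resting energy expenditure, used here simply to show the scale of the effect, and obviously not something our ancestors knew anything about – suggests just how much food a smaller body saves:

```python
def resting_energy(weight_kg, height_cm, age_yr, female=True):
    """Mifflin-St Jeor estimate of resting energy expenditure in
    kcal/day: 10W + 6.25H - 5A, plus 5 for men or minus 161 for women."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + (-161 if female else 5)

# Illustrative figures only: a smaller body needs markedly less food
# simply to stay alive - a real advantage when food is scarce.
print(resting_energy(75, 175, 25, female=False))  # ~1724 kcal/day
print(resting_energy(55, 160, 25, female=True))   # ~1264 kcal/day
```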

The problem, of course, was that this put women in an extremely onerous position. For while they were required to select the men with whom they were willing to mate – in order to prevent men fighting and possibly killing each other for that honour – they were not equipped, in the way that other pack animals are, to defend themselves against the men they rejected. In fact, it is unlikely that we could have evolved in the way we did if women had not also had the protection of men: protection provided, not by the males within their matrilineal families, who are more likely to have been among their abusers, but by those fit, strong, alpha males whom they regularly selected as mates, and who would have had a vested interest in ensuring that the small number of women available to them at any one time were not removed from the mating pool for any significant period as a result of being made pregnant by men whom they regarded as their inferiors.

Indeed, this would have been another reason why women would have always chosen the strongest, fittest and fastest men with whom to mate, in that they would then have fallen under their protection. It would, however, have created a considerable amount of tension and incipient violence within the clan: so much so that a shrewd clan leader would have understood that, in order to maintain his position, he needed to channel the aggression of his fellow clansmen in the direction of other clans, keeping them loyal to himself by offering them the prospect of abducting another clan's women, who, in the short term at least – until they were assimilated into their new clan – would then have been used to keep the clan's lesser males happy. So essential would this have been to the stability of the clan, in fact, that it would almost certainly have been standard practice, especially as it was also standard practice for hunter-gatherer clans to split into two groups each day, with the men going off to hunt while the women and children went foraging, making the latter very easy targets.

Far from the peaceful, pastoral existence it is so often depicted as being, the life of a hunter-gatherer was thus extremely violent. In fact, another reason why we all have twice as many female ancestors as male would have been the very high mortality rate among men, with males not only being killed while hunting and in wars between clans over women, but also during the abductions themselves, when the male children within a foraging group would have been killed to prevent them growing up to wreak revenge upon their captors.

For men in particular, therefore, the transition from hunter-gatherer to settled farmer and, with it, the transformation of the entire structure of society would have been a great improvement, not least because it also transformed what had once been a purely physical competition between men into an essentially economic one, thereby enabling far more men – and men of a very different calibre – to win the favour of women. For in order to marry and have a family, it was now no longer enough for a man to be fit and strong and therefore able to fight off his competitors; if he wanted to lure a woman away from the relative security of her existing family, he now had to provide her with a home and a means of support: a new and additional requirement on men which was eventually to become the biggest economic driver in human history, incentivising men not just to work hard and scrape together a living, but to be inventive, resourceful and capable of building the kind of life a man could ask a woman to share.

Not, of course, that this happened overnight, not least because, in any property or capital owning society, some men are inevitably more successful than others, with the result that throughout most of our pre-industrial history, wealth tended to accumulate in the hands of a small number of families, who then very often used it to prevent others from improving their economic position, thereby maintaining the status quo. Because this also had a tendency to lead to economic and social stagnation, however, it also led to such societies being regularly taken over – often quite violently – by a technologically more advanced society, which would have been more technologically advanced precisely because it allowed those inventive and resourceful individuals who were capable of driving the society forward to rise within it.

A perfect example of this is ancient Rome, which came to dominate the Mediterranean world very largely because, during its republican period, it fostered a highly dynamic and entrepreneurial middle class, which, among its many other technological achievements, invented a hydraulic-setting cement which, when added to an aggregate, formed what the Romans called opus caementicium but which we now know as concrete. This, in turn, allowed other middle class entrepreneurs to build aqueducts and sewers, apartment blocks up to seven storeys high and, of course, Roman roads, which allowed Roman generals to move armies at far greater speeds than any of their competitors and hence conquer most of the known world. Thus, while it may have been aristocratic members of the senatorial class who led Rome's armies, it was middle class engineers and businessmen who, by continually striving to improve their station in life in order to marry and have a family, made it all possible.

The entrepreneurial inventiveness of competitors, however, was not the only driving force by which restrictively conservative and hence socially and economically stagnant societies could be overturned. Throughout history, this has also happened as a result of natural disasters, one of the most historically significant of which was the bubonic plague pandemic of the mid-14th century, which effectively brought centuries of feudalism to an end. Up until then, the vast majority of people in Europe had been bound for life to their feudal lords, whose estates they were required to work in exchange for being allocated small strips of land which were barely large enough to keep them alive, let alone yield a profit. Because they neither owned the land on which they worked nor had the right to leave it, this meant that they were trapped in a state of serfdom for their entire lives without any prospect of economic or social improvement.

With respect to marriage, this meant that women were more or less in the same position they had been in as matriarchal hunter-gatherers. For, unable to choose a mate on the basis of his position or prospects, they ended up once again choosing the biggest, strongest and fittest man they could get, not only because his very fitness made him physically attractive but because it also made him capable of both fulfilling his duty to their feudal lord and producing as much food as possible from their own meagre plot of land.

This all changed, however, between 1346 and 1353, when the Black Death swept across Europe killing an estimated fifty million people – roughly half the population – leaving most feudal lords with insufficient serfs to work their estates, thereby creating a demand for labour which eventually brought feudalism to an end. For while some feudal lords tried to maintain the traditional feudal order – often by making terrible examples of any serf who tried to run away – others somewhat predictably adopted the more devious and temporarily more successful strategy of poaching what serfs remained from their neighbours, usually by offering them more land.

The problem with this strategy, however, was that it was therefore the aristocracy themselves who first broke the rules binding serfs to their masters: a contravention of the supposedly unalterable, because divinely ordained, social order which the serfs, of course, couldn't help but notice, making them realise that they were not, after all, bound for life to a single master, that they could move from one master to another or, indeed, from one master to no master at all.

Again, of course, this did not all happen overnight. To varying degrees, the emancipation of their serfs was resisted by the aristocracy all across Europe. Once the idea of the sanctity of serfdom had been broken in the minds of free men, however, it could not easily be reinstated, especially when the beneficial economic effects of emancipation began to be seen. For it wasn't just that peasants could now move from one landowner to another; they could actually leave the land altogether and make their way to the cities where, as a result of more and more men being willing to risk their lives on voyages of discovery for the chance of making their fortunes, they could obtain paid employment in any one of the growing number of industries involved in foreign trade.

In 15th and 16th century London, for instance, they could always find work on the docks, or in ship building, or in sail or rope making, or in one of the many iron foundries that were then springing up across the city in order to supply the growing number of factories producing both tools and weapons. Then there were all the trades required to clothe, feed and house all this industry: the butchers, bakers, brewers, tailors, carpenters and stonemasons. Even more importantly, with the economy expanding so rapidly, there was always the possibility that, having learnt a trade as an apprentice or even as a mere labourer, an enterprising young man could go into business for himself, thereby adding to the burgeoning middle class which, just as in ancient Rome, was rapidly becoming the economic driver for an entire civilization. And just as in ancient Rome, what ultimately lay behind all this enterprise was, of course, the desire of most men to get married.

If there was a negative aspect to this revolution, in fact, it was with respect to women themselves. For while marriage was a major goal in life for most men, for women it was somewhat different. I say this because for many women, especially middle class women, for whom there were very few employment options, it was more of a necessity. For having been removed from the matrilineal families in which they had previously spent their entire lives – enjoying the freedom to have sex whenever they wanted it, with the assurance that their children would be looked after by their mothers, brothers, sisters and maternal uncles – women who now failed to find husbands were condemned to a sexless and childless life in their parental home, where they would eventually be considered 'old maids'. If a woman had a brother, who would one day inherit the family home, the prospect was even worse. For she would then end up living with him and his wife, who, being mistress of her husband's household, quite naturally took precedence over her.

Faced with this prospect, it is hardly surprising that many women accepted proposals of marriage from men they certainly didn't love and possibly didn't even like: an arrangement which would then have been further soured if the husband turned out to be less good at providing for his wife than he had made out in his suit, causing the marriage to spiral into a loveless pit of mutual recriminations and reciprocated disaffection from which neither husband nor wife could ever escape.

Not, of course, that things were very much better for working class women. For while they could find employment and manage on their own if they had to, in many cases they had no choice. I say this because, except in the countryside, where landless farm labourers often received 'tied' cottages as part of their remuneration and could at least, therefore, provide a wife with a roof over her head, throughout most of the 16th, 17th and even 18th centuries, few working class men could actually afford to get married. In cities like London, in fact, the best chance most working class men and women had of marrying was to work in service in a household where they could either gain employment as a married couple or meet and get married with their employer's consent. For many working class women, however, the only way they could survive in many of Europe's larger cities was by resorting to prostitution, which flourished in Elizabethan London, for instance, precisely because there were so many working class men who could never afford to support a woman on a full-time basis and whose only chance of ever getting one was therefore to pay for her by the hour.

It wasn't until the industrial revolution started to dramatically increase overall wealth throughout the economy, therefore – not only elevating more people into the middle class but also making it possible for more working class people to marry – that marriage actually began to fulfil its potential, not only as an economic driver, but as a satisfactory solution to both men's and women's sexual needs. For as more men could afford to marry, women had more men to choose from. With more men to choose from, they therefore had less need to fear being 'left on the shelf' and could consequently take more time and care in making their selection. The more careful they were in their choice, the more likely they then were to end up in a happy and fulfilling marriage. And with happy wives, men, too, were happy.

The only problem was, of course, that women still didn't really have a choice as to whether to get married or not. Even in the first half of the 20th century, employment prospects for women were still poor, not least because, fearing that women would cease working when they got married – if not immediately, then when the first child came along – employers were reluctant to invest time and effort in training female staff. And because women couldn't risk getting pregnant unless they had someone they could rely on to provide for them while they couldn't work, this more or less forced them to get married if they wanted to enjoy either a decent standard of living or a sex life.

Then, in 1960, the first reasonably safe, reliable and easy-to-use oral contraceptive came on to the market and the world changed as profoundly as it had done when we stopped being matriarchal hunter-gatherers and became patriarchal farmers.

3.    The Need for a New Relational Paradigm

Again, it did not happen overnight, not least because, in this case, it proceeded in a series of cyclical feedback loops. The introduction of the pill meant that women could delay getting married and having children until much later than had previously been the case. This in turn meant that employers were more open to employing women in positions that were previously closed to them, thereby giving them more economic independence and allowing them to extend their period of sexual freedom still further. In this way, incremental increases in both sexual freedom and economic independence continually reinforced each other. They did so, however, in a way that was so gradual that hardly anyone would have noticed had the transformation not been so well publicised.

Even then, for those of us who lived through it, it seemed strangely remote: something that was happening somewhere but not wherever we were. When I was a teenager in the 1960s, for instance, swinging London, hippy culture and the summer of love were things we only watched on television. In small towns all across Britain, young men and women still assumed that, one day, they would get married and have children, just like their parents. Even when we went to university, for most of us ‘free love’ seemed more like a sensationalised projection of the media than anything real, especially as, by that time, a new socially conscious feminism had arrived on university campuses, which not only regarded hedonistic promiscuity as sexually exploitative of women but took the view that any man who didn’t take feminist politics seriously was a male chauvinist pig who wouldn’t, as a consequence, be getting any kind of love at all.

The result was total confusion. Most of us didn't have a clue what was going on or how we were supposed to behave, and those who did were beginning to question whether the sexual revolution wasn't a two-edged sword, especially for men. For while there were some men who were undoubtedly enjoying the benefits of being serially selected by a whole procession of women eager to exercise their new sexual freedom, these were the same men they had always been: fit, attractive and self-confident, with all the trappings of success which such attributes brought with them. Most ordinary, averagely attractive, run-of-the-mill men simply weren't in that league. Worse still, traditional marriage, which had been the average man's salvation for hundreds of years, allowing him to win a woman's favour simply by proving to her that he would be a reliable, hard-working father for her children, now appeared to be slipping away from us. For if women were economically independent, they had no need for such a man or, if they did, it would be on radically different terms. For most men, therefore, it looked like we were heading back to the bad old days of the matriarchal hunter-gatherers, or would have done if we had actually thought in those terms.

The one thing in our favour – if a favour it actually was – was that the cycles of social transformation were not yet over. For as I have explained elsewhere, the fact that more and more women were now going out to work meant that the huge amount of goods and services they had once produced in the home without financial remuneration – from home-cooked meals, to homemade clothes, to the care of both children and the elderly – were now being produced by the same women in paid employment. This meant that although there was no great increase in production – in that there was no great increase in the amount of these goods and services being consumed – there was an enormous increase in the money supply, resulting in massive but hidden inflation, which was not picked up by any price index due to the fact that all such indices only measure increases in the price of the same or similar goods and services over time, not the increased monetary cost of substituting home-produced goods and services with their new commercially produced equivalents.
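
A toy calculation – the numbers are entirely invented – makes the measurement gap plain: if a meal's market price never changes, a price index registers zero inflation, even while the money actually changing hands soars as home production is replaced by purchase:

```python
# Hypothetical numbers, purely to illustrate the measurement gap.
meal_price_then = 5.0  # market price of a cooked meal, year one
meal_price_now = 5.0   # unchanged in year two -> index sees 0% inflation

bought_then = 100      # most meals home-cooked, i.e. no money changes hands
bought_now = 700       # every meal now purchased

index_change = meal_price_now / meal_price_then - 1
outlay_change = (bought_now * meal_price_now) / (bought_then * meal_price_then) - 1

print(f"measured inflation:        {index_change:.0%}")    # 0%
print(f"change in monetary outlay: +{outlay_change:.0%}")  # +600%
```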

The result was a massive reduction in the real value of people’s wages, which meant that while a family with three, four or even five children had lived quite satisfactorily on the pay packet of one wage earner in the 1950s, by the 1990s, a family with just one child could barely manage with both parents working. This also meant that the economic independence of young women didn’t actually last very long. For over the same period, it became increasingly difficult for a single wage-earner, living on their own, to cover the cost of both their accommodation and all their other living expenses. The result was that most young people – both men and women – either had to share accommodation, dividing the rent between them, or remain at home, living with their parents in a kind of enforced extended adolescence, which has been getting steadily longer ever since.

This is largely because the devaluation of money and the need for both partners in a marriage to go out to work also resulted in increased government expenditure, particularly on things like child care and care for the elderly. This, in turn, naturally led to increases in taxation which then put up the cost of just about everything else. The result was that corporations throughout the developed world started relocating whatever production they could to countries with lower overheads. And although this was primarily restricted to large scale manufacturing operations, it inevitably had an effect on traditional supply chains, which often consisted of small to medium-sized businesses in the vicinity of the now relocated factories. These too, therefore, disappeared, along with thousands of highly paid jobs in both engineering and management, effectively hollowing out the middle class across both Europe and America.

Not, of course, that all such high paid jobs in industry were lost. Some, like those in construction, transport and the utilities, simply couldn’t be offshored and have therefore continued to provide the kind of employment which can still more or less support a traditional family. In most western countries, however, most of the employment that remained was in the service sector, where the jobs could be roughly divided into three main categories: high end service sector jobs such as those in finance, the media and central government; mid-tier service sector jobs such as those in law, medicine and higher education; and low end service sector jobs such as those in retail, hospitality, healthcare and social services.

Like the remaining well paid jobs in industry, the mid-tier service sector jobs in law, medicine and higher education are also sufficiently well paid to support a traditional family and, together, these two sectors of the economy constitute what is left of the middle class, which has remained relatively unchanged in its values and lifestyle over the last sixty years. The difference in pay between the high end and low end service sector jobs, however, represents a massive polarisation of society between those at the top and those at the bottom which has also given rise to a huge divergence in the values and approaches to life of the two groups, including their approaches to relations between men and women.

Of course, it could be argued that there has always been a substantial difference between the values of the upper class or aristocracy and those of the working class, and that this difference has always extended to the relations between men and women. After more than a century in which all layers in society subscribed to what was more or less the same social paradigm, however – one based on traditional marriage and the traditional family – for which the British Royal family was actually required to be a role model, forcing Edward VIII to abdicate in order to marry a divorcee, the divergence between the millionaire class and the rest of us, not just in wealth but in social attitudes, has never been so stark.

One sees this most clearly in the attitudes of the two classes towards money, with those at the top placing considerably more emphasis on its acquisition than those struggling to get by: a difference in attitude which may initially seem somewhat counterintuitive in that one would have assumed that it would be those without money who would place more emphasis on it. For those at the top, however, neither money, itself, nor its conspicuous expenditure on everything that can be tastefully bought with it, has anything to do with the pleasure or satisfaction to be gained from owning material possessions. For both men and women, in fact, it has far more to do with the social status such possessions bestow on those wealthy enough to afford them. After all, one does not buy a Rolex because it is a good watch, but because it is a Rolex.

If it is status, rather than money, that both men and women in the upper echelons of society primarily seek, however, the value they each place on it is slightly different, with women nearly always regarding status as an end in itself – a sign in a feminist age that they have 'made it' – whereas for men, its value lies predominantly in what it brings them: the interest of women. In fact, for women, having a high status may actually have a negative effect upon their personal lives, in that, due to our evolution as pack animals, most men will not approach a woman with a higher status than their own. A high status woman who wishes to have a sex life, therefore, either has to find herself a 'toy boy' – a lower status male who is willing to accept the disdain of other men and, indeed, of many women, for trading on his good looks and physical fitness to obtain other benefits – or attract a mate of status equal to or higher than her own: something which is not impossible – one does occasionally come across 'power couples' who are celebrated for their almost fairy tale relationship – but is very rare. This is largely because, by not committing to such a relationship, high status males can of course allow themselves to be serially selected by an endless procession of attractive women who have been drawn to such men ever since our species first drew breath.

No matter which category they fall into, for most women at this top end of society, therefore, the result is less than satisfactory. A high status woman can always, of course, bask in her achievement and focus on her career; but she will very probably end up doing so alone. Similarly, those very attractive women who are able to serially pursue high status males may well have a lot of fun in doing so, enjoying all that life has to offer; but at some point, as I have heard many of these women complain, they hit what is known as ‘the wall’. As their biological clocks tick down and they start to feel the need for a more serious and permanent relationship, the high value males they pursue simply stop asking them out and turn their attention, instead, to younger models.

The mistake this second group of women make, however, is not merely that of failing to realise that the men they pursue have no reason to change their behaviour; it is that the change they want from these men is not merely behavioural: it is a change in the paradigm upon which the relations between men and women in this group are largely based. I say this because, at the risk of stating the obvious, up until the point at which these women discover that it is no longer working for them, the paradigm upon which they had been basing their lives was clearly a modern version of the matriarchal paradigm, in which both men and women enjoy serial affairs without commitment, the main difference being that, whereas in the prehistoric version it was the men in a woman's matrilineal family who were responsible for supporting her, in the modern version the women have to find some way of supporting themselves, even if part of the means by which they achieve this is provided by whoever picks up their hotel and restaurant bills and pays for the flights to their latest holiday destination. The problem is that, while women may still expect men to pay for them in this way, it is quite another thing for them to expect men to actually revert to the traditional patriarchal paradigm, in which men commit to taking economic responsibility for women in return for what, for the price of a shopping trip to Dubai, they are already enjoying, especially as the eventual and highly predictable termination of this commitment is likely to be very expensive.

More to the point, these two paradigms do not merely characterise two different ways in which sexual relations between men and women may be conducted, they are actually the bases of two completely different social orders – those of the matriarchal clan and the patriarchal family – which are not only incompatible but are highly inimical to each other, making it very difficult for them to stably coexist. What this suggests, therefore, is that, in the higher echelons of our current society, their present coexistence represents what is still very probably a transitional phase, in which either of the old paradigms may yet prevail or an entirely new paradigm emerge.

Nor are things significantly less complicated at the other end of the economic spectrum, where those in low end service sector jobs have no choice but to form stable, long term economic partnerships if they are ever to escape the parental home or the accommodation they share with friends and establish a home of their own. The problem in their case, however, is that, after going to university and obtaining a degree which adds nothing to their earning potential while saddling them with a whole load of debt, most of them don’t realise the truth about their situation until they are in their early thirties, when they may then have a great deal of difficulty finding the right economic partner.

This is partly due to the fact that, in going to university, they may well have become separated from the group of friends with whom they grew up, especially if they did not return to their home town on completing their degree but found a job elsewhere. Without an existing network of friends, this then makes it very difficult to make new friends, a problem which is not significantly ameliorated by social media and online dating apps, not least because online dating apps, in particular, tend to promote dishonesty and superficiality.

When choosing a photograph for our online profile, for instance, we always choose the most flattering. Similarly, when providing our personal details and a list of interests, we tend to emphasise those activities which make us seem more interesting, even if we have only ever watched them on television. Worse still, when it comes to making our own selection of other people's profiles, we give most of them about two seconds' consideration before moving on with a dismissive swipe.

Is it any wonder, therefore, that most people are generally disappointed by the whole exercise? More to the point, this is not the way to meet the economic partner we are going to work with and rely on for the rest of our lives, who is far more likely to be found in a social setting which, initially at least, has nothing to do with dating. For it is only when we interact and converse with others in a less artificial context, in which we are not actually trying to impress anyone, that we tend to reveal our true selves while simultaneously discovering qualities in others which we would never have gleaned from an online profile.

Even if people are lucky enough to find the right partner for themselves in this more informal and old-fashioned way, however, this is only the beginning of a couple's uphill economic journey together. For even with their two salaries, they are probably going to have to work all the hours they can, not only to cover their current living expenses, but to put money aside for a deposit on a home of their own, something which can take years and will probably require them to make considerable sacrifices, like not going out at weekends, for instance, or taking holidays, all of which will put enormous strain on their relationship and may eventually lead them to ask the all-important questions: 'Why are we doing this? What is it all for?' These are questions which would once have prompted the simple and unequivocal answer – 'For the children' – but that answer is now less and less applicable. For having only met in their early thirties and spent years saving up to put a deposit on a house, many couples today will never actually have children.

In fact, the cost of buying a home and the need for couples to put off having children for as long as possible is one of the main reasons why the fertility rate in most developed countries is well below the global population replacement rate of 2.1, with the UK’s fertility rate currently standing at just 1.56. While this may represent a serious demographic problem for national governments, however, for married couples it can actually be disastrous. For while children can put additional strains on a marriage, forcing couples to work even harder, they also provide a common purpose without which many couples may simply drift apart or even come to the conclusion that there’s just no point in carrying on.
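
The arithmetic behind that concern is easily sketched. Assuming, purely for illustration, a constant fertility rate and no migration, each generation is roughly TFR ÷ 2.1 times the size of the one before:

```python
# Back-of-envelope generational arithmetic (illustrative assumptions:
# constant fertility, no migration, replacement rate of 2.1).
tfr, replacement = 1.56, 2.1

cohort = 1.0
for generation in range(1, 4):
    cohort *= tfr / replacement
    print(f"after {generation} generation(s): {cohort:.0%} of today's birth cohort")

# ~74%, ~55%, ~41% - the birth cohort roughly halves in two generations.
```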

One possible solution, of course, is for couples to find some other project they can work on together, one very obvious candidate being that of starting a business, which, if successful, would have the additional benefit of improving their economic position. Indeed, it’s possible that, just as the lure of marriage was once our civilization’s biggest economic driver, motivating men to succeed in business so as to be able to afford a wife, so the need for married couples to find an alternative purpose in life may now not only help reverse the West’s economic decline but may actually come to constitute a new paradigm for relations between men and women: marriage not just as an economic partnership but as a business partnership.

Not only does starting a business require a high degree of business acumen, however, as well as a huge amount of determination and hard work, but it also requires a great deal of sacrifice, which, if the business is run by a married couple, may mean them giving up having a family, which they may then come to regret, especially if the business is a success and they have no one to leave it to. While starting a business may be a solution for some couples, therefore, it is probably not a universal panacea.

Another purposeful activity to which it seems that couples have increasingly devoted themselves in recent years is therefore political activism – especially with respect to issues such as climate change – which may well have proliferated over the last two decades precisely because it not only binds couples together in shared values but fills the vacuum created by them not having anything more personally meaningful in their lives. What this really tells us, however, is just how big a loss the decline in traditional marriage has been to many people. For while the freedom to have sex without having children may have given us far more choices in life than we ever had before, not only does this make choosing that much more difficult, but the one choice which most of us always had and which made life that much simpler – that of getting married and having kids – has now been largely taken away from us, leaving us with an empty space which many of us are now struggling to fill.

4.    Hatred and Other Mental Illnesses

One of the groups which struggles most in this respect, of course, is that comprising men who are never selected by women partly because they have no direction in life: a man with neither goals nor the energy and determination to achieve them being the last thing a woman looking for an economic partner needs. In one of the most vicious of all vicious circles, however, one of the reasons these men have no direction in life is because they know that they are never going to be chosen by a woman, making their whole lives seem rather pointless. This, in turn, not only drains them of all ambition but of all confidence, making them doubly unattractive to women and sending them into a spiral of lethargic despondency in which they don’t even try to get a place of their own or make progress in their careers, seeking comfort instead in the solitary pleasures of video games, online pornography and over-eating, thereby putting on weight and further confirming them in their self-fulfilling belief that they have no chance of ever getting a woman.

While most men in this position simply succumb to a life of loneliness and quiet despair, however, given a little encouragement, some men can start to project the cause of all their woes onto others, specifically women, whom they can come to hate not merely because it is women who deny them what they most want and need, but because the denial alters the nature of the desire, turning it into a desire to destroy the desired object in order to overcome the humiliation of being denied it.

If this seems slightly contradictory or paradoxical it is because hatred, itself, is slightly contradictory. I say this because, as I have explained elsewhere, far from being what most people think it is, hatred, like envy, to which it is closely related, is an essentially upward looking emotion. That is to say that we can only hate someone or some group of people if we consciously or unconsciously look up to them in some way or regard them as in some sense superior to ourselves. Consciously or unconsciously, we may admire them, for instance, or want to be like them. In some cases, we may even feel that we have a relationship with them. We see this most clearly, for instance, in the way some fans feel that they are in some kind of relationship with their idol, or the way a stalker may feel that he is in some kind of relationship with the woman he is stalking. Then, one day, we discover that far from having any relational ties to us, or even any regard for us, the person we so admire has never even noticed us or, worse still, looks down on us with contempt or disdain. Not only does this therefore reinforce our conscious or unconscious sense of inferiority, but it also makes us ashamed, both of what we are and for being so foolish as to think that we could ever be accepted as equals by these people: a painful realisation of the truth which, if we let it, can then gnaw away at us, turning our former admiration into resentment and hatred.

Importantly, this is not an inevitable or necessary reaction. Most people, in fact, simply crawl away somewhere to hide their shame from the world. Transforming that shame and redirecting it back towards its proximate cause requires either a predisposition towards blaming others for our own failings or, perhaps more commonly, the prompting of a third party: a friend who, in coming to our defence, asks 'What makes that bitch think she's any better than us?' – a question which, once we have thought about it, enables us to channel our hurt feelings into vengeful indignation which seems not just justified but righteous.

In another slightly odd twist, another important characteristic of hatred is that its cause does not have to be real. Not only may the slight or offence we feel we have received have been entirely unintentional – indeed, if someone has merely failed to notice us, it can hardly have been deliberate – but it can also be entirely imaginary. The misogynist who hates women, for instance, does not have to have been repeatedly rejected by women to feel that they look down on him. In fact, he may never have even had the nerve to approach a woman to find out. Based on his sense of inadequacy, however, he believes that he would be rejected if he were to approach a woman and hates women precisely because it is this belief that makes him feel inadequate.

An even more important attribute of hatred, however, is that it loves a crowd or, more precisely, a mob. That’s not to say, of course, that individuals cannot hate. When they do so, however, the hatred tends to be personal in the sense that it is directed at another individual rather than a group. Hatred directed against groups, in contrast, tends to be generated by groups, who continually remind each other of their grievances against those they hate, thereby not only continually rubbing the sore, but making it virtually impossible for group members to dwell on their own inferiority or inadequacy, as this would necessarily involve projecting this inferiority or inadequacy on to the group as a whole, which other members of the group would staunchly reject as being offensive and hence hateful.

Just as importantly, mobs also give people the freedom to indulge violent emotions which they would not be allowed to indulge in everyday society. This is especially the case if the mob also confers anonymity on its members, as is commonly the case with respect to most social media platforms, where men are now increasingly posting misogynistic comments of which they would be ashamed if they appeared under their own name. I say this because even though we try to deny it, even to ourselves – or more especially to ourselves – everyone who hates knows that their hatred is fundamentally based upon a sense of their own inferiority or inadequacy and that the expression of this hatred consequently reveals them to be what they are, of which, of course, they are inherently ashamed.

My point in explaining all this, however, is not merely to answer the question I tangentially asked at the beginning of this essay as to why misogynistic online trolling is on the increase, or even to explain why merely labelling it a hate crime completely fails to explain it if we do not first understand what hatred actually is, which most people don’t. My point has rather been to explain why, even if one does understand the nature of hatred, one cannot understand why men hate women unless one also understands the role women play in natural selection. For unless one understands that women are genetically programmed to always choose the fittest and strongest men with whom to mate and that they are programmed to do this because it gives them the best chance of having strong, healthy children who are more likely to survive and hence pass on this very same genetic programming to their own children, one cannot understand why those men who do not make the grade should feel so humiliated by what is, in effect, an evolutionary rejection by their own species that they actually hate women as a result.

Even more importantly, unless we understand all this and can also make the misogynist understand it, we have no chance of making him realise that women are not to blame. After all, evolution is not a cognitive process: women don’t choose to be genetically programmed in this way. If there is any fault to be assigned here, therefore, it belongs squarely on the shoulders of the men, themselves, for not coming up to scratch. Even this, however, is a little unfair. For it is not only nature that has dealt men such a lousy hand, we too – which is to say society – must also take some of the blame for the invidious position in which many men now find themselves, which is at least partly due to the fact that, in recent years, we have conspicuously failed to teach them how to be better at being men: a responsibility which was once seen as a priority in the education of boys but which has now more or less disappeared from our education system.

That this is something to which I can personally attest is due to the fact that, during the 1960s, I attended an all-male grammar school where the all-male teaching staff imposed on us a highly disciplined regime specifically designed not just to turn us into men but men of certain character. Each week, as a consequence, we had six periods of physical education, which, for most of the year, consisted in playing rugby, an extremely physical sport which not only made us physically fit but also taught us courage, confidence and fortitude. I say this because I know from experience that it is only when one has tackled someone larger than oneself and successfully brought them down that one acquires the courage and confidence to do it again and again and again. Even more importantly, it is only when one has been kneed in the face a few times while making such tackles that one eventually learns, not only how to keep one’s head out of the way of galloping knees, but how to shrug off such knocks and get on with the game.

Being an all-male school, another important lesson we were taught was how to be gentlemen: how to always treat others with politeness, courtesy and respect, thereby making the world what I like to think is a slightly more civilized place in which to live. We were also taught never to lie, in that, in lying, we render our word worthless, and never to cheat, in that, in cheating, we pretend to be better than we are instead of actually working to make ourselves better, thereby doing a disservice to ourselves. Most importantly of all, however, we were taught that when we did something wrong and were subsequently confronted by it – as we usually were – we didn't try to weasel our way out of it, which only worsened our offence, but owned up to it 'like a man', which is also how we took our punishment.

Of course, many people today would say that this was a very harsh regime. And, in many ways, it was. I, for one, however, would not have had it any other way, in that nothing else could have so prepared me – physically, intellectually and morally – for the life I have lived or made me the man I like to think I became. For, in this, of course, the post-feminists are correct: men don’t come out of the womb fully formed; they have to be made. The mistake the post-feminists make, however, is to suppose that this making of a man starts with a blank canvas, when, in fact, the starting point is already the result of thousands of years of evolution, upon which one can work to bring out the best of those attributes with which nature has endowed the males of our species, but which one cannot simply ignore or deny. For if we deny that there is something that it is inherently like to be a man, which differentiates men from women, not only have we absolutely no chance of instilling in our young men those very qualities of fortitude, honesty and gentlemanliness which most women almost certainly want in their men, but we risk storing up a whole heap of troubles for ourselves when men do not then act in the way we want them to. Indeed, a society which does not train its young men to be the best men possible has only got itself to blame when it all goes horribly wrong.

If our failure to teach men how to be men results in misogynistic online trolling, however, this is a relatively minor issue when compared to some of the even more profound problems that can occur as a result of our underlying refusal to differentiate between men and women. For hatred is not the only mental illness that can arise when we do not have a strong and robust sense of who and what we are, especially in the case of children, who invariably prefer clarity and certainty to ambiguity and vagueness, above all with respect to such fundamental issues as their gender. After all, whether one is a boy or a girl tends to determine many other things in one's life, from the clothes one wears to the games one plays. To be told, while still a child, therefore, that one's gender is a matter of choice, rather than one of life's absolute certainties, can be both confusing and unsettling, much like being told that one's mum isn't one's mum.

Not, of course, that I know for certain what is actually being taught in schools these days. I am not a parent of school age children and, even if I were, most parents only get occasional glimpses of what goes on beyond the school gates. It is also highly likely that what is reported in the media is greatly exaggerated. The mere fact that a transgender debate concerning prepubescent children exists, however, and that teachers can lose their jobs for misgendering a child, suggests that something very disturbing is happening. In fact, it reminds me of what is probably the most famous aphorism of the now not quite so famous leader of the anti-psychiatry movement in the 1960s and 70s, Dr. Ronnie Laing, who wrote in ‘The Divided Self’ that ‘for every disturbed person in the world, there is usually at least one disturbing person’, someone who, despite appearing perfectly normal, is usually the more profoundly disturbed of the two.

What makes this comment not just highly relevant, however, but slightly ironic, is the fact that, as well as being the leader of the anti-psychiatry movement in the 1960s and 70s, Laing was also a researcher and clinician at the Tavistock Clinic in London, an institution that is now heavily embroiled in the transgender controversy, the irony being, therefore, that, from Laing's point of view, those who are currently disturbing children by telling them that they can be whatever gender they want to be – including clinicians at his own former institution – would almost certainly be regarded as themselves suffering from a mental illness. What is also highly relevant is the fact that it is one of the most prevalent mental illnesses in the world today. For in its most generic form, it is nothing other than the mass delusion of our era that there is no immutable reality, that reality is whatever we want it to be, such that if a boy wants to be a girl, he can be a girl.

What really makes this particular delusion so dangerous, however, is not just how much damage it can do to those who are adversely affected by it, such as children who are led to believe that they were born with the wrong genitalia, but the fact that it remains largely undiagnosed and is regarded by most people as entirely normal. The result is that it is on the rise almost everywhere, in almost every sphere of thought. Nowhere is it more conspicuous, for instance, than in the kind of economics that has brought us to the point where we can no longer afford to have children, unless, of course, they are paid for by the state: a recourse which, far from being a solution, is actually the cause of the problem, stemming as it does from this same delusion that reality is whatever we want it to be. The only difference, in this case, is that the new reality we believe we can wish into being simply by declaring it so is not a different gender but an entire fantasy world in which the state can and therefore should be the universal provider, conjuring money out of thin air to pay for everything we need.

The inevitable consequence of this fantasy, however, is a massive rise in the UK’s national debt, which currently stands at £2.98 trillion, or 108% of GDP. The fact that no one seems to be alarmed by this, moreover, is yet another sign of how detached from reality we have become. For the reality, of course, is that the lenders of all this money will eventually not only realise that it can never be repaid – which they probably already know – but will actually start to worry that, at some point, the UK Treasury will not even be able to pay the interest on it, at which point they will stop lending us any more money, causing the UK government to default and the entire economy to collapse.

Nor is the UK alone in this regard. Just about every country in the western world is in a similar position, which, given the interconnectedness of all our economies, means that, once one economy fails, the rest are likely to follow suit. In fact, we are almost certainly heading for the worst economic collapse since bubonic plague first arrived in Europe in the 6th century and proceeded to devastate the continent’s population: a demographic and economic disaster perhaps most startlingly illustrated by what happened to the population of Rome, which fell from around a million to just 10,000 in little more than a century.

Indeed, so difficult is it to grasp the magnitude of this catastrophe that it is actually quite helpful to look at it purely through the lens of some of its more extraordinary economic consequences, one of the most historically interesting of which is the fact that, by the middle of the 7th century, there was no one left who remembered how to make cement, which had the further consequence that concrete was not used in the European construction industry for the next five hundred years, until it was effectively reinvented in the 12th century. Even then, it was not as good as the concrete Roman engineers had used and didn’t meet Roman standards again until the 17th century, a thousand years after it had disappeared.

The reason I mention this little-known historical fact, however, is not just to point out how fragile and precarious all civilisations are, or even to reinforce the point I made earlier in this essay that natural disasters, such as the return of bubonic plague in the 14th century, are one of the primary causes of economic upheaval. My reason for resorting to yet another historical illustration – which I realise I tend to do quite frequently – is rather to emphasise a point I have been trying to make throughout this essay: that, whatever its cause – whether a pandemic, the invention of a new technology such as the contraceptive pill or, indeed, a wishful delusion about what is economically sustainable – an economic change almost always leads to social changes, including changes in the relations between men and women. Given the magnitude of the economic storm that is currently heading our way, it is highly unlikely, therefore, that the economic and social changes we have witnessed over the last sixty years, enormous though they have been, have yet come to an end.

Not, of course, that I or anyone else is in a position to say what relations between men and women will look like when the dust has finally settled. Given that most national governments will be bankrupt, however, and that the state as the universal provider will have ceased to exist, the one thing of which we can be absolutely certain is that we are going to be served a massive dose of reality: not just with respect to the economic facts of life but, more particularly, with regard to relations between men and women, which it will no longer be possible to base on what either men or women may wish, but which will have to be grounded, once again, in what evolution has handed us and in the unforgiving reality of the world in which we find ourselves. The question, of course, is whether we can make a better job of it this time than in our last two attempts.