Sunday 18 October 2020

Problems in the Culture of Modern Science

 

I first began to suspect that there might be something wrong with the culture of modern science in the mid-1980s, when I taught at the University of Jyväskylä in Finland and sought to augment my lowly junior lecturer’s salary by editing the English language versions of scientific papers being submitted for publication by my Finnish colleagues. As most Finns speak excellent English, this task largely consisted in tidying up the grammar a little, mostly by cutting out a number of commonplace ‘Finnishisms’ which arise in Finnish English as a result of the structural differences between the two languages, with Finnish, for instance, using case endings instead of prepositions, having only one word for the third person singular instead of three, and being entirely without a verb ‘to have’. In order to satisfy myself that any given sentence or paragraph made sense, however (and, just as importantly, actually expressed what I thought the author intended), I nevertheless found that I usually had to spend quite a lot of time learning the language of the science, itself, which primarily consists in the nouns and verbs which denote a science’s objects and the ways in which these objects relate to or act upon each other.

Given how much time I usually had to spend on this background research, I was always especially pleased, therefore, whenever I received repeat business within the same scientific field, in that while I could still charge my regular fee, I didn’t have to learn a whole new scientific language from scratch. It was when dealing with one particular repeat customer, however (an ophthalmologist), that I began to notice something odd about the way in which he set about communicating the results of his work to the rest of the world. For the empirical study upon which he based the second paper he brought me appeared to be exactly the same as the study he had used for the first paper I had edited for him. In fact, for a while, it even made me wonder whether I’d picked up the wrong document, especially when I found myself correcting sentences I was fairly sure I’d corrected before.

For a while, indeed, it even made me feel slightly annoyed. For once I’d checked that the second document was, in fact, a different paper (in that it had a different title, at least), I felt that the minimum the author could have done was incorporate my previous corrections into the new work. By not doing so, it was almost as if he didn’t care about the improvements I’d made to the previous paper’s presentation, in which I took some pride, trivial though my contribution may have been to the work as a whole. Because so much of the second paper was more or less identical to the first, however, I refrained from saying anything, happy to hand my client a substantial invoice for what had essentially been a few hours’ work copying my previous corrections from one document into the other.

Then he brought me a third paper and, lo and behold, it was based on exactly the same empirical study as the previous two. The methodology was the same, the test subjects were the same, and so were the results as set out in the various tables. The only differences between the three papers I could discern were a number of additional findings and conclusions set out towards the end of each one, which, in my naivety, I thought could have been presented better had they all been in the same paper: something I was foolish enough to suggest to the author when he came to pick up my final set of revisions.

It was one of those rare moments in life when the scales fall from one’s eyes and one suddenly sees the world for what it truly is rather than what one had childishly supposed it to be. For instead of thanking me for such an insightful suggestion, my client simply looked at me as if I were a complete idiot. And then the penny dropped. For these ophthalmological papers, over which I had diligently laboured for many more evenings than I now suspected they were worth, were not about imparting new information to fellow scientists working in the field. Even less were they about adding to the total sum of human knowledge. They were simply about adding three more titles to the list of publications on the author’s curriculum vitae in order to advance his career, either by helping him to secure further funding for his research or by enabling him to obtain a better position within the university system: two measures of career success which were already connected back in the 1980s, but are now almost inseparable.

I say this because most university graduates today leave university with a far greater level of debt than they did forty years ago, such that, in most cases, they cannot even contemplate post-graduate research unless their tuition fees are paid and some level of maintenance support provided. Because most western countries regard their science base as essential to their economic prosperity (and therefore require a large pool of science graduates to undertake post-graduate training), most of them, especially in Europe, have developed public funding regimes for both scientific research and post-graduate support which basically tie the two together, such that if a student wins a place on a publicly funded research programme, their own funding automatically comes with it, their eligibility for funding actually being determined by their selection for the programme.

What this also means, however, is that if a university science department wishes to run a post-graduate programme, as most university science departments do (such programmes figuring prominently in all their marketing material), then funding for some area of research must first be secured, with the further consequence that those members of the department who are particularly good at this tend to be more highly regarded and enjoy a higher status than those who aren’t. What’s more, this then affects recruitment. For in recruiting members of a science department, particularly the department head, most universities tend to place far more emphasis on a candidate’s track record in obtaining research funding than on their teaching ability, thus making this the key attribute in building a successful academic career.

For universities and scientists alike, however, this whole system of funding has become something of a treadmill. For if universities want to continue their prestigious research and post-graduate programmes, and if scientists wish to maintain their high-status positions, then they have to keep winning more funding, going back to the relevant funding agencies in a never-ending cycle of application and political lobbying which can, and often does, have the effect of inverting the relationship between funding and research. Instead of obtaining funding in order to continue their research, scientists now too often find themselves undertaking research in order to continue obtaining their funding, which, in turn, places them on another treadmill: that of having to continually produce results, in the form of a never-ending stream of published papers, in order to justify all the money they have been given and thus be given more.

Quite predictably, this has also brought about a massive expansion in the scientific publishing industry. For in order to accommodate the demands of so many scientists who have to get their results published in order to justify their funding, dozens of new scientific journals are founded every year. Today, indeed, it is estimated that there are more than 30,000 such journals worldwide, which collectively published over 2.2 million scientific papers in 2018, the latest year for which I have figures, most of which, statistically, can only have achieved a very limited readership. Indeed, there is a widespread joke within the scientific community (indicative of an equally widespread cynicism) that the vast majority of all published papers only ever have three readers: the authors, themselves, the editor of the publishing journal, and the referee appointed to conduct peer review.

Amusing as this may be, what is far less amusing, however, is the ever-growing cost of this publishing juggernaut, which, like the cost of scientific research, itself, is met either by students, in the form of tuition fees, or by taxpayers, whether through grants to universities or through research funding, these two sources of finance roughly corresponding to the two principal ways in which scientific journals make their money.

The first and more traditional of these is by charging annual subscriptions, both for online access to individual papers and for the annual bound editions which still accumulate on many university library shelves. Given the level of growth in the industry, however, it was inevitable that at some point the cost to universities of further expansion based on this model would simply have become prohibitive. Most new journals (those which have been founded during the last couple of decades or so) have therefore adopted an Open Access (OA) model, in which the published papers are free to read online, with the costs being covered by the authors. The idea that this makes them ‘free’, however, is something of a wilful misconception. For knowing that they are going to have to pay to have their research published in this way, most scientists now include the cost of publishing in their applications for research funding, with the result that the costs are simply charged to the public purse.

Indeed, it is the fact that no one who cannot pass the cost of these journals on is ever directly charged for them that permits their seemingly infinite expansion. For if the only payments journals received came from those who actually consumed their contents, and if these consumers only paid for what they actually consumed (whether this be in the form of single online papers or the quarterly paperback editions to which I, myself, used to subscribe), then not only would there be far less money flowing into the scientific publishing industry but, as a consequence, there would also have to be far fewer journals, which, in turn, would mean that far fewer papers were published.

The problems to which this lack of any commercial restraint gives rise, however, are not confined merely to this ever-increasing drain on public funds. For by providing what is, in effect, a virtually limitless publishing capacity, publishers have not only ceased to perform the filtering function they once provided (ensuring, instead, that just about every scientist can now get their work published, no matter how mediocre or entirely meritless it may be) but have also largely abdicated their responsibility for maintaining standards, thereby allowing scientists to increasingly get away with some fairly unscientific practices.

Probably the most widespread of these is something called p-hacking, which is the selective reporting of results in order to support a conclusion which the raw data as a whole would not support. Another common form of data manipulation is something called HARKing, or Hypothesizing After the Results are Known, which may sound fairly innocuous, but which actually means that the hypothesis in question, coming at the end of the process rather than constituting its starting point, is never really subjected to any serious scrutiny.
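
For those who would like to see just how easy this is, the following little sketch (written in Python, using the numpy and scipy libraries, and entirely illustrative rather than drawn from any real study) measures twenty unrelated outcomes on two groups drawn from the same distribution, then ‘selectively reports’ whichever comparisons happen to cross the conventional significance threshold. It also shows why HARKing works: having seen which outcomes came up ‘significant’, one simply invents a hypothesis to fit them.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_outcomes, alpha = 30, 20, 0.05

# Two groups drawn from the SAME distribution, so no real effect exists.
control = rng.normal(size=(n_subjects, n_outcomes))
treated = rng.normal(size=(n_subjects, n_outcomes))

# Test every outcome, then report only those that cross the threshold.
p_values = [stats.ttest_ind(control[:, i], treated[:, i]).pvalue
            for i in range(n_outcomes)]
significant = [i for i, p in enumerate(p_values) if p < alpha]
print("Outcomes 'discovered' by chance alone:", significant)

# With 20 outcomes tested at alpha = 0.05, the chance of at least one
# false positive is 1 - 0.95**20, or roughly 64%; a hypothesis invented
# afterwards to 'explain' whichever outcomes appear here is HARKing.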

What both of these practices thus do is allow scientists to publish results which they wouldn’t otherwise be able to obtain, thereby making the incentive for going down this road fairly obvious. For if one needs to publish something within a particular time frame in order to maintain one’s funding, but hasn’t yet discovered anything of note in one’s current line of research, then the temptation to resort to some sort of statistical sleight of hand may well be overwhelming, especially given the fact that, because both of these practices involve some fairly complex statistical techniques, they can also be fairly hard to detect without detailed analysis of both the data and methodology employed. 

Indeed, the sophistication of some of these techniques, along with the fact that their application at the desk-top level has only recently been made possible by increases in personal computing power, has led some commentators to speculate that, in some cases, the abuse of these techniques may well have been unwitting, the suggestion being that in exploring the potential of new analytical tools, some scientists may have crossed the line inadvertently. And, in some cases, it’s perfectly possible that this is how these practices began. It is also perfectly possible, however, that having discovered these easy and convenient ways of obtaining results, many scientists then talked themselves into believing that, while their methods were technically unscientific and invalid, they weren’t really doing any harm, especially as no one was likely to read the resulting paper anyway.

The problem, of course, is that once one has created an environment in which the need for scientific integrity is no longer felt to be absolute, it opens the door to other, far more obviously intentional and less ‘innocent’ forms of abuse, the simplest and most blatant of which is that to which Tsuyoshi Miyakawa, editor-in-chief of the journal ‘Molecular Brain’, drew the public’s attention earlier this year, in an article in his own journal in which he describes how, of the 180 papers submitted to the journal during the previous two years, he had to recommend that 41 of them be ‘revised before review’, his principal request being that the authors submit their raw data for appraisal. He then reports that:

‘among those 41 manuscripts, 21 were withdrawn without providing raw data, indicating that requiring raw data drove away more than half of the manuscripts. I rejected 19 out of the remaining 20 manuscripts because of insufficient raw data. Thus, more than 97% of the 41 manuscripts did not present the raw data supporting their results when requested by an editor, suggesting a possibility that the raw data did not exist from the beginning, at least in some portion of these cases.’

Given that 40 withdrawn or rejected manuscripts out of 180 represents 22% of all the manuscripts submitted to ‘Molecular Brain’ during this two-year period, if the same thing were happening at all of the 30,000 scientific journals currently published around the world, it would mean that up to 620,000 papers could be submitted to and rejected by journals each year on the grounds that they are entirely spurious. The suspicion, however, is that they are not all rejected. For not only is it to be doubted whether all journal editors are quite as scrupulous as Tsuyoshi Miyakawa, it is also hard to believe that so many scientists would attempt this kind of scam unless they thought they could get away with it, which suggests that, at least some of the time, they do.
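
To make the arithmetic explicit, here is my own back-of-the-envelope reconstruction, which assumes, purely for the sake of the estimate, that every journal receives submissions at roughly the same rate as ‘Molecular Brain’ and flags spurious manuscripts at roughly the same rate (41 out of every 180 submissions):

\[
\frac{180}{2} \times 30{,}000 \times \frac{41}{180} \approx 620{,}000 \text{ manuscripts per year}
\]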

The question, of course, is how widespread these various types of scientific fraud are. And the answer, unfortunately, is that it is almost impossible to tell. For if all attempts at fraud were known, then, presumably, they would all be stopped. One clear indication of their increase, however, is the growing problem of scientific irreproducibility, wherein other scientists are not able to reproduce the results reported in a scientific paper following the methodology laid out in the paper itself: the clearest possible sign that there is something wrong with the underlying science.

Again, it is difficult to estimate the overall extent of the problem in that it varies from field to field. What should not come as a surprise, however (but is shocking nevertheless), are the fields in which it would appear to be most prevalent. For they are not the fields which common prejudice would lead one to expect (such as those in the social sciences, for instance) but rather those which are not only the most competitive, being awash with money, but in which we naively expect a higher level of integrity to be maintained, the most notable being cancer research. Yet according to one analysis by F. Prinz et al., published in 2011, only around 20% to 25% of published studies in cancer research could be validated or reproduced, while another analysis by Begley and Ellis in 2012 put the figure as low as 11%.

If you find these figures as shocking as I did when I first stumbled upon them, then, like me, you will also probably be asking how such a systemic failure to maintain scientific integrity is possible. For surely there has got to be someone who checks the validity of a scientific paper before it is published: if not the journal’s editor, then whoever is appointed to conduct peer review. What you have to remember, however, is not just that referees are traditionally unpaid (their lack of reward or inducement supposedly ensuring their impartiality) but that the current system of peer review was instituted at a time when science was still largely a gentlemanly pursuit, when the only form of income universities brought in came from teaching, and when scientists largely conducted their research in between the lectures and tutorials for which they received their salaries. Receiving no remuneration for their research as such, not only were they under no pressure to publish their results until they were ready (or, indeed, until they had something worth publishing) but they also had absolutely no reason to commit fraud, which, in turn, meant that their colleagues were quite happy to act as unpaid referees, secure in the knowledge that all this would actually entail was a hopefully enlightening read and a quick check for obvious errors that might otherwise embarrass the author and journal alike.

Of course, now that the situation has changed, publishers could start paying their reviewers and demanding a more rigorous appraisal from them. Not only would this lead to the rejection of more papers, however (for which the publishers would still have to pay the reviewers’ fees), but even if reviewers only spent an extra couple of days going through an author’s data and methodology, the additional cost could be the final straw for those who have to foot the bill, thereby pushing the whole industry over the edge.

More to the point, a lack of critical depth and rigour in the reviews carried out by referees is not the only problem with the current system of peer review. There are also problems of both scale and anonymity. For 2.2 million scientific papers published every year do not just require 2.2 million referees, but 2.2 million suitably qualified and impartial referees, who, in principle, should not know the authors of the papers they are reviewing: something which it is hard enough to achieve even in the most heavily populated areas of science, but is made even more difficult by the fact that, in order to stand out in any given field, most scientists today quite naturally gravitate towards some sort of niche specialism, in which they can make a name for themselves as one of only a handful of experts. What this also means, however, is that, sometimes, it can actually be difficult for a publisher to find a suitably qualified referee at all, let alone one who does not know the author of the paper to be reviewed. For niche specialisms create niche worlds, in which the participants regularly attend and speak at the same conferences, making anonymity almost impossible.

Worse still, publishers are in the business of publishing. If, within a certain niche specialism, there are rivals and competitors, the last thing a publisher wants to do is send out a paper to be reviewed by a referee it knows is going to be hostile. The inevitable result is that networks of like-minded scientists are formed, who are known to be sympathetic to each other’s ideas and who, through the intermediation of a common publisher, regularly review each other’s papers, bringing the whole concept of impartial and independent peer review into question.

What makes this whole system even more insidious, however, is the fact that, even when they are acting corruptly, it is extremely unlikely that those doing so actually see themselves as corrupt. Whether they are manipulating data in order to get the results they want or rubber-stamping the paper of a colleague whose views they share, the likelihood is that they simply see it as normal: as the way in which science is conducted these days. For this is not the corruption of individual scientists; it is the corruption of the culture of science as a whole, which makes it all the more difficult to reform. For if wrong-doers do not believe that they are doing anything wrong, it is very hard to get them to change their ways, especially if changing their ways would be to their detriment and might eventually involve them in trying to amend the ways of others, thereby earning for themselves a reputation for being trouble-makers and further disadvantaging their careers.

Indeed, once corruption of this type has taken hold of an institution, it is almost impossible to eradicate it, especially when the institution in question is almost completely opaque to outsiders: a condition which, in itself, tends to foster corruption. For having only a limited understanding of how the institution works, those looking in from the outside are not only rendered more or less incapable of imposing reform from without, but can be far more easily manipulated into giving the institution their support. What’s more, due to this general level of ignorance, any support given will be largely under the control of the institution itself. For in order to determine how this support should be allocated, those providing it have very little choice but to co-opt or seek the advice of institution members, thereby opening the door to even greater corruption.

Not that the real-world relationship between science and government is actually quite this one-sided. For as in any relationship of patronage, the patron always has a great deal of leverage over the patronised, especially where the patronage is largely financial in nature and where those receiving it are entirely dependent upon it, as is the case with respect to nearly all non-commercial science throughout the west. While scientists may still have a lot of influence, especially in advising government as to where their money should be spent, the need to keep the funding flowing creates a far more symbiotic relationship, in which scientists are not only obliged to subordinate many of their own scientifically prompted aims to the government’s more strategic agenda, but have frequently shown an enthusiastic willingness to be of service to government in ways that are not always particularly good for them and which have further corrupting effects.

One of the most pernicious of these has been the tendency, in recent years, for scientists to push science beyond its traditional role of merely explaining how the universe works and to turn it, instead, into a tool for predicting the future. This they have done through the almost ubiquitous application of computer models or simulations, the increased influence of which, especially in the development of government policy, has been allowed to go unchecked largely because neither governments nor the public, nor most scientists, themselves, fully understand either the proper place of such models or the potentially catastrophic effects of their misuse.

To understand these dangers, however, one must first understand what computer simulations actually are and how their application differs from the application of standard scientific method. And one of the best ways I have found of explaining this difference is to start by comparing what might be called the relative ‘directionality’ of the two approaches. For all methodologically rooted disciplines follow what is largely a step-by-step process which has, as a consequence, a directional flow. In the case of standard scientific method, this process starts with observations and measurements, which are then analysed to reveal patterns or anomalies. Hypotheses are then formulated to explain these patterns or anomalies, and experiments designed to test the hypotheses. Those hypotheses which do not fall at the first hurdle but need some modification are then refined on an iterative basis, with further experiments designed to test each refinement, until a stable theory is finally reached.

The process of building a computer model, however, is very different. In fact, it flows in completely the opposite direction, in that it actually starts with a theory. It then turns this theory into a set of algorithms, which are then used to predict future observations and measurements under different conditions. Modifications are then made to the algorithms, iteratively, to improve their predictive accuracy.

This reversal of directionality is absolutely crucial. For it means that the soundness of any computer model largely depends on the soundness of the underlying theory, the old adage of ‘rubbish in, rubbish out’ saying it all. As a result, the most reliable computer models are usually to be found in areas of applied science in which the underlying theories and principles have long been established. Good examples include fields like fluid dynamics, of which I have some personal if second-hand knowledge, having once had a friend who gained his Ph.D. in mathematics by modelling turbulence in gas pipes. Then, of course, there are the long-standing applications of computer modelling in both engineering and construction, where it is an accepted principle that it is far better to run a computer simulation to find out whether a building, built to a certain design, will withstand an earthquake of a specified magnitude, than it is to actually build it and find out the hard way.

Even here, however, one must sound a note of caution. For even when basing one’s computer model on a theory of long standing, one never knows when exceptions may be discovered. This is because scientific theories are essentially constructs of the imagination designed to explain the inner workings of the observable universe and are not, themselves, observable. What this means, therefore, as Sir Karl Popper explained in ‘The Logic of Scientific Discovery’, is that no scientific theory can ever actually be proven. For no matter how well established a theory may be, there is always the possibility that someone, someday, will discover some new piece of evidence inconsistent with it, thereby proving it false. Indeed, as Popper also famously argued, the possibility of falsification, or falsifiability, is actually a condition of a theory being scientific, in that if it cannot be falsified (if one cannot say what piece of empirical evidence would prove it false), then it is simply not a scientific theory.

What’s more, history shows us that even theories which have been held true for centuries and which have had, in their time, a high level of predictive accuracy, can eventually be proven false. A prime example of this is Sir Isaac Newton’s theory of gravity, in which gravity was conceived as an attractive force pulling bodies together, a bit like magnetism. Indeed, many people still think of it in this way today. More to the point, the mathematics based on this way of conceiving of gravity accurately predicted most of the observable universe for more than two hundred years. In fact, the only observation it did not correctly predict was the orbit of the planet Mercury, which remained an unexplained anomaly until the beginning of the 20th century, when Albert Einstein produced a new mathematical formula based on a completely different concept: one in which gravity was now conceived, not as an attractive force, but as a warping of space due to the mass of the bodies within it, the calculated effects of which (based on relative mass and proximity) far more accurately approximate the orbit of the tiny planet Mercury around its massively larger star.

That the mathematics derived from Newton’s concept can still be used to accurately predict the movements of the other planets has, of course, led some people to mistakenly assume that, in these unexceptional cases, Newton’s theory still holds true. However, this is clearly a misconception. For either Newton’s theory is correct in every instance, or Einstein’s is. And since Newton’s theory has been found to be false in at least one instance, the betting is on Einstein, though even here, of course, our acceptance of Einstein’s theory is still only provisional. For, in time, it too may be proven false. For such is the nature of science.

To avoid this kind of confusion, however, it may be helpful to consider yet another, even clearer example of a long-term theory being overturned and replaced by a new theory, the old theory, in this case, being one which no one, today, would mistakenly believe was still true. For the theory I have in mind is that which was the foundation of what is now generally known as phlogistic chemistry, which held that those forms of matter which lose mass when heated do so because they give off a substance called phlogiston.

Because, today, we now know (or think we know) that no such substance exists, this, of course, seems laughable. But phlogistic chemistry was actually very successful in its time and lasted for more than three hundred years, which is significantly longer than modern chemistry has so far survived. Indeed, the only thing that phlogistic chemistry could not explain was why certain forms of matter, i.e. metals, gained mass when heated, which we now know (or think we know) is the result of oxidation, making it somewhat inevitable, therefore, that it was the discovery of oxygen, by the French chemist Antoine Lavoisier, which eventually brought phlogistic chemistry to an end.

Or, at least, this is the account given in most potted histories of science. It is, however, a total misrepresentation of what actually happened. For in that the gas we now know as oxygen was first isolated by the English scientist Joseph Priestley, it is questionable whether Lavoisier can be said to have discovered it at all, especially as he was actually shown how to isolate the gas by Priestley, himself, at a gathering of French scientists in Paris in 1774. It was just that Priestley did not call it ‘oxygen’. He called it ‘dephlogisticated air’ and continued to practice the science of phlogistic chemistry for decades after Lavoisier coined the new name.

So why is Lavoisier credited with the discovery, and what did he actually do other than give Priestley’s dephlogisticated air a new name which ultimately proved to be just as mistaken? I say this because the word ‘oxygen’ is derived from the Greek words oxys, meaning ‘sharp’, and genes, meaning ‘to create’, and was chosen by Lavoisier because he thought the gas had an important role in the formation of acids, which turned out to be false. In fact, had Lavoisier ended his study of ‘oxygen’ at this point, it is likely that his name would have gone down as little more than a footnote in history and that we’d have ended up calling the second most abundant constituent of the earth’s atmosphere something else entirely. It was while he was studying some of the other properties of this incorrectly named gas, however, especially its role in combustion and the roasting of metals to produce powders or calxes, that he had his ‘eureka’ moment. For noting (as others had done before him) that the powders which resulted from the calcination process were heavier than the metals with which the process started, and reasoning that it could not be the application of heat alone which caused this weight gain, since ‘heat’, itself, had no mass, he hypothesised that the calcined metals had to be drawing something else out of the atmosphere, with Priestley’s gas, now his own ‘oxygen’, being the prime contender: a hypothesis which was then given even more traction when he learned how to isolate yet another new gas, one which had first been isolated in 1766 by yet another English scientist, Henry Cavendish. For in studying the properties of this second new gas, he discovered, quite remarkably, that, on combustion, small amounts of water were produced, suggesting that, as in the case of the calcined metals, combustion actually caused this second gas (which he duly called ‘hydrogen’, or ‘creator of water’) to combine with something else, with oxygen, again, being the most likely suspect.

Not, of course, that he had any way of proving this. For I repeat once again that scientific theories cannot be proven. All one can do is accumulate supportive evidence and disprove alternative hypotheses, which, indeed, is what Lavoisier spent the next two decades doing, right up to the day of his execution on the guillotine in 1794, which is surely one of the most shocking and unjust ends to befall one of the world’s greatest scientists. For while he may not have discovered either of the two gases to which he gave names, what he did was something far more fundamental and far-reaching. For he gave the world the first two building blocks of a whole new chemistry: one in which the entire material universe would come to be seen as assembled out of a finite number of elemental constituents, chemically bonded in different combinations to produce different substances. And it was this that was his real achievement. Not the discovery of any particular gas. For, in the strictest sense, he never discovered anything at all. What he did was create a whole new way of conceiving of the material world, much as Einstein would later do, altering our conceptual framework in a way that also has implications for the nature of science itself. For instead of progressing in the manner of a steady, incremental accumulation of knowledge (which is how science is so often represented, especially by scientists themselves), it would appear from the examples of both Einstein and Lavoisier that, occasionally, a science will completely renew itself, throwing away the old and starting again on the basis of a whole new paradigm.

Even more astonishingly, historical evidence would suggest that such paradigm shifts, as Thomas Kuhn called them, are far more commonplace than one might imagine, their periodic occurrence being made almost inevitable by the unprovable nature of scientific theories combined with our own stubborn reluctance to abandon established theories, no matter how full of holes they may be, until someone has come up with something better: the inevitable result being that, within any given field of science, problems tend to build up until, eventually, the dam bursts.

In his seminal work, ‘The Structure of Scientific Revolutions’, Kuhn, in fact, describes how this usually comes about, outlining the typical lifecycle of a scientific theory from its inception, through its maturity, to its eventual demise. Unsurprisingly, he describes how the introduction of a new theory is almost invariably met by fierce resistance from the existing scientific establishment, which, by dint of simply being established, usually has a lifetime of investment in the previous theory. Indeed, we have already seen an instance of this in the case of Joseph Priestley and other diehard phlogistic chemists, who resisted Lavoisier’s new-fangled ideas long after it was reasonable to do so. What this also means is that the early adopters of any new theory tend to be younger scientists who have yet to make a reputation for themselves, have nothing to lose, and are excited by the prospect that, by adopting these revolutionary new ideas, they may be the ones to finally solve the many outstanding problems which their science has accumulated over the years.

As take-up of the new paradigm increases, however, it eventually begins to raise as many questions as it answers. For reality is invariably richer and more complicated than we initially conceive it to be, such that the more we study it, the more questions it poses. And while some of these questions may lead to new discoveries (thereby raising the level of excitement still further), a body of trickier, more stubborn questions inevitably starts to accumulate, creating problems which can begin to seem almost as intractable as those which beset the old paradigm. In fact, it is not unusual for a new paradigm to even reinstate problems which the old paradigm had actually solved. With no new theory as yet on the horizon, however, a whole new generation of scientists now finds itself in much the same position as their predecessors, having to conjure up additional, supplementary or subordinate theories in order to explain the anomalies and exceptions for which the main theory cannot account.

The problem with this, however, is that every supplementary theory that is needed to bolster the main theory effectively weakens the paradigm, which, in most cases, will have been conceived and initially embraced because it promised to make everything simpler. Now, the whole thing has become a complete mess, with ad hoc fixes all over the place, making the entire science ripe for someone to finally come along and say: ‘You know what, we’ve been looking at this in completely the wrong way. Instead of thinking of it like this, we should think of it like this.’ And thus a new paradigm is born. And the whole cycle starts all over again.

Needless to say, most scientists don’t much like this model of how science works. They much prefer the paradigm in which science is seen as a steady accumulation of knowledge, to which each contribution is equally valid and of equal value. For what is most disturbing about Thomas Kuhn’s revolutionary vision to most scientists is not just the idea that any scientist could wake up tomorrow morning to find their whole life’s work invalidated (rendering them as ridiculous and irrelevant as Joseph Priestley) but that science might actually demand of them something more than a mere journeyman’s contribution to a collective effort: that it might, indeed, demand of some (those who are to be deemed great) an individual leap of genius of the kind Immanuel Kant described in the ‘Critique of Judgement’, where ‘genius’ is defined not as mere cleverness (however extraordinary such cleverness may be) but as the possession of precisely this rare ability to get others to see the world in a new and different way, whether this be in the visual arts, philosophy of the type which Kant himself wrote, or, indeed, science.

In defense of their preferred paradigm, therefore, most scientists will almost certainly argue that, while such revolutionary paradigm shifts may have happened in the past (the historical evidence for their occurrence, from Lavoisier to Einstein, being undeniable), because science proceeds by eliminating false theories, it follows that their frequency will naturally decline over time as more and more false theories are removed until, eventually, they cease altogether, a point we may already have reached.

This argument, however, is based on a belief in what is known as convergence (between our scientific or theoretical conception of the universe and the reality of the universe as it exists in itself) and contains two main flaws. The first is the assumption that, as we replace old, falsified theories with new theories, these new theories will necessarily be ‘true’ in the sense of corresponding to reality. Because all scientific theories are constructs of the imagination, however, there is actually no good reason to believe this, in that we could simply go on continually replacing one false theory with another, which, in time, also turns out to be false.

The second flaw in the argument, however, is even more significant. For even if the above assumption were correct and we gradually replaced all false theories with ‘true’ ones, such that eventually we ended up with a perfect correspondence between our theoretical conception of the universe and the reality of the universe as it exists in itself, we could never know this was the case. For in that it is only through our theoretical conception of the universe that we are able to apprehend it, the one thing we cannot do is step outside of ourselves and compare that theoretical conception with the ‘real’ thing to see whether they correspond. Indeed, the only indication we could have that correspondence had been reached would be if scientists suddenly ran out of questions to ask. And even then we wouldn’t know whether we’d reached correspondence, or whether we’d simply arrived at a set of theories which were perfectly consistent with all the empirical evidence.

To those unfamiliar with these concepts, this distinction may, of course, seem somewhat strained. If a theory is consistent with all the empirical evidence, how could it not correspond to reality? As an exercise in clarification, therefore, ask yourself whether it is possible for two competing scientific theories to both be fully consistent with all the currently available empirical evidence. Assuming that you answer this question in the affirmative, now ask yourself whether it’s possible for both of these theories to be true in the sense of correspondence. If the answer to this question is ‘No’, then this means that at least one of the two theories, both of which are fully consistent with all the empirical evidence, cannot logically correspond to reality. And if it’s possible for one of the theories to be fully consistent with all the empirical evidence and still not correspond to reality, then it follows that it is possible for both theories to fall into this category. Indeed, it’s possible for all our scientific theories to be fully consistent with all the empirical evidence and yet for none of them to correspond to the way the universe actually is in itself.
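
For readers who prefer a concrete illustration, here is a small numerical sketch of my own (written in Python with the numpy library, and purely illustrative): two rival ‘theories’ are fitted to the same set of noisy measurements, and both turn out to be consistent with every observation we have, even though they cannot both correspond to the underlying law.

import numpy as np

rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 1.0, 12)                      # the only conditions ever observed
y_obs = 2.0 * x_obs + rng.normal(scale=0.05, size=12)  # measurements with a little noise

# Two competing 'theories' of the same phenomenon.
linear = np.polynomial.Polynomial.fit(x_obs, y_obs, deg=1)
cubic = np.polynomial.Polynomial.fit(x_obs, y_obs, deg=3)

for name, theory in [("linear theory", linear), ("cubic theory", cubic)]:
    worst = np.max(np.abs(theory(x_obs) - y_obs))
    print(f"{name}: worst disagreement with the evidence = {worst:.3f}")

# Both disagreements are of the order of the measurement noise, so both
# theories are 'fully consistent with all the empirical evidence'; yet ask
# them about conditions no one has ever observed (say x = 10) and they
# typically part company, so at most one of them can correspond to reality.
print("predictions at x = 10:", round(linear(10.0), 1), "versus", round(cubic(10.0), 1))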

What this teaches us, however, is not that science is somehow defective, but rather the importance, not just of knowing which criterion of ‘truth’ (‘consistency’ or ‘correspondence’) is applicable in any particular context, but of ensuring that where only the weaker criterion of consistency applies, as in the case of a scientific theory, the scientific theory in question is properly grounded in those basic elements of science, namely observation and measurement, to which the stronger criterion of correspondence is applicable. For while we may never be able to know whether our scientific theories correspond to reality, we can certainly find out whether our measurements do. And while this may be stating the obvious, sometimes the obvious needs to be stated: that the integrity of scientific theories depends on their being grounded in empirical evidence. For it is this that prevents them from simply coming adrift and floating away on flights of imaginative fancy, which is precisely what can happen in the case of computer simulations.

This is because, in building such simulations, scientists have a tendency to make three fundamental errors, two of which we have more or less already covered. The first of these is the mistaken belief that, if a theory accurately predicts future events, then it must be true, even though there have been numerous scientific theories throughout history with a high degree of predictive accuracy which have ultimately turned out to be false. The second mistake is then to take the ‘truth’ of these theories to mean ‘correspondence’, which then opens the door to two further conceptual errors. For not only does the belief in a theory’s correspondence to reality remove any threat that the theory might one day be overturned in yet another scientific revolution – thereby reducing the level of caution with which scientists might otherwise regard the reliability of any computer model based on it – but it actually elevates the theory to the status of something absolute and immutable, thereby reducing the perceived need to validate it empirically.

Indeed, we see this in the way in which many computer models are developed, in a continuous process which starts by running the model against historical data and adjusting the algorithm until its output approximately matches the data record. It’s a bit like running ‘Goal Seek’ in Excel. You know what answers you want; so you just keep tweaking the computational engine until you get them. The trouble with this, however, is that it also effectively refines the theoretical construct upon which the algorithm is based. And while every effort is usually made to empirically validate these changes to the underlying theory, such that they are not just random and have some basis in the real world, the verification process (the matching of the model’s output to real-world data) takes precedence, such that once data congruence is achieved, the validity of the model, as an accurate representation of the world, is seen as less important. Indeed, the mere fact that the output now corresponds to the historical record is taken as evidence that the modified model is correct.
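
To show what this ‘Goal Seek’ style of development amounts to in practice, here is a deliberately toy calibration loop of my own devising (in Python with numpy; it stands in for no real model): a single tunable parameter is nudged until the simulation reproduces the historical record, at which point the model is declared ‘verified’, even though nothing in the loop tells us whether the tuned value means anything in the real world.

import numpy as np

history = np.array([1.0, 1.4, 2.0, 2.9, 4.2, 6.1])   # the record the model must match

def simulate(growth_rate, steps=6, start=1.0):
    """A deliberately simple stand-in for the model's computational engine."""
    values = [start]
    for _ in range(steps - 1):
        values.append(values[-1] * growth_rate)
    return np.array(values)

def misfit(growth_rate):
    """How far the simulated output is from the historical record."""
    return float(np.sum((simulate(growth_rate) - history) ** 2))

# Crude parameter sweep: keep whichever value best reproduces the past.
candidates = np.linspace(1.0, 2.0, 1001)
tuned = min(candidates, key=misfit)
print(f"tuned growth_rate = {tuned:.3f}, remaining misfit = {misfit(tuned):.4f}")

# The output now matches history (verification), but nothing here establishes
# that 'growth_rate' corresponds to any real-world quantity (validation).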

This tendency to implicitly favour verification over validation is further emphasised in large multiscale simulations, where the outputs from smaller, lower-level models are fed into larger, higher-level models as mathematical parameters, which are regularly tweaked without validation in order to ensure that the outputs from the top-level model correspond to the real world. This process, known as parameterization, is especially prevalent in cases where the lower-level systems being modelled are inherently chaotic and are only statistically determinate at the macro level, thereby seeming to justify the lack of validation.
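
Continuing the toy example above (and again claiming no resemblance to any real simulation), parameterization looks something like this: the chaotic small-scale process is never simulated inside the large model at all; its average effect is collapsed into a single number, whose first estimate comes from a sub-model but which is then quietly adjusted until the top-level output lines up with observation.

import numpy as np

rng = np.random.default_rng(42)

def small_scale_process(samples=10_000):
    """Stand-in for a chaotic sub-grid process that is only statistically determinate."""
    return rng.standard_normal(samples) ** 2     # detail the large model never sees

def large_model(forcing, mixing_parameter):
    """The top-level model sees the small scale only through one tunable number."""
    return forcing * (1.0 + mixing_parameter)

forcing = 2.0
observed = 3.1                                   # the macro-level value to reproduce

mixing = float(small_scale_process().mean())     # first estimate from the sub-model (about 1.0)
print(f"with the sub-model's estimate: output = {large_model(forcing, mixing):.2f}")

mixing = observed / forcing - 1.0                # ...then tweaked to hit the observation
print(f"after tuning: output = {large_model(forcing, mixing):.2f} (matches, but is it valid?)")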

Nor are these the only ways in which computer simulations tend towards artifice rather than real-world representation. For in a manner very similar to Thomas Kuhn’s description of how supplementary, ad hoc theories are used to explain exceptions to a main theory during the mature phase of a scientific paradigm’s lifecycle, so too are supplementary, ad hoc programs often written into computer models simply in order to ‘make the model work’, whether or not these supplementary programs have any real-world correlate. The result, as described by Eric Winsberg in his essay ‘Computer Simulations in Science’, is that the outputs from computer models:

typically depend not just on theory but on many other model ingredients and resources as well, including parameterizations (discussed above), numerical solution methods, mathematical tricks, approximations and idealizations, outright fictions, ad hoc assumptions… and perhaps most importantly, the blood, sweat, and tears of much trial and error.

Indeed, some large, multiscale computer simulations take years (even decades) to develop, and are less built than grown organically, which can give rise to its own set of problems, especially in a university setting where numerous generations of post-graduate students may have worked on the simulation, each making their own modifications without always adequately documenting them, such that eventually it becomes very difficult to say how the model actually works, or even what it represents, rendering the idea that it could somehow correspond to reality not just laughable but conceptually confused.

Indeed, we have seen an example of this very recently in the case of the epidemiological computer model developed by Professor Neil Ferguson and his team at Imperial College London, which predicted that over half a million people in the UK could die of Covid-19, thereby forcing the government to impose draconian lockdown measures. Not only has this prediction turned out to have been massively awry, however, but it has since been discovered that the sixteen-year-old code from which it was generated has been so frequently altered during its lifetime that large parts of it are more or less unintelligible to any computer programmer trying to work out what they do. In short, the whole program has effectively become a ‘black box’: a mystical engine for issuing predictions, some of which turn out to be correct, though nobody knows how or why.

This is not just bad science; it is no longer science at all. It’s more like the Oracle at Delphi, which, of course, is exactly what governments really want from science: not the pure and disinterested kind of science which merely seeks to describe and explain how the universe works, but a shaman’s tool which they can use to take the uncertainty and political exposure out of decision-making. For this, of course, is how it has always been. In the past, rulers consulted priests who read the auguries for them. Now they consult their scientific advisers: a modern alternative which has the added benefit of allowing them to say that they are simply following scientific advice, thus absolving them of any responsibility for anything that might go wrong should the auguries turn out to be false.

Thus, while it is likely that a number of individual scientists will eventually be made scapegoats for the catastrophic economic consequences which will undoubtedly follow from the mishandling of the coronavirus pandemic by governments around the world, the biggest casualty resulting from this folly is almost certain to be science itself. For while science, in the abstract, may not be responsible for the fact that so many of those who practice it don’t really understand it (and, not knowing what they do, are therefore willing to mangle and corrupt it for their own self-advancement), it is doubtful whether those whose lives are devastated by its misuse will make such a fine distinction. Seeing only scientists to blame, they will likely blame science too, thus almost certainly undermining one of the most important pillars upon which our civilization has been built, much to the detriment of us all.

 

Sunday 14 June 2020

Why We All have Twice As Many Female Ancestors As Male (Part II): Male Accentuated Natural Selection and its Ongoing Effects


In the first part of this essay, which you can find here, I put forward an evolutionary theory to explain the recent genetic discovery that all human beings alive today have roughly twice as many female ancestors as male. The main premise underlying this theory, which I call Male Accentuated Natural Selection or MANS, is that, while it is necessary for the survival of any sexually reproductive species that as many of its females produce offspring as possible (thereby ensuring that the next generation of the species is as well populated as it can be), it is not necessary for all its males to do so. In fact, there is actually an evolutionary advantage to be gained from restricting the number of males that are able to mate, either by having them fight each other for this honour, as in most species of herbivore, for instance, or by having the females choose the most genetically attractive among them, as in most species of birds. Either way, the result is that only the strongest, fittest and most well-adapted of the males are able to mate and pass on their genes, thereby weeding out any genetic weaknesses within the species on the otherwise largely superfluous male side of the mating equation.

As I explained in more detail in Part I, whether a species instantiates MANS through male combat or female choice is determined by a number of factors, including size, diet and whether both parents are required to feed their offspring: a condition which rather precludes the possibility of the males having multiple mates and more mouths to feed than they can possibly satisfy. Another important factor is whether the members of each sub-population of a species are required to cooperate and work together in order to obtain food, as in the case of most canine species, for instance: a requirement which precludes the possibility of the males continually fighting each other and weakening their cooperative capabilities. 

As pack animals, human beings are subject to the same constraint, and they too instantiate MANS through female selection, which is to say that the males court the females and the females choose who they will have and who they won’t. Unlike most canine species, however, in which the females are able to fight off the sexual advances of any male they don’t want, human beings have undergone other evolutionary developments which have made this mode of MANS instantiation rather more problematic.

The most significant of these was the development of our big brains, which, of course, gave us a massive evolutionary advantage, but which constituted an equally massive inconvenience to women during pregnancy. During our many thousands of years of evolutionary development as hunter-gatherers, this therefore resulted not only in a separation of the roles of men and women in daily life, with only the men going out to hunt, but in an evolutionary divergence in the physical characteristics of the two sexes, in which men became significantly larger, stronger and faster than women, thereby placing the latter in the desperately precarious position of being dependent on men for sustenance and protection while being largely defenseless against them.

Indeed, so perilous is this outcome in evolutionary terms that it could not have endured without a counterbalancing social development, in which the dominant males in any hunter-gatherer community would have had to become protective of their women in order for that community to remain stable and survive: a development which, I argued in Part I, would probably have come about quite naturally as a result of the fact that protective males at the top of the social hierarchy would have been doubly attractive to the females, who would thus have consistently chosen them as mates, thereby encouraging the evolutionary development of protective tendencies in men as part of the MANS process.

The problem with this, however, is that while it may have achieved some level of stability within hunter-gatherer tribes and secured a commensurate level of safety for their women, both would still have remained precarious, not least because the solution, in itself, almost certainly magnified the MANS effect, as evidenced by our genetic history. For instead of perhaps 10% or 20% of the males being rejected and not passing on their genes (as one might expect to find in most species that instantiate MANS through female selection), what we ended up with was a massive 50% of the male population effectively being excluded from having sex within their own tribal community. Not only would this have caused a huge amount of resentment, animosity and even hatred among the rejected group, therefore, but it would also have led to increased inter-tribal conflict, in which the rape, capture and enslavement of another tribe’s women would have been the primary objective of these sexually disfavoured males.

Nor was this merely a passing phase in human development, confined solely to the era of our hunter-gatherer forebears. For not only has such behaviour been a prominent feature of inter-tribal conflict throughout our history (as I hope my account of the sack of Magdeburg sufficiently demonstrated), but it is actually the inevitable consequence of our unique instantiation of MANS through female selection in circumstances in which the females in question are unable to adequately defend themselves against unwanted sexual advances.

Another way in which we have continually attempted to mitigate this threat, therefore, has been through the social and cultural conditioning of young men. Throughout history, in cultures as otherwise diverse as those of western Europe and Japan, young men have regularly been inducted into codes of honour and chivalry in which the protection of women has almost always been central. Indeed, a respect for women and a gentlemanly code of conduct determining how men interact with them is almost universally regarded as a prerequisite for any civilized society. The problem, however, is that because we have never really understood our evolutionary contradiction – or even properly recognised it – the reason why such codes of conduct are necessary is also only poorly understood, with the result that the discipline required to maintain them has all too often been allowed to lapse.

Worse still, there have even been times in our ignorance – not least the present – when we have actually deluded ourselves into thinking that such constraints upon our behaviour are not required, and that being in some way naturally ‘good’ – unless, of course, irredeemably toxic – both men and women should therefore be allowed to live however they like. Against all evidence to the contrary and disregarding our entire history as a species, the inevitable result has been that the one institution which largely kept us civilized – that of marriage – which not only, for the most part, ensured the safety of women but greatly limited the number of men who were excluded from the mating pool, thereby also limiting the amount of anger and resentment with which society has had to deal – has now fallen into decline, abandoned in pursuit of what, on the surface, appear to be other, more desirable social, economic and political goals.

Even in this, however, we reveal just how little we understand ourselves. For by viewing the decline in marriage over the last forty years or so as largely the result of our society’s more enlightened attitude towards women – and therefore of our own political will – we completely fail to recognise the far more fundamental evolutionary forces at work in shaping society. In fact, we mistake ourselves entirely. For as I aim to demonstrate in this second part of ‘Why We All Have Twice As Many Female Ancestors As Male’, the revolutionary changes which have so transformed society since the end of the second world war have had far less to do with political philosophy than with the biological imperatives programmed into our genes, the change in the relationship between men and women, in particular, having been brought about by nothing less than what I aim to show has been a reassertion of the MANS principle as instantiated through female choice.

To understand this, however, we first need to understand why such a reassertion was necessary, especially given the fact that at the end of Part I of this essay, human beings were still living in hunter-gatherer communities in which women’s sexual freedom was more or less guaranteed, both by the communal raising of children within the matriarchal clan, and by the self-interest of the dominant males at the head of the tribal hierarchy, who knew that by defending a woman’s right to choose, they more or less guaranteed that they would always be the ones chosen. 

The first question we need to ask, therefore, is: What happened to change this? And the short answer, at least, is: The Climate!

Around 11,500 years ago, after a period of approximately 103,000 years during which the earth experienced its most recent ice age, or period of glaciation, the planet finally began to warm up again, sufficiently extending the growing season in places like the Nile delta and the Mesopotamian basin so as to permit the sowing and harvesting of certain varieties of grass, the relatively large seeds of which could be milled to produce flour, which could then be used to make bread. Over the next thousand years or so, as the ice sheet retreated and temperatures gradually increased at latitudes further and further away from the equator, human beings all over the world consequently started to give up their nomadic, hunter-gatherer lifestyle and became settled farmers instead, finding it both safer and more reliable to grow crops and husband various breeds of domesticated animal than to forage and hunt.

As fundamental as this thorough-going transformation in our way of life may seem, however, it was what flowed from it that far more significantly altered women’s position in society. For settling in one place inevitably brought with it a whole train of other economic and social changes, the first and most important of which was the creation of property.

I say this because, as hunter-gatherers, human beings had had very few possessions and nothing that could really be described as ‘property’ in the way in which we normally use this term: just a few furs, perhaps, along with a stone axe and flint knife. Now, as settled farmers, we discovered the most important and fundamental property of all: land. For it was only through the physical possession of land that farmers could farm and hence provide for themselves. Moreover, it was only by providing for themselves that they could maintain their independence. For anyone who did not physically possess land had to work for someone else who did, thereby placing themselves in a position of subordination. Land ownership, therefore, did not just mean independence and personal sovereignty, it also meant power. And the one thing every land owner who had such power wanted, therefore, was to pass it on, after his death, to someone who was effectively a continuation of himself: a son and heir of his own blood.

Thus it was through property that fatherhood now began to be recognised, bringing into being a new ‘patriarchal’ social structure, in which families were organised along the line of patrilineal descent: the very development, indeed, which feminists now so much decry, not least because it inevitably curtailed many of the freedoms which women had previously enjoyed. For any man who wanted to pass on his land to his son first had to ensure that his son was actually his. This meant that the woman who bore him this son not only had to be faithful to him throughout the duration of their union but had to be a virgin when that union commenced. Thus property not only brought about the recognition of fatherhood but the institution of marriage.

Worse still, with respect to current sensibilities, it also brought about capitalism. For land is not fungible, in the sense that not all land has the same value. Some land is simply better for growing crops than other land, with the better land producing higher yields. Any man who owned better quality land than his neighbours, therefore, would not only have been able to grow more food than them but, assuming that, in most years, most farmers grew enough food for their own consumption, would have actually been able to produce a surplus: an additional quantity of food which he could then trade or sell, providing him with the capital required to invest in more land, while also, perhaps, hiring landless labourers to work it. This, in turn, would have produced an even bigger surplus the following year, which, again, he would have been able to sell, allowing him to invest in even more land. And so on and so on. Indeed, it is this cycle of production and reinvestment which is the very essence of capitalism, encapsulating its most essential truth: that, if well managed, property tends to accrue to those who already have it, the inevitable result being that the rich get richer while the poor get poorer, the inequality becoming ever more exaggerated over time, particularly in societies which practice primogeniture, where the eldest son inherits all of his father’s wealth, as opposed to societies in which the deceased’s property is divided equally among his surviving offspring.
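The compounding cycle just described can be made concrete with a toy model. The sketch below, in Python, uses entirely invented figures – a 10% annual surplus and a fifty-year span – simply to show the direction of travel rather than any historical rate of return:

# A minimal sketch of the reinvestment cycle described above, with made-up
# numbers: one farmer's better land yields a surplus above subsistence which
# he sells and reinvests in more land each year, while his neighbour's poorer
# land yields only enough to live on.

better_land = 1.0        # holding of the better-placed farmer (arbitrary units)
poorer_land = 1.0        # holding of his neighbour
surplus_rate = 0.10      # assumed annual surplus, traded and ploughed back into land

for year in range(50):
    better_land *= 1 + surplus_rate   # surplus sold, proceeds invested in more land
    # the poorer farmer merely subsists, so his holding never grows

print(round(better_land / poorer_land, 1))    # roughly 117.4 after fifty years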

Once accumulated wealth started being inherited in this way, this then brought about another, even more profound change in the relationship between men and women, especially among the rich. For while, in hunter-gatherer communities, there had been no distinction between the men who commanded the community’s resources and the men whom women found most attractive – the men at the top of the social hierarchy generally being the fittest, strongest and most capable men in the tribe – in a land-owning society in which those who inherited their fathers’ wealth were not necessarily of the same calibre as the fathers who amassed it, this was no longer the case. Indeed, in this new society, it would have been perfectly possible, and probably quite commonplace, for a woman to marry a wealthy husband purely for the standard of living he could give her, while not being physically attracted to him in the least, very probably preferring the fit, young men whom her husband very likely hired to guard his property, including, of course, his wife. The inevitable result, therefore, was the emergence of a wholly new crime: one which would have been unimaginable within a matriarchal society, but which could now cause an adulterous wife and her imprudent lover to lose their lives.

Nor was this new crime of adultery the only way in which the new patriarchal order sought to control female sexuality and influence a woman’s choice of mate, especially if the new institution of marriage involved the payment of a dowry: a tradition which may well have come about as a hangover from the matriarchal age, when economic responsibility for a woman lay with her matrilineal male relatives. In his study of the Trobriand Islanders, the fieldwork for which actually took place during the islands’ transition from a matriarchal to a patriarchal order, Bronislaw Malinowski describes, for instance, how the brothers and maternal uncles of marriageable young women would sometimes quite literally lock their sisters and nieces away in order to keep them from forming attachments to attractive but impecunious young men who, on taking responsibility for a young woman’s material needs, would then demand payment from those who had previously been responsible for her welfare.

Not, of course, that we can be certain that this is how the tradition of providing a bride with the means of her future support actually arose, or indeed that, in every instance, the transition from a matriarchal to a patriarchal society occurred precisely in the way Malinowski recounts. What I have outlined above are merely the logical steps of such a transition. How long it would have taken in a real-world context, and how much more complicated the process may have been, we have absolutely no way of knowing. For, of course, there is no historical record stretching back that far. The one thing that seems highly likely from Malinowski’s account, however, is that, in many societies, the two orders – the matriarchal and the patriarchal – may well have coexisted, side by side, for quite some time – perhaps even thousands of years – before the transition was complete, with only the rich, or those for whom the passing on of property was important, initially getting married, while the poor enjoyed more flexible sexual relationships, with the women collectively rearing children from multiple fathers in still largely matriarchal communities.

Indeed, it’s possible that this division may have continued right up to the start of our recorded history and perhaps even beyond, the separation of the two cultures being maintained by the ever-widening gulf between the rich and the poor. For as capitalism developed, and more and more of a society’s wealth inexorably accumulated in the hands of a few rich landowners, economic logic dictates that most such societies would have become increasingly divided into three main social classes: the wealthy few who owned all the land; a middle order of landless but physically and intellectually capable men who served the landowners as stewards, scribes, overseers and enforcers; and the toiling masses who may have started out as hired hands or day labourers but who were eventually reduced to a state of slavery. In fact, for at least the first thousand years of our written history, and for probably some millennia before then, most agricultural workers throughout the Middle East and much of Mediterranean Europe – indeed, almost anywhere where a fertile soil and temperate climate allowed wealth to be accumulated in this way – would have actually been slaves.

This is because extreme inequality, on the scale here envisaged, would have created two very serious problems for the landed class. Firstly, being in a very small minority, they had to keep the impoverished landless majority in a constant state of intimidated oppression so as to stave off any possible rebellion, which, should it occur, had to be suppressed with merciless brutality. Secondly, to accomplish this, they also needed to maintain the loyalty of the men who would carry out this ruthless suppression: the men of the middle order, whose services had to be rewarded and whose aspirations needed to be met. If the land owners were not to be forced to share their own wealth with these men, therefore, the necessary rewards and opportunities for advancement had to come from somewhere else: something which could only really be achieved through military conquest.

The inevitable result was the age of ancient empires: an almost unique period in human history in which the MANS principle was instantiated, not by female selection, but by male force, not in single combat between individuals competing for the right to mate, but in wars of conquest between competing civilizations, in which the entire male population of an economically and technologically less advanced culture could be removed from the gene pool, either through death on the battlefield or a life of enslavement, while their women were given out as rewards to the victorious soldiery in much the same way as they had been following inter-tribal clashes between hunter-gatherers, only now on a much larger scale.

Not, of course, that all ancient empires operated in exactly the same way, the above model being largely confined to early empires, such as that of the Assyrians, whose predations on neighbouring populations are well documented. Extensive archaeological excavations in northern Israel, for instance, have revealed dozens of ancient villages which simply disappeared around 800 BC following an Assyrian invasion, their entire populations having been either killed or taken elsewhere.

The Assyrians’ successors as rulers of Mesopotamia, the Babylonians, also seem to have followed this model. After their conquest of the Kingdom of Judah in the early 6th century BC, the Babylonian king, Nebuchadnezzar, famously took large numbers of Jews into captivity. In doing so, however, he also revealed what is probably the model’s most serious drawback. For having taken a conquered population back to one’s homeland as slaves, one then has to decide whether to keep the men and women together as a viable breeding population, or whether to segregate the two sexes, putting the men to work in the fields as agricultural labourers, while typically selling the women into prostitution or domestic service.

The problem, however, is that while both of these options have something to be said for them, they each also have some seriously negative implications. If one chooses to keep the men and women together, for instance, the upside of this is that they will naturally reproduce and the empire’s slave population will thus be self-sustaining. However, one will have also brought a breeding population of foreigners into the heart of one’s empire, who, like the troublesome Jews in Babylon, will always pose a potential threat. If, on the other hand, one chooses to segregate them, one may forestall this obvious danger, but only at the cost of constantly having to find more lands to conquer in order to replenish one’s slave population from outside. For although most of the captured women – those sold into prostitution or domestic service – are likely to have children, most of them being made pregnant by their new masters, it is unlikely that any of the male agricultural slaves will ever replace themselves. What’s more, it’s also unlikely that many of the children of the female captives will grow up to take their places either. For while traditions in this regard may have differed between imperial cultures, these children, being the illegitimate sons and daughters of their owners, are far more likely to be eventually integrated into the native population, with the boys, in particular, being trained as soldiers in order to meet the army’s continual need for new recruits with which to conquer new lands and thus obtain fresh supplies of slaves.

As a consequence, it was almost certainly to get off this slave-driven treadmill that the Babylonians’ successors as the imperial masters of Mesopotamia, the Persians, took a completely different approach to empire. In 539 BC, the new Persian king, Cyrus the Great, not only allowed the still intact Jewish community in Babylon to return home to Judah, but set out a new policy with respect to future conquests, in which conquered peoples would remain in their native lands and would work there for their new imperial overlords, either directly on estates taken from the defeated landed class, or indirectly through taxes.

This had two main advantages. Firstly, by not transporting whole peoples back to one’s homeland, one did not end up with potentially rebellious aliens in one’s midst. Even more importantly, however, the administration of an empire’s foreign possessions quite naturally provided new opportunities for its middle class to move up in the world, particularly if part of the reward for their services comprised a share in the sequestered lands in the conquered territory.

The problem, however, was that this also constituted one of the new model’s major disadvantages. For in addition to the army of bureaucrats required to administer these colonial possessions, the empire also needed a huge standing army of actual soldiers to keep the conquered territories under control and prevent rebellion, while also guarding an ever-expanding border. This further meant that the empire’s homeland was increasingly denuded of young men, with the result that, eventually, in the case of Rome, for instance, which also followed the Persian model, the population of its Italian homeland was almost entirely reduced to unmarried women, freedmen – who had effectively become the empire’s principal domestic overseers and administrators, which is to say its new middle class – and the slaves who were still, of course, required to work the estates of the landed aristocracy.

Moreover, these slaves still had to be imported. Which meant that the empire still had to keep expanding. And while the threat of internal rebellion from within the inner provinces could be mitigated by eventually making most of their native populations citizens of the empire – thereby also enabling them to be recruited into the army and the civil service – the ever-growing needs of an ever-expanding periphery not only meant that the empire eventually became overstretched and vulnerable, but that even this revised imperial model was essentially unsustainable, as the Romans, themselves, eventually admitted, when, in the early 2nd century AD, the Emperor Hadrian started building walls and fortresses along the Rhine and the Danube, thereby placing the empire on a new defensive footing and marking the end of its expansion.

Over the next several centuries, therefore, as the Roman and Parthian empires gradually disintegrated, wealthy landowners throughout Europe and the Middle East had to find a more sustainable way of husbanding their workforce, with the result that a new, modified form of slavery – generally referred to as serfdom – began to emerge, in which serfs were still tied to their masters’ estates, for which they had to work a set number of days each year, but, in return, were allocated plots of ‘common’ land which they could collectively work for their own subsistence. More to the point, given the principal subject of this essay, they were also now encouraged to marry and have children in order to replace themselves, thereby giving rise to self-sustaining communities which, to some extent, were also self-governing, at least with regard to such internal domestic issues as who should marry whom among a village’s young men and women, thereby allowing the instantiation of MANS through female selection to, once again, assert itself.

That’s not to say, of course, that women instantly regained the sexual freedom they had enjoyed as hunter-gatherers or, indeed, anything like it. For even though serfs owned no land and could, in principle, have lived communally, in practice monogamous marriage had now become the only acceptable form of union between men and women, at least in Christian Europe. Even in the absence of land to pass on, moreover, the question of whom a woman should marry was still as much determined by the economics of agricultural production as it was by the woman’s own inclination. For the land owners weren’t the only ones with an interest in the reproductive success of their workforce. The peasants, themselves, also had a lot at stake. For above all else, they needed to ensure that the young men and women who would comprise the next generation of their community were as fit and strong as possible, not only so that they could fulfil the village’s obligations to their feudal lord, but to ensure that they would also have enough energy left over to work the village’s own land, in addition to producing strong, healthy children of their own. The decision as to who was to marry whom among a village’s young men and women, therefore, was as collectively important to the village as decisions regarding any of their other breeding livestock and, as Germaine Greer remarked in ‘The Female Eunuch’, was usually made in much the same way.

As Germaine Greer also explains, however, for many women this may not have been a bad thing. For even though they had still not regained complete control over their choice of mate, in some ways they were actually in a better position than the women of later periods. For due to the very fact that the economic implications of any marriage were communal rather than individual – the well-being of the entire village resting on each couple’s reproductive success – the criteria which determined which unions should take place almost always produced much the same pairings as the women, themselves, would have chosen, particularly for the most physically favoured women, with the fittest, strongest and most able young men naturally being paired with the most reproductively promising young women, leaving only the less favoured women disappointed with their match or unmarried altogether. Indeed, medieval peasant marriages, as Germaine Greer describes them, probably came as close to being driven by pure natural selection as any in the institution’s history, and for that very reason were likely to have been more acceptable to women than some later forms of marriage, which became more and more determined by the economic circumstances and needs of individuals.

For this next transformation in relations between the sexes to take place, however, two further changes in the external environment were necessary. The first of these occurred during the mid-14th century, when the Black Death wiped out whole swathes of Europe’s rural population, thereby effectively increasing the value of agricultural labour. The result was that land owners now had to compete for workers, usually by offering them small plots of land of their own to work, rather than just a share in the produce of common land. This luring of peasants away from their former feudal masters, moreover, effectively severed the tie between a serf and his master’s estate, creating a new mobility within the agricultural labour market and, in consequence, bringing the institution of serfdom itself to an end.

While this shifted some of the economic focus away from the collective and on to the individual, however, it did not necessarily enhance the individual’s economic circumstances to any great extent, at least not to the degree that women might discern any real economic differences among prospective mates and hence be guided in their choices by economic factors. For this to occur, a second external or environmental intervention was needed: one which came in the form of yet another change in the climate, albeit one significantly smaller than the 9°C of warming which had transformed the planet at the end of the last ice age.

While smaller in degree, the 2°C of cooling which occurred during the 17th century, resulting in what is generally known as the Little Ice Age, was still, nevertheless, no trivial matter, with effects that were far more catastrophic than is generally recognised. Throughout the world, a shortening of the growing season led to widespread crop failures, especially at latitudes that were already marginal for agriculture, such as those of northern Europe, which suffered decades of famine and higher food prices. What made this even worse, however, was the fact that the absolutist monarchs of the day, such as Charles I of England and Louis XIV of France, didn’t realise or understand what was happening and continued to levy taxes that were increasingly beyond what the middle class, in particular, were able to afford. This, in turn, precipitated nearly a century of revolutions and civil wars in which it is estimated that Europe lost around a third of its population.

This time, however, it was not population loss, itself, which led to the social and economic changes from which modern marriage eventually emerged. It was rather the fact that, with so many crop failures over the previous century, farming itself had become far too risky a business for the land owners, themselves, to indulge in. Instead of hiring agricultural labourers to work their estates, therefore, they decided that it was safer to lease individual farms to these same labourers for an annual rent, thereby guaranteeing themselves an annual income, while their new tenants – who had to pay the rent regardless of the quality of the harvest – took all the risk.

This is not to say, however, that the advent of ‘rentier capitalism’ – in which the owner of an asset does not work that asset himself to produce a return, but merely rents it out to someone else – was only to the benefit of the rentier class. For although tenants had to pay the rent and did, indeed, take all the risk, a long-term tenancy, which could be passed down from father to son, was still, nevertheless, an asset, which could be developed and improved, giving the tenant far more economic control over his life than mere agricultural labourers had ever had. Moreover, once the climate began to warm up again and farming became inherently less precarious, these new tenant farmers now had something far more tangible to offer prospective wives than the mere relative attractiveness of their persons: they had more or less guaranteed incomes and hence an almost guaranteed standard of living.

Even more importantly, they also had incentive. For the creation of long-term tenancies, by shifting the responsibility for an individual’s economic security from the community to the individual, himself, effectively placed each man’s fate in his own hands, giving rise to a culture of competitive endeavour in which marriage, itself, now became a major driver. For whether it was in the countryside or in any of Europe’s rapidly expanding cities, where new technologies and industries were providing ever more diverse opportunities for enterprising young men to make their fortune, if a man wanted to marry he knew that he had to be able to offer his prospective bride a future commensurate with her expectations, or at least those of her family. Thus marriage, in what we might now call its ‘modern’ form, became a goal which drove men on, fuelling the agricultural and industrial revolutions of the modern era.

The problem was, of course, that while this was good for men, providing them with achievable goals and a clearly laid out path towards both success and happiness, for women, who are genetically programmed to select men on the basis of their physical and personal attributes, it was far less so. For as in the case of the very rich in the earliest patriarchal societies, the new socially acceptable criteria for the selection of a mate, which women were now forced to adopt, shifted the emphasis away from a man’s personal qualities and onto his economic circumstances and prospects, such that the man who made a young lady’s pulse race was not necessarily the man whom her family would urge her to marry. 

Moreover, as technological innovation and improvements in productivity increased economic growth, this placed more and more men in the position of being economically eligible, with the further consequence that, statistically, more and more women found themselves in the unenviable position of ‘choosing’ men whom, from a purely personal perspective, they would probably never have chosen before. And while this may not always have been something they would later lament, in every generation throughout the modern era there would always have been a number of women who, somewhere down the road – looking back over all the men they could have chosen – found themselves regretting the one for whom they had ‘settled’. Indeed, the idea that they could have done better for themselves is a thought which has very probably crossed the minds of millions of women over the years and which, in many cases, has given rise to a level of disaffection which has eventually made the marriage intolerable to both parties.

That’s not to say, of course, that men have never regretted their own choice in this regard. However, a husband’s regret is nearly always a response to that of his wife. This is because men and women have very different attitudes and expectations when it comes to marriage, which, in turn, stem not just from the obvious biological differences between the two sexes, but from the very different reproductive strategies to which these differences give rise. For while women can only have a small number of children during their reproductive years – making it entirely logical, therefore, as well as biologically conditioned, that they choose the father, or fathers, of these children with great care – a single man is biologically capable of siring literally thousands of children during his lifetime, making a scatter-gun approach the strategically best option for passing on his genes.

Not, of course, that most men achieve this. For the selectivity of women means that most men get very few opportunities to do much scattering. Moreover, they are generally well aware of their limitations in this regard, along with their relative standing in women’s eyes. It’s why, for instance, they are usually very careful in their choice of women they ask out on a date, steering well away from any woman they regard as ‘out of their league’. It is also why traditional courtship, in which they could sell themselves as good economic providers, suited them. For it allowed them to compensate for any physical and personal shortcomings they may have felt they had by demonstrating other attributes and abilities. 

Being thus both generally quite realistic with respect to their prospects with women and also less choosy than women, men also tend to be rather more passive in their approach to finding a mate, with very few men having the courage to go up to a woman unless she has given them some very clear signals that she would welcome their attention. Given that, for most men, this happens very rarely, the result is that, as long as the woman who has raised their hopes is a congenial companion, makes them feel good about themselves and, once the relationship has reached this stage, is enthusiastically happy to have sex with them on most of the occasions on which they feel so inclined – three admittedly very important conditions – most men are more than happy to ‘settle’ for what fortune has seen fit to bestow upon them. It is only when the women themselves begin to regret their choice, and start to demonstrate this regret in a far less loving and companionable manner, that men, too, typically start to regret it.

That’s not to say, of course, that there aren’t men who, disappointed and frustrated by their own inadequacies and failures in life, take their bitterness, resentment and anger out on their wives. The world is a hard and unforgiving place and is consequently full of disappointed and unhappy people. As I pointed out in ‘Women’s Liberation and the Monetarisation of the Economy’, however, violent and abusive marriages can hardly ever have been the norm, or women, having watched their mothers go through hell, would simply have stopped getting married.

A far more common impediment to a happy and successful marriage, therefore, is almost certainly one which has its basis in yet another evolutionary difference between men and women. This is the fact that, being less choosy than women, men don’t need to be wooed or courted. By this I mean that if a woman shows even the mildest interest in a man, as long as he finds her moderately attractive – though not so attractive, of course, that he is made to feel uncomfortable or suspicious – he will not generally require a lot of persuading to respond positively. After all, most men don’t get that many offers. More to the point, once in a relationship, they rarely need very much more from a wife or girlfriend than the three requirements outlined above. They certainly don’t need or expect to be continually romanced. In fact, most men are likely to regard any over-attentiveness on the part of their wives or girlfriends as a sign that they want something or have already done something which the men, themselves, are not going to like. And they’re usually right.

This, however, is not how it is for women. For having spent far more of their evolutionary development in matriarchal hunter-gatherer tribes, in which they selected a series of mates over time, than in patriarchal societies which now confine them to a single partner for life, women still consequently need to be courted and wooed on a continual basis, even within the bounds of a monogamous marriage.

This is not something women demand of men in order to make life difficult for them – though it often has that effect – but rather an essential part of the mating process: something that women require in order that they may continually renew their selection. For every time a woman accepts a man into her bed, it is, in effect, a MANS-instantiating choice. And to justify that choice, the man in question must continually prove, not just his merit – though, importantly, that too – but his love and devotion. And although men may never be able to truly comprehend this – having no such need themselves – if they want a long and happy marriage, they nevertheless have to understand it and act accordingly, with remembering their wedding anniversary generally being a good start, closely followed by taking their wives out to dinner to celebrate, or arranging a romantic weekend away for just the two of them, so that they may relive something of the closeness of their first days together.

The mere fact that marriage has to be worked at in this way, however – especially by men – reveals its inherent weakness. For while, from a societal perspective, monogamous marriage may have seemed the optimal solution to the problem of instantiating MANS through female selection in a species with such potentially catastrophic gender differences, the fact that this form of union does not come naturally to either men or women – requiring both of them, in fact, to work at it – meant that, for women in particular, a reversion to something closer to that for which they were evolutionarily prepared was more or less inevitable once the social and economic conditions for it had been laid. And that is precisely what has happened over the last sixty years or so, as two further changes in the external environment once again transformed the relationship between men and women.

The first of these occurred in 1960, when the US Food and Drug Administration approved the sale and distribution of the first safe and reliable oral contraceptive, thereby allowing women to engage in sexual relationships without first securing a legally binding contract providing for any children they might have as a consequence. 

While this restored some of the sexual freedom that women had enjoyed in matriarchal societies, however, on its own it still did not free them from the requirement to marry altogether. All it really did was give them a little more sexual license before making their final marital choice. In order to free them from marriage completely, what they also required was economic freedom from men, which not only required a much larger societal transformation but the active cooperation and support of society as a whole, including the two most powerful economic agencies in the modern world: big business and government.

Indeed, it was big business and government that really made this whole transition possible, though not because either had any particular regard for the women’s movement. For as I explained in ‘Women’s Liberation and the Monetarisation of the Economy’, each had their own agenda for wanting greater female economic independence. 

Big business primarily wanted women to go out to work because it wanted the increased purchasing power which this would give them, making them better consumers. However, it also wanted the abundant cheap labour which women could supply, women at that time being paid considerably less than men. Similarly, governments wanted women to join the paid labour force, firstly because they wanted the increased tax revenue which this would produce, but also because they wanted to take credit for the apparent economic growth and increase in GDP to which the monetarisation of what women had previously produced unmonetarised in the home would seemingly give rise. 

The result was that within just two decades, from 1960 to sometime around 1980, women had not only regained their sexual freedom but had actually secured a level of economic independence from men which they’d probably never had before in the whole of human history, thereby freeing them from the need to get married at all, if that was what they wanted.

The problem, however, as I also explained in ‘Women’s Liberation and the Monetarisation of the Economy’, was that the monetarised production of food, clothing and other goods and services which women had previously produced in the home without monetary remuneration introduced a significant level of hidden inflation into the economy, which wasn’t reflected in either the Retail Price Index or the wages that were negotiated on the basis of official inflation figures. The result was that, whereas in the 1950s families with three, four or even five children could live quite comfortably on the monetary income of a single wage-earner, it very soon became difficult for even one-child families to get by, even with both parents working.
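One way to see the arithmetic of the squeeze being described here, under the assumption (mine, not one made explicitly above) that a second wage buys back less than the home production it replaces, is the following toy household budget in Python, with all weekly figures invented purely for illustration:

# A toy weekly household budget, with invented figures, illustrating how
# monetising formerly unpaid home production can leave a two-earner household
# worse off in cash terms than a one-earner household once was.

single_wage = 100            # 1950s-style household: one cash income
market_spending = 100        # what that household bought for cash each week
home_production = 80         # market value of food, clothing and childcare produced unpaid at home

second_wage = 70             # assumed additional cash income when both partners work

# With both partners working, the formerly home-produced goods and services
# must now be bought on the market.
new_income = single_wage + second_wage              # 170
new_spending = market_spending + home_production    # 180

print(new_income - new_spending)   # -10: two wages, yet the household falls short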

Worse still, with both the husband and wife needing to hold down full-time jobs, married couples now have far less time to simply be together, and even less for a man to periodically woo and court his wife anew. For even when they have finished work, the shopping still has to be done, the children have to be picked up from day care, supper still has to be cooked and the house still needs to be tidied and cleaned. To many married couples, as a result, the institution of marriage and the practicalities of bringing up children have come to seem like an endless treadmill of demands from which they simply want to escape, making it hardly surprising, therefore, that 42% of all marriages now end in divorce and that more and more men and women are putting off marriage until much later in life or never getting married at all.

In the United States, for instance, the proportion of those aged between 25 and 34 who are married declined from 55.1% in 2000 to 44.9% in 2009, while the proportion of the ‘never-marrieds’ – as distinct from the simply unmarried, which includes both the divorced and the widowed – increased from 34.5% to 46.3%. What is even more significant, however, is the change in the ‘dating’ behaviour of the never-marrieds during this period. For with the marriageability of men – or even their potential as long-term partners – no longer featuring very highly among a woman’s criteria in choosing a date, the basis upon which this choice is made has increasingly reverted to men’s physical attractiveness: a trend which has been further accentuated by the use of online dating apps and the very limited scrutiny given to most of the candidates who upload their profiles.

The result is that a large proportion of the never-married male population, like the many low-status males in matriarchal societies, now effectively find themselves outside the dating/mating pool. A recent study by the US Institute for Family Studies (IFS), for instance, found that around 23% of never-married men aged between 22 and 35 – usually the most sexually active demographic – had not had any sexual encounters within the previous twelve months: some by choice, but mostly not.

In fact, online, some involuntary celibates – or ‘Incels’, as they are called – are vociferously angry about their situation, in a way that has earned them something of an anti-social and even misogynistic reputation. For most involuntarily celibate males, however, one suspects that the reality is somewhat different. For most of them, resigned to the fact that they will probably never marry or even have a girlfriend, are far more likely to keep their misery and shame to themselves, hiding it behind a semblance of jovial camaraderie or an obsession with computer games.

Others, of course, attempt to find solace in alcohol or drugs and, in so doing, very likely contribute to the epidemic of overdose deaths currently gripping America, with 70,237 fatalities in 2017, compared to a mere 16,849 in 1999. Even more sadly, some are driven to go even further, adding to the steadily rising number of US suicides, which, in 2018, stood at 48,344, up from 42,773 in 2014, with 70% of all suicides being carried out by white males.

That’s not to say, of course, that it’s all doom and gloom for men in today’s dating world. Because for every involuntarily celibate male who is repeatedly rejected by women, or has given up even trying, there have got to be other males, at the other end of the dating spectrum, who are repeatedly chosen by women. And, indeed, this is the case. For while, according to the same IFS study cited above, the most-favoured 20% of men, based on their attractiveness to women, may not be having quite the 80% of outside-marriage sex attributed to them by urban legend, they get fairly close to it with 60%. What such statistics imply, however, is a lifestyle that doesn’t come cheap in either economic or emotional terms: one which not only involves dozens of hours in the gym each week keeping their bodies toned to Chippendale perfection and a small fortune spent on casual but fashionably expensive designer clothes, but a superficiality in relationships, and in their attitude to life in general, which is in stark contrast to the men who once built fortunes in order to be able to marry.

Not that women emerge from the current dating environment looking very much better, not least because – unlike the women in hunter-gatherer communities, who would seldom have had any difficulty in attracting a high-status male, most of the women of the tribe being, at any one time, either pregnant or nursing a small child – never-married women of today, who rarely if ever have children, face much stiffer competition. In order to attract a desirable mate, therefore, they too have to work much harder and spend more money, not just on clothes, make-up, hair and accessories, but even on having their bodies surgically altered, and all to secure the attention of some latter-day Lothario who will have moved on to his next conquest by the middle of next week.

Not, of course, that all women have the values of an Instagram model. But given their genetically encoded predisposition to always choose the ‘bad boy’ in the room, along with the fact that economic pressure is constantly making marriage less and less appealing – giving them very little reason to look beyond mere physical attractiveness – they don’t really have a lot of options, other, that is, than to withdraw themselves from the mating game altogether: a course which it would appear more and more women are actually taking, as they experiment with alternative lifestyles which do not involve men at all.

The irony, of course, is that in severing themselves from men in this way, women inevitably end up blaming men – their uselessness and toxicity – for the necessity of going down this road, which, in turn, then more or less requires them to espouse the very feminist and progressive causes which, in part at least, are actually responsible for their predicament. For while it may have been pharmacology which separated sex from reproduction, and government and big business which lured women out of the home with the promise of financial independence, it was an ideology that convinced them that this new order was not only right and natural but came without consequences.

One such consequence, however – which may well turn out to be far more profound and far-reaching than anyone has yet understood – is that, to many people, the loss of the common purpose which men and women once shared in bringing up children, and which kept them together through thick and thin, has actually left them bereft of any sense of purpose at all. Yes, some women, like some men, may now be able to find fulfillment in their career or a position in public life. But this has always been the preserve of the few. For most people, meaning has always had to be found in their private life, in the building of a home and the successful raising of children, through whom both men and women could justly feel that, in their own small way, they had made a contribution to the future.

Not only is this now being made more difficult to achieve economically, however, but its value and legitimacy are being further undermined by an ideology which finds it socially unacceptable for anyone – let alone women – to find satisfaction and meaning in the mere biological function of having children. After all, women are not just baby-making machines and have a right, therefore, to seek meaning and fulfillment elsewhere. In fact, in order to assert this right, they are more or less obligated to do just this, whether they want to or not. For anything else would be a betrayal of the ideology, itself, which, for many of its adherents, has thus become the cause or purpose in life they would otherwise lack.

Indeed, with marriage becoming less and less attractive to both men and women, and fulfillment in our personal lives thus becoming ever harder to find, it is hardly surprising, therefore, that so many young people today are now looking to find their sense of purpose through just this kind of political activism. The result is a world awash with youthful political movements, nearly all of them trying to right the wrongs of the patriarchal, capitalist system, whether this be by solving the problem of inequality or saving the planet from climate change. The problem, of course, is that while adherence to such causes may temporarily make us feel that we are part of something bigger and more important than ourselves, not only is there little that is personal in such crusades, but the virtue they bestow upon us is bought at the price of condemning almost everything upon which our success as a people was previously built: not just the patriarchy and capitalism, but the whole of the European civilization to which they gave rise, its art, literature, science and philosophy, leaving us with even less to hold on to and believe in.

As the consumerist economics and big-government welfarism of the last sixty years now drive us inexorably towards what will almost certainly be the worst financial and economic collapse in our history, the question we all have to ask, therefore, is not just how, once the dust has settled, we should rebuild our shattered economy on a sounder basis than the pile of irredeemable debt upon which it currently sits, but whether, given our current social-sexual malaise, we really want to continue down the same ideological path along which the last sixty years has been leading us, or whether, at this moment of civilizational reset, we might rather use our knowledge of our evolutionary legacy, not to return to the past – for, knowing what we know, that is probably not an option – but to perhaps find another, kinder solution to the MANS principle: one which might yet bring men and women back together in some new form of relationship, in which, hopefully, they might once again find purpose and happiness.