Since my article on The Provisional Nature of Mathematical Models, some people have inevitably asked me whether I really mean to say that the computer models of climatologists are as cavalierly designed and populated as the Drake equation. The answer, of course, is ‘No’. Given the funding that has gone into climate science over the last thirty years, and the hundreds if not thousands of researchers it employs, I am as sure as it is possible to be in such matters that every variable that is included in these models has been meticulously considered, the relations between them painstakingly calculated, and the values assigned to them drawn from reliable and statistically meaningful datasets. If I have any doubt at all, it is simply that I do not know this for a fact.
This, in itself, however, is one of the great problems with respect to the whole climate change debate. For few of us, I suspect, other than those directly involved in the climate science industry, have any detailed knowledge of any of the mathematical models upon which the debate is based. We don’t actually know what variables these models comprise, we are not privy to how they relate to each other, and we have little or no idea where the data to populate them comes from. For the most part, therefore, we are simply asked to take all this on trust. And this is something which I, personally, always find somewhat difficult. I am reminded, in fact, of the motto of the Royal Society, ‘Nullius in verba’, ‘take nobody’s word for it’, which was chosen to express the Society’s rejection of any claim to knowledge based on ‘authority’, particularly the authority of the church. Science, it believed, had to be transparent and open to scrutiny. Otherwise it would be yet another form of dogma, ministered to by another unaccountable and unchallengeable elite. And although I have no reason to suspect any member of the climate science community of actually harbouring such authoritarian impulses, the principle remains.
Of course, it will be pointed out that, in 1660, when the Royal Society was founded, it would have been possible for someone to have read every scientific treatise ever written. Now, in stark contrast, with hundreds of thousands of scientific papers published every year, even men like Robert Boyle, Christopher Wren and Robert Hooke would be hard pressed to read and digest them all. Indeed, with ever increasing specialisation, particularly at post-doctoral level, when scientists are probably at their most productive, it is possible that, even within a particular field, some researchers today may have only a partial view of what others, around them, are working on. We have no choice, therefore, but to accept the work of others on trust. Otherwise science would grind to a halt. It is for this reason that we have such rigorous standards of documentation and peer review, critical parts of an extensive system of independently operating checks and balances, which, like the human immune system, has evolved to keep the body of science both whole and healthy.
Strange as it may seem, however, it is precisely this which is part of my concern. For the collective and distributed management of science as an institution gives it what could be described as an intelligence of its own, with emergent properties which greatly amplify some of the latent attributes of the individuals that make up its collective body. In this regard, it is a bit like one of those massive flocks of starlings, which gather at dusk in early autumn, and which, for a while each evening, soar and swirl in the gathering twilight, their movements determined by deep undercurrents in the collective psyche of the swarm, none of which may be discernible in any of the individual birds. In fact, taken on their own, none of the individual members of the flock ever exhibit this kind of behaviour. It is only as a member of the collective that it is induced, and it is only at the level of the collective that the pattern emerges. And the same is true within science.
Crucially, for instance, it is what gives scientific revolutions, as described by Thomas Kuhn, their characteristically cyclical structure, in which periods of slow accretion are followed by moments of sudden and catastrophic change, often without adequate reason or cause. This is because, where scientific theories are later discovered to be incorrect, counter-evidence usually accumulates over time. At first, as a consequence, the problems tend to be regarded as slight. When inconsistencies occur, the typical response is to introduce additional subsidiary theories to explain the exceptions to the rule. But the main theory remains intact. Eventually, however, one of two things invariably happens. Either some anomalous fact is discovered which just cannot be explained away – as in the case of the precession of Mercury’s orbit with respect to Newtonian physics, for instance – or the sheer weight of additional ad hoc theories begins to make the main theory unworkable. The model simply becomes too complicated. And it is at this point that, historically, someone has generally come along to turn the whole model upside down and make us look at the problem in a different way. It is what Kuhn calls a ‘paradigm shift’, the most obvious example of which, of course, is the Copernican revolution of the 16th century, which replaced the former geocentric model of the universe with the heliocentric model of our solar system. From Lavoisier to Einstein, Darwin to Stephen Hawking, however, paradigm shifts of this kind have gone on at every level in science right up to the present.
Of course, most scientists today – or those, at least, who have read Thomas Kuhn, which is probably very few – like to believe that this process has now come to an end and that, in their particular science at least, they have finally reached the truth. In this, however, they are exhibiting just the kind of behaviour that gives scientific revolutions their sudden and catastrophic aspect. For, historically, scientists have tended to be very conservative. Proposed paradigm shifts, as a result, have usually been met with stiff resistance, the majority of scientists refusing to accept that everything they have believed for most of their working lives could have been fundamentally wrong. The early adopters, as a consequence, have tended to be the exceptions, mavericks whom the establishment has often shunned, particularly as, at the beginning at least, most new theories have little supporting evidence. The change, when it comes, has therefore often been as irrational as the resistance to it.
Probably the best example of this fundamental opposition between scientists of vastly different character on the cusp of revolutionary change is that provided by Antoine Lavoisier and Joseph Priestley, the latter of whom, in 1774, before Lavoisier, almost certainly discovered oxygen. By focusing the sun's rays on a sample of mercuric oxide, Priestley produced a gas which he described in a paper to the Royal Society as being ‘five or six times better than common air for the purpose of respiration, inflammation, and, I believe, every other use of common atmospherical air.’ The only problem was that he didn’t call it oxygen. Indeed, he refused to call it oxygen for the next thirty years, even though, by the time of his death in 1804, that was what everyone else was calling it. Instead he called it ‘dephlogisticated air’.
Since 1667, when the physician Johann Joachim Becher first postulated its existence, phlogiston had been generally accepted as the element within all combustible materials, which both accounted for that combustibility and explained why these materials lost mass when burned, the phlogiston in them being given off as a gas. The one anomalous fact the theory couldn’t explain was why some materials, mostly metals, actually gained weight when heated; and it was this that Lavoisier seized upon in propounding his own new theory, which, using some of Priestley’s own data, placed this highly reactive new gas at the centre of a whole new chemical paradigm, one which, today, we recognise as modern chemistry. Nor did it take very long for the rest of the world to see the coherence and simplicity of this revolutionary new insight. Published in 1789 and translated into English the following year, Lavoisier’s Traité Élémentaire de Chimie (Elementary Treatise on Chemistry) was soon regarded worldwide as the primary textbook on this new analytical science. Right up to the bitter end, however, Priestley refused to accept it, and went on producing papers and giving lectures on phlogistic chemistry almost to the day he died.
Why he remained so intransigent for so long has been a subject of much speculation. Pride? Jealousy? An embittered hatred of what he called ‘French principles’? In the end we’ll probably never know. The more important point, however, is that the history of science is littered with such personal and emotional conflicts. Nor are the human traits which produce them in any way diminished when institutionalised and made global. They are merely channelled in different ways, in which the system itself has come to play an important part. Indeed, anyone who has ever worked in a university will know that there are no politics more mean-spirited and vicious than university politics. In the battle for honours, the struggle for funding, and the competition to secure the services of the best doctoral and post-doctoral researchers, the tense, ego-driven dynamic of science is played out annually throughout the academic world, with most of the rewards, of course, going to those whose reputations are already established and who, through the allocation of resources and the bestowal of preferment, are ideally placed to protect and perpetuate their own positions. In his book Against Method, Paul Feyerabend even suggested that vested interests had already become so entrenched in most sciences by the last quarter of the 20th century that any further paradigm shifts were simply impossible. In physics in particular, it was his view that even though many of the main theoretical building blocks had become so riddled with inconsistencies as to have brought the whole subject to an intractable impasse, the majority of physicists were either unaware of this – focused as they were on resolving the problems in their own narrowly defined areas – or had too much at stake to admit it.
Whether this was actually true in the 1970s and ’80s I don’t know. Much of the physics about which Feyerabend wrote predates Stephen Hawking, who has, himself, of course, been responsible for more than one paradigm shift since then. It is possible, therefore, that Feyerabend simply despaired too early, and that things inevitably have to get pretty bad before a paradigm shift can occur. Moreover, I’m inclined to believe that no matter how institutionally moribund a science becomes, genius will always shine through and find willing supporters. It is just that the bigger the science, the harder it is to turn the juggernaut around.
Nor is this simply a matter of vested interests and institutional conservatism. For at the heart of the matter there is also a fundamental philosophical issue: one that is driven by the fact that, ontologically, the majority of scientists implicitly espouse a common sense realism, which assumes the existence of an objective universe which it is the role of science to describe. It is for this reason that scientists, for the most part, don’t like the idea of scientific revolutions, and why, if they countenance them at all, they place them firmly in the past, where an unfortunate detour down a blind alley may once have required a major change of direction. The idea that this could still yet happen again, however, is almost unthinkable. For given the objective reality of the universe, mistakes apart, our scientific description of it should be an ever more accurate approximation, converging on the truth. Once achieved, moreover, this ought to make further fundamental changes impossible.
Reasonable as this may sound, however, there is something fundamentally wrong with this whole ontological paradigm. In particular, it fails to take into account the very profound implications of the fact that not everything we know about the universe is learnt from it. Indeed, as Immanuel Kant pointed out more than two hundred years ago, not only do we know the truth of certain axiomatic or ‘categorial’ concepts a priori – that is, prior to experience – but it is only through the possession of these concepts that any experience is possible at all. To use a more modern way of talking about these things, this is because they constitute something like our fundamental operating system, a small core of basic expressions and key functions, without which our minds would remain unformatted hard-drives incapable of receiving information.
First among these ‘laws of thought’, as they are commonly called, are the basic rules of logic. These include the laws of non-contradiction and excluded middle, which together state that a proposition is either true or not true – that it cannot be both and cannot be neither – and are thus an expression of the same fundamental binary principle which is the basis of all computing. In addition to the rules of logic we then have the concept of quantity or number, from which, in conjunction with the rules of logic, all of mathematics can be derived, as Bertrand Russell and Alfred North Whitehead set out to demonstrate in Principia Mathematica. Add the concept of causality, or more particularly our unshakable belief in its necessity – that everything that happens must have a cause – and, again in conjunction with the laws of logic, one gets inductive reasoning and the whole of science. If we finally add our spatial and temporal awareness, from which we derive our concepts of space and time, we then have an infinite three-dimensional universe, with an infinite, unidirectional temporal dimension, in which maths and science can come out to play.
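Purely by way of illustration, and emphatically not as a reconstruction of Russell’s actual derivation (which builds numbers out of classes), the flavour of the claim can be caught in a toy Python sketch of my own devising, in which elementary addition is conjured out of nothing more than a ‘zero’ object, a successor rule, and the either/or of the logic:

# A toy, Peano-style sketch, not Russell's construction: a number is either
# zero or the successor of another number, and addition is defined purely by
# two recursive rules.

class Nat:
    """A natural number: zero (no predecessor) or the successor of another Nat."""
    def __init__(self, pred=None):
        self.pred = pred  # None marks zero; otherwise the number this one succeeds

ZERO = Nat()

def succ(n):
    """Return the successor of n."""
    return Nat(n)

def add(a, b):
    """a + 0 = a;  a + succ(b) = succ(a + b)."""
    return a if b.pred is None else succ(add(a, b.pred))

def to_int(n):
    """Translate back into ordinary notation, purely for display."""
    return 0 if n.pred is None else 1 + to_int(n.pred)

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # prints 5: arithmetic recovered from structure alone

Nothing in this little construction presupposes arithmetic; the five at the end falls out of the recursive structure alone, which is, in miniature, the kind of reduction Russell and Whitehead were attempting.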
Of course, one can always argue over any of the individual concepts which Kant postulated as categorial in this way. One can argue, for instance, that our concept of causality could be arrived at through induction: that by observing that everything that has happened in the past has always had a cause, one might conclude that everything that will happen in the future will also have a cause. One only gets to take this step, however, if one already has the certainty that a change in this pattern would, itself, have to have a cause. Inductive reasoning only works, that is, because we believe in causality. The more important point, however, is that regardless of any particular argument over potential or candidate categorial concepts, the basic principle – that in order to experience anything at all, one needs to have some way of processing, ordering and making sense of that experience, and that some core cognitive functionality has therefore got to be in place from the moment we gain consciousness – would seem to be irrefutable. It is certainly the case that no one over the last two hundred years has yet managed to refute it.
It is the epistemological and ontological implications of all this, however, which most strongly conspire to undermine the common sense realism which science so insouciantly takes for granted. The first of these is the fact that if everything we ever experience – our entire phenomenal reality – is shaped by or channelled through these categorial concepts or laws of thought, it follows that nothing we could ever experience – or ever experience as real – could ever conflict with them. The possibility of discovering some region in the universe where two plus two equalled five, for instance, is quite literally unimaginable, not simply because we all already know, a priori, that it is impossible, but also because, as Wittgenstein pointed out in Remarks on the Foundations of Mathematics, it is not at all clear how this discovery could be made, or what sort of evidence would actually lead us to this conclusion.
Independently of this – although it follows from it as well – the possession of these hard-wired concepts also accords us the kind of absolute certainty about certain aspects of our phenomenal reality which inductive reasoning, alone, could never provide. So confident are we, for instance, that things won’t simply pop into existence ex nihilo, that even those who do occasionally experience a world in which such things occur are more inclined to conclude that they are either dreaming, drugged or insane, than that the world is really like this. The question they more commonly ask is not ‘What is happening to the world?’ but ‘What is happening to me?’ The irony is that although we put this down to the existence of an objective universe, governed by constant and immutable laws, the only thing that could actually give us this level of confidence is the inbuilt nature of the concepts on which those immutable laws are based and which we are therefore incapable of doubting.
Indeed, it is this basis in the way our own minds are constituted that ultimately gives us the confidence that whatever scientific description of the universe we develop by applying the rules of logic, mathematics and inductive reasoning to observed regularities in that universe, this description will not be undermined by random and inexplicable changes in those regularities. By turning the ontological paradigm of common sense realism on its head, Kant was thus the first philosopher ever to provide a basis in certainty, not for individual scientific theories, of course, which can always go astray for a number of entirely explicable and often entirely human reasons, but for the practice of science as a human activity.
Not, of course, that many scientists have ever been particularly grateful for this. I doubt that many have ever regarded a philosopher’s endorsement as necessary. And in this case they would almost certainly have regarded it as a two-edged sword. For if we accept that the ordering principles which govern our phenomenal reality originate in ourselves, the $64,000 ontological question to which this inevitably gives rise, of course, is whether we have any reason to believe that these same ordering principles also apply to the noumenal reality of the universe as it is in itself, outside of our experience of it. And the answer, of course, is once again ‘No’.
Not that this means that they don’t. For we don’t have any reason to believe that either. The point is rather that we can only know what we are constituted to know by the laws of thought: that is to say, the phenomenal universe as we experience it. Anything outside of this is, by definition, unknowable. It is also, therefore, quite pointless to talk about it, a fact which Kant frequently stressed and which Wittgenstein later reiterated at the end of the Tractatus, in his now famous remark that ‘What we cannot speak about we must pass over in silence’.
Not, of course, that this has ever prevented scientists from both attempting and seeming to enter this unknowable realm where the laws governing our phenomenal universe may or may not hold. There are aspects of quantum mechanics, for instance, or, more specifically, Heisenberg’s uncertainty principle, which appear, if not to contravene the law of non-contradiction exactly, then certainly to suspend it. More generally, in positing multiple dimensions in space/time, and in defining a point in time at which time, itself, began, we are certainly talking about things outside of our phenomenal experience. To do this, moreover, science has invented an interrelated set of formal languages to express, in a rule-governed way, what we are constitutionally incapable of imagining, and thereby to circumvent this limitation. One may not be able to think, for instance, that Schrödinger's cat is both alive and dead at one and the same time, but in the form of
Δx Δp ≥ ħ/2
which is Heisenberg’s mathematical expression of the uncertainty principle (where Δx is the uncertainty in a particle’s position, Δp the uncertainty in its momentum, and ħ the reduced Planck constant), one appears to be able to write it.
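Read purely operationally, moreover, the expression amounts to nothing more exotic than a lower bound on the product of two statistical spreads. The following Python fragment, offered only as an illustrative sketch of my own, with invented numbers and helper names, captures that bare logical form without pretending to do any physics:

# An illustrative sketch only: the uncertainty relation treated as a simple
# lower bound on the product of two spreads. The spreads below are invented.

HBAR = 1.054571817e-34  # the reduced Planck constant, in joule-seconds

def respects_heisenberg_bound(delta_x, delta_p):
    """True if the product of the position and momentum spreads meets the bound."""
    return delta_x * delta_p >= HBAR / 2

delta_x = 1e-10           # a position spread of about one angstrom, in metres
delta_p = HBAR / delta_x  # a momentum spread comfortably above the minimum

print(respects_heisenberg_bound(delta_x, delta_p))       # True
print(respects_heisenberg_bound(delta_x, delta_p / 10))  # False: the product falls below ħ/2

What neither the formula nor the code allows one to do, of course, is picture the state of affairs being described; both merely manipulate the symbols according to the rules.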
The trouble with all such formal languages, however, or more particularly with their denotation – the way they are often translated back into something we can understand in phenomenal terms – is that they have a tendency, as the above example demonstrates, to come adrift from phenomenal reality in ways which then appear to allow them to do things which are clearly specious.
To illustrate this more clearly and make it more accessible, take, for example, the analogous way in which, for hundreds of years, theologians have attempted to attribute properties to God. Importantly, let me first say that within Kantian cosmology, there is nothing to preclude God’s existence, or to make faith in Him irrational. It is just that, if God does exist, He does so outside of our phenomenal universe and is therefore noumenal, unknowable and, as the hymn says, utterly ineffable, an entity about whom we can say literally nothing. Heedless of Kant’s warnings, however, theologians have spent innumerable hours inventing their own technical, if not formal, language to describe Him, a language which includes such concepts as omnipotence, omniscience and sempiternity. Ungrounded in phenomenal reality, however, all of these terms have internal contradictions which invariably lead to paradoxes when pursued at any length. The most famous of these is the omnipotence paradox, which, after establishing that there cannot be two omnipotent beings in the universe, concludes that one omnipotent being cannot therefore create another, and so cannot itself be omnipotent.
When I was a young Ph.D. student, I actually attended a lecture on this very subject, given by someone who, in all seriousness, thought he had found a way around the problem, not realising that the whole issue was based on a linguistic mistake. For, describing nothing that can be found in our phenomenal universe, languages which are based on logical constructs of this type, no matter how reasonable they may seem, are simply word games, linguistic sleights of hand. One may appear to be saying something meaningful and even intelligent, but it is simply a bubble of language which has floated free of reality and relates to nothing at all.
Not that I’m saying that the formal language in which the latest theoretical physics is expressed is guilty of, or even susceptible to, this kind of error. To be honest, like most of us, I simply do not know enough to make this judgement. As is so often the case, institutionalised science has again put us in this position. Moreover, I tend to believe that, on balance, formal languages, in which the functional operators are all taken from mathematics or formal logic, are intrinsically less likely to lead one down this road than a natural language into which technical terms are merely introduced. It is why, after all, such mathematically based formal languages are used. I am also fairly convinced that Heisenberg’s uncertainty principle only appears paradoxical when translated into natural language and illustrated in the form of Schrödinger's cat. Despite the fact that it is so often used to make the subject more accessible, quantum physics has therefore been done a great disservice by this misleading and rather silly analogy. Sometimes, one has to say, scientists are their own worst enemies. From what I know of more recent physics, however, centred as it is on providing science with a unified theory, one has reason to be more cautious.
Since its inception in 1995, for instance, not only has the number of variables encompassed by M-Theory increased with almost every revision, as additional particles have had to be incorporated into the model to make the maths work, but the number of dimensions in which these particles are thought to move has also increased and now stands at a mind-boggling eleven, making both the model and the maths so complicated that there are probably only a handful of people in the world who can truly claim to understand them, with the members of this exclusive club therefore peer-reviewing each other. What worries me most about this situation, however, is not that we seem to have brought about the very state of affairs which the Royal Society most fervently strove to banish – the authority of an unchallengeable elite, who are only able to explain what they are doing in terms of specious and often paradoxical analogies – but that we, as a culture, have somehow come to accept their own characterisation of these activities as of cosmic importance in answering some primordial question, when, on the basis of Kant, we have very little reason to believe this, and every reason to doubt it. For even if M-Theory, or whatever theory may eventually succeed it, were to find a coherent phenomenal denotation, or, better still, were to be cashed in the form of some real-world application – as one might say that Einstein’s relativity theories were cashed in terms of nuclear power – thus demonstrating its truth in terms other than the soundness of the mathematical model, we still wouldn’t know, and would still have every reason to doubt, whether it described the universe as it exists in itself. The only thing we could say with any certainty is that physics would have gained a new paradigm which would then provide us with a more coherent description of our phenomenal universe than was provided severally by relativity theory and quantum mechanics.
Even if a unified theory were forthcoming in this way, moreover, this would still not necessarily mean the end of pure science, or that science, thereafter, would merely become a matter of filling in the gaps, as the convergence theorists like to suppose. For given that this theory would have to describe or otherwise be applicable to our phenomenal reality not only to be considered true but even to be thought meaningful, and given that the universe as it exists in itself could, for all we know, be governed by laws which we could never comprehend even if we could know them, it is quite possible that the universe might continue to throw up anomalies, requiring new paradigms to describe it on an infinite and ongoing basis. It is quite consistent with Kant’s epistemology, in fact, that scientists could be employed forever, continually opening up more interesting windows upon reality, which, in turn, might lead to more interesting technology options, without, of course, ever being able to open the one window they most fervently desire to look through: the one which reveals the universe as it is in itself, the way a noumenal being, without our phenomenal limitations, might see it – if only we could comprehend the first thing about such a being.
The really sad thing about this desire, however, is not only how few scientists seem to realise how naïve and, indeed, childish it is, but how few seem to understand its hopelessness. For even if science were to see back through the Big Bang to the point zero at which our universe, both temporally and spatially, popped into existence, and even if we agreed not to ask what things were like half an hour earlier – which, for us, is always a meaningful question – would this actually answer the question that scientists seem to think it would, or, indeed, any question at all? For no matter how far one pushes the frontiers of knowledge, eventually one comes up against the bounds of sense, the point at which our understanding can penetrate no further. Call it the noumenon, or point zero, or, indeed, God: these are simply the labels we use to denote our intrinsic limitations, the signs we plant in the ground to say ‘Beyond here be dragons!’ What we should also consider, however, is that it is precisely these intrinsic limitations, as Kant pointed out, that make all our knowledge and understanding possible, and it is to the possible that we should keep our faces turned.