
Friday, 8 November 2024

Language & Thought

 

1.    The Relationship Between Thought & Language

In terms of its consequences, one of the worst philosophical errors of the modern era is the implicit assumption that all thought is mediated by language: an error which is all too easy to make because most of the thoughts of which we are aware are those which we either articulate or could articulate if asked to say what we were thinking. There are, however, times when we say something like ‘Wait! I’ve just had a thought’ and then take some time and effort to put that thought into words, strongly suggesting, therefore, that the thought preceded its articulation. What’s more, there are also times when, nagged by the feeling that we might not have got it quite right, we are then dissatisfied by the way we have actually expressed a thought, further suggesting that thoughts are, or at least can be, independent of language.

Of course, it may be argued that whatever subterranean cognitive processes go on prior to the articulation of a thought, these do not actually constitute ‘thinking’ in that the act of thinking actually consists in putting our thoughts into words. Even if one were to accept this as a definition of one form of thinking, however, it is very different from ‘thinking in words’ or using language to think, as when we construct a rational argument, for instance. What it does, in fact, is reveal three different levels in the relationship between thought and language: the pre-linguistic base level at which we have an unarticulated thought; the level at which we then struggle to make sense of this thought by putting it into words; and the level at which we then use language to examine, analyse and criticise the now publicly available ideas which, through their articulation, our thoughts have become.

For those who feel uncomfortable talking about thought in any way that hints at it being more primitive and basic than language, what we have done, however, is actually make the problem worse. For we now have two levels at which our cognitive processes are hidden from us: the base level at which we have a thought we haven’t yet expressed and cannot therefore identify or say from whence it came, and the almost equally opaque transformational interface between this base level and the fully articulated world of ideas, which T. S. Eliot famously described as a ‘raid on the inarticulate’ without thereby making it any more transparent.

Indeed, it is this lack of phenomenological transparency that is at the heart of all our problems when it comes to the relationship between thought and language. For it not only makes what’s going on at the subterranean levels of this relationship essentially unknowable but consequently precludes any further philosophical investigation of them. I say this for the very good reason that if something is unknowable, there’s not very much we can say about it. And as Wittgenstein stipulated at the end of the Tractatus: ‘Whereof one cannot speak, thereof one must be silent.’ The problem with this, however, is that if we eschew all talk of those aspects of the relationship between thought and language that are hidden from us and concentrate purely on the one aspect that is phenomenologically accessible, namely our use of language as a medium for thought, then we are in grave danger of forming a very distorted view of our relationship to language as a whole, which has some very unfortunate consequences.

One of the most glaring of these is the fact that, if one fails to take into account Eliot’s raid on the inarticulate, our creative use of a language would appear to be limited to the possibilities already inherent in it. That is to say that, while it may not be impossible to say something new, without being able to bend or repurpose words to new uses, typically through the use of metaphor, as I explained in ‘The Role and Importance of Metaphorical Truth’, the use of any language is rigidly constrained by the current definition of its terms. Not only is this contrary to everything we know about the history of our intellectual development, however, a point to which I shall return later, but it also runs counter to Kant’s famous dictum in the ‘Critique of Judgement’ that true genius lies precisely in extending or developing a language so as to enable us to say something that could not have been said before.

Of course, it will be pointed out that no one is actually denying that we are able to bend language to our will so as to articulate something that was previously beyond our grasp. Indeed, all that is being said is that we don’t know how we do this and so cannot really talk about it. There is, however, a huge difference between not talking about something and treating it as if it doesn’t exist. Even if we merely use it as a placeholder to fill in the blank created by its unknowability, moreover, there is a lot to be gained from acknowledging the existence of that about which we cannot speak if it consequently prevents us from making other philosophical errors, the most significant of which, in this case, is a tendency, reinforced by scientific materialism, to treat human beings as tropistic.

The best way to illustrate this is to use once again Dan Dennett’s example of the tropistic wasp, which he first introduced in a paper I heard him give at Birmingham University some forty-odd years ago and which I also used in ‘The Role and Importance of Metaphorical Truth’. The story goes like this. The female of this particular species of wasp excavates a nest into which she lays her eggs before going out to hunt for grasshoppers or locusts, which she stings and paralyses but does not kill so that they remain alive and fresh in order to provide food for her offspring during their larval stage. She then brings the paralysed grasshopper or locust back to the nest and leaves it on the threshold while she first checks inside to ensure that everything is as it should be. Satisfied that all is well, she then comes outside again, retrieves her prey and drags it into the nest.

On the surface, therefore, this not only seems like intelligent and purposeful behaviour but something akin to what we would regard as maternal. What entomologists discovered when they conducted further experiments on the wasp, however, was that if, during the time the mother spent checking the nest, they moved the paralysed locust a few inches away from the entrance, on coming back outside, she would drag her prey back to the entrance once again before going back inside to check the nest once more. If, while she was inside, they then moved the paralysed locust again, on coming back outside, she would once more repeat the process. And, as long as they kept moving the locust, she would go on doing this over and over again, indefinitely.

Thus, what initially looked like intelligent behaviour is more like the product of a computer program which, in this case, gets stuck in a loop. Professor Dennett’s point in presenting such a starkly clear example of tropistic behaviour, however, was to support the contention put forward by all materialist philosophers of mind that, if one thinks of language as a kind of computer program (a fairly reasonable analogy), then while our own programming may be quantitatively more advanced and sophisticated than that of the tropistic wasp, it is qualitatively the same, in that both we and the wasp are biological machines whose behaviour, it may be reasonably assumed, is entirely determined by our programming.
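The logic of the wasp’s routine can in fact be sketched as a trivial program. The following is a hypothetical illustration of my own, not anything Dennett himself wrote: the routine simply restarts every time the experimenter moves the prey, and would never terminate if the prey were moved every time.

```python
# A hypothetical sketch of the wasp's provisioning routine as a program that
# loops for as long as the experimenter keeps displacing the prey.

def provision_nest(prey_was_moved):
    """prey_was_moved: a function returning True while the experimenter
    keeps moving the prey during the wasp's inspection of the nest."""
    while True:
        # the wasp drags the prey to the threshold, then goes inside to inspect
        if not prey_was_moved():
            break   # the prey is where it was left: drag it inside and finish
        # the prey has been moved: the entire fixed routine simply restarts

# An experimenter who moves the prey twice and then stops: the routine ends
# on the third check. If the prey were moved every time, it never would.
moves = iter([True, True, False])
provision_nest(lambda: next(moves))
print("prey finally dragged into the nest")
```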

There is, however, a major difference between the two in that, whereas the wasp’s programming is entirely hardwired into its genes (as, indeed, is much of our own programming), our linguistic programming is acquired, much like software: a fact which, in itself, militates against the materialist position. This is because how we acquire this software, or learn a language, is as phenomenologically opaque to us as our ability to then alter or modify it to better express our thoughts. If we accept, therefore, that, even though we don’t know how we do it, we do, in fact, learn a language, there is no reason why we should not accept that we also have the ability to develop and extend that language so as to say something we could not have said before, even though we have no idea how we do this either.

In fact, the only thing stopping us from embracing this view of the relationship between thought and language is the fact that it also means embracing the idea that there are some things about the universe and even, indeed, about ourselves, that are unknowable but which must necessarily exist in order to explain things we do actually know, such as the fact that throughout our history we have continually found new ways to think and talk about the universe, as is particularly well demonstrated by the occurrence of paradigm shifts in science.

2.    Cultural Resistance to the Concept of Paradigm Shifts

Although I have written about this before, for those who haven’t read my previous essays on this and related subjects, the concept of a paradigm shift was first introduced by the American philosopher of science, Thomas Kuhn, in his 1962 book ‘The Structure of Scientific Revolutions’, in which he put forward a new model or paradigm of the way in which science, itself, develops. Instead of proceeding incrementally, as the traditional paradigm would have it, with one building block being laid upon another, Kuhn argued that science proceeds in stages, some of which are necessarily revolutionary. In fact, the usual lifecycle of any given field of science almost invariably starts with a revolution, when someone puts forward a new theory. As this gains acceptance, older theories are then abandoned and the science enters a stable phase. As time passes, however, some of the predictions which the new theory makes turn out to be false, requiring additional subsidiary theories to be developed in order to explain these exceptions. Over time, however, the number of exceptions increases, requiring more subsidiary theories to be developed, to which exceptions may also be found, requiring further subsidiary theories until the whole thing becomes so unwieldy that someone eventually says ‘Wait! I’ve just had a thought. What if we have been looking at this whole thing the wrong way round? What if we look at it like this instead?’ thereby introducing a new theory and starting the whole lifecycle all over again.

One of the best examples of this is Lavoisier’s creation of modern chemistry in the late 18th century: one of the most remarkable contributions to science in all of history, which most people still do not understand. In fact, the general view is that Lavoisier discovered oxygen. But he did not. Oxygen was discovered by Joseph Priestley, who actually taught Lavoisier how to isolate it. It was just that Priestley didn’t call it oxygen. He called it dephlogisticated air. It was Lavoisier who called it oxygen, just as he called hydrogen ‘the maker of water’ when he discovered that, when he ignited it in the presence of oxygen, the two gases combined to form H2O. What Lavoisier did, therefore, was not just discover a new element but create a whole new language for thinking about and describing the material world, in which the concept of phlogiston, which had dominated chemistry for over a century, was discarded in favour of the concept of elementary particles, called atoms, which are combined in different ways and quantities to form different substances.
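For reference, the reaction described here, hydrogen igniting in the presence of oxygen to form water, is written in modern notation as

2H₂ + O₂ → 2H₂O

a notation which itself presupposes the new chemical vocabulary whose creation is the point of this example.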

That’s not to say, of course, that he did it all on his own or in a vacuum. The term ‘atom’, for instance, had been introduced into modern science more than a century earlier by Robert Boyle, who derived it from the Greek word ‘atomos’, meaning ‘indivisible’, which was first used by the Greek philosopher Democritus in the 5th century BC. What’s more, there was still a long way to go. Other elements and a whole host of new laws describing how they combine and act upon each other had yet to be discovered. It was Lavoisier, however, who created the basic model or conceptual framework upon which successive generations of chemists were consequently able to build, an achievement far greater than the mere discovery of a single element.

In fact, if more people understood what Lavoisier actually did, he would be regarded with far more esteem than he actually is, which raises the question of why he is not. The answer, however, is really quite simple. It is because most scientists, or, perhaps more accurately, the very institution of science itself (if such there be), don’t like the idea of scientific revolutions, much preferring the traditional paradigm, in which science proceeds in an incremental and orderly fashion and every contribution, no matter how small, adds to the sum total of scientific knowledge. Despite all of the historical evidence to the contrary, therefore, from Copernicus to Einstein, wherever possible, the institution of science refuses to acknowledge that scientific revolutions and their concomitant paradigm shifts occur.

One of the reasons for this is the belief that the very idea of scientific revolutions undermines science. For if scientific revolutions have happened in the past, they can happen in the future, replacing current scientific paradigms with new ones that have yet to be conceived, thereby placing all current science under a provisional cloud. Even if it is conceded that scientific revolutions have happened in the past, therefore, it is generally agreed that they cannot happen in the future: an article of faith based on the implicit assumption that all of science’s current theories, especially its more foundational theories, are correct.

This, of course, is nonsense and is made demonstrably so by the application of Sir Karl Popper’s irrefutable argument that scientific theories cannot be proven, only falsified, which means that even if all our current theories were correct, we couldn’t know this. For even if a theory has so far survived three hundred years without being proven false, there is no guarantee that it will survive another three hundred years or even three hundred days. All scientific theories are therefore essentially provisional, which the existence of scientific revolutions only makes more uncomfortably clear.

There is, however, an even more profound reason why the institution of science doesn’t like the idea of scientific revolutions. For at the heart of every scientific revolution, of course, there is the creation of a new scientific paradigm, a new way of thinking about the world which requires precisely the kind of creative genius Kant describes: someone who is able to extend or reshape the language so as to say something that could not have been said before, someone, indeed, like Copernicus, Lavoisier or Einstein. The problem is that we do not know how these geniuses did what they did or, indeed, how anyone can rewrite a language so as to say something new. The experience is simply not accessible to us and, being inaccessible, is therefore unknowable, which gives the institution of science yet another problem. For if something is unknowable, it is also, of course, unteachable.

In fact, in order to be teachable, a process or method has got to be completely transparent. Any institution attempting to teach science, therefore, must teach a scientific method that precludes the need for genius. Indeed, any predisposition or tendency among its students to think outside the box has got to be discouraged and a strict adherence to the prescribed method and current orthodoxy cultivated.

This is principally achieved by fostering a culture of both methodological exactitude and cooperation within the student body, such that students work together in a clearly defined and highly disciplined manner, rather than compete against each other by thinking along their own lines. This fostering of a strictly disciplined scientific mentality is also significantly helped by the fact that the first articulation of a new paradigm, like the first expression of any new idea, as Thomas Kuhn himself was all too well aware, is almost invariably inchoate: only half formed and full of holes and therefore very easily derided. Anyone putting forward such new ideas consequently has to work very hard to make any impression on the established order, especially when it is that established order that is handing out the jobs and research grants. The result is that institutional science is almost invariably conservative science, which has no place for revolutionary thinking. The problem with this, however, is not just that it suppresses something vital in the dynamic nature of science and leads to a kind of ossification, but that it also leads to science becoming corrupted, not just in the sense of financial corruption – though this too – but in the sense that it is no longer scientific.

3.    The Corruption of Science

In fact, it is the corruption of science, itself, that actually leads to financial corruption and therefore comes first. And it does so primarily by inverting the relationship between empirical data and theory, such that empirical data no longer has primacy.

To understand how this happens, we need to go back to the most basic model of science in which theories are created to explain empirical observations. The theories are then tested by making predictions based on these theories and conducting experiments to find out whether the predictions are accurate. If the predictions are accurate, then there is a chance that the theory is correct, though only a chance. For it is perfectly possible that experiments conducted to test other predictions based on a theory may prove the predictions false, which may prove the entire theory false. On the other hand, we may attempt to explain these exceptions by either modifying the original theory or by creating subsidiary theories in the way described above. Indeed, in Kuhn’s description of the standard lifecycle of a scientific theory, it is only when the number of subsidiary theories gets out of hand, leaving us with more patches than original fabric, that we are eventually forced to abandon the theory altogether.

In today’s institutionalised science, however, the decision to abandon a theory altogether is even more difficult. For that would be to admit that for years, perhaps, the institution has been teaching something that it now regards as false. And this is something that it is very difficult for any institution to do. After all, people’s careers and reputations are at stake. What’s more, with revolutionary thinking having been institutionally suppressed in science for at least the last two generations, the chances of there being an alternative theory or paradigm available to replace the one that now needs to be abandoned are very slim. No matter how much empirical evidence has built up to prove a theory false, therefore, it is the errant data that must now always be explained away rather than the theory abandoned, which is to say that it is the theory, rather than the data, that now has primacy.

Once the primacy of theory has been established in a culture, this then has two further consequences. The first of these is that scientists no longer feel quite so constrained by the sanctity of data. If the data does not conform to the theory, therefore, they are now far more inclined to either discard or change it than was previously the case. Nor is this necessarily cynical. After all, if one sincerely believes in the theory one is putting forward or defending, errant data must surely be faulty data. Once selecting or modifying data to fit one’s chosen theory has become commonplace within an institution, however, it becomes very easy to start doing it, not because one particularly believes in the theory, but simply to get the results one needs to maintain one’s research funding and keep one’s job.

In fact, there is a considerable amount of evidence to suggest that such corruption is now endemic throughout most scientific institutions. The editor of one journal I quoted in a previous essay on this subject actually believed that up to 20% of all the papers submitted to his journal for publication were not just based on selected or modified data but on no data at all, it all having been made up. What is even more disturbing, however, is the fact that, believing in their theories rather than the sanctity of data, many scientists do not seem to think that there is anything wrong with this, a development in the very culture of science which has been further encouraged by the use of computer models which, themselves, have little in the way of empirical grounding.

Indeed, most computer models start with a theory which is turned into a set of algorithms, which the model then uses to predict future observations and measurements under different conditions. Modifications are then iteratively made to the algorithms to improve their predictive accuracy, although this in itself can be very problematic. For while modifying an algorithm to make its predictions conform to reality may seem very similar to the development of the subsidiary theories which Kuhn describes, in traditional science these patches developed to explain exceptions to a main theory had to have some theoretical basis. Simply modifying an algorithm to make its predictions fit the facts, on the other hand, can leave us in a position in which the model’s predictions are now correct but we have no idea why. Worse still, many large-scale computer models are subject to continual development over many years, to which a lot of people may contribute, especially in a university setting, with the result that there comes a point at which it’s possible that no single person actually knows how the model works. It becomes a magical black box which issues oracular prophecies without anyone knowing how it does so. And yet, believing in this magical black box, we still believe in the prophecies, even when they don’t come true.
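To make the worry concrete, here is a deliberately toy sketch of my own, entirely hypothetical and not a description of any real climate model: the free parameters a and b are tuned blindly until the output matches some made-up past observations, after which the model fits the data without the fit explaining anything.

```python
# A toy illustration of tuning an algorithm to fit the facts. The data and
# the 'model' are invented for the purpose of the example.

import random

observations = [14.1, 14.3, 14.2, 14.5, 14.6, 14.8]   # made-up past data

def model(t, a, b):
    return a + b * t          # the 'theory', reduced to an algorithm

def error(a, b):
    # sum of squared differences between the model and the observations
    return sum((model(t, a, b) - obs) ** 2 for t, obs in enumerate(observations))

# Blind iterative tuning: keep any random tweak that reduces the error.
a, b = 0.0, 0.0
for _ in range(50000):
    a2 = a + random.uniform(-0.01, 0.01)
    b2 = b + random.uniform(-0.01, 0.01)
    if error(a2, b2) < error(a, b):
        a, b = a2, b2

print(f"fitted parameters: a={a:.2f}, b={b:.2f}")
# The fitted model now reproduces the past, but the parameters have no
# theoretical meaning, which is precisely the problem described above.
```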

One of the best examples of this is the Coupled Model Intercomparison Project (CMIP), in which 102 institutions from around the world were originally funded to predict future changes in the world’s climate based on two key assumptions: that such changes are primarily driven by the accumulation of carbon dioxide in the atmosphere and that, without a modification in our own behaviour, this accumulation will continue at a rate of 1% per year.

On this basis, the project’s first set of predictions was published in 1995 and covered the twenty-year period leading up to 2015. In fact, it was this first set of CMIP predictions that led Al Gore to predict that summer Arctic sea ice would have disappeared by 2014. By the time 2014 arrived, however, it was perfectly obvious that the predictions of all 102 institutions taking part in the project were wildly inaccurate, with some of them being out by more than 1°C when compared with actual data from satellites and weather balloons, the two most reliable sources of such data we have. And yet it is the predictions of these models that the world continues to believe.

4.    An Absence of Critical Thinking

So how is this possible? In previous essays on this subject, I have put forward two possible answers. The first is that there are just too few people in the world who actually know the science, leaving the rest of us to just take their word for it. The problem with this, however, is that there are some people who know the science, and one would expect at least some of them to say, ‘Hold on a minute, this isn’t right’. In fact, I have actually based some of my own essays on the work of two such upstanding scientists: Richard Lindzen, Emeritus Professor of Meteorology at MIT, and William Happer, Emeritus Professor of Physics at Princeton University.

This then led me to my second answer: that there is a much larger contingent of scientists who have a vested interest in the theory of anthropogenic global warming than those who are simply committed to honest science, with the result that it is the former group whose voices are always heard. The problem with this, however, is that it would seem to entail what would have to be the biggest conspiracy in history. For it is not just scientists who would be required to continually espouse something they did not believe, but everyone who comes into contact with any aspect of reality upon which global warming should be having an effect. Anyone working in the Arctic, for instance, would surely have noticed that this summer, ten years on from when Al Gore said it would all be gone, Arctic sea ice was as extensive and as thick as it has been throughout the last century. While corruption and our predisposition to uncritically accept the authority of experts may both have contributed to inducing our current state of mass delusion, therefore, there is clearly something else going on here, which raises the possibility, indeed, that it is actually our current state of mass delusion, itself, that is that something.

After all, what is a mass delusion other than a highly prevalent way of thinking that is not actually supported by real-world evidence, of which there have been hundreds if not thousands of examples throughout our history? Indeed, it could be said that our entire history is a history of such delusions. We acquire them, they dominate our way of thinking for a while, and then a Copernicus, Lavoisier or Einstein comes along and says something that eventually makes us see the world in a completely different way. The problem, of course, is that it is never quite that easy. For not only are we resistant to such revolutionary changes in our world view but are so of necessity, in that language could not exist if we changed our way of thinking every other minute. In fact, language actually needs periods of stability in which everyone uses the language in the same way in order for us to explore the logical ramifications of the prevailing paradigm, thereby revealing its flaws and paving the way for the next revolution.

The problem is that, sometimes, our resistance to the next revolution is so strong that it effectively blocks it, sometimes for centuries. No matter how much evidence accumulates showing that the old way of thinking is wrong, those who have a vested interest in perpetuating it continually prevent change from occurring, sometimes even going so far as to systematically kill the proponents of change.

Fortunately, we are not actually doing that at the moment. Because we do not understand the process by which one way of thinking replaces another, however, and refuse to accept that it even occurs, we have now become so trapped in our current way of thinking that we cannot get out of it even though its flaws are not just glaringly obvious but are starting to cause us real world problems.

Take, for instance, the quest to achieve Net Zero carbon emissions with which the west is currently obsessed and which is largely focused on two main goals: the decarbonisation of our electricity grids and the replacement of petrol and diesel engined vehicles with purely electric vehicles. If the objective of these goals is as stated, however, not only is this focus far too narrow, omitting such forms of transport as airlines and cargo ships, both of which have massive carbon footprints, but the goals themselves are incompatible, as any critical analysis very quickly reveals.

In fact, the problem is almost immediately apparent as soon as one considers that in 2023, wind and solar power constituted just 34.3% of the UK’s total electricity generation. When conditions were optimal, there were indeed periods during which they actually contributed more than this; but this was their average contribution across the year. If the UK is going to completely decarbonise its electricity grid by 2050, this means, therefore, that it is going to have to triple its wind and solar generating capacity over the next 25 years, an objective which, under any conditions, would be very challenging. If, at the same time, however, we are going to replace all of the 41.2 million petrol and diesel engined vehicles on our roads with EVs, we are going to have to increase electricity production by another 37.5%, which effectively means quadrupling our wind and solar generating capacity over this period. Even putting aside the cost, therefore, which, given the current state of our finances, presents yet another challenge, it is very doubtful whether this is even remotely feasible.
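The arithmetic behind the ‘triple’ and ‘quadruple’ figures can be set out explicitly. The following back-of-envelope sketch uses only the numbers quoted above and assumes, for simplicity, that average capacity factors stay roughly constant, so that generation scales in proportion to installed capacity:

```python
# Back-of-envelope check of the scaling figures quoted above, using only the
# numbers given in the text. Assumes capacity factors stay roughly constant,
# so required generation scales in proportion to installed capacity.

wind_solar_share_2023 = 0.343   # wind + solar share of UK generation in 2023
ev_demand_uplift      = 0.375   # extra electricity needed to charge all EVs

# Decarbonising today's grid alone with wind and solar:
scale_grid_only = 1.0 / wind_solar_share_2023                          # ~2.9x, i.e. roughly triple

# Decarbonising the grid AND supplying the extra 37.5% for EVs:
scale_grid_and_evs = (1.0 + ev_demand_uplift) / wind_solar_share_2023  # ~4.0x, i.e. quadruple

print(f"grid alone: {scale_grid_only:.1f}x current wind & solar capacity")
print(f"grid + EVs: {scale_grid_and_evs:.1f}x current wind & solar capacity")
```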

If our goal were solely to decarbonise our electricity grid, while keeping petrol and diesel engined vehicles on our roads, we might be able to manage it. Similarly, if our objective were solely to replace all petrol and diesel engined vehicles with EVs while continuing to power our electricity grid with natural gas, this too might be possible. Trying to do both at the same time, however, is something which only a religious zealot who hasn’t actually thought about it would even consider.

What’s more, this doesn’t take into account the very real possibility that running an electricity grid purely on wind and solar power is actually impossible. I say this because, being dependent on the weather and hence intermittent in their electricity generation, an entirely wind and solar powered grid would have to have some form of battery storage backup for when the wind doesn’t blow and the sun doesn’t shine. This, however, is far more expensive and difficult to achieve than advocates of an entirely decarbonised grid would appear to think. In my 2021 essay on this subject, for instance, I calculated that, using the Tesla Powerpack 2 4HR battery system, at that time the leading large-scale battery storage system on the market, it would cost £550 billion just to store one day’s output from our then wind and solar capacity of around 32 GW.
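The order of magnitude of that figure is easy to check. The sketch below is not the original 2021 calculation: it assumes, purely for illustration, that ‘one day’s output’ means 24 hours at the full 32 GW nameplate capacity, and it uses a round installed-storage cost of £700 per kWh rather than the actual Powerpack 2 price list:

```python
# Rough order-of-magnitude check of the £550 billion figure. The assumptions
# below are mine, for illustration only: "one day's output" is taken as 24
# hours at the full 32 GW nameplate capacity, and the installed cost of
# grid-scale battery storage is taken as a round £700 per kWh.

capacity_gw      = 32      # UK wind + solar capacity cited for 2021
hours            = 24      # one day's output at full capacity
cost_per_kwh_gbp = 700     # assumed installed cost of storage, in £/kWh

energy_kwh = capacity_gw * 1_000_000 * hours        # 32 GW for 24 h = 768 GWh
total_cost = energy_kwh * cost_per_kwh_gbp

print(f"storage required: {energy_kwh / 1_000_000:,.0f} GWh")
print(f"approximate cost: £{total_cost / 1e9:,.0f} billion")   # roughly £540 billion
```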

This being clearly non-viable, people are now therefore talking about using ‘green’ hydrogen as our storage medium, the idea being that the hydrogen would be produced by electrolysis using wind and solar generated electricity on days when the wind does blow and the sun does shine and then burnt in modified gas fired power stations when an alternative source of energy is required. What this doesn’t take into account, however, is how much more electricity one would have to generate in order to produce the hydrogen, or the fact that it takes 50% more energy to produce hydrogen by electrolysis than one actually recovers by burning it. As a solution to the storage problem, therefore, this would only make sense if ‘free’ wind and solar power really were as cheap as people like to believe, thereby making their profligate use to produce hydrogen economically viable. The fact is, however, that wind and solar farms are only ‘viable’, themselves, if they are heavily subsidized. For as I have demonstrated elsewhere, they too consume more energy in their manufacture, installation, operation and maintenance than they ever produce in their lifetime, making the whole renewables industry an exercise in economic futility and corruption.
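On the essay’s own figure of 50% more energy in than out, the penalty can be quantified in a couple of lines. This is a minimal sketch which ignores any further losses in the power station that burns the hydrogen, so if anything it understates the problem:

```python
# Minimal quantification of the hydrogen storage penalty, using only the 50%
# figure quoted in the text and ignoring the additional losses in the power
# station that burns the hydrogen.

energy_in_per_unit_out = 1.5                       # 50% more energy in than out
round_trip_efficiency  = 1 / energy_in_per_unit_out

print(f"round-trip efficiency on that figure: {round_trip_efficiency:.0%}")   # ~67%
print(f"surplus generation per GWh delivered back from storage: "
      f"{energy_in_per_unit_out - 1:.1f} GWh")
```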

The entire Net Zero project is therefore a total fantasy, unsupported by either economic or engineering reality. Not only can it not be achieved, however, but if we continue pursuing it, it could easily result in a disaster. For not only is it inevitable that, if forced down this road, the electricity grid would eventually fail, along with all the computer systems that depend on it, but it is also fairly certain that this would be very shortly followed by the failure of everything that depends on a computer, which, in today’s world, is just about everything. With no power to heat our homes and no food in the shops, societal collapse would then shortly follow, with riots in the streets, widespread looting and the complete breakdown of law and order.

Not, of course, that it will actually come to this. For what can’t be done, won’t be done. The only question is how much damage will be done before the reality of the situation finally sinks in. For no matter how much evidence accumulates demonstrating that the whole Net Zero project is a fool’s errand, there will be those who will still resist abandoning it. Nor will these diehard advocates of Net Zero be confined to those with a vested interest in having it continue, including those paid by government to advise it on the subject, who will no doubt insist to the very end that it can be made to work. An even louder voice will almost certainly come from those who believe that there is no alternative, the alternative being that the planet is destroyed. After all, 95% of scientists agree that the accumulation of carbon dioxide in the atmosphere is the most significant cause of global warming and that we, ourselves, are the most significant cause of this accumulation. What’s more, based on our traditional view of science, combined with our current inversion of the roles of theory and data within it, this is not just regarded as a theory but as a matter of fact.

Nor does it help when faced with such a mind-set to point out that, before Lavoisier put forward a much better theory, one which more accurately predicted what happens in the real world, 95% of scientists believed that all non-metallic materials lost weight when heated because they gave off phlogiston. For in order to see these two situations as analogous one actually has to subscribe to Thomas Kuhn’s paradigm of the way in which science proceeds, which we, of course, do not. Indeed, one could say that this was our real or underlying problem if there weren’t something even more fundamental still underlying it. For our problem is not just our choice of paradigm for describing and understanding the activity we call science, but our entire view of the universe. For believing in an entirely material universe, entirely knowable and explicable by science, we are unable to believe that anyone could do what Thomas Kuhn claims Lavoisier did: rewrite his own linguistic programming so as to think and say something he was not able to think or say before, a feat which is not only phenomenologically inaccessible to those who are able to do it but completely inexplicable in materialist terms.

For those of us who are completely wedded to the materialist worldview, therefore, which is just about everybody in the modern world, giving up the traditional paradigm of science and adopting Thomas Kuhn’s is simply unthinkable. It would be akin to an atheist converting to Christianity and would require something just as revolutionary to cause it. Unless we undertake this philosophical journey, however, and accept that there are aspects of the universe that are fundamentally unknowable, including ourselves, we shall remain as trapped in our closed way of thinking as Dan Dennett’s tropistic wasp until our failure to see its flaws eventually destroys us.

 

Sunday, 6 February 2011

Human Frailty & The Structure of Scientific Revolutions, with an Introduction to the Philosophy of Immanuel Kant

Since my article on The Provisional Nature of Mathematical Models, some people have inevitably asked me whether I really mean to say that the computer models of climatologists are as cavalierly designed and populated as the Drake equation. The answer, of course, is ‘No’. Given the funding that has gone into climate science over the last thirty years, and the hundreds if not thousands of researchers it employs, I am as sure as it is possible to be in such matters that every variable that is included in these models has been meticulously considered, the relations between them painstakingly calculated, and the values assigned to them drawn from reliable and statistically meaningful datasets. If I have any doubt at all, it is simply that I do not know this for a fact.

This, in itself, however, is one of the great problems with respect to the whole climate change debate. For few of us, I suspect, other than those directly involved in the climate science industry, have any detailed knowledge of any of the mathematical models upon which the debate is based. We don’t actually know what variables these models comprise, we are not privy to how they relate to each other, and we have little or no idea where the data to populate them comes from. For the most part, therefore, we are simply asked to take all this on trust. And this is something which I, personally, always find somewhat difficult. I am reminded, in fact, of the motto of the Royal Society, ‘Nullius in verba’, ‘By nobody else’s word’, which was chosen to express the Society’s rejection of any claim to knowledge based on ‘authority’, particularly the authority of the church. Science, it believed, had to be transparent and open to scrutiny. Otherwise it would be yet another form of dogma, ministered to by another unaccountable and unchallengeable elite. And although I have no reason to suspect any member of the climate science community of actually harbouring such authoritarian impulses, the principle remains.

Of course, it will be pointed out that, in 1660, when the Royal Society was founded, it would have been possible for someone to have read every scientific treatise ever written. Now, in stark contrast, with hundreds of thousands of scientific papers published every year, even men like Robert Boyle, Christopher Wren and Robert Hooke would be hard pressed to read and digest them all. Indeed, with ever increasing specialisation, particularly at post-doctoral level, when scientists are probably at their most productive, it is possible that, even within a particular field, some researchers today may have only a partial view of what others, around them, are working on. We have no choice, therefore, but to accept the work of others on trust. Otherwise science would grind to a halt. It is for this reason that we have such rigorous standards of documentation and peer review, critical parts of an extensive system of independently operating checks and balances, which, like the human immune system, has evolved to keep the body of science both whole and healthy.
Strange as it may seem, however, it is precisely this which is part of my concern. For the collective and distributed management of science as an institution gives it what could be described as an intelligence of its own, with emergent properties which greatly amplify some of the latent attributes of the individuals that make up its collective body. In this regard, it is a bit like one of those massive flocks of starlings, which gather at dusk in early autumn, and which, for a while each evening, soar and swirl in the gathering twilight, their movements determined by deep undercurrents in the collective psyche of the swarm, none of which may be discernible in any of the individual birds. In fact, taken on their own, none of the individual members of the flock ever exhibit this kind of behaviour. It is only as a member of the collective that it is induced, and it is only at the level of the collective that the pattern emerges. And the same is true within science.

Crucially, for instance, it is what gives scientific revolutions, as described by Thomas Kuhn, their characteristically cyclical structure, in which periods of slow accretion are followed by moments of sudden and catastrophic change, often without adequate reason or cause. This is due to the fact that where scientific theories are later discovered to be incorrect, counter-evidence usually accumulates over time. At first, as a consequence, the problems tend to be regarded as slight. When inconsistencies occur the typical response is to introduce additional subsidiary theories to explain the exceptions to the rule. But the main theory remains intact. Eventually, however, one of two things invariably happens. Either some anomalous fact is discovered which just cannot be explained away – as in the case of the orbit of the planet Mercury with respect to Newtonian physics, for instance – or the sheer weight of additional ad hoc theories begins to make the main theory unworkable. The model simply becomes too complicated. And it is at this point that, historically, someone has generally come along to turn the whole model upside down and make us look at the problem in a different way. It is what Kuhn calls a ‘paradigm shift’, the most obvious example of which, of course, is the Copernican revolution of the 16th century, which replaced the former geocentric model of the universe with the heliocentric model of our solar system. From Lavoisier to Einstein, Darwin to Stephen Hawking, however, paradigm shifts of this kind have gone on at every level in science right up to the present.

Of course, most scientists today – or those, at least, who have read Thomas Kuhn, which is probably very few – like to believe that this process has now come to an end and that, in their particular science at least, they have finally reached the truth. In this, however, they are exhibiting just the kind of behaviour that gives scientific revolutions their sudden or catastrophic aspect. For, historically, scientists have tended to be very conservative. In most cases, as a result, proposed paradigm shifts have usually been met with stiff resistance, the majority of scientists refusing to accept that everything they have believed for most of their working life could have been fundamentally wrong. The early adopters, as a consequence, have tended to be the exceptions, mavericks whom the establishment has often shunned, particularly as, at the beginning at least, most new theories have little supporting evidence. The change, when it comes, therefore, has often been as irrational as the resistance to it.

Probably the best example of this fundamental opposition between scientists of vastly different character on the cusp of revolutionary change is that between Antoine Lavoisier and Joseph Priestley, the latter of whom, in 1774, almost certainly discovered oxygen before Lavoisier did. By focusing the sun's rays on a sample of mercuric oxide, Priestley produced a gas which he described in a paper to the Royal Society as being ‘five or six times better than common air for the purpose of respiration, inflammation, and, I believe, every other use of common atmospherical air.’ The only problem was that he didn’t call it oxygen. Indeed, he refused to call it oxygen for the next thirty years, even though, by the time of his death in 1804, that was what everyone else was calling it. Instead he called it ‘dephlogisticated air’.

Since 1667, when the physician Johann Joachim Becher first postulated its existence, phlogiston had been generally accepted as the element within all combustible materials, which both accounted for that combustibility and explained why these materials lost mass when burned, the phlogiston in them being given off as a gas. The one anomalous fact the theory couldn’t explain was why some materials, mostly metals, actually gained weight when heated; and it was this that Lavoisier seized upon in propounding his own new theory, which, using some of Priestley’s own data, placed this highly reactive new gas at the centre of a whole new chemical paradigm, one which, today, we recognise as modern chemistry. Nor did it take very long for the rest of the world to see the coherence and simplicity of this revolutionary new insight. Published in 1789 and soon translated into English, Lavoisier’s Traité Élémentaire de Chimie (Elementary Treatise on Chemistry) quickly came to be regarded worldwide as the primary textbook on this new analytical science. Right up to the bitter end, however, Priestley refused to accept it, and went on producing papers and giving lectures on phlogistic chemistry almost to the day he died.

Why he remained so intransigent for so long has long been a subject for speculation. Pride? Jealousy? An embittered hatred of what he called ‘French principles’? In the end we’ll probably never know. The more important point, however, is that the history of science is littered with such personal and emotional conflicts. Nor are the human traits which produce them in any way diminished when institutionalised and made global. They are merely channelled in different ways, in which the system, itself, has come to play an important part. Indeed, anyone who has ever worked in a university will know that there are no politics more mean-spirited and vicious than university politics. In the battle for honours, the struggle for funding, and the competition to secure the services of the best doctoral and post-doctoral researchers, the tense, ego-driven dynamic of science is played out annually throughout the academic world, with most of the rewards, of course, going to those whose reputations are already established and who, through the allocation of resources and the bestowal of preferment, are ideally placed to protect and perpetuate their own positions. In his book Against Method, Paul Feyerabend even suggested that vested interests had already become so entrenched in most sciences by the last quarter of the 20th century that any further paradigm shifts were simply impossible. In physics in particular, it was his view that even though many of the main theoretical building blocks had become so riddled with inconsistencies as to have brought the whole subject to an intractable impasse, the majority of physicists were either unaware of this – focused as they were on resolving the problems in their own narrowly defined areas – or had too much at stake to admit it.

Whether this was actually true in the ’70s and ’80s I don’t know. Much of the physics about which Feyerabend wrote was prior to Stephen Hawking, who has, himself, of course, been responsible for more than one paradigm shift since then. It is possible, therefore, that Feyerabend simply despaired too early, and that things inevitably have to get pretty bad before a paradigm shift can occur. Moreover, I’m inclined to believe that no matter how institutionally moribund a science becomes, genius will always shine through and find willing supporters. It is just that the bigger the science, the harder it is to turn the juggernaut around.

Nor is this simply a matter of vested interests and institutional conservatism. For at the heart of the matter there is also a fundamental philosophical issue: one that is driven by the fact that, ontologically, the majority of scientists implicitly espouse a common sense realism, which assumes the existence of an objective universe which it is the role of science to describe. It is for this reason that scientists, for the most part, don’t like the idea of scientific revolutions, and why, if they countenance them at all, they place them firmly in the past, where an unfortunate detour down a blind alley may once have required a major change of direction. The idea that this could still yet happen again, however, is almost unthinkable. For given the objective reality of the universe, mistakes apart, our scientific description of it should be an ever more accurate approximation, converging on the truth. Once achieved, moreover, this ought to make further fundamental changes impossible. 

Reasonable as this may sound, however, there is something fundamentally wrong with this whole ontological paradigm. In particular, it fails to take into account the very profound implications of the fact that not everything we know about the universe is learnt from it. Indeed, as Immanuel Kant pointed out more than two hundred years ago, not only do we know the truth of certain axiomatic or ‘categorial’ concepts a priori – that is, prior to experience – but it is only through the possession of these concepts that any experience is possible at all. To use a more modern way of talking about these things, this is because they constitute something like our fundamental operating system, a small core of basic expressions and key functions, without which our minds would remain unformatted hard-drives incapable of receiving information. 

First among these ‘laws of thought’, as they are commonly called, are the basic rules of logic. These include the laws of non-contradiction and excluded middle, which together state that a proposition is either true or not true – that it cannot be both and cannot be neither – and are thus expressions of the same fundamental binary principle which is the basis of all computing. In addition to the rules of logic we then have the concept of quantity or number, from which, in conjunction with the rules of logic, all of mathematics can be derived, as Bertrand Russell and Alfred North Whitehead set out to demonstrate in Principia Mathematica. Add the concept of causality, or more particularly our unshakable belief in its necessity – that everything that happens must have a cause – and, again in conjunction with the laws of logic, one gets inductive reasoning and the whole of science. If we finally add our spatial and temporal awareness, from which we derive our concepts of space and time, we then have an infinite three-dimensional universe, with an infinite, unidirectional temporal dimension, in which maths and science can come out to play.
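For reference, the two logical laws mentioned here are standardly written as

¬(p ∧ ¬p)

for the law of non-contradiction, and

p ∨ ¬p

for the law of excluded middle: between them they guarantee that exactly one of p and ¬p holds, which is the binary principle referred to above.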

Of course, one can always argue over any of the individual concepts which Kant postulated as categorial in this way. One can argue, for instance, that our concept of causality could be arrived at through induction: that by observing that everything that has happened in the past has always had a cause, one might conclude that everything that will happen in the future will also have a cause. One only gets to take this step, however, if one already has the certainty that a change in this pattern would, itself, have to have a cause. Inductive reasoning only works, that is, because we believe in causality. The more important point, however, is that regardless of any particular argument over potential or candidate categorial concepts, the basic principle – that in order to experience anything at all, one needs to have some way of processing, ordering and making sense of that experience, and that some core cognitive functionality has therefore got to be in place from the moment we gain consciousness – would seem to be irrefutable. It is certainly the case that no one over the last two hundred years has yet managed to refute it. 

It is the epistemological and ontological implications of all this, however, which most strongly conspire to undermine the common sense realism which science so insouciantly takes for granted. The first of these is the fact that if everything we ever experience – our entire phenomenal reality – is shaped by or channelled through these categorial concepts or laws of thought, it follows that nothing we could ever experience – or ever experience as real – could ever conflict with them. The possibility of discovering some region in the universe where two plus two equalled five, for instance, is quite literally unimaginable, not simply because we all already know, a priori, that it is impossible, but also because, as Wittgenstein pointed out in Remarks on the Foundations of Mathematics, it is not at all clear how this discovery could be made, or what sort of evidence would actually lead us to this conclusion.

Independently of this – although it follows from it as well – the possession of these hard-wired concepts also accords us the kind of absolute certainty about certain aspects of our phenomenal reality which inductive reasoning, alone, could never provide. So confident are we, for instance, that things won’t simply pop into existence ex nihilo, that even those who do occasionally experience a world in which such things occur are more inclined to conclude that they are either dreaming, drugged or insane, than that the world is really like this. The question they more commonly ask is not ‘What is happening to the world?’ but ‘What is happening to me?’ The irony is that although we put this down to the existence of an objective universe, governed by constant and immutable laws, the only thing that could actually give us this level of confidence is the inbuilt nature of the concepts on which those immutable laws are based and which we are therefore incapable of doubting.

Indeed, it is this basis in the way our own minds are constituted that ultimately gives us the confidence that whatever scientific description of the universe we develop by applying the rules of logic, mathematics and inductive reasoning to observed regularities in that universe, this description will not be undermined by random and inexplicable changes in those regularities. By turning the ontological paradigm of common sense realism on its head, Kant was thus the first philosopher ever to provide a basis in certainty, not for individual scientific theories, of course, which can always go astray for a number of entirely explicable and often entirely human reasons, but for the practice of science as a human activity.

Not, of course, that many scientists have ever been particularly grateful for this. I doubt that many have ever regarded a philosopher’s endorsement as necessary. And in this case they would almost certainly have regarded it as a two-edged sword. For if we accept that the ordering principles which govern our phenomenal reality originate in ourselves, the $64,000 ontological question to which this inevitably gives rise, of course, is whether we have any reason to believe that these same ordering principles also apply to the noumenal reality of the universe as it is in itself, outside of our experience of it. And the answer, of course, is once again ‘No’.
Not that this means that they don’t. For we don’t have any reason to believe that either. The point is rather that we can only know what we are constituted to know by the laws of thought: that is to say, the phenomenal universe as we experience it. Anything outside of this is, by definition, unknowable. It is also, therefore, quite pointless to talk about it, a fact which Kant frequently stressed and which Wittgenstein later reiterated at the end of the Tractatus, in his now famous remark that ‘What we cannot speak about we must pass over in silence’.

Not, of course, that this has ever prevented scientists from both attempting and seeming to enter this unknowable realm where the laws governing our phenomenal universe may or may not hold. There are aspects of quantum mechanics, for instance, or, more specifically, Heisenberg’s uncertainty principle, which appear, if not to contravene the law of non-contradiction exactly, then certainly to suspend it. More generally, in positing multiple dimensions in space/time, and in defining a point in time at which time, itself, began, we are certainly talking about things outside of our phenomenal experience. To do this, moreover, science has invented an interrelated set of formal languages to express, in a rule-governed way, what we are constitutionally incapable of imagining and to thereby circumvent this limitation. One may not be able to think, for instance, that Schrödinger's cat is both alive and dead at one and the same time, but in the form of

Δx Δp ≥ ħ/2

which is Heisenberg’s mathematical expression of the uncertainty principle, one appears to be able to write it.
The trouble with all such formal languages, however, or more particularly with their denotation –  the way they are often translated back into something we can understand in phenomenal terms – is that they have a tendency, as the above example demonstrates, to come adrift from phenomenal reality in ways which then appear to allow them to do things which are clearly specious.

To illustrate this more clearly and make it more accessible, take, for example, the analogous way in which, for hundreds of years, theologians have attempted to attribute properties to God. Importantly, let me first say that within Kantian cosmology, there is nothing to preclude God’s existence, or to make faith in Him irrational. It is just that, if God does exist, He does so outside of our phenomenal universe and is therefore noumenal, unknowable and, as the hymn says, utterly ineffable, an entity about whom we can say literally nothing. Heedless of Kant’s warnings, however, theologians have spent innumerable hours inventing their own technical, if not formal, language to describe Him, a language which includes such concepts as omnipotence, omniscience and sempiternity. Ungrounded in phenomenal reality, however, all of these terms have internal contradictions which invariably lead to paradoxes when pursued at any length. The most famous of these, for example, is the omnipotence paradox, which, after establishing that there cannot be two omnipotent beings in the universe, concludes that one omnipotent being cannot therefore create another, and cannot therefore be omnipotent.

When I was a young Ph.D. student, I actually attended a lecture on this very subject, given by someone who, in all seriousness, thought he had found a way around the problem, not realising that the whole issue was based on a linguistic mistake. For not describing anything that can be found in our phenomenal universe, languages which are based on logical constructs of this type, no matter how reasonable they may seem, are simply word games, linguistic sleights of hand. One may appear to be saying something meaningful and even intelligent, but it is simply a bubble of language which has floated free of reality and relates to nothing at all.

Not that I’m saying that the formal language in which the latest theoretical physics is expressed is guilty of or even susceptible to this kind of error. To be honest, like most of us, I simply do not know enough to make this judgement. As is so often the case, institutionalised science has again put us in this position. Moreover, I tend to believe that, on balance, formal languages, in which the functional operators are all taken from mathematics or formal logic, are intrinsically less likely to lead one down this road than a natural language in which technical terms are merely introduced. It is why, after all, such mathematically based formal languages are used. I am also fairly convinced that Heisenberg’s uncertainty principle only appears paradoxical when translated into natural language and illustrated in the form of Schrödinger's cat. Despite the fact that it was intended to make the subject more accessible, quantum physics has therefore been done a great disservice by this misleading and rather silly analogy. Sometimes, one has to say, scientists are their own worst enemies. From what I know of more recent physics, however, centred on providing science with a unified theory, one has reason to be more cautious.

Since its inception in 1995, for instance, not only has the number of variables encompassed by M-Theory increased with almost every revision, as additional particles have had to be incorporated into the model to make the maths work, but the number of dimensions in which these particles are thought to move has also increased and now stands at a mind-boggling eleven, making both the model and the maths so complicated that there are probably only a handful of people in the world who can truly claim to understand them, with each of the members of this exclusive club therefore peer-reviewing each other. What worries me most about this situation, however, is not that we seem to have brought about the very state of affairs which the Royal Society most fervently strove to banish – the authority of an unchallengeable elite, who are only able to explain what they are doing in terms of specious and often paradoxical analogies – but that we, as a culture, have somehow come to accept their own characterisation of these activities as of cosmic importance in answering some primordial question, when, on the basis of Kant, we have very little reason to believe this, and every reason to doubt it. For even if M-Theory, or whatever theory may eventually succeed it, were to find a coherent phenomenal denotation, or, better still, were to be cashed in the form of some real world application – as one might say that Einstein’s relativity theories were cashed in terms of nuclear power – thus demonstrating its truth in terms other than the soundness of the mathematical model, we still wouldn’t know, and would still have every reason to doubt, whether it described the universe as it exists in itself. The only thing we could say with any certainty is that physics would have gained a new paradigm which would then provide us with a more coherent description of our phenomenal universe than was provided severally by relativity theory and quantum mechanics.

Even if a unified theory were forthcoming in this way, moreover, this would still not necessarily mean the end of pure science, or that science, thereafter, would merely become a matter of filling in the gaps, as the convergence theorists like to suppose. For given that this theory would have to describe or otherwise be applicable to our phenomenal reality not only to be considered true but even to be thought meaningful, and given that the universe as it exists in itself could, for all we know, be  governed by laws which we could never comprehend even if we could know them, it is quite possible that the universe might continue to throw up anomalies, requiring new paradigms to describe it on an infinite and ongoing basis. It is quite consistent with Kant’s epistemology, in fact, that scientists could be employed forever, continually opening up more interesting windows upon reality, which, in turn, might lead to more interesting technology options, without, of course, ever being able to open the one window they most fervently desire to look through: the one which reveals the universe as it is in itself, the way a noumenal being, without our phenomenal limitations might see it – if only we could comprehend the first thing about such a being. 

The really sad thing about this desire, however, is not only how few scientists seem to realise how naïve and, indeed, childish it is, but how few seem to understand its hopelessness. For even if science were to see back through the Big Bang to the point zero at which our universe, both temporally and spatially, popped into existence, and even if we agreed not to ask what things were like half an hour earlier – which, for us, is always a meaningful question – would this actually answer the question that scientists seem to think it would, or, indeed, any question at all? For no matter how far one pushes the frontiers of knowledge, eventually one comes up against the bounds of sense, the point at which our understanding can penetrate no further. Call it the noumenon, or point zero or, indeed, God, these are simply the labels we use to denote our intrinsic limitations, the signs we plant in the ground to say ‘Beyond here be dragons!’ What we should also consider, however, is that it is precisely these intrinsic limitations, as Kant pointed out, that make all our knowledge and understanding possible, and it is to the possible that we should keep our faces turned.