1. The Relationship Between Thought & Language
In terms of its consequences, one of the worst philosophical errors of the modern era is the implicit assumption that all thought is mediated by language: an error which is all too easy to make because most of the thoughts of which we are aware are those which we either articulate or could articulate if asked to say what we were thinking. There are, however, times when we say something like ‘Wait! I’ve just had a thought’ and then take some time and effort to put that thought into words, strongly suggesting that the thought preceded its articulation. What’s more, there are also times when, nagged by the feeling that we might not have got it quite right, we are dissatisfied with the way we have actually expressed a thought, further suggesting that thoughts are, or at least can be, independent of language.
Of course, it may be argued that whatever subterranean cognitive processes go on prior to the articulation of a thought, these do not actually constitute ‘thinking’ in that the act of thinking actually consists in putting our thoughts into words. Even if one were to accept this as a definition of one form of thinking, however, it is very different from ‘thinking in words’ or using language to think, as when we construct a rational argument, for instance. What it does, in fact, is reveal three different levels in the relationship between thought and language: the pre-linguistic base level at which we have an unarticulated thought; the level at which we then struggle to make sense of this thought by putting it into words; and the level at which we then use language to examine, analyse and criticise the now publicly available ideas which, through their articulation, our thoughts have become.
For those who feel uncomfortable talking about thought in any way that hints at it being more primitive and basic than language, what we have done, however, is actually make the problem worse. For we now have two levels at which our cognitive processes are hidden from us: the base level at which we have a thought we haven’t yet expressed and cannot therefore identify or say from whence it came, and the almost equally opaque transformational interface between this base level and the fully articulated world of ideas, which T. S. Eliot famously described as a ‘raid on the inarticulate’ without thereby making it any more transparent.
Indeed, it is this lack of phenomenological transparency that is at the heart of all our problems when it comes to the relationship between thought and language. For it not only makes what’s going on at the subterranean levels of this relationship essentially unknowable but consequently precludes any further philosophical investigation of them. I say this for the very good reason that if something is unknowable, there’s not very much we can say about it. And as Wittgenstein stipulated at the end of the Tractatus: ‘Whereof one cannot speak, thereof one must be silent.’ The problem with this, however, is that if we eschew all talk of those aspects of the relationship between thought and language that are hidden from us and concentrate purely on the one aspect that is phenomenologically accessible, namely our use of language as a medium for thought, then we are in grave danger of forming a very distorted view of our relationship to language as a whole, which has some very unfortunate consequences.
One of the most glaring of these is the fact that, if one fails to take into account Eliot’s raid on the inarticulate, our creative use of a language would appear to be limited to the possibilities already inherent in it. That is to say that, while it may not be strictly impossible to say something new, without the ability to bend or repurpose words to new uses, typically through the use of metaphor – as I explained in ‘The Role and Importance of Metaphorical Truth’ – the use of any language is rigidly constrained by the current definition of its terms. This, however, is not only contrary to everything we know about the history of our intellectual development – to which I shall return later – but also runs counter to Kant’s famous dictum in the ‘Critique of Judgement’ that true genius lies precisely in extending or developing a language so as to enable us to say something that could not have been said before.
Of course, it will be pointed out that no one is actually denying that we are able to bend language to our will so as to articulate something that was previously beyond our grasp. Indeed, all that is being said is that we don’t know how we do this and so cannot really talk about it. There is, however, a huge difference between not talking about something and treating it as if it doesn’t exist. Even if we merely use it as a placeholder to fill in the blank created by its unknowability, moreover, there is a lot to be gained from acknowledging the existence of that about which we cannot speak if it consequently prevents us from making other philosophical errors, the most significant of which, in this case, is a tendency, reinforced by scientific materialism, to treat human beings as tropistic.
The best way to illustrate this is to use once again Dan Dennett’s example of the tropistic wasp, which he first introduced in a paper I heard him give at Birmingham University some forty-odd years ago and which I also used in ‘The Role and Importance of Metaphorical Truth’. The story goes like this. The female of this particular species of wasp excavates a nest into which she lays her eggs before going out to hunt for grasshoppers or locusts, which she stings and paralyses but does not kill so that they remain alive and fresh in order to provide food for her offspring during their larval stage. She then brings the paralysed grasshopper or locust back to the nest and leaves it on the threshold while she first checks inside to ensure that everything is as it should be. Satisfied that all is well, she then comes outside again, retrieves her prey and drags it into the nest.
On the surface, therefore, this not only seems like intelligent and purposeful behaviour but something akin to what we would regard as maternal. What entomologists discovered when they conducted further experiments on the wasp, however, was that if, during the time the mother spent checking the nest, they moved the paralysed locust a few inches away from the entrance, on coming back outside, she would drag her prey back to the entrance once again before going back inside to check the nest once more. If, while she was inside, they then moved the paralysed locust again, on coming back outside, she would once more repeat the process. And, as long as they kept moving the locust, she would go on doing this over and over again, indefinitely.
Thus, what initially looked like intelligent behaviour is more like the product of a computer program which, in this case, gets stuck in a loop. Professor Dennett’s point in presenting such a starkly clear example of tropistic behaviour, however, was to support the contention, put forward by all materialist philosophers of mind, that, if one thinks of language as a kind of computer program – a fairly reasonable analogy – then while our own programming may be quantitatively more advanced and sophisticated than that of the tropistic wasp, it is qualitatively the same, in that both we and the wasp are biological machines whose behaviour, it may reasonably be assumed, is entirely determined by our programming.
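To make the analogy concrete, here is a minimal sketch, in Python, of the kind of fixed routine being described. It is purely illustrative – a hypothetical rendering of the wasp’s ‘program’, not a model of real insect behaviour – but it shows how a procedure with no step for recognising that the check has already been done will simply repeat itself for as long as the prey keeps being moved.

```python
# A toy rendering of the wasp's fixed routine (illustrative only, not a model
# of real insect behaviour). The routine has no memory of having already
# checked the nest, so moving the prey sends it back to the start.
def wasp_routine(times_prey_is_moved: int) -> int:
    """Return how many times the nest gets checked before the prey is dragged in."""
    checks = 0
    prey_at_threshold = False
    while True:
        if not prey_at_threshold:
            prey_at_threshold = True          # drag the prey (back) to the threshold
        checks += 1                           # go inside and check the nest
        if times_prey_is_moved > 0:           # the experimenter moves the prey again...
            times_prey_is_moved -= 1
            prey_at_threshold = False
            continue                          # ...and the whole routine starts over
        return checks                         # all is well: drag the prey inside

print(wasp_routine(5))   # prints 6: one check per time the prey was moved, plus the first
```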
There is, however, a major difference between the two in that, whereas the wasp’s programming is entirely hardwired into its genes – as, indeed, is much of our own programming – our linguistic programming is acquired, much like software: a fact which, in itself, militates against the materialist position. This is because how we acquire this software, or learn a language, is as phenomenologically opaque to us as our ability to then alter or modify it to better express our thoughts. If we accept, therefore, that, even though we don’t know how we do it, we do, in fact, learn a language, there is no reason why we should not accept that we also have the ability to develop and extend that language so as to say something we could not have said before, even though we have no idea how we do this either.
In fact, the only thing stopping us from embracing this view of the relationship between thought and language is the fact that it also means embracing the idea that there are some things about the universe and even, indeed, about ourselves, that are unknowable but which must necessarily exist in order to explain things we do actually know, such as the fact that throughout our history we have continually found new ways to think and talk about the universe, as is particularly well demonstrated by the occurrence of paradigm shifts in science.
2. Cultural Resistance to the Concept of Paradigm Shifts
Although I have written about this before, for those who haven’t read my previous essays on this and related subjects, the concept of a paradigm shift was first introduced by the American philosopher of science, Thomas Kuhn, in his 1962 book ‘The Structure of Scientific Revolutions’, in which he put forward a new model or paradigm of the way in which science, itself, develops. Instead of proceeding incrementally, as the traditional paradigm would have it, with one building block being laid upon another, Kuhn argued that science proceeds in stages, some of which are necessarily revolutionary. In fact, the usual lifecycle of any given field of science almost invariably starts with a revolution, when someone puts forward a new theory. As this gains acceptance, older theories are then abandoned and the science enters a stable phase. As time passes, however, some of the predictions which the new theory makes turn out to be false, requiring additional subsidiary theories to be developed in order to explain these exceptions. Over time, however, the number of exceptions increases, requiring more subsidiary theories to be developed, to which exceptions may also be found, requiring further subsidiary theories until the whole thing becomes so unwieldy that someone eventually says ‘Wait! I’ve just had a thought. What if we have been looking at this whole thing the wrong way round? What if we look at it like this instead?’ thereby introducing a new theory and starting the whole lifecycle all over again.
One of the best examples of this is Lavoisier’s creation of modern chemistry in the late 18th century: one of the most remarkable contributions to science in all of history, which most people still do not understand. In fact, the general view is that Lavoisier discovered oxygen. But he did not. Oxygen was discovered by Joseph Priestley, who actually taught Lavoisier how to isolate it. It was just that Priestley didn’t call it oxygen. He called it dephlogisticated air. It was Lavoisier who called it oxygen, just as he called hydrogen ‘the maker of water’ when he discovered that, when ignited in the presence of oxygen, the two gases combined to form H2O. What Lavoisier did, therefore, was not just discover a new element but create a whole new language for thinking about and describing the material world, in which the concept of phlogiston, which had dominated chemistry for the best part of a century, was discarded in favour of the concept of elements, composed of minute particles called atoms, which combine in different ways and quantities to form different substances.
That’s not to say, of course, that he did it all on his own or in a vacuum. The term ‘atom’, for instance, had been introduced into modern science more than a century earlier by Robert Boyle, who derived it from the Greek word ‘atomos’, meaning ‘indivisible’, which was first used by the Greek philosopher Democritus in the 5th century BC. What’s more, there was still a long way to go. Other elements and a whole host of new laws describing how they combine and act upon each other had yet to be discovered. It was Lavoisier, however, who created the basic model or conceptual framework upon which successive generations of chemists were consequently able to build, an achievement far greater than the mere discovery of a single element.
In fact, if more people understood what Lavoisier actually did, he would be regarded with far more esteem than he actually is, which rather raises the question of why he is not. The answer, however, is really quite simple. It is because most scientists or, perhaps more accurately, the very institution of science, itself, if such there be, doesn’t like the idea of scientific revolutions, much preferring the traditional paradigm, in which science proceeds in an incremental and orderly fashion and in which every contribution, no matter how small, adds to the sum total of scientific knowledge. Despite all of the historical evidence to the contrary, therefore, from Copernicus to Einstein, wherever possible, the institution of science refuses to acknowledge that scientific revolutions and their concomitant paradigm shifts occur.
One of the reasons for this is the belief that the very idea of scientific revolutions undermines science. For if scientific revolutions have happened in the past, they can happen in the future, replacing current scientific paradigms with new ones that have yet to be conceived, thereby placing all current science under a provisional cloud. Even if it is conceded that scientific revolutions have happened in the past, therefore, it is generally agreed that they cannot happen in the future: an article of faith based on the implicit assumption that all of science’s current theories – especially its more foundational theories – are correct.
This, of course, is nonsense and is made demonstrably so by the application of Sir Karl Popper’s irrefutable argument that scientific theories cannot be proven, only falsified, which means that even if all our current theories were correct, we couldn’t know this. For even if a theory has so far survived three hundred years without being proven false, there is no guarantee that it will survive another three hundred years or even three hundred days. All scientific theories are therefore essentially provisional, which the existence of scientific revolutions only makes more uncomfortably clear.
There is, however, an even more profound reason why the institution of science doesn’t like the idea of scientific revolutions. For at the heart of every scientific revolution, of course, there is the creation of a new scientific paradigm, a new way of thinking about the world which requires precisely the kind of creative genius Kant describes: someone who is able to extend or reshape the language so as to say something that could not have been said before, someone, indeed, like Copernicus, Lavoisier or Einstein. The problem is that we do not know how these geniuses did what they did or, indeed, how anyone can rewrite a language so as to say something new. The experience is simply not accessible to us and, being inaccessible, is therefore unknowable, which gives the institution of science yet another problem. For if something is unknowable, it is also, of course, unteachable.
In fact, in order to be teachable, a process or method has got to be completely transparent. Any institution attempting to teach science, therefore, must teach a scientific method that precludes the need for genius. Indeed, any predisposition or tendency among its students to think outside the box has got to be discouraged and a strict adherence to the prescribed method and current orthodoxy cultivated.
This is principally achieved by fostering a culture of both methodological exactitude and cooperation within the student body, such that students work together in a clearly defined and highly disciplined manner, rather than compete against each other by thinking along their own lines. This fostering of a strictly disciplined scientific mentality is also significantly helped by the fact that the first articulation of a new paradigm, like the first expression of any new idea, as Thomas Kuhn himself was all too well aware, is almost invariably inchoate: only half formed and full of holes and therefore very easily derided. Anyone putting forward such new ideas consequently has to work very hard to make any impression on the established order, especially when it is that established order that is handing out the jobs and research grants. The result is that institutional science is almost invariably conservative science, which has no place for revolutionary thinking. The problem with this, however, is not just that it suppresses something vital in the dynamic nature of science and leads to a kind of ossification, but that it also leads to science becoming corrupted, not just in the sense of financial corruption – though this too – but in the sense that it is no longer scientific.
3. The Corruption of Science
In fact, it is the corruption of science, itself, that actually leads to financial corruption and therefore comes first. And it does so primarily by inverting the relationship between empirical data and theory, such that empirical data no longer has primacy.
To understand how this happens, we need to go back to the most basic model of science, in which theories are created to explain empirical observations. The theories are then tested by making predictions based on them and conducting experiments to find out whether those predictions are accurate. If they are, then there is a chance that the theory is correct, though only a chance. For it is perfectly possible that experiments conducted to test other predictions based on the theory may prove those predictions false, which may prove the entire theory false. On the other hand, we may attempt to explain these exceptions by either modifying the original theory or by creating subsidiary theories in the way described above. Indeed, in Kuhn’s description of the standard lifecycle of a scientific theory, it is only when the number of subsidiary theories gets out of hand, leaving us with more patches than original fabric, that we are eventually forced to abandon the theory altogether.
In today’s institutionalised science, however, the decision to abandon a theory altogether is even more difficult. For that would be to admit that for years, perhaps, the institution has been teaching something that it now regards as false. And this is something that it is very difficult for any institution to do. After all, people’s careers and reputations are at stake. What’s more, with revolutionary thinking having been institutionally suppressed in science for at least the last two generations, the chances of there being an alternative theory or paradigm available to replace the one that now needs to be abandoned are very slim. No matter how much empirical evidence has built up to prove a theory false, therefore, it is the errant data that must now always be explained away rather than the theory abandoned, which is to say that it is the theory, rather than the data, that now has primacy.
Once the primacy of theory has been established in a culture, this then has two further consequences. The first of these is that scientists no longer feel quite so constrained by the sanctity of data. If the data does not conform to the theory, therefore, they are now far more inclined to either discard or change it than was previously the case. Nor is this necessarily cynical. After all, if one sincerely believes in the theory one is putting forward or defending, errant data must surely be faulty data. Once selecting or modifying data to fit one’s chosen theory has become commonplace within an institution, however, it becomes very easy to start doing it, not because one particularly believes in the theory, but simply to get the results one needs to maintain one’s research funding and keep one’s job.
In fact, there is a considerable amount of evidence to suggest that such corruption is now endemic throughout most scientific institutions. The editor of one journal I quoted in a previous essay on this subject actually believed that up to 20% of all the papers submitted to his journal for publication were not just based on selected or modified data but on no data at all, it all having been made up. What is even more disturbing, however, is the fact that, believing in their theories rather than the sanctity of data, many scientists do not seem to think that there is anything wrong with this, a development in the very culture of science which has been further encouraged by the use of computer models which, themselves, have little in the way of empirical grounding.
Indeed, most computer models start with a theory, which is turned into a set of algorithms that the model then uses to predict future observations and measurements under different conditions. Modifications are then iteratively made to the algorithms to improve their predictive accuracy, although this in itself can be very problematic. For while modifying an algorithm to make its predictions conform to reality may seem very similar to the development of the subsidiary theories which Kuhn describes, in traditional science these patches developed to explain exceptions to a main theory had to have some theoretical basis. Simply modifying an algorithm to make its predictions fit the facts, on the other hand, can leave us in a position in which the model’s predictions are now correct but we have no idea why. Worse still, many large scale computer models are subject to continual development over many years, to which a lot of people may contribute, especially in a university setting, with the result that there comes a point at which it is possible that no single person actually knows how the model works. It becomes a magical black box which issues oracular prophecies without anyone knowing how it does so. And yet, believing in this magical black box, we still believe in the prophecies, even when they don’t come true.
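To illustrate the point in the most stripped-down way possible, the toy sketch below – which is purely hypothetical and bears no relation to any real climate model – tunes the free parameters of a simple ‘model’ until its output matches a set of past observations. The tuning works, in the sense that the mismatch shrinks, but nothing in the procedure explains why the resulting parameter values are the right ones.

```python
# A toy illustration of parameter tuning without theoretical grounding.
# The 'observations' and the functional form are both made up for the example.
import random

observations = [0.10, 0.18, 0.31, 0.39, 0.52]      # hypothetical past measurements

def model(t: int, a: float, b: float) -> float:
    """A purely empirical form with two free parameters: output = a*t + b."""
    return a * t + b

def mismatch(a: float, b: float) -> float:
    return sum((model(t, a, b) - obs) ** 2 for t, obs in enumerate(observations))

# Iteratively nudge the parameters whenever a nudge reduces the mismatch.
a, b = 0.0, 0.0
for _ in range(20000):
    da, db = random.uniform(-0.01, 0.01), random.uniform(-0.01, 0.01)
    if mismatch(a + da, b + db) < mismatch(a, b):
        a, b = a + da, b + db

# The tuned model now reproduces the data, but the procedure itself offers no
# account of why these particular values of a and b should be trusted when the
# model is used to predict the future.
print(f"a = {a:.3f}, b = {b:.3f}, residual error = {mismatch(a, b):.5f}")
```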
One of the best examples of this is the Coupled Model Intercomparison Project (CMIP), in which 102 institutions from around the world were originally funded to predict future changes in the world’s climate based on two key assumptions: that such changes are primarily driven by the accumulation of carbon dioxide in the atmosphere and that, without a modification in our own behaviour, this accumulation will continue at a rate of 1% per year.
On this basis, the project’s first set of predictions was published in 1995, covering the twenty-year period leading up to 2015. In fact, it was this first set of CMIP predictions that led Al Gore to predict that summer Arctic sea ice would have disappeared by 2014. By the time 2014 arrived, however, it was perfectly obvious that the predictions of all 102 institutions taking part in the project were wildly inaccurate, with some of them being out by more than 1°C when compared with actual data from satellites and weather balloons, the two most reliable sources of such data we have. And yet it is the predictions of these models that the world continues to believe.
4. An Absence of Critical Thinking
So how is this possible? In previous essays on this subject, I have put forward two possible answers. The first is that there are just too few people in the world who actually know the science, leaving the rest of us to just take their word for it. The problem with this, however, is that there are some people who know the science, and one would expect at least some of them to say, ‘Hold on a minute, this isn’t right’. In fact, I have actually based some of my own essays on the work of two such upstanding scientists: Richard Lindzen, Emeritus Professor of Meteorology at MIT, and William Happer, Emeritus Professor of Physics at Princeton University.
This then led me to my second answer: that there is a much larger contingent of scientists who have a vested interest in the theory of anthropogenic global warming than those who are simply committed to honest science, with the result that it is the former group whose voices are always heard. The problem with this, however, is that it would seem to entail what would have to be the biggest conspiracy in history. For it is not just scientists who would be required to continually espouse something they did not believe, but everyone who comes into contact with any aspect of reality upon which global warming should be having an effect. Anyone working in the Arctic, for instance, would surely have noticed that this summer, ten years on from when Al Gore said it would all be gone, Arctic sea ice was as extensive and as thick as it has been throughout the last century. While corruption and our predisposition to uncritically accept the authority of experts may both have contributed to inducing our current state of mass delusion, therefore, there is clearly something else going on here, which raises the possibility, indeed, that it is actually our current state of mass delusion, itself, that is that something.
After all, what is a mass delusion other than a highly prevalent way of thinking that is not actually supported by real world evidence? And of these there have been hundreds, if not thousands, of examples throughout our history. Indeed, it could be said that our entire history is a history of such delusions. We acquire them, they dominate our way of thinking for a while, and then a Copernicus, Lavoisier or Einstein comes along and says something that eventually makes us see the world in a completely different way. The problem, of course, is that it is never quite that easy. For not only are we resistant to such revolutionary changes in our world view but are so of necessity, in that language could not exist if we changed our way of thinking every other minute. In fact, language actually needs periods of stability in which everyone uses the language in the same way in order for us to explore the logical ramifications of the prevailing paradigm, thereby revealing its flaws and paving the way for the next revolution.
The problem is that, sometimes, our resistance to the next revolution is so strong that it effectively blocks it, sometimes for centuries. No matter how much evidence accumulates showing that the old way of thinking is wrong, those who have a vested interest in perpetuating it continually prevent change from occurring, sometimes even going so far as to systematically kill the proponents of change.
Fortunately, we are not actually doing that at the moment. Because we do not understand the process by which one way of thinking replaces another, however, and refuse to accept that it even occurs, we have now become so trapped in our current way of thinking that we cannot get out of it even though its flaws are not just glaringly obvious but are starting to cause us real world problems.
Take, for instance, the quest to achieve Net Zero carbon emissions with which the West is currently obsessed and which is largely focused on two main goals: the decarbonisation of our electricity grids and the replacement of petrol and diesel engined vehicles with purely electric vehicles. If the objective is as stated, however, not only is this focus far too narrow, omitting such forms of transport as airlines and cargo ships, both of which have massive carbon footprints, but the two goals are mutually incompatible, as any critical analysis very quickly reveals.
In fact, the problem is almost immediately apparent as soon as one considers that, in 2023, wind and solar power constituted just 34.3% of the UK’s total electricity generation. When conditions were optimal, there were indeed periods during which they contributed more than this; but this was their average contribution across the year. If the UK is going to completely decarbonise its electricity grid by 2050, this means, therefore, that it is going to have to triple its wind and solar generating capacity over the next 25 years, an objective which, under any conditions, would be very challenging. If, at the same time, however, we are going to replace all of the 41.2 million petrol and diesel engined vehicles on our roads with EVs, we are going to have to increase electricity production by another 37.5%, which effectively means quadrupling our wind and solar generating capacity over this period. Even putting aside the cost, therefore, which, given the current state of our finances, presents yet another challenge, it is very doubtful whether this is even remotely feasible.
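The arithmetic behind the tripling and quadrupling figures is simple enough to set out explicitly. The sketch below uses only the numbers quoted above – the 34.3% wind and solar share and the 37.5% uplift in demand from electrifying road transport – and ignores every practical complication (storage, grid losses, demand growth from other sources), so it should be read as a back-of-the-envelope check rather than a forecast.

```python
# Back-of-the-envelope check of the capacity multiples quoted above.
# Inputs are the essay's own figures; everything else is deliberately ignored.
wind_solar_share_2023 = 0.343      # wind and solar as a fraction of 2023 UK generation
ev_demand_uplift = 0.375           # extra generation needed to charge a fully electric fleet

# Multiple of current wind and solar output needed to supply today's demand alone:
decarbonised_grid_multiple = 1.0 / wind_solar_share_2023
print(f"Fully decarbonised grid: about {decarbonised_grid_multiple:.1f}x current wind and solar")   # ~2.9x

# Multiple needed to supply today's demand plus the EV uplift:
with_evs_multiple = (1.0 + ev_demand_uplift) / wind_solar_share_2023
print(f"Decarbonised grid plus EVs: about {with_evs_multiple:.1f}x current wind and solar")          # ~4.0x
```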
If our goal were solely to decarbonise our electricity grid, while keeping petrol and diesel engined vehicles on our roads, we might be able to manage it. Similarly, if our objective were solely to replace all petrol and diesel engined vehicles with EVs while continuing to power our electricity grid with natural gas, this too might be possible. Trying to do both at the same time, however, is something which only a religious zealot who hasn’t actually thought about it would even consider.
What’s more, this doesn’t take into account the very real possibility that running an electricity grid purely on wind and solar power is actually impossible. I say this because, being dependent on the weather and hence intermittent in their electricity generation, an entirely wind and solar powered grid would have to have some form of battery storage back-up for when the wind doesn’t blow and the sun doesn’t shine. This, however, is far more expensive and difficult to achieve than advocates of an entirely decarbonised grid would appear to think. In my 2021 essay on this subject, for instance, I calculated that, using the Tesla Powerpack 2 4HR battery system, at that time the leading large scale battery storage system on the market, it would cost £550 billion just to store one day’s output from our then wind and solar capacity of around 32 GW.
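For readers who want to see how a figure of that order arises, the sketch below reconstructs the arithmetic under stated assumptions. The 32 GW capacity figure is the one quoted above; the per-kWh installed cost is an assumption of mine for illustration, not a quoted Tesla price, and the calculation treats the 32 GW as if it ran flat out for 24 hours.

```python
# Rough reconstruction of the order of magnitude of the storage cost quoted above.
# ASSUMPTION: an installed battery cost of ~£700 per kWh (illustrative, not a quoted price).
capacity_gw = 32                              # then-installed UK wind and solar capacity (from the essay)
hours = 24                                    # one day's output, treating the capacity as running flat out
energy_kwh = capacity_gw * 1_000_000 * hours  # 32 GW for 24 hours, expressed in kWh

assumed_cost_per_kwh = 700                    # GBP per kWh of installed battery storage (assumption)
total_cost = energy_kwh * assumed_cost_per_kwh

print(f"One day's output: {energy_kwh / 1e9:.2f} TWh")                                   # ~0.77 TWh
print(f"Storage cost at £{assumed_cost_per_kwh}/kWh: £{total_cost / 1e9:.0f} billion")   # ~£540 billion
```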
This being clearly non-viable, people are now therefore talking about using ‘green’ hydrogen as our storage medium, the idea being that the hydrogen would be produced by electrolysis using wind and solar generated electricity on days when the wind does blow and the sun does shine and then burnt in modified gas fired power stations when an alternative source of energy is required. What this doesn’t take into account, however, is how much more electricity one would have to generate in order to produce the hydrogen, or the fact that it takes 50% more energy to produce hydrogen by electrolysis than one actually recovers by burning it. As a solution to the storage problem, therefore, this would only make sense if ‘free’ wind and solar power really were as cheap as people like to believe, thereby making their profligate use to produce hydrogen economically viable. The fact is, however, that wind and solar farms are only ‘viable’, themselves, if they are heavily subsidised. For as I have demonstrated elsewhere, they too consume more energy in their manufacture, installation, operation and maintenance than they ever produce in their lifetime, making the whole renewables industry an exercise in economic futility and corruption.
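Taking the essay’s figure that electrolysis consumes 50% more energy than is later recovered by burning the hydrogen, the implied round trip can be set out in a couple of lines. This is a sketch of that single assumption only; it ignores compression, storage losses and the efficiency of the power station doing the burning, all of which would make the picture worse.

```python
# The round-trip arithmetic implied by the 50% figure quoted above.
energy_in_per_unit_out = 1.5                       # GWh generated per GWh eventually recovered
round_trip_efficiency = 1 / energy_in_per_unit_out
print(f"Implied round-trip efficiency: {round_trip_efficiency:.0%}")     # ~67%

# So every 1 GWh of backup electricity delivered from stored hydrogen requires
# 1.5 GWh of 'surplus' wind and solar generation to have been produced up front.
backup_needed_gwh = 100
surplus_needed_gwh = backup_needed_gwh * energy_in_per_unit_out
print(f"To deliver {backup_needed_gwh} GWh of backup: {surplus_needed_gwh:.0f} GWh of surplus generation")
```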
The entire Net Zero project is therefore a total fantasy, unsupported by either economic or engineering reality. Not only can it not be achieved, however, but if we continue pursuing it, it could easily result in a disaster. For not only is it inevitable that, if forced down this road, the electricity grid would eventually fail, along with all the computer systems that depend on it, but it is also fairly certain that this would be very shortly followed by the failure of everything that depends on a computer, which, in today’s world, is just about everything. With no power to heat our homes and no food in the shops, societal collapse would then shortly follow, with riots in the streets, widespread looting and the complete breakdown of law and order.
Not, of course, that it will actually come to this. For what can’t be done, won’t be done. The only question is how much damage will be done before the reality of the situation finally sinks in. For no matter how much evidence accumulates demonstrating that the whole Net Zero project is a fool’s errand, there will be those who will still resist abandoning it. Nor will these diehard advocates of Net Zero be confined to those with a vested interest in having it continue, including those paid by government to advise it on the subject, who will no doubt insist to the very end that it can be made to work. An even louder voice will almost certainly come from those who believe that there is no alternative, the alternative being that the planet is destroyed. After all, 95% of scientists agree that the accumulation of carbon dioxide in the atmosphere is the most significant cause of global warming and that we, ourselves, are the most significant cause of this accumulation. What’s more, based on our traditional view of science, combined with our current inversion of the roles of theory and data within it, this is not just regarded as a theory but as a matter of fact.
Nor does it help when faced with such a mind-set to point out that, before Lavoisier put forward a much better theory – one which more accurately predicted what happens in the real world – 95% of scientists believed that all non-metallic materials lost weight when heated because they gave off phlogiston. For in order to see these two situations as analogous one actually has to subscribe to Thomas Kuhn’s paradigm of the way in which science proceeds, which we, of course, do not. Indeed, one could say that this was our real or underlying problem if there weren’t something even more fundamental still underlying it. For our problem is not just our choice of paradigm for describing and understanding the activity we call science, but our entire view of the universe. For believing in an entirely material universe, entirely knowable and explicable by science, we are unable to believe that anyone could do what Thomas Kuhn claims Lavoisier did: rewrite his own linguistic programming so as to think and say something he was not able to think or say before, a feat which is not only phenomenologically inaccessible to those who are able to do it but completely inexplicable in materialist terms.
For those of us who are completely wedded to the materialist worldview, therefore – which is just about everybody in the modern world – giving up the traditional paradigm of science and adopting Thomas Kuhn’s is simply unthinkable. It would be akin to an atheist converting to Christianity and would require something just as revolutionary to cause it. Unless we undertake this philosophical journey, however, and accept that there are aspects of the universe that are fundamentally unknowable, including ourselves, we shall remain as trapped in our closed way of thinking as Dan Dennett’s tropistic wasp until our failure to see its flaws eventually destroys us.