1.
Creativity and Discontinuity
Forty years ago I wrote a Ph.D. thesis on the subject of
creativity, taking as my starting point Kant’s description of genius in ‘The
Critique of Judgement’, where he says that true genius does not just work out
the possibilities inherent in an already existing language but creates a new
language which makes it possible for us to say and think things we had not been
able to say and think before: a proposition which has the further implication
that, if this new language enables us to think things which were previously unthinkable,
then it cannot, itself, be derived from the old way of thinking, thereby
necessitating some form of discontinuous leap in its creation.
These leaps I called discursive discontinuities, borrowing
the term from Michel Foucault, whose seminal work on this subject, ‘The Order
of Things’, describes one of the most momentous discursive discontinuities in
our history, as we made the transition from medieval thought, based on the
central concepts of ‘sympathy’ and ‘antipathy’, to modern, scientific thought,
based on the concept of ‘causality’, a transition which was so radical, in
fact, that most people today cannot even imagine how the medieval mind worked.
A classic example I have often used to illustrate this is
the fact that medieval herbalists believed that, because the flesh of the
walnut resembles the human brain, while its shell resembles a human skull, taking
powdered walnut as a medicine would alleviate headaches, the two objects, the
flesh of the walnut and the human brain, having a sympathetic connection. What’s
more, they believed this because they also believed that the resemblance
between the two objects was a ‘sign’ placed on them by God to show us that this
sympathetic connection existed, making it the primary task of the ‘natural
philosopher’, therefore, to learn the language of these signs and thus gain
power over nature.
Even more alien to our way of thinking is the idea that
there could also be a sympathetic connection between the sign for something and
the thing itself, signs in this context also including the ‘true’ names of
things, such that if one knew the true name of something one would gain power
over it simply by saying that name. What’s more, it was generally believed that
the only thing that prevented us from exercising this power was the fact that
the true names of things had been lost during the destruction of the Tower of
Babel, when mankind’s original language had been fragmented and corrupted,
stripping words of the sympathetic and, indeed, antipathetic power they once
had: a belief which consequently led to the development of another field of
medieval study, the principal aim of which was the reconstitution of this
original, pre-Babel language, which would give those who mastered it power over
nature through the utterance of words alone, as was thought to be the case, for
instance, with respect to the word ‘abracadabra’.
Then, of course, there was the sympathetic relationship that
was thought to exist between the macrocosm of the heavens and the microcosm of
the earth and the individual human beings who inhabit it, whose lives were thought
to be influenced by the positions of the planets both at the time of their
birth and in the future, which could be predicted by casting future planetary charts
in the form of horoscopes, upon which even physicians relied in selecting what
they thought were likely to be the most efficacious treatments for their
patients.
Today, of course, we not only wonder how people could have
believed such patent absurdities –
as they did for more than a thousand years – but how the world could have actually functioned under
such a deluded belief system. Not only is it perfectly possible that people a
thousand years from now will look back on our beliefs with the same
incredulity, however, but the medieval world functioned, just as ours
functions, precisely because those parts of any belief system that are deluded
are simply bubbles of language which have no connection to the real world and
which cannot therefore affect it, at least not positively.
That is not to say, of course, that they cannot negatively
affect it, as when, for instance, a physician prescribes some medicine based on
some bodily resemblance which then has an adverse effect. But most medieval
‘knowledge’, like much of our theoretical knowledge today, simply didn’t touch
the world in which most people lived and went about their business, using whatever
practical knowledge they needed without reference to such concepts as sympathy
and antipathy. Being tokens in what was just an intellectual game, such ideas were
thus mere distractions for people with enough time on their hands to indulge them:
idle pastimes for the idle rich which the world could afford because, in an era
in which hardly anything had changed for more than a thousand years, the world
simply didn’t need any significant contribution from the practitioners of these
arts.
That all changed, however, at the end of the 16th
and the beginning of the 17th century, when global cooling led to
shorter growing seasons and hence frequent famines throughout the world, particularly
at more northern latitudes, which in turn led to mass migrations from the
countryside to the cities, where endemic poverty gave rise to both revolutionary
and religious wars, such that, by the second half of the 17th
century, what people wanted more than anything else was ‘order’, not just in
their political systems but in their very representation of the world, itself, which
the infinitely idiosyncratic and peculiar relationships of sympathy and
antipathy simply could not provide.
What people wanted, indeed, was a world which not only
behaved according to universal and immutable laws but was also logical and
clear in its structure: two requirements which thus meant that the transition
from the medieval to the modern mode of thinking actually had to proceed along
two main fronts.
The first of these was a thorough reorganisation of the
world into more orderly formations, the most obvious examples of which were the
various taxonomic tables classifying the natural world which began to appear at
the beginning of the 17th century with the publication of the Pinax Theatri Botanici by the Swiss
botanist Caspar Bauhin in 1623. This was followed in 1682 by the Methodus Plantarum Nova, by the English naturalist John Ray, who
classified around 18,000 plant species. The most successful taxonomy of the
pre-Linnaean era, however, was that constructed by Joseph Pitton de Tournefort,
whose Institutiones Rei Herbariae,
published in 1700, emphasised the classification of genera, many of which were
accepted by Linnaeus and are still in use today.
It was Carl Linnaeus, however, who made the big breakthrough,
not only by simplifying the Latin naming convention that was then in use, but as
a result of his choice of classification principles. For unlike de Tournefort
and, indeed, others, who had largely based their classifications on often quite
deceptive floral characteristics, Linnaeus famously based his on species’
reproductive systems, thereby seeming to anticipate Darwin's Origin of Species in that species which
were close to each other on the taxonomic table were likely to have common
ancestors, something which could not be the case, of course, if their
reproductive systems were radically different. It also allowed him to apply the
same classification principles to other parts of the natural world, eventually
allowing others to bring every living thing under a common hierarchical
umbrella.
If a large part of late 17th century science was
thus about bringing tabulated order to a chaotic world, much of the rest was
about formulating the laws under which this world operated, which again made 17th
century science slightly different from the science we practice today, modern
science being generally regarded as proceeding in three phases: the observation
of a regularity or correlation; the formulation of a hypothesis which would
explain this regularity or correlation; and the testing of this hypothesis by means
of experimentation. Much of 17th century science, however, more or
less ended with the first phase.
That’s not to say, of course, that experimentation and the formulation
of hypotheses did not take place. However, much of the experimentation was
performed to confirm the regularity or correlation rather than to test any
theory explaining it.
Take, for instance, Boyle’s Law, which was published in 1662
and states that, for a fixed quantity of gas at constant temperature, pressure is inversely proportional to volume,
such that the pressure of a gas decreases as the volume of the container in which it is held
increases. This was demonstrated by using a closed ‘J’ shaped tube containing different
amounts of mercury for each iteration of the experiment. The mercury was then
used to force the air on the other side of the tube to contract, the multiple
iterations revealing that the inverse relationship between the pressure exerted
by the mercury and the volume of the air remained constant. At no point,
however, did Boyle attempt to theoretically explain why this was the case.
Indeed, it was not satisfactorily explained until James Clerk Maxwell addressed
the issue two centuries later. For Boyle, it was enough that he had identified
a universal law, this being what the 17th century primarily demanded
of its scientists.
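For readers who like to see the regularity itself, the law can be written in modern notation, which is not, it should be said, how Boyle himself expressed it: for a fixed quantity of gas held at constant temperature,

```latex
P_1 V_1 \;=\; P_2 V_2 \;=\; k \quad \text{(a constant)}
```

so that halving the volume of the trapped air doubles the pressure it exerts. The equation records the regularity; it says nothing at all about why gases should behave in this way.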
What this also did, however, was contribute significantly to
the mythology of science, which is to say those things which scientists like to
believe about science which are not actually true. One of these beliefs, for
instance, encouraged by the fact that men like Robert Boyle, Robert Hooke and Sir Isaac Newton
were building this great edifice of universal laws, was that science was
continuous and linear, a bit like building a house. You start with sound
foundations and then build upwards, laying one brick on top of another in a
process of slow and methodical accretion in which there are no gaps and no great
leaps of imagination. As soon as one admits that one cannot really know that
something is a universal law until one also knows why it is so, and accepts,
therefore, that no science is complete without a theoretical explanation of its
laws, one is forced to recognise that hypothetical speculation is an essential
part of this process, and that far from being continuous and linear, it is very
often discontinuous and cyclical, as described by Thomas Kuhn in ‘The Structure
of Scientific Revolutions’.
Because scientific theories, as I have explained elsewhere,
can never be proven –
only disproven –
what Kuhn describes, in fact, is a process in which it is actually the purpose
of scientific theories to be shot down. I say this because, from the day they
are introduced, they instantly come under scrutiny from other scientists
looking to find flaws in them, not out of malice or anything like that, but
because this is how science actually works, not by the slow accumulation of
building blocks, but by the critical examination of each block that is put
forward. Even in the case of the most well accepted theories, as a consequence,
exceptions to the theory are eventually discovered, with the result that, if
the theory is to be maintained, subsidiary theories have to be introduced to
explain why empirical observations do not always accord with what the main theory
predicts. Over time, as these inconsistencies accumulate and more and more of these
subsidiary theories are consequently required, the overall theory therefore
becomes increasingly complicated until, eventually, someone comes along and
says, ‘You know what, it would be a lot simpler if, instead of looking at it
like that, we looked at it like this’, thereby introducing a new theory, which initially
overcomes many of the old exceptions –
which is why it is initially accepted –
but which merely begins the process all over again.
Nor is this a mere theory. For as Thomas Kuhn points out,
the history of science is littered with dozens of examples of these paradigm
shifts, as he called them. In fact, one of the most well-known – that brought about by
Nicholas Copernicus –
happened so early on in the history of science that most people don’t actually
see it as a revolutionary change within science at all, but as part of that
larger transition from the medieval to the modern way of thinking described by
Foucault: a misconception which is almost certainly due to the later persecution
of Giordano Bruno and Galileo Galilei by the Roman Catholic Church as a result
of their beliefs on this subject, along with the fact that the Catholic Church
is regarded as the very epitome of medievalism while Galileo is regarded as one
of the fathers of modern science, which the medieval church was trying to hold
back. While these latter points may be valid, however, the science of astronomy
was no less scientific before Copernicus’ revolutionary change in perspective
than it was afterwards. In fact, the only thing that changed was the position
of the observer, who had previously been located at the centre of the
astronomical system but who was now relegated to a position on one of a number
of planets orbiting that centre. Everything else – the instruments used to measure and chart the
position of the planets, the Euclidean geometry used to calculate their future
movements, etc. –
remained exactly the same.
More to the point, the reasons why Copernicus chose to
reposition the observer are precisely those that Thomas Kuhn describes as
typical in scientific revolutions. For until Copernicus changed the underlying
model there were numerous discrepancies between the observed position of the
planets and their predicted positions based on standard geometric calculations.
Moreover, these discrepancies had been known about and explained away using
various ad hoc theories for
centuries. By changing the underlying model from one that was geocentric to one
that was heliocentric, however, Copernicus discovered that most of the discrepancies
simply disappeared, which should have convinced anyone with half a brain that
he had to be right, or at least nearly so.
I say this because, although I wouldn’t expect another
revolutionary paradigm shift any time soon, a lot has changed in Copernicus’
model since he published his De
Revolutionibus Orbium Coelestium in 1543, and even today it is not perfect,
with new anomalies being continually discovered which then require further
refinement of the model. To give the reader an idea of some of the changes that
have taken place in the last 479 years, Copernicus, for instance, assumed that
all the concentric orbits of the planets were in the same plane. But they are actually
at slightly different angles to each other. What’s more, the angles constantly
change, which is to say that the orbits wobble. Copernicus also assumed that
the planets’ orbits were all the same shape and that they didn’t change. Due to
the gravitational pull exerted by the planets on each other, however, their
orbits are again constantly changing, sometimes being pulled further out such
that they become more elliptical, sometimes being pulled further in such that
they become almost circular. Copernicus also assumed that the sun was always at
the centre of each planet’s orbit. As a further result of the gravitational pull
exerted by the planets on each other, however, the sun is usually closer to one
end of a planet’s elliptical orbit –
the end known as the perihelion – than the other – known as the aphelion – a fact which, in the
case of the earth, has a massive cyclical effect upon our climate, which our
politicians, still adhering to a very basic Copernican model of the solar
system, fail to take into account when presuming to tackle climate change.
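To put some rough numbers on this last point, the modern description of an orbit, which of course postdates Copernicus, characterises the ellipse by its semi-major axis a and its eccentricity e, with the sun at one focus rather than at the centre, so that the nearest and furthest distances are

```latex
r_{\text{perihelion}} = a(1 - e), \qquad r_{\text{aphelion}} = a(1 + e).
```

For the earth, with a of roughly 149.6 million kilometres and e currently about 0.017, this gives a perihelion of about 147 million kilometres and an aphelion of about 152 million kilometres, a difference of some five million kilometres; and e itself drifts slowly over cycles of roughly a hundred thousand years.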
The real lesson to learn from this, however, is that all
models and theories are human inventions created to represent an infinitely
complex universe; they are not that universe itself, and are only ever, therefore, a simplified reflection of it, with
the result that whenever we are arrogant enough to think that we have its
measure, the universe almost invariably teaches us the error of our ways.
An even better example of a paradigm shift with respect to
how well it illustrates Kant’s proposition that genius does not just work through
the possibilities inherent in an
existing language but creates a new language in which it is possible to think
and say things we had not been able to think or say before, however, is Lavoisier’s
naming of oxygen. I put it this way because, while most people believe that
Lavoisier discovered oxygen, as I have explained elsewhere, that is not what he
did at all. In fact, Joseph Priestley discovered oxygen and even taught
Lavoisier how to isolate it. Priestley, however, called it dephlogisticated air.
Lavoisier renamed it, just as he renamed hydrogen, as part of a wholesale
transformation of the way in which he thought we should conceive of the
material universe: a transformation which essentially consisted in the abandonment
of what one might call transformational chemistry – which, having its roots in alchemy, had conceived
of chemical changes as changes in the very nature of the substances being acted
upon – and the adoption,
instead, of what we may call synthetic chemistry, which conceives of chemical changes
as the combination of different elements to form different substances, the most
paradigmatic example of which is the combination of oxygen and hydrogen to form
water, which lay at the heart of Lavoisier’s work.
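In the modern notation which Lavoisier’s reform ultimately made possible, though the formulae themselves came later, with Dalton and his successors, this paradigmatic case reads simply:

```latex
2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O}
```

Nothing here is transmuted: two elements combine to form a compound, from which the same elements can in principle be recovered by decomposition, which is the whole of the synthetic paradigm in a single line.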
What was truly transformational about this paradigm shift,
however, was the world it opened up. For once the new paradigm began to be
accepted, it naturally generated its own questions. How many of these new fundamental
elements were there, for instance, and what were their differentiating characteristics?
How were they structured and how did they combine with each other? Did their
structure determine which other elements they could combine with and could
their structure even be deduced from the combinations that already existed? Whether
singly or in combinations, these questions then led to the development of such
concepts as ‘atomic weight’ and ‘atomic number’, as well as the creation of the
periodic table, the chemical equivalent of the taxonomic tables of natural
history. They also led to our concepts of such things as molecules and
subatomic particles and the forces which hold matter together. In short, what
Lavoisier gave us was not just the kernel of a new language of chemistry, but
the linguistic space in which much of the language of all science was able to
develop.
2.
The Limits of Creativity
At this point, therefore, I was fairly confident that I had
enough historical evidence to support my thesis that all the really major
developments in our various ways of thinking have been discontinuous, involving
imaginative leaps rather than the steady accumulation of knowledge. I was also
confident that I could extend this paradigm to cover the arts, especially
the visual arts and music, although, in both of these cases, that which precipitates
the discontinuous change is, of course, different.
After all, neither a school of painting nor a particular
musical form is susceptible to being undermined by the discovery of exceptions
to their rules in the external world. In fact, in some cases, I suspect that
what brings a particular form of music to an end is that its exponents
eventually exhaust all the possibilities inherent within it, with the result
that nothing really new can be produced and the form simply falls out of
favour.
There are, however, cases in which an art form is clearly
impacted and can, indeed, be brought to an end by external events, usually of a
historical or cultural nature. In the 1970s, when I was writing this thesis, I
was very fond, for example, of the English music of the late 19th
and early 20th centuries: music by composers such as George
Butterworth, Frederick Delius and, of course, Ralph Vaughan Williams. Originating
in the latter stages of Britain’s industrial era, but with titles such as ‘On
Wenlock Edge’, ‘A Shropshire Lad’, and ‘The Lark Ascending’, it is music which not
only evokes the English countryside and landscape, but is, as a result, quite unashamedly
nostalgic, with long sustained chords which swell and then die away again as if
lamenting the loss of an earlier, simpler age. With its soft, pastoral
lyricism, however, alternating between joy and melancholy, it was a style of music
which simply could not survive the first world war. After the carnage of the
Somme, where George Butterworth was actually killed in the summer of 1916, no new
English composer could ever write such music again.
Whatever the cause of a way of thinking or form of
expression coming to an end, therefore –
whether it be through the accumulation of exceptions to a theory eventually making
that theory unworkable, or as a result of the possibilities inherent in a
musical form simply becoming exhausted or going out of style – I was convinced that
the underlying pattern was the same: a particular model or paradigm would be
born, flourish for a while and then find itself replaced by something
completely different. All that was left, therefore, was to understand a little
more about the creative processes which brought new paradigms into being and determine
what limitations, if any, these processes might have: two questions which I
very quickly realised were closely related. For however momentous a change of paradigm
might be in terms of its consequences, as illustrated by the examples of both Copernicus
and Lavoisier, what those who bring these changes about actually do may well be
very small.
Yes, the removal of the earth and its inhabitants from the
centre of God’s creation had a profound effect on how we were now forced to
think about ourselves; but it didn’t introduce any new concepts or involve the
creation of any new methods or techniques. Yes, Copernicus had to generate hundreds
of calculations to demonstrate that his new heliocentric model of the solar
system more accurately predicted the position of the planets than the old
geocentric model. But this was just the simple application of a branch of
mathematics that had been known and understood since the 3rd century
BC.
Similarly, Lavoisier’s replacement of the old transformational
paradigm in chemistry with the new synthetical paradigm may have opened up a
whole new world of scientific possibilities, but the synthetical paradigm
wasn’t, in itself, new. In fact, we use it every day in the kitchen whenever we
take a set of ingredients and combine them to produce a different recipe. Even
the idea that there is a set of fundamental elements which are the building
blocks of this synthesis wasn’t new. The concept of the atom was first
introduced by the Greek philosopher Democritus in the 5th century
BC. It was then taken up by Epicurus, whose empiricist philosophy was strongly
championed by Robert Boyle, thereby making the concept of the atom known to the
modern world.
Even that massive transformation in the way in which we
think which marked the transition from the medieval to the modern era during
the 16th and 17th centuries didn’t actually introduce
anything new. After all, the concept of causality, which replaced the concepts
of sympathy and antipathy as the primary form of connection between objects in
the physical universe, is not only central to our everyday lives, but is also
what Kant called a category of thought, which is to say part of the way in
which our minds work, such that the proposition that ‘everything that happens
has a cause’ is not just a law of nature but a characteristic of any world of
which we could conceive.
For those who have not read Kant and need a little help with
this most fundamental of Kantian concepts, imagine, if you will, a world in
which things just pop into existence, ex
nihilo, without anyone or anything causing this to happen. By definition,
we would call such events miracles, being beyond our comprehension, not because
the universe doesn’t produce things ex nihilo – we could never know whether it did or not – but because we could
never make sense of it. Our brains simply don’t work that way.
The problem, of course, is that our brains also rebel
against this. The idea that reality is not the way it is because it is reality,
but because it is the way in which our minds both perceive it and conceive of
it is too much for most people to take, especially those with a down-to-earth,
common sense approach to life. It is also why, of all the paradigm shifts and
intellectual revolutions that have occurred during our history, that wrought by
Kant in philosophy is not only the least understood but has almost certainly
gained the least traction. Even within philosophy itself, in fact, it is still
not widely or fully accepted. Indeed, one gets the impression that most people
don’t even want to entertain it. For the idea that everything in the universe
about which we are most certain –
from the fundamental laws of logic and the axioms of mathematics to our
concepts and, indeed, perceptions of space and time – are not derived from the universe, itself, but
emanate from ourselves, and that the universe, as it exists in itself, may not therefore
be quite as we imagine it, is so mind-boggling that, if you think about it for
long enough, it can almost send you mad. I know because I have been there.
In the context of this essay, it is somewhat fortunate, therefore,
that I do not actually have to convince the reader of the soundness of Kant’s
arguments, at least not in the way I felt obliged to do so in the context of my
thesis itself. For while I am myself convinced that our inability to think
outside our basic programming or operating system – as one might think of Kant’s categories of thought – determines the limits
of our creativity, making it just as impossible for us to formulate a
scientific theory which posits the creation of objects ex nihilo as
it would be to create a musical form without a temporal dimension, there is
actually a much simpler argument which more or less leads us to the same
conclusion. For as illustrated by the examples of Copernicus and Lavoisier
above – and,
indeed, every other discursive discontinuity I have ever come across – human creativity is
less of a leap into the unknown than a kind of a sideways shuffle. For when an
existing paradigm ceases to work for us, we no more create something out of
nothing than does the phenomenal universe. What we do is look into our
intellectual toolbox to see if there is some other concept we already have that
we could apply in this situation. Indeed, it’s why new paradigms are so often expressed
– at least initially
– in the form of metaphors,
which take an idea previously applied in one context and apply it, not
literally but analogously, to another.
Indeed, had I explored this idea more fully at the time, I
now realise that my characterisation of creativity would not only have been
more complete but would have also helped me understand much that has eluded me
over the last forty-odd years. It is therefore with some regret that I now
recognise that I took the wrong path, especially as I did so because I became
fixated on something else: something that was just as important and from which
I also gained a lot, but which I probably wouldn’t have gone anywhere near had
it not been for a chance encounter with a certain Dr. Dan Dennett and his damn
wasp.
3.
Dan Dennett and his Tropistic Wasp
Dan Dennett is an American philosopher who was touring
Britain at that time to promote a new book, a copy of which I am fairly sure I
bought in the days or weeks following our meeting but which I cannot find on
any of my book shelves, which not only means that I either lost it or lent it
to someone who never gave it back, but that I cannot give you a more precise
reference. Not that it actually matters, in that the passage to which I shall refer
shortly so shocked me when I first encountered it that it is indelibly engrained
in my memory.
It happened at one of the regular meetings of my
university’s Philosophy Society, which took place around a dozen times a year
and usually involved a paper given by a guest speaker, followed by an open
debate among those attending, after which members of the philosophy department,
including postgraduate students, would take the guest speaker out to dinner. The
evening on which Dan Dennett came to give a reading from his new book, however,
was one of the few occasions on which I actually declined the dinner
invitation, having completely lost my appetite, both for food and the company
of my colleagues. For within about five or ten minutes of Dr. Dennett beginning
his talk, I knew that I had a problem, one which is probably best illustrated
by recounting Dr. Dennett’s own description of the behaviour of a particular
wasp.
Because I no longer have a copy of the book, I cannot tell
you which species of wasp it actually was, only that it was not one which
nested in colonies. I know this because each female of the species excavated
her own nest in which to lay her eggs, after which she would go out hunting for
grasshoppers or locusts, which she would sting and paralyse, but not kill, so
that they would remain alive and fresh in order to provide food for her
offspring during their larval stage. She would then bring the paralysed
grasshopper or locust back to the nest and leave it on the threshold while she
first checked inside to ensure that everything was as it should be. Satisfied
that all was well, she would then come outside again, retrieve her prey and
drag it into the nest.
What entomologists discovered, however, was that if, during
the time the mother spent checking the nest, they moved the paralysed locust a
few inches away from the entrance, on coming back outside, she would drag her
prey back to the entrance once more before going back inside to check the nest
again. If, while she was inside, they then moved the paralysed locust once again,
on coming back outside, she would repeat the process once more. And, as long
as they kept moving the locust, she would go on doing this indefinitely, time
after time, trapped inside her own programming.
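Purely by way of illustration, and with no claim to reproduce either Dennett’s own wording or the real entomology, the routine the experimenters exposed can be caricatured in a few lines of code, in which every name is invented for the sketch:

```python
# A caricature of the wasp's fixed action pattern. Illustrative only:
# nothing in the routine remembers that the nest has already been checked.

def provision_nest(prey_still_at_threshold):
    """prey_still_at_threshold() reports whether the prey is where it was left."""
    cycles = 0
    while True:
        cycles += 1
        print(f"cycle {cycles}: drag prey to the threshold, go inside and inspect the nest")
        if prey_still_at_threshold():
            print("all is well: drag the prey inside")
            return cycles
        # the prey has been moved, so the whole routine starts again from the top

# Simulate an entomologist who moves the prey during the first three inspections.
moves_remaining = 3
def prey_check():
    global moves_remaining
    if moves_remaining > 0:
        moves_remaining -= 1
        return False
    return True

provision_nest(prey_check)   # runs four full cycles before the prey is finally taken in
```

The point of the sketch is simply that the loop has no second level: there is nothing in it that can stand back and ask whether the checking itself has become pointless.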
Given that I had already accepted that we, too, are limited
by our programming, I therefore immediately recognised the implications this
had for my own work. In fact, Dr. Dennett spelt it out for me during his talk.
For his argument was that, while our programming may be more sophisticated than
that of the wasp, given that all living creatures are finite organisms,
determined in their behaviour by their neurophysiology, we are all, therefore, more
or less tropistic in the same way as the
wasp, the difference being one of degree, not of kind. And it was in this latter
point that my real difficulty lay. For if I accepted that the difference was
only one of degree, such that, fundamentally, our thoughts and behaviour are
still fully determined, then, in any real or meaningful sense, discursive
discontinuities cannot exist and any capacity for creativity we might have is
far more circumscribed than I had previously allowed. If, on the other hand, I
tried to argue that the difference was actually qualitative, rather than merely
quantitative, I would then have to specify in what this difference lay. And I
strongly doubted that arguing that, unlike wasps, human beings have souls was
going to get me awarded a Ph.D. in philosophy. In theology perhaps, but not
philosophy.
I was thus caught on the horns of a dilemma, unable to win
either way, unless, of course, I could think of a third option. As a result, I
spent the next six months or so scouring through our collective intellectual
toolbox to see what other concept we had that might serve my purpose. It was
Dan Dennett’s description of the wasp, itself, however, that actually gave me
the clue I needed. For when I heard him describe the way in which entomologists
revealed the wasp’s tropism, I, like most of his audience, instantly underwent
a change in perception. Up until then, I had viewed the wasp’s behaviour as
that of a mother making provision for her offspring. The way in which she went
about this was a little gruesome; but this didn’t make it any less maternal. Then,
however, as she repeats herself over and over again, one’s perception changes.
This isn’t a mother looking after her children any more; this is a robot.
Indeed, she actually ceases to be a ‘she’ and becomes an ‘it’: a change in kind
if ever there was one.
Of course, Dan Dennett would argue that this change in our
perception is just that: a change in perception rather than a change in the
nature of the object, and it is our fault, therefore, if we had previously been
projecting some kind of anthropomorphic interpretation on to this object’s
behaviour. This, however, misses the point. For even if, up until the moment of
revelation, we had been wrongly interpreting the wasp’s behaviour as that of a
mother looking after her unborn offspring, it is the wasp, itself, which
reveals to us its true nature. What’s more, this change in our perception is
both involuntary and instantaneous, strongly suggesting, therefore, that the
distinction we draw between those living beings that are like us and those that
are not is not the
result of a process of reasoning. We don’t think about it, weigh the evidence and
then decide that one living being belongs in category A while another belongs
in category B. It is rather something we actually perceive in the object. And
what we perceive in the case of a human mother looking after her child, or
indeed a canine mother looking after her litter, is that there is someone
there, what Martin Heidegger called Dasein,
whereas what we perceive in the case of the tropistic wasp, or indeed any automaton,
is that it is merely a machine inhabited by no one.
Nor are the implications of this change of perception
limited to the moment of perception itself. For when we apprehend another as Dasein – as ‘someone there’ – implicit in this apprehension is the recognition,
not just that they can see us –
an ability which an automaton may also possess – but that they will reciprocally apprehend us as Dasein. For apprehending others as Dasein is an attribute of Dasein: something which only Dasein can do and which, being Dasein, we therefore must do if we,
ourselves, want to be apprehended as Dasein
by others, as we most certainly do.
Indeed, not being apprehended as Dasein by others whom we apprehend as Dasein stirs in us the most powerful of emotions. A good example of
this is when someone allows or, indeed, forces ideology to supersede or
override perception in order to see others as
racially inferior and thus not quite human. So objectionable is this
that, as I have explained elsewhere, being looked down on in this way can often
generate a hatred so bitter and all-consuming that it can actually destroy the lives
of those who succumb to it. What’s more, because the other does not regard the
object of their disdain as Dasein,
they render themselves not-Dasein,
with the result that we no longer regard them as Dasein, thereby creating a vicious circle of mutual denial which
can easily lead to violence.
Of course it may be doubted that all this does flow just from
our failure to reciprocally apprehend others as Dasein, with the result that one may suspect that other factors are
involved. To demonstrate that this is not the case, however, contrast the above
response with how we react when another does not apprehend us as Dasein, not for ideological reasons, but
because the other is not, itself, Dasein,
and is therefore incapable of apprehending us as such. A good example of this
is the way we feel when watching sci-fi films in which the enemies of the human
protagonists are robots or automata, to which our typical response is not
hatred but fear. This is because our apprehension of another as not being able
to apprehend us as Dasein has the
further implication that they will therefore not be constrained in their behaviour towards us in the way in
which others whom we do apprehend as Dasein
would be.
This is because the apprehension of another as Dasein has yet further implications for
us, not the least of which is how we are constrained to treat them. For knowing
that another is able to see us in the same way we see them instantly raises the
question as to how we want to be seen, the answer to which, of course, is
‘favourably’. This then leads us to examine our behaviour and look at ourselves
in the same way that others see us, with the result that acting well towards
others and winning their approbation becomes a major issue for us. It is not
just that we do not want to be seen as acting dishonestly or with moral
disregard for others, it is rather that we do not want to actually be someone who
acts in this way, with the further consequence that we actually get angry with
ourselves if we do something shameful or otherwise let ourselves down even when
we are on our own.
With
all of these implications, both emotional and moral, flowing from our
perception of others as either belonging to the category of Dasein or not as the case may be, the
idea that the distinction between the two, along with our ability to make this
distinction, should merely be a matter of degree, rather than a fundamental difference
in kind, which this distinction itself expresses, thus seems almost laughable. What
made me even more certain of this when writing it, however, was the realisation
that both the modification of our behaviour as a result of our
apprehension of others as Dasein and
our shift to a new paradigm when we realise that we have been thinking about
something in the wrong way are based upon a common foundation. For they both
rely on an extraordinary ability we have to stand back from ourselves and view ourselves
critically. And it is precisely this, or rather the consciousness on which it
is based, that the tropistic wasp so clearly lacks and which causes us to say
of it that there is no one there.
4.
Our Incorrigible Misconception of the Nature of
Truth
In fact, so excited was I by this discovery and the fact
that I was also fairly certain that no one had ever put these two things – creativity and morality
– together before, that I even changed the
title of the thesis to reflect this coupling, certain as I also was that this
would prove to be my clinching argument.
Not, of course, that it actually turned out that way, not
least because, given this latest twist in my already meandering philosophical
journey, I don’t think that any of the three examiners at my viva voce were completely sure what it
was I was trying to prove, let alone whether I’d actually proved it. They
certainly didn’t think that I had proved the thesis which I had originally set
out to prove: that all major creative developments involve the creation of a
new language in which to express them. Indeed, one of the examiners thought
that my dissertation, in itself, proved the opposite. For by bringing together
the ideas of Kant, Foucault, Kuhn and Heidegger in a way he thought was
interesting, original and hence creative, what I’d actually demonstrated in his
view was that, while revolutionary changes in direction may occasionally occur
in both the sciences and the arts, creativity is more often than not a matter
of synthesis.
Even more importantly, none of the examiners thought that I
had actually proved Dan Dennett wrong. For while we may regard those who
possess consciousness as belonging to a different order of being, and while
this consciousness may be the basis of our critical self-awareness, both with
respect to creative problem solving and our behaviour towards others, it is nevertheless
just another layer of programming laid down in the synaptic pathways of our
brains.
In fact, at one point in the discussion, when it was
becoming fairly obvious that I was losing the argument, I really thought they
were going to fail me, with the result that it actually came as something of a relief
when, in his summary, the chairman of the panel said that, for all their
criticisms, the panel felt that, all in all, the work I had produced was a
highly commendable effort to reconcile the irreconcilable, by which he meant, of
course, these two completely different ways we have of perceiving ourselves:
one from the inside, from which perspective we experience ourselves as
thinking, feeling, daydreaming etc.; the other from the outside, from which
less privileged perspective all we see is the firing of neurons within our
brains, giving us two different descriptions of what we believe to be one and the
same thing but which have absolutely nothing in common.
Much as I was relieved to be given this somewhat backhanded compliment,
however, I don’t think I ever thought that reconciling these two different
descriptions of ourselves was what I was actually trying to do. After all, what
would such a reconciliation even look like? Nor was I trying to do what Kant
did, which was to simply place these two phenomena in two separate phenomenal
worlds, both of which are at least partly constructs of our own minds. After
all, had I wanted to do this, I could have just slapped a copy of Kant’s
collected works on the table and said, ‘There’s your answer!’ The problem,
however, was that while I was fairly sure that what I was trying to say was far
less fundamental and far-reaching than what Kant was saying, its precise articulation remained stubbornly
out of reach.
In fact, over the intervening years, my sense of failure at
not having made a more coherent and convincing contribution to the intellectual
space in which I have lived for most of my life on the one occasion I was given
the chance to do so has kept me coming back to this subject time and time again
and has even prompted me to include some aspects of it in some of the more
recent pieces I have written for this blog. Much to my surprise, moreover, this
doggedness may have finally borne some fruit. For in one or two of my more
recent essays I have begun to sense the dawning of a question I now feel I
should have asked all those years ago, both in my thesis itself and of my
examiners, and which, inchoate though it still is, might yet yield the answer
which has eluded me for so long.
Stated as plainly as possible, this question is: ‘Why, of
the two ways we have of perceiving ourselves, do we think that one is more real
than the other?’ For it is this belief, of course, that is at the heart of Dan
Dennett’s reductionist materialism: that what we experience as thinking,
feeling, daydreaming etc., is just our way of internally perceiving what is
going on, but that what is really going on is the firing of neurons. But why do
we believe this?
Part of the answer, of course, is that we not only think of
our experience of ourselves as subjective but as essentially unknowable by
others. No one else, for instance, can feel the pain I am currently experiencing
in my left knee or know how bad it is. With the right equipment, however,
anyone could observe both the inflammation in and around the joint and the effect
this inflammation is having on my pain receptors. Whereas my pain is just my
body’s way of letting me know that there is something wrong with my knee, it is
these latter observations of what is actually going on that we therefore regard
as the objective truth.
Not only does this grossly misrepresent the language in
which we express and share our experiences, however – a subject to which I shall return later – but it also fails to
recognise that our everyday use of the term ‘objective truth’, or indeed just
‘truth’, is a little misleading. This is because most people believe that it is
always and only used to mean a state of correspondence between a statement
about the world and a state within the world. If I say, for instance, that,
among other pieces of furniture in my living room, there is one sofa and one
armchair, and you go into my living room and see that it contains exactly one
sofa and one armchair, you will probably conclude, therefore, that what I said
was true, in that my statement corresponds to what is actually the case.
The problem with this correspondence model of truth,
however, is that we are not often in a position to make this kind of comparison
and when we are, as in the above example, the question at issue is usually
fairly trivial. More complex questions of truth are not usually quite as susceptible
to being determined in this way.
Take, for instance, a trial in a court of law, where
establishing the truth would seem to play a crucial role in determining the right
outcome for the case. However, it is not quite as simple as that, not least
because, if a case has come to trial, it is unlikely that anyone actually saw
the accused commit the crime. And even if there is a witness, unless he or she actually
filmed the offence being committed, the jury still cannot see for themselves the
correspondence between the witness’ statement and what actually happened, with
the result that the criterion of truth most often applied to witness statements
is that of credibility.
Of course, there may also be physical evidence, such as
fingerprints, which represent another form of correspondence: not the correspondence
that exists between a statement and a state in the world, but rather a correspondence
between things. Even in the case of fingerprints, however, the matching of
which is highly reliable, there may be a perfectly innocent explanation as to
why the accused’s ‘dabs’ should have been found at the crime scene. The result
is that, while juries are asked to form an opinion as to the truth of the
matter, the model of truth they apply, whether they recognise this or not, is
not correspondence but consistency, in that what they decide is which version
of events – that
presented by the prosecution or that offered by the defence – they think is most consistent
with all the available evidence.
And the same is true with respect to science. For although we
mostly think of science as being based on empirical evidence in the form of
observations and measurements etc., the collecting of this data, as I observed
earlier, is only one part of the scientific process, and is often the most
trivial part. The more difficult part is the formulation of a theory to explain
the data. And the model of truth we employ in this regard is again that of
consistency, especially when dealing with competing theories, when, as in a
court of law, the question we have to ask ourselves is again which theory is most
consistent with all the available evidence.
When applied within science, however, this consistency paradigm
of truth is not confined merely to the relationship between a theory and the
data which may or may not support it. For any new theory we formulate must also
be consistent with every other scientific theory we hold to be true, with the
result that unification theories, such as James Clerk Maxwell’s classical
theory of electromagnetic radiation, which was the first to describe
electricity, magnetism and light as different manifestations of the same
phenomenon, are usually regarded as being among science’s highest achievements.
It is also why the 17th century view of science as
a gradually accreted edifice is at least partly correct. For science does build
upon itself. It is just that most of the building blocks are not immutable laws,
but far less trustworthy theories, which, in some cases, are held to be true, at
least in part, because they are consistent with some other theory which is held
to be true and which may or may not be any more firmly grounded. Either way,
this then creates a dependency. For if a theory is held to be true largely
because it is consistent with another theory that is held to be true, should
that other theory be proven false, the first theory will also be brought into
question, as will any technologies based on it.
Take, for example, the technologies which allow others to directly
observe my injured knee –
X-ray machines in the case of the knee itself; MRI scanners in the case of my
central nervous system –
both of which depend on various scientific theories, both to explain how the
technologies, themselves, work, and to explain why we should believe their
outputs to be true representations of my various body parts. If any of these
underlying theories were proven false, therefore, this would then raise the
question as to how sure we would be that the images obtained using these
technologies were still valid.
Of course, some may argue that this couldn’t happen for the
simple reason that, if the underlying theories were false, then the
technologies wouldn’t work. The fallacy of this argument quickly becomes apparent,
however, if one considers that the same reasoning was almost certainly applied
to the theories behind medieval medicine, which must have worked some of the
time – or, at
least, given that impression –
or physicians would never have been paid or employed in the first place. The
fact that a technology works –
or at least seems to work –
does not therefore prove that the theory upon which it is supposedly based is true.
Even more importantly, however, the theoretical basis of the
technology, whether it has been proven false or not, also raises the question
as to which paradigm of truth we should apply to the technology’s outputs, in
this case X-ray images. For far from allowing others to ‘directly observe my
injured knee’, as I stated above –
which would have made the relationship between the X-ray and my knee one of correspondence
– the
interpretation of the slightly darker area around the joint as an area of inflammation
is actually based on a huge amount of theoretical knowledge, covering such
matters as the way in which X-rays are generated, how strongly they are
absorbed when passing through materials of different
density, and the effect they have upon materials that are photosensitive. A
radiologist does not need to personally possess all this knowledge to interpret
the dark area on the image as an area of inflammation, but he will have been
taught at some point that this is the best interpretation consistent with
everything we know about human physiology and theoretical physics.
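To give one concrete example of the kind of theoretical knowledge involved, an example of my own choosing rather than anything drawn from a radiologist’s syllabus, the standard attenuation law states that the intensity of a beam falls off exponentially as it passes through matter, at a rate set by the material’s attenuation coefficient:

```latex
I = I_0 \, e^{-\mu x}
```

where I_0 is the intensity entering the material, x the thickness traversed and μ the attenuation coefficient. It is only because μ differs for bone, muscle and inflamed tissue that the different shades on the film can be read as anatomy at all: the image is already an interpretation made through that body of theory.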
Thus, we come back to consistency. Consistency in this case,
however, is not the consistency of a statement with the evidence, or the
consistency of one theory with another, but the consistency of a statement – that this dark area
indicates an area of inflammation –
with all the relevant theoretical knowledge. And it is for this reason that
none of the theories underlying X-ray and MRI technologies are likely to be
proven false any time soon: not because the technologies work, but because the
theories themselves are part of a unified belief system which, for the most
part, is internally consistent, such that, even if one
or more of the theories were found to have more exceptions to it than is
generally regarded as acceptable, we would still not reject it. We would either
formulate additional, subsidiary theories to explain the anomalies and
exceptions or – if
this proved impossible –
we would simply leave it as an outstanding problem to be solved in future, in
the way in which the anomalous orbit of the planet Mercury – which does not behave
in accordance with Newton’s theory of gravity – was simply left for decades until
Einstein came along with a new theory, based on a different paradigm, to fix
the problem. Except he didn’t. For the orbit of the planet Mercury does not conform
to Einstein’s theory either.
In fact, according to the philosopher of science, Paul
Feyerabend, science is so riddled with such unresolved discrepancies that many
of its most fundamental theories should have been discarded years ago. The
problem, however, is not just that no one has yet come along with new paradigms
which would eliminate the anomalies, but that any new paradigm is likely to be
inconsistent with the rest of our unified scientific belief system, with the
result that it could cause the whole system to collapse in the same way that
the medieval belief system collapsed: something which, having invested so much
in it, we simply could not allow. So, imperfect and full of holes as the system
is, we keep on shoring it up, using work-arounds and patches to get as close to
the empirical evidence as we possibly can while pretending that nothing is
wrong.
5.
Recognitional & Metaphorical Truth
My purpose in examining the nature of scientific truth in
this way, however, was not merely to disabuse the reader of the naïve belief
most of us have that our scientific description of the universe, including my
left knee, corresponds to the way the universe is in itself, but also to
compare the model of truth upon which science actually rests, i.e. consistency,
with the model of truth we implicitly apply to statements about our personal
experience. For that we do, indeed, apply a model of truth to such statements is
clear from the fact that, all things being equal, we tend to believe people
when they tell us that they have a pain in their left knee.
Sometimes, of course, this is based on external evidence. If
we ask someone what’s wrong when we see them wincing every time they get up out
of a chair and they tell us that they have hurt their knee, assuming that they
have no reason to lie, we generally take this to be a statement of the truth,
in that it is consistent with what we have been witnessing. In this instance, therefore,
we are again applying the consistency paradigm of truth.
However, there is another paradigm we use in this context
which one might call the ‘recognitional’ paradigm of truth. In this instance,
the injured person says that he keeps getting a ‘sharp’ pain to the inside
front of his knee, especially when he puts any lateral weight on it. You
recognise his description of the pain as something you too have experienced in
the past and so you respond by saying: ‘Yes, I’ve experienced that too. Even if
you don’t remember, you’ve probably twisted your knee at some point, straining
the lateral collateral ligament. My doctor told me to simply keep the knee as straight
as possible for a while, until the inflammation receded’.
If this strikes a chord, it is because we use this recognitional
paradigm of truth all the time when talking about our experiences, from
physical sensations to a whole range of emotions: something we are only able to
do, however, as a result of two conditions being fulfilled.
The first of these is that we all have the same experiences.
If we didn’t, we wouldn’t have a common experiential language or, indeed, an
experiential language at all. For languages only develop when people have
things in common to talk about. If they didn’t, they wouldn’t know what the
words spoken by another referred to. Suppose, for instance, that there are
people who do not feel pain. You tell one of them that you have a pain in your
left knee and he says: ‘Pain, what’s that?’ The mere fact that, except in very exceptional
circumstances, this doesn’t happen and that people are able to share and talk
about their experiences implies, therefore, that we do all have the same
experiences.
The second condition, however, is a little more tricky, not
because it hasn’t been met –
because we know that it has, otherwise we would not have a common experiential
language – but
because it is a lot more difficult to explain how this is so. This is because the
second condition is that we have a way of developing this experiential language
that ensures that everyone uses the same words to refer to the same
experiences.
Not, of course, that this is particularly unusual. For it is
a necessary condition of any language that everyone uses the same words to
refer to the same things. If they didn’t, communication would fail and the
language would break down. This is why most languages are self-regulating or
auto-corrective. If I go into a greengrocer’s shop with the intention of buying
four oranges, for instance, but ask for four apples, the chances are that what
I intended to communicate will not have been conveyed, a fact which will quickly
become apparent when the greengrocer starts putting four apples into a bag.
‘No,’ I say, ‘I want apples’, pointing at a pile of oranges. ‘Those?’ the
greengrocer asks, to which I nod my assent. ‘Those aren’t apples,’ he goes on,
‘those are oranges’. ‘Ah,’ I say, thanking him for correcting my mistake, which
not only enables communication between us to occur but actually maintains the
integrity of the language.
The problem in the case of our experiential language,
however, is that it can neither be created nor auto-corrected through ostensive
definition, which is to say by pointing. In addition to us all sharing the same
experiences, in teaching and learning our experiential language, therefore, we also
rely on two further factors which turn this otherwise impossible task into what
is merely a life-long journey, the first of which is the often overlooked but
crucial fact that we don’t just communicate in words. We say as much through
our behaviour and our facial and bodily expressions as we do through language.
What’s more, most of us are fairly good at reading each other in this respect. We
see someone wrinkle their nose at a bad smell and because we smell the same
thing and have the same response we know exactly what they are experiencing. We
may even shake our heads at each other in agreement. The problem, however, is
how to turn this into language. For while this may not be particularly
difficult with respect to smells –
we can simply hold our noses and say ‘Pooh’ – it is a lot more difficult to mime anxiety or grief.
In fact, even today, when we already have a fully developed
language of the emotions in place, we are not very good at learning it. Few
people attain proficiency while many remain almost wholly inarticulate in this
regard. The reason for this is that attaining proficiency is determined by
three main factors: an interest in understanding ourselves, which partly arises
as a result of having already attained some level of self-understanding; a
willingness to be honest with ourselves rather than hide behind clichés and
superficial answers; and the existence of an already proficient linguistic
community which will not only correct us when we make mistakes using the
language, but will encourage us both to be honest with ourselves and to continue upon our life-long journey towards self-knowledge.
In fact, along with having the same experiences and being
able to communicate non-verbally, having someone who is already proficient in
the language to help us express ourselves is the third key factor in enabling
us to learn the language at all: a requirement that is so obviously circular that it raises the question of how the language developed in the first place, before this linguistic community was formed.
The answer, of course, is that both the language and the
linguistic community must have evolved together over time, starting from what
was probably a very low base, which may, indeed, have consisted in little more
than holding our noses and saying ‘Pooh!’ By pressing the limits of this
language, however, it seems that we were somehow able to develop it into
something much more expressive and sophisticated. The question is how? How did
we do this? Or rather, how do we do this? For if the language
evolved over time, the chances are that it is still evolving and that, whether or not we are explicitly aware of it,
we have all probably had some experience of it. Indeed, it is probably quite
commonplace.
Imagine, for instance, that you are in a situation in which
you want to say something but you are not quite sure what it is. You try saying
it in a number of different ways but none of them seem quite right. What’s
more, the person you are talking to clearly doesn’t understand what you are
trying to tell him either. He looks at you with a totally blank expression,
which makes you feel even more frustrated. In fact, you are almost on the point
of giving up when you decide to give it one last try. ‘Look,’ you say, ‘think
of it like this…’ and, in that moment, not only do you, yourself, feel that you
may have finally nailed it, but you start to see a flicker of recognition on
the other person’s face. ‘Yeah,’ he says, ‘I’d never thought about it before,
but now that you put it like that, I think I know what you mean!’
Because this experience really is quite commonplace, it is
easy to overlook its significance. But it is actually quite extraordinary. For
by pressing the limits of what we are currently able to say, we not only manage
to say – and hence
think – something
we had not been able to say or think before, but we are able to get someone
else to think something they had not been able to think before. In fact, it is
only when we get someone else to ‘see’ what it is we are trying to say that we
actually know that it makes sense and that we are not totally crazy.
If gaining the recognition of another is thus a condition of
this kind of linguistic development –
in that, without this recognition, there would be no communication and the
development would be stillborn –
it is also a condition of the truth of what is being said or, at least, of its
coherence or sensibleness, in that, if it did not make sense, the other would
not recognise it. Thus, in part, what we have here is another example of the
recognitional paradigm of truth. Unlike
my earlier example, however, in which, based on our common experiential
language, we merely recognised the truth of someone else’s description of a
pain we too have experienced, this new example has an additional dimension. For
in the previous example, we were effectively matching descriptions: the other’s
description of his pain compared to the description we would have given if
asked to describe our pain. As in the case of both correspondence and
consistency, therefore, truth, under this application of the recognitional
paradigm, is a relationship: not this time between a statement about the world
and a state in the world, but between two statements. In this new example,
however, the hearer of what we are trying to say would not have been able to
state what this was until he recognised it. He is not, therefore, comparing
statements. What he is actually doing is participating in a creative act, which
is not a relationship at all but an event.
In a short essay entitled ‘On the Essence of Truth’, Martin
Heidegger, in fact, specifically describes truth in this sense as something
that happens between two or more people, one of whom is a speaker or writer who
is struggling to say something which is at the limit of what he is able to
express, while the other or others are readers or listeners who have not
thought about this area of human experience before, and for whom it is therefore
entirely new, but who, on hearing what the speaker has to say about it,
recognise both the experience itself and the truth of what they are being told,
thereby bringing this truth into the realm of their shared language and making
it possible for them to talk about it.
While this would therefore seem to perfectly accord with
what Kant said about genius, what it is also important to note about Heidegger’s
description, however, is that while ‘On the Essence of Truth’ was originally
published in a collection of four essays which are largely focused on the
poetry of Friedrich Hölderlin –
a poet who is generally recognised as one of the great geniuses of German Romanticism
– the creation of
truth as an aspect of this communal development of the language is not confined
merely to those rare utterances of genius we all recognise as such. For, if it were
so, language would take so long to develop that it would be static most of the
time and its discontinuous leaps would certainly not be commonplace. More to
the point, for language to develop in this way, the whole linguistic community
has to be more or less in accord, as it is, for instance, whenever an audience
laughs in unison at a comedian’s joke while telling each other that it’s so
true. Indeed, laughter is one of the most common ways we have of expressing the
pleasure we take in this kind of recognition, even if the truth the joke
reveals is so embarrassing that it causes us to hide our faces behind our
hands, thereby both acknowledging it to be true while simultaneously attempting
to deny our acknowledgement.
Not, of course, that this explains how any of this happens. Nor
does it explain why it continues to happen rather than the language becoming
fixed and stable. Is it because the world we inhabit is continually changing, thereby
exposing us to new experiences for which we have to find new forms of
expression? Or is it because language, itself, eventually becomes tired and
hackneyed and has to be refreshed? Or is it, perhaps, a little of both, as T.S.
Eliot seems to suggest in the following passage from ‘Four Quartets’, in which,
on the subject of writing poetry, he writes that every new attempt…
Is a wholly new start, and a different kind of failure
Because one has only learnt to get the better of words
For the thing one no longer has to say, or the way in which
One is no longer disposed to say it. And so each venture
Is a new beginning, a raid on the inarticulate
With shabby equipment always deteriorating
In the general mess of imprecision of feeling,
Undisciplined squads of emotion. And what there is to conquer
By strength and submission, has already been discovered
Once or twice, or several times, by men whom one cannot hope
To emulate—but there is no competition—
There is only the fight to recover what has been lost
And found and lost again and again: and now, under conditions
That seem unpropitious. But perhaps neither gain nor loss.
For us, there is only the trying. The rest is not our business.
What I have always found remarkable about this passage is
the sheer level of gloom Eliot seems to have derived from writing poetry, which
is in marked contrast to the level of joy Heidegger clearly derived from
reading Hölderlin. Not, of course, that this is entirely surprising. For it may
simply be a reflection of the essential difference between reading and writing,
in that readers are highly unlikely to feel anguish or despair simply because they
fail to understand a particular passage in a poem by a favourite poet,
especially if they are reading it for the first time. Nor is their failure to understand
one passage likely to spoil the pleasure they get from reading other passages
where they and the poet are more on the same wavelength. For a writer struggling
to articulate that which remains stubbornly out of reach, however, there is
always the fear that his meagre efforts may fail to convey what he is trying to
say, thereby leaving his audience perplexed or even questioning his sanity.
In the context of the current essay, what is also quite
remarkable about this passage is the way in which it implicitly differentiates between
experiential truths –
in this case the experience of trying to write something which continually
eludes one’s grasp –
and the truths of science, for instance. For whereas scientific truths only have
to be discovered once and are then available to everyone for all time, the
business of articulating what it is like to be a human being has to be repeated in
every generation, not only because writers of previous generations may no
longer speak to us, their concerns not being ours, but because it is only by
trying to articulate these truths ourselves that we truly come to possess them.
The most remarkable thing about this passage, however, is not
something Eliot actually says, but something he does. For while it may not be
immediately apparent –
it being deceptively simple –
in talking about trying to get the better of words in order to say
something we had not been able to say before, and describing this as ‘a raid on
the inarticulate’, he actually shows us how we do it. This is because our primary
use of the word ‘inarticulate’ is as an adjective. We say that someone is
inarticulate when they are not good at putting things into words. By placing a
definite article in front of it, however, and talking about ‘the inarticulate’, Eliot turns it into a
noun, which, in conjunction with the word ‘raid’, actually becomes the name of a
place, in that only places, lands or realms can be raided. What’s more, because simply removing the negative prefix ‘in’ yields its opposite, the word ‘inarticulate’ implicitly invokes ‘the articulate’ as well, thereby implying the existence of two realms: the realm of all the things that can be said and the realm of all the things that can’t, an opposition which
inevitably leads us to imagine our raid upon the latter as a raid upon some
foreign land from which we steal some inarticulable treasure, which we then
bring back into the realm of the articulate so that we may then speak of it.
Simply by placing a definite article in front of what is primarily
an adjective, Eliot thus creates an immensely powerful metaphor: one, moreover,
which perfectly expresses what he was trying to convey because we, for our
part, instantly ‘get it’, even without my heavy-handed exposition. Indeed, my
main purpose in providing this exposition was not because I thought it was needed
in order to get others to understand what Eliot is saying, but rather in order
to reveal how he actually achieves this little act of creative genius: not by
creating something out of nothing, but by pressing the existing language into
new service in a way very similar to that which Lavoisier employed when he took
existing concepts out of our intellectual toolbox and repurposed them to
provide him with a new way of talking and thinking about the basic building
blocks of the material universe.
6.
The Dangers Inherent in an Overly Restrictive
Definition of Truth
Although I hope that I have thus lent at least some credence to Heidegger’s definition of truth as an event, something that happens and brings truth into being, what I am not suggesting, of course, is that we use this
definition to replace our existing paradigms of truth, i.e. correspondence and
consistency. Even less am I suggesting that truth is in any sense personal, as some people seem to imply when they talk about ‘their’ truth. Indeed, the
idea that one could have a personal truth makes about as much sense as the idea
that one could have a private language, not least because talking about truth
as an event, as I also hope I have made clear, is really just another way of
talking about the development of language within a linguistic community, which
only happens, of course, when it works, when a writer uses language to say
something new which his audience then recognises as the truth, making the
audience the arbiters of both the truth and this new linguistic development.
My point is rather the very simple one that there is more than
one valid paradigm of truth and that we need to understand which paradigm we
are using in any given context in order to ensure that we are using it appropriately.
If I am building a bridge, for instance, I want my design engineer to use
engineering principles which have proven themselves sound over hundreds of
years rather than a clever metaphor, no matter how apposite the metaphor may
be. If I want to know what it’s like to be a human being, however, I’d much
rather listen to a poet or comedian than to a neurophysiologist. The problem is
that throughout most of our history, not only have we failed to distinguish
between different paradigms of truth, but have often, as a consequence, applied the wrong paradigm to the context at hand.
A perfect example of this, to which I referred earlier, is
the Roman Catholic Church’s denial of the empirical and mathematical evidence
supporting Copernicus’ heliocentric model of the solar system, and its
insistence, instead, on what might be regarded as the metaphorical truth of
humanity’s centrality in God’s creation, something to which I shall return shortly.
Having literally removed ourselves from the centre of the
universe, however, we then proceeded to remove ourselves morally from the
universe altogether by denying the truth of that which makes us unique – our consciousness,
creativity and moral awareness –
insisting instead that, in reality, we are fundamentally no different
from Dan Dennett’s tropistic wasp. By dehumanising ourselves in this way and depicting
ourselves as mere biological robots, like millions of other species, we thus no
longer saw ourselves as Dasein and
could consequently behave towards ourselves without moral restraint, with the
result that, during the 20th century, we were able to murder more than a hundred million people in the concentration camps, gulags and killing fields of the world’s totalitarian regimes.
Nor is this the limit to the violence we have wrought upon
ourselves by insisting that there is only one kind of truth. For by building modern
culture on the reductionist materialism to which this inevitably gave rise, we have
undermined and, in many cases, completely forgotten the metaphorical truths we
had accumulated in our long history of myth-making and story-telling, in which
the values and collective wisdom of our historical culture were distilled. This
culture, which combined the classical mythology of the Greco-Roman world, the
prescriptive morality of Judaism and the redemptive promise of Christianity,
along with a whole panoply of pagan myths, has gradually been eroded not just
because, by believing that there is only one kind of truth, we no longer
believe that metaphorical truths are actually true, but because, in order to
defend itself against this inexorable trend, Christianity, which was at the
core of this culture, foolishly decided to go along with it by denying that its
own truths were actually metaphorical.
In order to see the truth of this, as well as to understand both its contradictory
nature and self-destructive consequences, consider once again the
Judaeo-Christian account of creation, in which the fashioning of human beings in
God’s own image and our endowment with consciousness was the culmination of God’s
labours, making us quite clearly the centre of His entire project, which, for
the pre-Copernican mind, must have been a fairly terrifying thought. For while
it placed mankind in a position of importance, to be the only living beings
with self-awareness while also being the focus of God’s attention would have
given anybody the heebie-jeebies, especially as, being the only creatures able to make judgments about how we behave rather than simply acting on instinct, we would have felt overwhelming pressure to behave well.
Looked at in this light, the Judaeo-Christian creation myth
is thus not really about creation at all; it’s about us and our relationship to
the creator and hence His creation, with all the responsibility for its
stewardship which this entails. Indeed, had the Irish comedian Dave Allen ever
told this joke, he’d have had us all simultaneously laughing and weeping at the
absolutely intolerable position in which God placed us. And it is in this, indeed,
that the truth of the story lies. What’s more, it may even have been why some
members of the Roman Catholic Church chose to oppose the Copernican revolution of the 16th century. For if they really understood the story’s
implications, which I am fairly sure that some of them did, they would have
known that as a side effect of removing human beings from the centre of the
universe and placing us on the third planet orbiting our sun, Copernicus was
also telling us that we were not that important, that what we did didn’t really
matter that much, and that we could therefore do more or less whatever we
liked.
Although some churchmen would have understood this, however,
I am also fairly certain that this was not the actual reason why the Church
opposed Copernicus’ heliocentric model. For had this been the reason, they
surely would have known that insisting that the Judaeo-Christian creation myth
was literally true –
in the face of all empirical and mathematical evidence to the contrary – was about the worst strategy they could have employed. A far better option, indeed, would have been to congratulate Copernicus on correctly locating the earth in relation to the sun and the other planets, but to maintain that the fundamental truth contained
in the ancient Hebrew texts, that God created the earth as a home for those he
created in his own image, remained unchanged. In this way, they would have
conceded the literal truth to Copernicus while maintaining the metaphorical
truth of the biblical text. The very fact that they insisted that the story was
literally true, therefore, strongly suggests that they did not make this
distinction. And if they did not make this distinction, it is almost certainly
because they only had one paradigm of the truth, with the result that while it
may have been the metaphorical truth of the story they wanted to maintain, it
was its literal truth they were forced to defend.
Indeed, one might even describe this as Christianity’s great
paradox: that a religion based on a collection of canonical texts, most of which
are made up of stories, parables, poems, prophecies and songs, almost none of
which can be regarded as literally true, nevertheless found itself forced to
argue that that is what they actually were. For in a world in which there was
only one paradigm of truth, to not claim that the Bible was true in this sense
would have been to concede utter defeat. As the scientific discoveries of the post-medieval
era gradually chipped away at what literal truth the Church could still cling
to, however, it merely condemned itself to a gradual decline, with the further
sorry consequence that that vast store of metaphorical truth upon which churchmen of all denominations have drawn in crafting sermons for hundreds of years, helping to
guide their congregations through the vicissitudes of life, has also been steadily
expunged from our collective unconscious, to be replaced in large part, one
suspects, by the metaphorical emptiness of Hollywood, with its constant
outpouring of dystopian fantasies and end-of-the-world disasters, all of them
portending our imminent demise.
Nor is even this the end of the destructiveness which our failure to recognise metaphorical truth and our consequent embrace of
reductionist materialism continues to wreak upon us. For with nothing to live
for beyond the material benefits which our hugely successful scientific culture
has admittedly brought us, our nihilism is once again taking us to the brink of
nuclear annihilation, the consequence of which may not just be that all
consciousness is removed from the earth, such that no one will ever again
contemplate its magnificent beauty, but that all consciousness is removed from
the universe. For while, on the basis of the Drake Equation, scientists speculate
that there may be billions of planets capable of supporting sentient life forms
in our galaxy alone, as I have argued elsewhere, one only has to tweak the
parameters of this equation a little to turn those billions into just one,
especially when one considers just how extraordinary our consciousness is. For
as we can see all around us here on earth, not all life forms need to develop
consciousness in order to thrive. In fact, there are millions of successful
species that have not evolved beyond the level of Dan Dennett’s wasp and only
one that has evolved to be like us: us. And what is true with respect to the
earth might also be true with respect to the universe, with the result that if, as
a consequence of our lack of reverence for ourselves, we end up destroying
ourselves, we may not simply be removing consciousness from the earth, we may,
in fact, be removing it from all existence.
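To make the arithmetic behind this claim a little more concrete, the Drake Equation in its standard form is simply a product of seven factors estimating the number of communicating civilisations in our galaxy; the figures below are purely illustrative assumptions of my own, not estimates drawn from any particular astronomer, chosen only to show how drastically the result collapses when a single poorly known factor is reduced:

\[
N \;=\; R_{*}\,f_{p}\,n_{e}\,f_{l}\,f_{i}\,f_{c}\,L
\]

where \(R_{*}\) is the rate of star formation in the galaxy, \(f_{p}\) the fraction of stars with planets, \(n_{e}\) the number of potentially habitable planets per such star, \(f_{l}\) the fraction of those on which life arises, \(f_{i}\) the fraction of those on which intelligence evolves, \(f_{c}\) the fraction of intelligent species that develop detectable communication, and \(L\) the average lifetime, in years, of such a civilisation. Taking, hypothetically, \(R_{*} = 2\), \(f_{p} = 0.5\), \(n_{e} = 1\), \(f_{l} = 0.5\), \(f_{i} = 0.1\), \(f_{c} = 0.1\) and \(L = 10^{6}\) gives

\[
N = 2 \times 0.5 \times 1 \times 0.5 \times 0.1 \times 0.1 \times 10^{6} = 5{,}000,
\]

whereas lowering \(f_{i}\) alone from \(0.1\) to \(2 \times 10^{-5}\), on the grounds that the evolution of consciousness may be far rarer than we like to assume, gives

\[
N = 2 \times 0.5 \times 1 \times 0.5 \times (2 \times 10^{-5}) \times 0.1 \times 10^{6} = 1.
\]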
But what of God? you ask. He would still be there to look
down upon his creation, even without the continued existence of those for whom
He created it, or at least He would be if God, Himself, of course, were not also
a metaphor: one which we created, not just so that we would not be alone in the
universe, the only conscious beings, but to express the limits of our knowledge
and understanding. For we know that we do not know how or why the universe was
created and that, for us, such things are essentially unknowable. Believing,
however, that someone must know, we posit His existence. In doing so, however,
we also know that any being so fundamentally different from us that He knows
the answers to these questions must also be essentially unknowable, with the
result that God both fills the void beyond our knowledge and makes it all the
more stark.
Preferring not to face this void, we therefore pretend that
it is not there, that Kant was wrong and there are no limits to our knowledge
and understanding. And to prove it, we have consequently rebuilt the universe
on the basis of what we do know and understand and tell ourselves that this is
all there is, that our scientific reconstruction is the only truth. In our
ignorance, arrogance and folly, however, we include ourselves as part of this
universe and thus deny what we are and what is so unique and precious about us.