Saturday 26 February 2022

The Eyes of Another, Self-consciousness & Morality


1. The Apprehension that there is ‘Someone There’ as the Basis of All Morality

There is a passage in ‘Being and Nothingness’ in which Sartre describes a man kneeling at a keyhole in a hotel corridor, so utterly engrossed in what he is watching that he is totally oblivious to everything else, including himself, until he feels the presence of someone else watching him and is suddenly brought back to himself in a torrent of shame.

Phenomenologically, this little tableau has at least three significant elements. The first, of course, is the fact that the eyes of another can do this to us, and do it instantly. For this is not the result of any inductive or deductive process by which we are led to conclude that the featherless biped standing just ten feet away from us and looking directly at us is a fellow human being who almost certainly makes the same judgements about the world that we do, and who is very probably thinking, therefore, that just a moment ago we were engaged in doing something shameful. It is far more immediate and far less rational than that, in that our apprehension of another as someone who can ‘see’ us is not a reasoned conclusion based on empirical evidence but the instantaneous recognition of a different category of being – different, that is, from all the other objects around us – the instantaneity of this apprehension being very probably the result of millions of years of evolution, during most of which time anything capable of watching us was almost certainly a predator.

This then leads us to the second important phenomenological observation to be made upon the above scene, which is not just that we can inhabit two such very different states of attentiveness – one which is conscious of itself as acting in the world; the other which is so absorbed in what it is doing that this consciousness is lost – but that we do not choose which state we are in and have no control over it. In fact, for the most part, the unself-conscious state happens automatically whenever we need to concentrate on some particularly demanding task or when something so completely grabs our attention that we become entirely focused on it.

In a similar vein, we do not then choose when we lose this focus. For evolution and the need for self-preservation have ensured that no matter how concentrated our attention may be on something, any change in our environment will instantly bring us back to ourselves. This even includes such subliminal changes as a slight dimming of the light, for instance, making us aware that we have lost track of time and need to go and pick up the kids from school.

The most important of the three phenomenological components in the above scene, however, is the fact that while, in his unself-conscious state, our voyeur was not aware that he was engaging in something shameful, he is immediately made aware of this as soon as he is brought back into a self-conscious state, strongly suggesting, therefore, that moral consciousness is a function of self-consciousness and that it is only in our self-conscious mode that we have moral agency.

That is not to say, of course, that when we are totally absorbed in something and lost to ourselves we automatically behave immorally. In fact, during most of the time we spend in our unself-conscious mode, we are so engrossed in what we are doing that we don’t do anything else at all. For the most part, however, it is only when we let ourselves go in this way that we occasionally do things which we later regret and of which we are consequently ashamed, thus not only making self-consciousness a necessary condition of moral integrity – our lapses only occurring when we are without it – but casting a slightly different light on the proposition that it is our apprehension of others as what Martin Heidegger called Dasein – a term which, in German, literally means ‘being there’, but has more the connotation of ‘someone’ being there – that is the basis of all morality.

I say this because when this proposition is usually discussed, the focus is almost invariably on the constraints which this apprehension places upon us with respect to how we may behave towards others apprehended in this way. What is seldom discussed is the effect this apprehension has on how we think about ourselves and how this consequently affects our behaviour. The last time I wrote about this in any depth, for instance, I concentrated almost entirely on the prohibition against killing, using a scene from the film ‘All Quiet on the Western Front’ to illustrate what happens when we fail to apprehend others as Dasein, unwittingly, as a consequence, breach the moral constraints which this apprehension places upon us, and are then belatedly confronted with what we have done: a sequence of events which is depicted with exceptional intensity in Lewis Milestone’s First World War epic due to the fact that the central character is a German sniper who, up until this point, had been shooting at targets in the French trenches with apparent equanimity, but who then finds himself trapped in a shell hole in no-man’s-land in the company of a French soldier whom he fights and kills, only to then be overcome with horror at having taken the life of a fellow human being.

This then instantly raises the question as to why he didn’t feel this way when he was shooting at targets in the French trenches. Did he not know that they were fellow human beings? Of course he did. This therefore suggests that what we are dealing with here are two different types of knowledge, or two different meanings of the verb ‘to know’. When the sniper was shooting at targets in the French trenches, yes, he ‘knew’ that they were human beings, but in a very limited, entirely abstract way, at the level only of words. When he kills the French soldier in the shell hole, however, not only is he subjected to a tsunami of sensory data which proximity to his victim forces upon him but, alone in the crater, he is also forced to confront the fact that a little while earlier, in the person of this French soldier, there was someone else there with him. And now there isn’t. There’s just a body. And it is he who has done this terrible thing.

While this therefore presents us with a prima facie case for believing that our moral consciousness may indeed be based on our phenomenal experience of others as Dasein and that it is this that places moral constraints upon us with respect to how we must behave towards them, it does not, however, explain why we feel shame when someone observes us doing something shameful. This, however, is where Sartre’s Peeping Tom comes in, along with the wider implications which our apprehension of others as Dasein would appear to have, not with regard to those so apprehended, but with respect to ourselves. For unlike many other species which predominantly react to being watched with fear and alarm – a response to which, under certain circumstances, we are also prone, resulting in a state of panic – under other, more normal circumstances, when the eyes of another merely make us conscious of ourselves, the knowledge that there is someone there who can see us raises a completely different issue: that of how we want to be seen. For as is clearly demonstrated in Sartre’s example, it is not the voyeur’s behaviour itself which causes him to feel ashamed. If it were, he would have been ashamed of himself whether or not he’d been caught. What makes him feel ashamed is being seen as something he does not want to be seen as: a Peeping Tom.

Nor is this dependent on the other actually making or expressing any such judgement. For while the eyes of the other may have brought the voyeur back to himself, they would not have made him feel ashamed if he, himself, did not already feel such behaviour to be shameful. While we usually think of moral judgements as judgements made about others, what this also means, therefore, is that our most important moral judgements are actually those which we make about ourselves, not through the medium of reasoned argument based on moral principles, but through the mortifying experience of our two most powerful emotions of self-censure: guilt at the wrongs we do to others and shame at the wrongs we do to ourselves.

2. The Normative Development of Morality and the Various Ways in which it can Go Wrong

That’s not to say, of course, that the expressed judgement of another never affects us or may not lead us to modify our behaviour. For while all morality may have its foundations in our apprehension of others as Dasein – thereby constraining us to treat them in certain ways while leading us to seek their approval of our own behaviour – in an ever-changing world in which we could only experience road rage once we’d invented the motorcar, the specific details of how we are to behave in any given situation can only be arrived at through a continuous process of consensus building in which we all affect each other.

Imagine, for instance, that we are in a restaurant and that something happens which causes me to express my irritation at one of the waitresses in a less than courteous manner. Suppose, also, that there is a member of our party whom I particularly want to impress. Now imagine that after my outburst, I turn my attention back to the table and see this person looking at me in a way which not only makes it clear that she sees me as an arrogant, overbearing bully but as someone who lacks the necessary self-control to resolve such situations in a more thoughtful and, ultimately, more effective way.

This scenario differs from the example of the voyeur in two significant respects. The first is that I did not initially see anything wrong with my behaviour. If I had, I’d have been self-consciously aware of it as soon as I turned back to the rest of the group. It is only the expression on the face of this one person that pulls me up short in a way which might even have caused me to respond with a belligerent ‘What?’ had she not been the person I was hoping to impress. It is the fact that I want her approval, however, that now makes me reassess my behaviour, not to the point, perhaps, at which I’d be willing to admit that I behaved badly – for no one likes to admit that they are wrong – but perhaps to the point at which I might be willing to accept that I may have been a bit hard on the poor girl. For in marked contrast to the scene in ‘All Quiet on the Western Front’ in which the German soldier undergoes a moment of revelation which changes his entire life – what little of it he consequently has left – in our everyday lives, the ways in which we affect each other are usually far less dramatic and more subtle, resulting in small, incremental shifts in our moral sensibilities, which are no less significant for all that. For by us all working upon each other, whether through dark looks of disapprobation or appreciative smiles of gratitude, we all contribute to the shaping of our moral culture.

This, however, constitutes something of a problem both for morality, itself, and for moral philosophy. For while societal agreement on standards of behaviour is not incompatible with all morality being based on our apprehension of others as Dasein, it is clear, both from history and from studies in anthropology, that this kind of normative development can give rise to some very different moral codes with significant variations between them. This, in turn, has led to the widespread belief that there is no absolute basis for morality, that all moralities are merely societal norms developed for the smoother and more harmonious functioning of society, and that, without any absolute basis on which to judge them, they can only be judged – if at all – on how well they succeed in attaining this goal.

By relativizing morality in this way, however, and ruling out the possibility that some moral codes might be ‘morally’ superior to others – this very idea having now been rendered meaningless – not only are we in danger of divorcing morality from our phenomenal experience but we run the risk of turning it into an intellectual exercise in social engineering based, not on morality, but on some abstract theory about how society should be organised – which is to say an ideology – which almost invariably ends in disaster. For the main difference between morality and ideology, of course, is that being based on abstract ideas rather than phenomenal experience, an ideology has no reason to recognise the constraints placed on us by our apprehension of others as Dasein and so seldom if ever does.

An obvious example of this, of course, is Nazi Germany, which combined Germanic mythology with National Socialist utopianism to promise its people a better future than the decade of privation they had suffered in the 1920s as a result of losing the First World War. In order to make this promise, however, the Nazis not only had to construct a vision of this glorious Third Reich which was to last a thousand years – something which they did with all the theatricality of a Hollywood epic – they also had to explain Germany’s recent failures. And this they did by blaming them on the nation’s internal enemies: traitors and parasites who betrayed the Fatherland for their own personal gain and therefore had to be rooted out and removed from the body politic to allow the new Germany to emerge.

While this whole ideological narrative was both rousing and plausible, it consequently required that normal moral constraints be put aside with respect to the groups chosen to be the scapegoats. In particular, it required the abrogation of the fundamental moral principle that everyone be judged as an individual rather than an instance of a type: a principle which follows directly from our apprehension of others as Dasein in that, being an experiential phenomenon, it is always itself the apprehension of someone as an individual. The assignment of a particular individual to a type or class, in contrast, is an intellectual exercise which abstracts away from the individuality of each particular case and looks instead for commonalities across all members of the type. To judge someone as a member of a class, therefore, is not to judge them as Dasein, or even to apprehend them as such, and cannot therefore constitute a moral judgement. This is why racism is so fundamentally wrong. Because to judge someone on the basis of their race is to judge them as an instance of a type rather than as a unique and distinct individual. In order to get their people to overcome what would have been their normal moral scruples in this regard, therefore, the Nazis actually had to prevent them from apprehending members of the groups in question as fellow human beings: something which was both difficult and outrageous, requiring as it did a propaganda campaign in which Jews, Gypsies and homosexuals were systematically portrayed as sub-human or even vermin to be eradicated.

While Nazi Germany’s descent into the moral abyss was thus ideologically engineered, throughout history there have been numerous other morally objectionable cultures which have developed quite organically, particularly in highly stratified societies in which the existence of a lowly servant class has always presented a problem to those they have nominally served. For servants, of course, have eyes: a fact which has usually led the ruling class to adopt one of two strategies in order to either overcome or live with the tyranny of constantly being watched.

The most obvious solution, of course, is to do what the Nazis did to the Jews. For someone rendered sub-human, which is to say not-Dasein, is no longer able to ‘see’ us, a feat which is best achieved, therefore, by dehumanising them as much as possible: shaving their heads; not allowing them to wash; starving them until they are just skin and bone; and forcing them to wear filthy and degrading uniforms. The problem with this, however, is that it would not have been particularly palatable to the slave owners of most societies who merely wanted to use their non-persons as domestic servants. Something similar, however, can be achieved a lot less malodorously by simply forcing one’s slaves to avert their gaze whenever they are in one’s presence and punishing them severely for any contravention of this rule, their subsequent tendency to cower further adding to their sub-human standing.

In furtherance of this strategy, one should also avoid giving one’s servants names or, if one does, one should give them all the same name, thereby stripping them of their individuality. A similar effect can also be achieved by dressing them in identical liveried uniforms and powdered wigs, as was very popular in the 18th century. Above all, however, one must avoid having any relationship with them at the personal level. One should never show them kindness or even acknowledge their presence except to give them orders, and one must never allow them to express or display any familiarity towards oneself.

That such great care has to be taken to maintain this artificial distinction between those who can ‘see’ us and those who cannot is testimony, of course, to the fact that even those imposing and enforcing this distinction must know it to be a lie. For as with all forms of self-deception, we always know what we are hiding from ourselves, the more dreadful the secret, the harder we have to work to keep it suppressed, with the result that, in this case, the level of self-discipline required to maintain this distance between masters and servants could hardly have been much less than the level of self-discipline necessary to pursue the alternative strategy: that of preventing the eyes of one’s servants from ever putting one to shame by ensuring that one’s own behaviour is never shameful. Indeed, so similar are the levels of self-control required to pursue either of these two strategies that it is probable that in some societies, such as Victorian England, for instance, they actually coexisted, the masters of Victorian households being doubly protected from their servants, firstly by the maintenance of a strictly formal relationship in which nothing personal ever passed between them, and then by the masters acting with perfect correctness at all times, the phrase, ‘not in front of the servants’, being a very common admonishment for any member of the family who ‘let the side down’.

The moral philosophy which is most closely associated with this second kind of self-discipline, however, is, of course, Stoicism, which, rather unsurprisingly, was given its first expression in Hellenistic Greece and went on to flourish in imperial Rome: two very wealthy and highly stratified, slave-owning societies in which a small minority of aristocrats or oligarchs may have owned most of the wealth but enjoyed almost no privacy, least of all in Rome, where the entire households of wealthy patricians, including household slaves and freedmen, were more like extended families than households in the 18th century, say, and were actually referred to, in their entirety, as a man’s familia. What’s more, the heads of these households, each known as the pater familias, were legally responsible for each of their familia’s members, in that if a member of a familia were found guilty of a crime in a court of law, they would then be handed over to the pater familias to administer whatever punishment the court decreed, including capital punishment. This meant that a pater familias could find himself in the position of having to execute his own son or have his whole familia banished and outlawed, making it hardly surprising, therefore, that such men should choose a moral philosophy which advocated mindfulness and self-control as a way of avoiding any moral lapse that might undermine their moral authority. For it was only by the moral example which they, themselves, set that they could ensure that members of their familia would always act with courtesy and respect towards anyone with whom they had to deal, thereby not only ensuring that their households were respectful and harmonious places in which to live, but that, as a result of the entire familia’s self-discipline, they might never have to exercise the dreadful responsibility to which they were legally bound.

What is truly remarkable about Stoicism, however, is that it actually was a matter of choice. No one ever forced anyone to become a Stoic. People chose to live a Stoic life because they believed it was the best way to live a ‘good’ life. Not being based on any religion and not, therefore, having any ‘revealed’ moral laws imposed on it from outside, it was also one of the few moral codes to be developed entirely by consensus over time, if, that is, a consensus was ever reached. For over the six hundred years or so during which it dominated moral philosophy throughout the Greco-Roman world, hundreds of books and letters were written by Stoic scholars, each one contributing to an ongoing debate which never really ended. Indeed, most of the books and letters which survive today do so because they were copied and circulated and used as the basis of discussions which not only kept the question as to how one should live one’s life at the forefront of Stoic minds, but made the study of moral philosophy, itself, an important aspect of that life.

3. Self-consciousness, Sex and Misogyny

While Stoicism understood that in order to exercise moral agency one first had to exercise self-control – a requirement which, in turn, necessitates a state of almost permanent mindfulness or self-consciousness – what it failed to fully recognise was the fact that, by inhibiting spontaneity, for instance, self-consciousness can, itself, be the cause of problems, both for the individual who ‘suffers’ from it and for society as a whole.

One of the most obvious examples of this concerns sex, which we have inherited from our animal ancestors who, we assume, engaged in the sexual act without self-consciousness. Not so, however, we human beings, who are made more than usually self-conscious by the intimate attentions of another. And while for women, this can seriously impair their enjoyment, for men it can be an absolute disaster. For by being even moderately conscious of themselves during sex, their attention is not entirely focused on that which would otherwise cause them to be aroused, with the result that they then become even more conscious of their own performance – or lack thereof – effectively trapping themselves in a self-perpetuating cycle of self-conscious dysfunctionality which gets worse every time it occurs. For men who suffer from this problem quite naturally approach sex with a certain degree of trepidation, worrying that it is going to happen again, which, of course, ensures that it does.

How widespread this problem may be is hard to say, not least because most men are too ashamed to talk about it and very probably believe themselves to be alone in their misery. One indication that it is far more common than one might think, however, is the fact that there have been many cultures which have developed in such a way as to more or less ensure that it doesn’t happen, primarily by objectifying women. For if women are reduced to mere sex objects, which can be seen but cannot see, then they cannot emasculate men with their withering looks.

Nor is this merely a recent phenomenon, as manifested in such modern cultural expressions as lap dancing clubs and online pornography. In fact, in some ancient cultures, the reduction of women to the mere bearers of men’s children would appear, on the surface, to be far more dehumanising than almost anything they may be paid to do today. In two of the Middle East’s oldest religions, for instance, Zoroastrianism and Judaism, women were not even regarded as having souls: something which, if we really thought about it, I suspect we would have a great deal of difficulty getting our heads round today, not least because it is hard to imagine how a man could love his wife if he didn’t actually see her as Dasein, or how she could love him. What this really tells us, however, is not just that when most if not all marriages were arranged, the last thing marriage was about was love, but that when we look back at ancient cultures – and even some that are not so ancient – we need to leave our modern preconceptions behind. For while we might find them incomprehensible, somehow these cultures worked. Otherwise, they would not have survived. Which means that, even if women weren’t accorded the same status we accord them today, they didn’t find their lives so intolerable that they couldn’t live them. In fact, not being accorded the same status as men and not being apprehended by men as Dasein might even have been to their benefit.

I put this idea forward because I want you to imagine what would have happened, three or four thousand years ago, if a man, whose marriage was not based on those close bonds of intimacy and affection which we otherwise know as love, failed to perform in the bedroom. Unable to explain what had happened, distressed and even a little ashamed in front of his family and friends, how do you suppose he would have reacted? Would he or his family not have been tempted to jump at the most favourable explanation available, even if that explanation meant branding his wife as a witch who had cast some spell or put the evil eye on him? And under such a threat, is it not likely that a culture would have developed in which, when called upon to do their marital duty, women simply averted their gaze and allowed their husbands to do as they must?

Not, I hasten to say, that I am actually advocating this as a solution to the problem, not least because there are far better solutions available, one of which, of course, is that marriage be founded upon close bonds of intimacy and affection. Nor am I saying that such a culture would have been brought about intentionally. For if one of the purposes of this cultural development was to prevent the problem of male sexual dysfunction from occurring, rather than to solve it, then those who benefitted from it, both men and women, could not know that this was why things were the way they were. I would, however, suggest that a cultural solution which prevents the problem from occurring is better than having to deal with the problem after it has occurred, not least because the latter approach exposes women to extreme danger and can actually lead to them being hated by men.

To understand this properly, however, one first has to understand the nature of hatred, itself, which, as I explained in ‘The Phenomenology and Politics of Hatred’, is not just a strong dislike. Even less is it a politically incorrect stance taken for whatever reason against a protected group. It is a very specific and very strong emotion which many people find it almost impossible to control, not least because, in its more chronic rather than acute forms, it can become an obsession, like a sore which the sufferer feels compelled to scratch even though it inflames it all the more. It also has some very surprising characteristics. Like envy, for instance, with which it is closely associated, it is an upward looking emotion: something which, initially, most people find it very difficult to accept. For in addition to denying – both to ourselves and to others – that we are even capable of this most unacceptable of all emotions, in the deepest recesses of our lizard brains, where we hide all those things we do not want to know about ourselves, we naturally tend to think that we look down on those we hate and don’t want to believe that, in order to hate someone, we have to believe that they are, in some way, better than us. For this makes hating them seem even worse. And yet it is true.

In fact, it is one of three interrelated cognitive components which make hatred what it is and without which it cannot arise. These are:

1. The belief that the hated person or group is in some way superior to ourselves, even though this is invariably denied.

2. The belief that the hated person or group has used their position of superiority to in some way harm or disadvantage us.

3. The belief that, from their superior position of advantage, the hated person or group looks down on us with a downward looking emotion such as contempt or disdain.

If we now map these three cognitive requirements onto the likely cognitive state of a man suffering from sexual dysfunction, then the possibility that he might feel hatred towards the woman he feels ‘caused’ his problem, and possibly towards all women, is fairly obvious.

1. For of course, she is superior to him. She is sexually functional. He is not.

2. And of course, she is the cause of his dysfunction. After all, she is the one for whom he failed to perform.

3. And of course, she looks down on him with disdain. For he failed to do what men are supposed to be able to do. Which means that he is not a man. And this is what is so dreadful about his condition.

What it is important to note here, however, is that it is not necessary for all or even any of the beliefs fulfilling the three cognitive conditions to be true in order to inspire hatred. It is only necessary that the person doing the hating believe them to be true. Thus, it is very possible, in this case, that the woman does not look down on the man with disdain. But just as in the case of the voyeur, who did not need the other to actually accuse him of doing something shameful in order to feel ashamed, so the sexually dysfunctional male does not need the woman to actually show him contempt in order to feel contemptible and project the cause of this feeling upon her.

Nor does it actually help if she explicitly denies feeling any contempt for him and expresses sympathy, for instance. For this merely confirms her superiority, in that he is the one for whom she is feeling sorry and she is the one providing the compassion or pity, both of which are, again, downward looking emotions. In fact, if the man in receipt of such compassion is in any way disposed towards violence, this may well send him over the edge, as is brilliantly portrayed in the 1977 film, ‘Looking for Mr. Goodbar’, starring Diane Keaton, who plays a school teacher in New York with a congenital disease, the emotional effects of which she tries to overcome by leading an increasingly risky sex life, picking up men in bars and taking them back to her apartment for one-night stands, until one night she picks up the wrong guy: a man who is unable to get an erection. Diane Keaton’s character naturally tries to say all the right things: that it doesn’t matter; that it happens to every man occasionally; and that they don’t have to have sex, they can just cuddle. She even puts her arms around him in an attempt to comfort him. And then he kills her.

4. Shyness, Unrestrained Licence and The Bully

Fortunately, such extremely violent responses to sexual dysfunction are very rare, not least, one suspects, because most men who find themselves in this position are sufficiently sensitive to others to know that the woman is not to blame and that the fault is in themselves. Unfortunately, such knowledge is also capable of sending them into a downward spiral of despair that is more likely to end in suicide than murder, as is another condition caused by acute self-consciousness.

This is chronic shyness, which, for most people, begins in early childhood where its original cause, if there ever actually was one, is lost to memory, leaving only the same self-perpetuating cycle of trepidation and failure experienced by the sexually dysfunctional. The difference, of course, is that, in the case of shyness, the anxiety generated is not just felt with respect to one activity, but with respect to almost anything we may be called upon to do in front of others, even something as seemingly innocuous as participating in a simple conversation. So nervous are we that we are going to stumble over our words, say something stupid, knock something over and generally embarrass ourselves, that this, of course, is precisely what happens.

So debilitating can the anxiety become that it can even make us physically sick. The result is that we avoid social situations as much as possible, prop up the wallpaper in the corner of the room when we can’t, and become even more socially inept in the process, making it seem highly unlikely, therefore, that shyness, even at its most extreme, could have any broader consequences for society. After all, the chronically shy person is hardly likely to take out his misery on others through acts of violence. The problem, however, is not the acts of violence which the shy person is likely to perpetrate, but the acts of violence which shyness tends to attract from someone we call a bully. Indeed, it is this attraction to the shy, the timid and the unassertive that actually defines the ‘bully’ as what he is, his predator’s instinct instantly picking out the weakest member of whatever population he has available to him to prey on.

In this regard, the rather trite saying that, ‘If you stand up to a bully, he will back down’, is correct, but for the wrong reason, it being generally believed that, underneath all the swagger and the bluster, the bully is a coward who takes out his inadequacy on people who can’t fight back. This, however, is not the reason he backs down when confronted. It is rather that a fight is not what he is looking for. What he wants is someone he can subjugate and keep in his thrall. And if you stand up to him, then you are not that person. Even more importantly, he wants someone over whom he can assert his power on a continual basis, day after day, week after week. And the key to this is not violence – even though violence may sometimes be necessary – but fear. In fact, the bully has to be very careful about how much violence he actually administers. For if it is too persistent or causes his victim real harm, then the latter may finally rebel, fight back, or even harm himself in an attempt to remove himself from the bully’s clutches. If the bully wants to keep his victim in a permanent state of submissive bondage, therefore, a far better strategy is one of gratuitous unpredictability, in which the victim never knows what is going to happen next, his abuser treating him one day like a family pet, ruffling his hair and even protecting him from the predatory attentions of others, the next day hitting him for some trivial infringement of one of the arbitrary rules he has himself set, thereby keeping his victim constantly off-balance.

If gratuitousness is thus an essential weapon in the bully’s arsenal, it is also an essential factor in why he does what he does in the first place. For power is not complete if it is driven by necessity or even logic. If the school bully beats up another boy to steal his lunch money, for instance – as school bullies are often portrayed as doing – without other conditions being met this is just a case of robbery. In order to turn it into a case of bullying, not only would the bully always have to pick on the same boy, but he must not actually need the money he steals. Ideally, in fact, he should either give it away, in a gratuitous act of generosity, or spend it on something he neither wants nor needs. This would then make the act entirely gratuitous, not just from the perspective of the victim, who will not know what to expect next, there being no sense to any of it, but also from the point of view of the bully, himself, who is free to take the money or not take the money as the whim takes him, unconstrained by necessity, reason and, most importantly of all, of course, morality.

For unlike the German sniper in ‘All Quiet on the Western Front’, the bully doesn’t do what he does to his victim because he fails to apprehend him as Dasein. Where would be the fun in that? You cannot torment and terrorise someone who isn’t there. The bully knows that his victim is Dasein. He can see the fear in his eyes. Unlike most people, however, he feels no moral constraints as a result of this apprehension. And it is this that gives him his power. For he can do things that other people can’t. It is also what makes bullying so addictive. For once the bully has felt this power – this total freedom from all moral restraint – he has to do it again, and again, and again, if only to prove to himself that he can, that he is free to do whatever he wants, whenever he wants.

It is also what makes bullying contagious. For seeing someone act with such unfettered freedom attracts followers: others who are similarly unconstrained by the apprehension of others as Dasein but who, up until now, had been restrained by the normative rules of society and the disapproving eyes of others. Being in the presence of someone who does not disapprove, however, sets them free, giving them, too, licence to do whatever they want, whenever they want. In fact, together, they not only mirror and hence reinforce each other’s licence but make themselves more or less immune to the eyes of others, being able to meet any disapproving stare with the time-honoured challenge of ‘What are you looking at?’.

Indeed, it is this that largely keeps them together. For while they may not be friends, and may not even enjoy each other’s company, together they are stronger than each of them would be on their own and will therefore remain part of the gang until they eventually turn on each other, as, without any moral constraints upon them, they are eventually bound to do.

This semi-permanence, in fact, is one of the main characteristics that distinguishes a gang from a mob, the latter only ever coming into being on discrete occasions for specific purposes. The other main difference is that the licence which the mob gives itself to depart from normal standards of behaviour is deemed to be righteous, its righteousness being derived from whatever specific purpose the mob is pursuing, whether it be the burning of a witch, the lynching of a paedophile or a protest against police brutality and racism. Being deemed righteous, moreover, the resulting licence tends to be even less restrained and, indeed, even more gratuitous than that given to itself by any gang, encompassing such wanton acts as the burning of cars, the looting of stores, assaults on anyone who tries to stop these things from happening, and ultimately, of course, the killing of whoever the mob has specifically targeted, whether this be done in person, as in the case of a lynch mob, for instance, or vicariously, as in the case of the crowds which gathered at Tyburn in London during the 16th and 17th centuries to watch public executions.

Depending on the notoriety of the condemned and the nature of the execution, these crowds could number in their thousands and comprised both men and women and even children, all of whom would take the day off, not just from work but from normal moral constraints, in order that they might enjoy exactly what the bully enjoys: the terror and the torment of the designated victim, one made powerless by the powerful for whatever crime he may have committed against them and handed over to the mob to satisfy their righteous fury, while allowing them to also experience the joy of knowing that, on that day, it would be some other poor wretch who would be made to undergo this terrible ordeal and not themselves.

5. Further Differences Between Morality and Ideology

Not, of course, that, today, we like to think about these things, let alone try to understand them. Over the last few years, in fact, I have read at least a dozen historical novels set in England during the 16th and 17th centuries, many of which have depicted such executions. Not once, however, has the central character been portrayed as relishing the experience. In each case, he has had to be present, of course, so that the author could record the proceedings, describe the festive atmosphere, the drunken raucousness of the crowd and the brisk trade done by the pie sellers; but not once has the author attempted to get inside the mind of someone enjoying their day out, while the central character is always depicted as being appalled by the whole thing, especially the behaviour of those around him.

In part, of course, this is quite understandable. For we like to identify with the central characters of the novels we read, and having a central character who is excited and enthralled by watching a traitor taken down from the gallows and disembowelled while still alive would probably not go down very well with many readers. In fact, the book would probably not get published. Still, one cannot help but suspect that the main reason why most authors would not even attempt to depict such characters is that they simply do not want to get inside their heads, partly out of the fear that they would somehow be tainted by the exercise – that if they looked into the abyss, the abyss really might look back into them – but also because they do not want to be thought of as someone who is actually capable of understanding such a mind-set, the implication being that, if one is capable of understanding how someone might take pleasure in watching someone being hanged, drawn and quartered, then one is also capable of taking pleasure in this oneself: a proposition which, at one level, is trivially true, but which, at another, not only betrays a profound misunderstanding of the nature of morality, but, in itself, actually constitutes a moral failure.

The basis upon which the proposition is trivially true, of course, is that we are all capable of any human behaviour we can understand. Only behaviour which is insusceptible to being understood – which is literally insane – is therefore beyond us. Indeed, it is why we actually prefer to simply label aberrant behaviour as insane rather than try to understand it. Because, if we understood it, not only would it not be insane, but we would have to admit that we were capable of it. The reason why our preference – to not go down this route – is both a failure to understand the nature of morality and a moral failure in itself, however, is that it is only by understanding the darkness that is within us that we can rise above it and hence control it.

To return to my earlier account of ‘Looking for Mr. Goodbar’, for instance, it is not the man who understands sexual dysfunction who is liable to lash out in rage and kill the woman he is with, but rather the one who doesn’t understand it, and is consequently caught up in a tide of emotions he also doesn’t understand and cannot therefore control. This is something which, of course, the Stoics understood: that it is only by understanding our emotions that we can transcend them and thus control them. It is also why the study of moral philosophy was not just considered an integral part of the Stoic way of life but a moral requirement, thereby actually making our own decision to deliberately not understand some of the darker parts of our nature an immoral one.

Not, of course, that there isn’t a perfectly good reason why we have chosen this path. For unlike the Stoics, who believed that we all have this potential to do bad things inside us and that, both for our own good and for that of society as a whole, it is our moral duty, therefore, to suppress the worst aspects of our nature while nurturing the best, ever since Jean-Jacques Rousseau introduced the concept of the noble savage in the 18th century, western civilization has increasingly believed that human beings, in a state of nature, are essentially good and that we are corrupted by the social structures within which we are forced to live. While the Stoics actually viewed society as a civilizing force, enabling us to discuss moral philosophy, for instance, and hence transcend the more brutish aspects of what we are, we, in contrast, have increasingly come to believe that, by forcing us to live in ways that are essentially inhuman, it is society that makes us brutish.

One of the main reasons we have chosen to believe this, however, is almost certainly that it absolves us of moral responsibility. If a man kills someone in a violent rage, for instance, it is not because he failed to control his pent-up anger and resentment, but because society has given him so much to be resentful and angry about. Thus, to coin a phrase, we not only need to be tough on crime, but tough on the causes of crime, which, if societally based, must inevitably be found in our civilization’s principal social structures, including the nation state, which promotes nationalism, xenophobia and racism; the patrilineal family, which engenders sexism and sexual inequality; and capitalism, which breeds greed, venality and huge disparities of wealth. If we have any moral responsibility at all, therefore, or so the argument goes, it is to change these fundamental social structures.

The problem with this, however, is that, once again, it is an attempt to engineer society and has nothing to do, therefore, with morality. For while we may have a moral responsibility to change ourselves – to continually strive to make ourselves better in the Stoic tradition – we cannot really assume a moral responsibility for changing others. Yes, we can write and teach. But we cannot guarantee that anyone is going to read or listen. Nor can we force them to do so, as that would almost certainly involve some form of coercion, with negative consequences for non-compliance, which, unless it can be shown that the person being coerced has done something morally wrong, is, itself, morally wrong, especially if those selected for coercion are chosen on the basis of their membership of a class or group. For as I explained earlier, this, in itself, breaches one of the moral constraints placed on us by our apprehension of others as Dasein: that we always judge Dasein as an individual, not as an instance of a type, a form of judgement which also has other inherent risks.

This is because any judgement which is made about someone based on their membership of a class or group, rather than on them as an individual, must necessarily be derived from whatever characteristics or attributes we believe this group to possess. And although these beliefs may have some basis in empirical fact, they are just as likely to be informed by the total set of beliefs and ideas which constitutes our world-view or ideology, as I hope was clear in the case of the Nazis’ beliefs about Jews. Either way, any judgement based upon these ideas will therefore comprise some measure of prejudgement or prejudice.

Nor is this prejudice confined to race. In fact, any judgement we make about someone on the basis of our assignment of them to a group or type, about which we have certain beliefs, will be prejudiced by these beliefs. Nor must the assignment of someone to a group or type be based on any visual characteristics such as the colour of their skin or the shape of their eyes. It could just as easily be based on something they say or an attitude we perceive them to have. Thus, one might easily assign someone to the type ‘homosexual’ on the basis of what we take to be a typically ‘gay’ hand gesture, after which all our judgements about this person are likely to be informed by our beliefs about homosexuals.

Nor is this kind of ideologically based judgement confined to those we typically believe to harbour such prejudices. It just as frequently happens the other way round. A homosexual man, for instance, may well assign someone to the type ‘straight’ on the basis of nothing more than a poor dress sense, after which all his judgements about this person will then be informed by his beliefs about heterosexuals. In fact, this is very normal. The problem comes when this kind of ideological judgement usurps the place of moral judgement, as in the case of racism, not least because ideologically based judgements tend to be extremely crude and simplistic, unlike moral judgements, which, because they are about individuals rather than types, tend to be far more complicated and nuanced.

Suppose, for instance, that you hear a colleague make a racially derogatory remark to someone, which greatly surprises you in that you have known him for some time and have never heard him make racist comments before. Feeling you have no alternative but to speak to him about it, however, you ask him what’s going on. With a mixture of exasperation and contrition, he then explains that the person in question had annoyed him by assuming that, because he is white, he would automatically be a racist. ‘And so I acted like one’, he says, quite clearly recognising that it was both wrong and stupid of him to have done so and already bitterly regretting it.

Because moral judgements are not just complicated but personal – based on our own phenomenal experience of the person being judged – I cannot, of course, know for certain what you would ultimately decide about your colleague in this case. You’d probably tell him that he has been extremely stupid and that, no matter what the provocation, it was wrong of him to have reacted in the way he did. You’d also probably tell him that he needs to control himself better in future. The one thing I don’t think you will conclude, however, is that he is a racist. Not so, however, the man to whom he spoke so roughly, and who will have almost certainly dismissed him as just that. Worse still, unlike your judgement, which was moral, his will have almost certainly been ideological. After all, he didn’t know your colleague well enough to know that he wasn’t a racist and just assumed that he was. As a consequence, he will now have had that assumption confirmed, which is doubly unfortunate in that, unlike moral judgements, ideological judgements tend to be irrevocable, not least because there is hardly any social mechanism for revoking them.

To illustrate this, suppose that you have another colleague whom you really do suspect of having racist tendencies. In fact, you really don’t like him and rather wish you didn’t have to work with him. Suppose also that a new member of staff is assigned to your department, who happens to be black, and around whom your possibly racist colleague initially behaves rather awkwardly. After a few weeks, however, they are assigned to the same project and, to your amazement, seem to be getting on remarkably well. They even have lunch together and can be heard joking and laughing. As obliquely as possible, you therefore decide to ask the colleague you thought to be a racist about his new partner, to which he replies that ‘he is not a bad bloke once you get to know him’, the key phrase here, of course, being ‘once you get to know him’, not as an instance of a type, but as a fully rounded fellow human being: as someone there.

You now have a new insight into your colleague’s character and have to admit that either you were wrong about him or he has changed, either of which is possible. For we all get things wrong about people from time to time, especially when we initially dismiss them by simply assigning them to a type. Even more importantly, we all occasionally do things which we regret. After all, we’re only human. However, we are all also capable of learning from our mistakes and of amending our ways accordingly. At least, most of us try. And when we judge others morally, as Dasein, we are able to recognise this, in that we can usually tell when someone is truly sorry and trying to change. Do we sometimes get this wrong? Of course, we do. Do people sometimes fool us? Yes, of course. Even so, when we feel that someone is truly sorry about what they did and is trying to make amends, we tend to give them a second chance. And this is the point I am trying to make: that moral judgements are not irrevocable.

In order to give people a second chance, however, we have to be close enough to them to see the changes that are taking place. That is to say that we have to apprehend them as complete human beings, as Dasein. When we make ideological judgements about people and dismiss them merely as being of a type, however, it is very difficult for us to do this. It is as if we close the door on them. We dismiss them as an X or a Y and that’s it, leaving us with no way of judging them except ideologically.

6. Ideology, The Internet and The Mob

In fact, stereotyping or pigeonholing people on minimal acquaintance is possibly one of the most salient, as well as divisive, social characteristics of our age, creating a culture in which forgiveness and reconciliation – and even just talking to each other – have become increasingly difficult. And, in my view, there are two main reasons for this.

The first is the prevalence in the Western World of an ideology which owes its origins to a group of people known as the Frankfurt School: academics from different disciplines who fled Nazi Germany in the 1930s and were welcomed at Columbia University in New York, from where they began to propagate the political philosophy which we have come to know as Cultural Marxism, the most basic premise of which is that the world is divided into two classes or types, the oppressors and the oppressed, each of which can then be broken down into a number of subclasses. With respect to the oppressed, for instance, these subclasses include women, people of colour and homosexuals. Because these subclasses are not mutually exclusive, it being possible to belong to more than one, this then gives rise to the concept of ‘intersectionality’, the intersectional subclass of black, homosexual women, for instance, being one of the most oppressed, while white, heterosexual men comprise the dominant intersectional subclass among the oppressors.

Not, of course, that the ideology is so crude as to suggest that simply by being a member of one of these subclasses one is either a wielder of the relevant form of oppression or a victim of it. It is a little more subtle than that, in that it also explains how these relationships of dominance and subjugation have come about. The fact remains, however, that due to a few hundred years of history, during which white people dominated and oppressed black people, assigning them to a class of sub-humans about which they then accumulated a whole raft of cultural prejudices, white people still occupy positions of privilege in society today, with the result that, whether they know it or not, they are all inherently racist, just as all men are inherently sexist, even when they are trying not to be, which just goes to prove that they are.

In fact, according to this ideology, there is one intersectional subclass which simply cannot win no matter what it does: an inevitable consequence of the ideology’s logic which, given the number of people – all male – who are consequently rendered irredeemable by it, makes one wonder how it could now have become so dominant. This, however, leads us to the second main reason why we are where we are today: the advent of the internet.

For while Cultural Marxism may have been influential on college campuses throughout the 60s and 70s – the European roots and Bohemian lifestyles of its leading proponents, along with their opposition to the Vietnam war, making them both popular and fashionable among British and American students from largely conservative, Anglo-Saxon backgrounds – the esoteric concepts with which it dealt and the opaque language in which these concepts were expressed meant that it was never really likely to gain a mass following. Even when it was taken up by third wave feminists during the 1990s, when the concept of intersectionality began to be used more widely, it still didn’t really become mainstream, not least because, if anything, the language became even more impenetrable. What it really needed, therefore, far more than its student adherents from the 60s and 70s filling positions of power and influence, was a medium of communication which would not only gain it a wider audience but could provide it with a new, more accessible form of expression. And this was what the internet or, more specifically, social media did.

Not, of course, that this was in any way intentional. The truth is rather that social media just so happened to have its own set of requirements which coincided rather neatly with those belonging to what was soon to be called ‘Wokeism’, one of which was the need for greater simplicity in what was communicated, especially with respect to value judgements. For whatever else one may be able to do with the 140 characters which Twitter initially provided, one can hardly use them to discuss nuanced moral judgements. What Twitter and, indeed, all social media therefore required was precisely what Wokeism actually had: an entirely binary value system in which people and ideas could be characterised as either unequivocally ‘good’ or unequivocally ‘bad’ depending on the abstract type to which they were assigned, rather than any experiential phenomenon, which social media could not convey anyway. What it could do, however, was provide a language in which users could express their approval or disapproval of these unequivocally good or bad things as economically and transparently as possible, in some cases by simply pressing the ‘Like’ or ‘Dislike’ button, while adding a comment in the form of an emoji or one of a widely accepted set of acronyms or abbreviations.

Nor was this coming together of an entirely binary value system and a widely accessible medium for expressing it the only way in which Wokeism and social media benefitted each other. For Wokeism’s value system, of course, already came replete with a whole set of social issues, the solutions to which could be discussed online and approved in the appropriate manner, thereby providing the social media platforms with a whole new sphere of content.

What really gave this partnership momentum, however, was the way in which participants were able to deal with those who dissented from the ideologically correct line. For without the moral constraints which they would have felt dealing with opponents in person, they could gang up and attack people as viciously as they liked. Yes, of course, they still knew that the people they were trolling were fellow human beings. But just as it was for the German sniper in ‘All Quiet on the Western Front’, shooting at distant targets in the French trenches, this knowledge was confined merely to the level of words – in this case, words on a screen. For without our phenomenal experience of someone there, the term ‘human being’ is merely an abstraction without substance or meaning. And that is what human beings on the internet, defined by an abstract ideology which recognises only types, have become.

And yet even this isn’t the worst of it. For with this new online mob in full righteous tilt, shaming, deplatforming and cancelling anyone who opposed it, woke ideology very quickly became self-policing. People became increasingly afraid of saying anything ideologically incorrect, not just on social media but in any medium at all. In fact, anyone in public life today has to be constantly vigilant about what they say, remaining on script as much as possible while taking every opportunity to signal their commitment to woke values, thereby creating the kind of disingenuous world in which hypocrisy becomes the norm and no one is willing to speak the truth.

Adding still further to the general atmosphere of paranoia, it has now also become extremely difficult to know when the rules of the game have changed or are about to change. For while woke culture may have had its origins in Cultural Marxism, having gone viral, it has long since left those origins behind. More to the point, because the ideology has no roots in phenomenal experience, the mob can drive it in almost any direction it fancies, constrained only by its already contorted logic.

Take, for instance, the idea that gender is a social construct. Exactly when this idea first became ideological orthodoxy I am not sure, but looking through the literature available online it is fairly clear that it has undergone at least one transformation since then. I say this because, as recently as 2005, Schneider, Gruman & Coutts, for instance, were still maintaining a clear distinction between biological sex, which is determined by one’s chromosomes, and gender roles, which, to varying extents in different cultures, are clearly determined by social norms. This, however, falls a long way short of what transgender activists currently insist: not only that post-operative transgender women are women in the fullest sense, but that anyone who identifies as a woman is a woman, thus removing biology from the equation altogether, in a way so patently false and absurd that Germaine Greer was deplatformed in 2015 for refusing to accept it.

What this really tells us, however, is not just how divorced from reality the woke mob has become, but how little importance any such ideology needs to attach to either the factuality of its premises or the soundness of its arguments. For what actually determines whether people believe what an ideology tells them is not its factuality or soundness, but whether or not other people believe it. It is the sheer weight of numbers, in fact, and the discomfort which most people feel at being the lone dissenter, that determine whether an ideology thrives, especially if dissent is penalised, as it usually is.

Consider, for instance, the case of those who currently refuse to be vaccinated against Covid-19 on the grounds that, like Novak Djokovic, they are young and fit and unlikely to be made seriously ill by the disease itself, while the vaccines were rushed into production without proper testing and have already been shown to have adverse health effects, including myocarditis, an inflammation of the heart muscle to which fit young men are especially susceptible following vaccination against Covid. Even more importantly, the Nuremberg Code, which was accepted by just about every country in the world after the Second World War, makes it illegal to force anyone to accept any medical procedure or treatment against their will, including vaccination. And yet governments around the world, mostly with popular support, have been making it mandatory for parts of their populations to be vaccinated, with severe penalties, such as the loss of employment, for anyone who refuses, their argument being that the unvaccinated pose a risk to everyone else.

The illogicality of this argument, however, is absolutely astounding. For if the vaccines protect one from the disease, then the vaccinated should not be at risk from the unvaccinated; the unvaccinated should only constitute a risk to each other. Unless, of course, the vaccines do not protect one from the disease, in which case one would be forcing dissenters to accept a medical treatment which is of no benefit to them, which does not prevent them passing the virus on to others, and which might have adverse effects on their own health, all in contravention of the Nuremberg Code. It thus makes no sense at all. And yet millions of people around the world support this point of view. Like the Tyburn mob, in fact, they vociferously demand that action be taken against those who have been demonised as selfish, antisocial delinquents, with the result that in Australia, for instance, the unvaccinated are actually being interned in quarantine camps.

In a recent, controversial interview, Dr. Robert Malone, one of the inventors of mRNA vaccine technology, referred to this pattern of behaviour as ‘mass formation psychosis’, characterising it as adherence to a set of beliefs regardless of the lack of any evidence to support them, combined with hysterical attacks on anyone who questions the narrative – which is a fairly good description of what is going on. Coining a new name for this syndrome, however, makes it seem as if the syndrome itself is something new, when, in fact, it is what always happens when, for whatever reason – fear being one of the most common – we stop apprehending others morally as Dasein and start thinking of them ideologically as instances of types. In Nazi Germany, it was the type ‘Jew’ that was made the scapegoat, and of whom it was also said that they spread disease. Today, it is ‘transphobes’, ‘anti-vaxxers’ and ‘climate-change deniers’ who are turned into non-people as soon as the label is applied to them.

Despite our persistent tendency to revert to this pack-like mob behaviour whenever we feel threatened, one has to question whether Covid hysteria, along with our willingness to accept ever more authoritarian restrictions on our liberty, could have so taken hold of us had we not already been conditioned to act in this way by the combination of woke culture and social media. It is this underlying change in our cultural environment, therefore, that is the greatest cause for concern. For while we may always have been prone to mob behaviour, we have now created the conditions for making it permanent.

I say this because, unlike the Tyburn mob, who returned home each evening, sated and perhaps a little sickened by their feeding frenzy, but able to return to themselves and therefore to normal life, the mob today is always with us, not just in the form of social media but through the medium of broadcast television and twenty-four-hour news, which, if anything, has now become the mob’s principal organ, instilling fear by means of one exaggerated and sensationalised story after another, while keeping us confused as to what is really happening by never properly explaining or making sense of it. If we are ever going to escape the mob’s hold on us, therefore, we first have to divest ourselves of this tyrannical beast – something which, while we remain ignorant of our own true state, I doubt we are going to do any time soon.

Indeed, it is possible that the only way we will ever be brought back to our senses is if woke ideology itself has its way. For one of the most likely, if unintended, consequences of our current climate-change hysteria is that it will cause our electricity grids to fail, thereby taking away our access to both television and the internet. Unfortunately, it will also mean that no one will be able to access this essay. But if it frees us from our current insanity, it will be worth it.