Complexity


The human brain is the most complex entity in the known universe. It is not an exaggeration to say that we have hardly begun to understand it. Yet already there are many initiatives to simulate the intelligence it bears. In contemporary science, the term ‘Artificial Intelligence’ has taken on a rather different meaning from when it was first introduced. It seems our notion of intelligence has weakened; whilst once upon a time an agent had to pass the Turing Test to merit the appellation, now it is applied to relatively rudimentary problem-solvers which appear positively dumb in comparison. An agent that could pass the Test would have demonstrated nothing less than Artificial Personhood. Would this be a qualitatively different achievement from that of current intelligent software agents which turn their digital hands to various algorithmic tasks? Or is it merely a case of developing greater processing power and finding the right algorithms?


What is the significance of Anxiety for Faith?


Introduction

Throughout Kierkegaard’s work he addresses problems he sees arising in contemporary Christendom from a faulty definition of the concept of ‘original sin’. I’m going to post my elucidation of this as a background piece for my theology/existential psychology thesis.


As Kierkegaard saw it, this faulty definition had arisen in two phases. First, under the influence of Hegelian ‘ontological logic’, a qualitative distinction had arisen in the popular imagination between Adam and the rest of humanity: unlike the first man, the descendants of Adam were considered innately corrupt and therefore unable to respond to the fact of sin in their lives. Second, under the influence of this logic, the Genesis account had begun to find itself deconstructed, such that it no longer presented a situation that was ‘actual’ for the contemporary reader. The result of this was that the concept of Arvesynd1 was taken not to require a personal, Christian reaction.

This is the broad context within which Kierkegaard, via his pseudonym Vigilius Haufniensis, introduces the concept of ‘anxiety’ (Angest). Anxiety is something real and really present in every human individual. And not only is it present, but is ‘absolutely educative’ (155) for every human individual; it serves a vital role in the dialectic of the self’s development. In this way, the concept of anxiety represents a corrective to the faulty definition of Arvesynd that was prevalent in Kierkegaard’s contemporary context, designed to jolt the reader into a right understanding of sin and faith, ‘the mood that properly corresponds to the correct concept’ (14).

In this essay, I will first of all outline the method by which anxiety is considered in this book (section 1); this is important, for Haufniensis himself takes care to define the limits of his method, and without this the critic runs the risk of exceeding the conclusions the book itself permits. In the main part of my essay, I will first describe (section 2) and then analyse (section 3) the concept of anxiety in relation to faith. Finally, I will conclude that the relation between the two is ‘coterminous’: anxiety in this book not only explains the need for faith, but also provides the condition of faith. In this way, The Concept of Anxiety provides an account not just of the Fall, but of salvation.

1. Method

First, then, what is the method Haufniensis adopts in order to consider the significance of anxiety for faith?

It should be noted that for some critics the book hardly presents any method at all! Roger Poole, for example, has suggested that it may have been intended by Kierkegaard as nothing more than a kind of ‘gay spoof of an academic textbook’.2 According to this reading, the literary playfulness with which it is undoubtedly written is deliberately confusing; the effect of the whole is that serious philosophical meaning cannot be extracted from the text in a linear or analytical way, nor was it ever intended thus. To take one example: he suggests that many of the fine distinctions drawn by the text (such as those between ‘sin’ and ‘sinfulness’, Synd and Syndighed) are really pseudo-qualifications, drawn only in order to provoke and stimulate the reader. Or to take another example, he notes the unusual number of sibilant terms in the Danish, and concludes that (especially when read aloud) the text mischievously suggests the hissing of the serpent that tempted Adam and Eve in the garden.3 The style of the book, then, evokes the very ‘dizziness’ that characterizes the experience of anxiety, and in this way, for Poole, it is intended to ‘refuse instruction and disseminate doubt’ in the mind of the reader.4

However, this sort of interpretation does not take into account the painstaking effort that Haufniensis himself makes throughout the book to delineate his own approach. Haufniensis claims to have control over the method by which he is advancing his thesis about anxiety and assures the reader on numerous occasions that he will not transgress the limits of that method by straying into the territory of any other ‘science’.

1 Literally ‘inherited sin’; this is translated throughout by Thomte as ‘hereditary sin’, rather than the more vernacular ‘original sin’ of Lowrie; cf. discussion in Poole (1993), p.92.

2 Poole (1993), p.84.

3 Ibid., p.107.

4 Ibid., p.98.


So what is the method that Haufniensis utilises? It is revealed in the subtitle of the book: The Concept of Anxiety is presented as ‘a simple deliberation in the form of psychological observations directed towards the dogmatic problem of original sin’. So the method, even we might say the genre, that is adopted by Haufniensis in this book is that of psychology.

At this point, the original reader would have been reminded that, at that time, psychology (along with so many other ‘sciences’ of course) had been self-consciously appropriated as a Hegelian discipline. For Hegel, psychology constituted part of his all-embracing science of man as emerging self-conscious spirit.

This prompts a stinging attack by Haufniensis. The problem he identifies is that this results in the dangerous and unwarranted conclusion that ‘actuality’ was a valid subject matter for logic; or, to put it another way, that the demand of personal agency in matters of faith could be ‘safely ignored’ (10). The reader considering faith in terms of ‘Hegelian psychology’, then, will find himself ‘duped’ in matters of faith (11).

Haufniensis rejects this Hegelian definition, then. But he does not reject the psychological approach entirely. For him, when the psychological method was appropriated correctly, there was much it could do to explain the significance of faith. Thus he writes: ‘that which can be the concern of psychology and with which it can occupy itself is not the fact that sin comes into existence, but rather how it can come into existence’ (22, original italics).

In this way, The Concept of Anxiety can be considered as an attempt to re-appropriate the psychological method in the service of an explanation of sin and faith, via the concept of anxiety.

2. Description

Having established this method, how does Haufniensis go about using it to describe faith via the concept of anxiety?

I will suggest there are three entry points by which we can understand how he does so. First, in a general sense, the psychological method that Haufniensis uses presupposes that anxiety is significant for faith. Second, he shows this is the case by means of an exposition of the Genesis narrative. Third, he demonstrates that this continues to be the case in the life of all those who follow Adam, albeit in a subtly altered form.

2.1 General

In one sense, the relationship between anxiety and faith is straightforward. For anxiety is the state in which the individual (constituted as he is of a mind/body synthesis) begins to confront his own growing awareness that something is lacking qua this synthesis. In the case of every human individual, this confrontation results in one of two reactions. Either the individual decides to cling to his familiar nature, thereby failing to respond to anxiety; Haufniensis calls this the unhappy condition of ‘spiritlessness’, in which ‘[…] there is no anxiety, because the individual is too happy, too content, too spiritless to allow it’ (95).5 Or, alternatively, the individual responds to the anxiety he feels by discovering the nature of his ‘eternal qualification’ (61). This enables him to find by faith the God-relation. And this in turn results in a transformation of the individual from a mere mind/body synthesis into something new: he ‘becomes spirit’ (42).

However, because the process of finding the God-relation by faith goes against human nature, it is naturally accompanied by anxiety and reluctance. Or, to use a term that is frequently employed by Haufniensis, it is ‘personally strenuous’ (28, 69, 71, etc).

The process of anxiety leading to faith for a human individual is contrasted with the ‘natural’ processes that can be observed in the life of something ‘determined’ or mechanistic. The example Haufniensis gives is that of a plant (22). It is the nature of a plant to grow in a certain way; its physical state is a result of inevitable processes

5 Haufniensis does however nuance this by suggesting that, even though there is no ‘anxiety’ in this condition of spiritlessness, nevertheless anxiety ‘is waiting in the background’ (96), and this constitutes a kind of restlessness that is constantly inviting the subject to reconsider his decision.


involving sunlight, chemicals, water, and so on. In this way, its existence is a necessary unfolding of its own potentiality; or, as Haufniensis describes it, it grows according to ‘quantitative determinations’ (33). But for faith to ‘grow’ within a human individual, the conditions are very different. In this case, the individual must be conscious of an infinite disparity between his familiar, ‘necessary’ nature and a (perhaps as yet unspecified) infinite aspiration that has been prompted by anxiety. Thus, for faith to arrive, the individual must participate in it by a ‘qualitative leap’ (32), not according to something already internal to himself.

The process that is in question, then, is not comparable to a plant (becoming what it is by nature); rather it is something distinctively psychological (becoming conscious of something that is not already in you). Thus, within the method employed in the book, psychology when properly understood demands a concept of anxiety.

2.2 Historical

The general observation given above is further supported by Haufniensis’ exegesis of the story of creation and the Fall of Adam.

The essence of Haufniensis’ question with regard to Adam is as follows: how can it be that this first man went from a state of ‘innocence’ to the state of sin in one leap? (37) Or, why was it that ‘guilt broke forth in his case via the qualitative leap’? (41)

Haufniensis argues that, prior to the Fall, Adam was ignorant of the difference between good and evil: ‘in innocence, man was not qualified as spirit, but was psychically qualified in immediate unity with his natural condition’ (41). Adam’s condition was a form of ‘sleep’ with regard to the external world: ‘the spirit in man was dreaming, and in this state there was peace and repose’ (41).

However, this condition was not entirely as it seemed, for there was something else present that caused the state of innocence to be disrupted in Adam. Whatever it was, this other thing caused the otherwise innocent man to ‘beget anxiety’ (41).

This cannot be attributed to something external to Adam, for ‘there was nothing in existence or in his own self against which he could strive’ (43).

Rather, that which caused anxiety to intrude into Adam’s state of innocence was the presentiment of the freedom that would follow if his consciousness was fully awakened. This is spoken of by Haufniensis as ‘freedom’s actuality as the possibility of possibility’ (42) or as ‘entangled freedom, where freedom is not free in itself but is entangled, not by necessity, but in itself’ (49).

Under the influence of this condition, eventually Adam moves out of the state of innocence into the condition of sin. This can only be posited by means of the ‘qualitative leap’ (112). This is of great significance to the concept of faith that Haufniensis will outline later in the book. To postulate anything external to Adam that provoked or caused him to sin is to be dismissed: ‘if the object of anxiety is something, we have no leap, but only a quantitative transition’ (77). Rather, it was something internal to Adam that caused him to make the ‘leap’: his sin comes literally ‘out of nothing’. This sets up what Haufniensis will describe as ‘the dialectic of faith’, which in the same way presents itself as a ‘qualitative leap’ that must be made by the individual by himself.

Having argued that it was the concept of anxiety that provided the conditions for Adam to sin (via the ‘qualitative leap’), it is important to note that Haufniensis does not concede that his state of innocence was therefore flawed or incomplete. To show this, he offers an illustration regarding the experience of children. He notes that in children there is something like this experience of anxiety: ‘in observing children, one will discover this anxiety intimated as a seeking for the adventurous, the monstrous, and the enigmatic’ (42). And yet, he asks, does this prove that children don’t live in a state of innocence? ‘Not at all’, he answers: just because children experience this does not mean that their childhood is corrupted; rather, ‘anxiety belongs essentially to the child, such that he cannot do without it’ (42).6 The child experiences anxiety, but it also captivates him, paradoxically, as a sort of ‘pleasing anxiousness’ [Beængstelse] that he cannot do without (43). The same must have been true for Adam in his original state, Haufniensis suggests.

To summarise, then: in his state of innocence Adam was invaded by anxiety, not as an external aggressor, but as a condition of his own freedom and being. And it was this concept of anxiety that provided the condition by which he ultimately sinned in disobeying the command of God, as documented in Genesis 2-3.

2.3 Contemporary

The experience of Adam shows points of both similarity and dissimilarity to our own.

On the one hand, the movement that afflicted Adam (innocence interrupted by anxiety, leading to the ‘qualitative leap’ into sin) is exactly the movement that afflicts all who follow him. Thus, Haufniensis writes: ‘that which explains Adam also explains the human race, and vice versa’ (29). To ignore this, to ‘place Adam fantastically outside the history of the human race’ (25), is a logical fallacy and an abandonment of our duty to face up to the requirement of the ‘qualitative leap’. This is because it allows us to situate Adam as an outlier to the human race and therefore denudes him of the existential challenge he places on each and every one of us to face up to and confront our own sin. This, says Haufniensis, is akin to a ‘legal loophole’ that ‘allows us to escape the recognition of sin’ (27).7

And yet, on the other hand, Adam’s experience also shows points of dissimilarity to our own. Although in its main form it is the same, Haufniensis suggests that the state of sin has become more ‘reflective’ as generations have passed, so that there is a slightly different ‘quantitative determination’ (57) now than there was for Adam.8 This is the consequence of time passing, and of a certain residue of ‘historical’ knowledge of sin (or, as Haufniensis puts it, it is the ‘consequence of the relation of generation’, heading of chapter III).

However, the key point to note is that the ‘qualitative leap’ still applies to every individual that follows Adam: ‘this more is never of such a kind that one becomes essentially different from him’ (64, original italics).

3. Analysis

In this final section of my essay, I will offer analysis of three aspects of Haufniensis’ account of anxiety and faith, in each case suggesting their significance within the broader purview of Kierkegaard’s religious writing.

First, Haufniensis’ account provides a rationale for the human condition that is situated entirely within the horizon of human psychology. That is to say, in attempting to explain how man moved from a state of innocence to a state of sin, his account resists the temptation to bring in any third, or mediating, factor. The concession of anything external to Adam that can be accused of ‘provoking’ sin is consistently dismissed.

To take just one example, Haufniensis considers the possibility that God’s command in Genesis 2:17 might be taken as a ‘prohibition awakening the desire’ (45ff.). And yet, he throws out this possibility. Why? Because for Adam to have ‘desired’ anything, he must have ‘had a knowledge of the object given as a possibility, and a desire to achieve it or to use it’ (44). For an individual residing in the state of innocence, a ‘desire’ like this

6 In fact, he even suggests that the ‘higher’ the culture, the more this is respected and nurtured in children by the adult world that is responsible for them (42).

7 As an aside, Haufniensis (provocatively) suggests this holds up a mirror to the practitioners of higher criticism of his day. By claiming the Genesis story to be myth, they evade its existential application to their own lives. But in doing so, ironically ‘what they have substituted in its place’ is, in turn, ‘a myth’ – and ‘a poor one at that’ (32). In this way, ‘no age has been more skilful than our own at producing myths and yet also desiring to eradicate them’ (46).

8 Haufniensis offers the interesting suggestion that this is even evident in the very first ‘generational relation’, that is, Eve. He suggests that ‘anxiety is reflected more in her than in Adam’ (64). This is because, created second, she had a greater disparity in the synthesis of her own self (for example, as reflected in her quality of ‘sensuousness’), and therefore, like a pendulum, she swung over ‘more’ in anxiety than Adam, who was created first. This does not indicate that in some rudimentary way she is ‘more guilty’ than Adam; only that she started from a different ‘relation’, and so the effect of her sin was subtly altered compared to him.


cannot have been triggered by an external agent (in this case, the voice of God). In its place, Haufniensis offers a psychological explanation that maintains the personal agency of the individual (namely, Adam was guilty of disobeying the command) without undermining the state of pristine innocence (namely, Adam was not determined to disobey the command by something outside the range of his own freedom). The explanation for sin can only come via the concept of anxiety, since anxiety does not arise as a mediating agent from the outside, but literally ‘out of nothing’. It is as if ‘Adam talked to himself’ in causing himself to yield to temptation (44).

This aspect of Haufniensis’ account is noted by Philip Quinn, who suggests that the significance of Kierkegaard’s account of the Fall over and against his predecessors (in particular Kant) is located precisely here: he is able to provide Adam with a ‘logical motivation’ for his sin.9 That this was one of the objectives of the work is indicated by Kierkegaard himself in his Journals, where he notes the need to carefully define what this ‘middle term’ really was: ‘he who becomes guilty through anxiety is indeed innocent, for it was not he himself but anxiety, a foreign power that laid hold of him, a power that he did not love but about which he was anxious […] and yet he is guilty for he sank in anxiety, which he nevertheless loved even as he feared it’.10 My suggestion is that The Concept of Anxiety is ultimately successful in locating and describing this ‘middle term’, such that ‘innocence is not guilty, and yet there is anxiety as though it were lost’ (45). In this way, Haufniensis’ account can lay claim to success in providing a dialectical break-through against the Hegelian narrative of the religious significance of man.

Second, Haufniensis’ account provides an explanation for the absence of faith that has characterised ‘pagans’ (to use his own term) since the time of Adam.

For Haufniensis, the primary mechanism by which the ‘pagan’ enters into a state of ‘un-faith’ is by positing anxiety as something other than what it really is. The Kierkegaardian hero,11 of course, rejects all mediating excuses and entirely owns his sinful nature; this is the process that leads to faith. But the ‘pagan’, faced by the anxiety of his condition, shifts the blame away from himself and onto something else; this is the process that leads to ‘un-faith’.

The ‘pagan’ does this by ‘dialectically defining anxiety’ as ‘fate’ (96). That is, as anxiety begins to present the possibility of radical freedom before his very eyes, the pagan decides to close down and, as it were, disown that possibility by attributing it to something outside of his own self, namely, ‘fate’. The Kierkegaardian hero, on the other hand, does not do this: he defines anxiety as ‘guilt’ rather than ‘fate’. This is because ‘guilt’ is a concept which presupposes some agency, and therefore some responsibility, on the part of the subject. By ascribing his state of sinfulness to ‘guilt’, the individual ‘refuses to side-step the primitive decision, refuses to seek the decision outside himself with Tom, Dick or Harry, and refuses to remain content with the usual bargaining’ (109).

At this point, it seems to me that Haufniensis’ argument is at its most tenuous, for it would seem feasible (at least, we might say, according to the assumptions of ‘modern psychology’) to ascribe guilt to something external to the human self. If this is true, then the concept of guilt would provide no ground of evidence that the individual has ‘eo ipso turned toward God in faith’ (107).

Third and finally, Haufniensis’ account does not just chart how anxiety leads to sin, but also how anxiety provides the condition of faith. The concept of anxiety, he argues, culminates in a dichotomy. Either it prompts the self-destruction of the subject, by provoking him to continue in sin ad infinitum. Or it enables the subject to ‘renounce anxiety without anxiety’ (154) and thereby enter into the state of faith.

So, just like the hero of one of Grimm’s fairy-tales (155), this is the crucial journey that the reader is invited to commence: ‘to learn to be anxious in order that he might not perish either by never having been in anxiety or by succumbing in anxiety; for whoever has learned to be anxious in the right way has learned the ultimate’ (155). And, faithful to his promise not to allow dogmatics to ‘intrude’ upon his psychological examination (23), it is only

9 Quinn (1993), p.356.

10 JP I 41; Pap. X2A 22.

11 Or ‘genius’, which is the term frequently used in this book.


in the penultimate sentence of the book that Haufniensis allows a hint of dogma to surface, with the tantalizing suggestion that ‘he who in relation to guilt is educated by anxiety will rest only in the Atonement’ (162). Any explication of the Atonement of course lies in the realm of the non-psychologically-orientated (that is, in Christian doctrine), and therefore lies outside the remit that Haufniensis has set himself in The Concept of Anxiety. However, reflecting its formal placement as the penultimate sentence in the book, we can conclude that Haufniensis’ portrayal of anxiety in relation to faith has brought us, as it were, to the threshold of this discovery, and we are left to ourselves, as the individual face to face with the Infinite, to lift our hands, push the door and walk through.

Conclusion

In these three ways, then, the relationship between anxiety and faith is ultimately presented as ‘coterminous’ in this book. The concept of anxiety explains the need for faith (since it is the condition of the Fall). But it also explains how we come into faith (since it is the prompt that causes the individual to turn in faith to the God-relation). And it is precisely in this two-way trajectory that the originality, and the challenge, of Haufniensis’ account resides, both for his contemporary readership, and for ourselves.

Bibliography

Kierkegaard, Søren, The Concept of Anxiety (1844), edited and translated by Reidar Thomte (Princeton, New Jersey: Princeton University Press, 1980)

Dupré, L., ‘The Constitution of the Self in Kierkegaard’s Philosophy’ in International Philosophical Quarterly 3 (1963), pp. 506-526

Marino, Gordon D., ‘Anxiety in The Concept of Anxiety’ in The Cambridge Companion to Kierkegaard (Cambridge: Cambridge University Press, 1993), pp. 308-328

Poole, Roger, Kierkegaard: The Indirect Communication (Charlottesville: University Press of Virginia, 1993)

Quinn, Philip, ‘Kierkegaard’s Christian Ethics’ in The Cambridge Companion to Kierkegaard (Cambridge: Cambridge University Press, 1993), pp. 349-376

How should statistical evidence provided by empirical psychology be interpreted in assessment of arguments in the rationality debate?


 

1: Introduction

 

In a 1968 paper entitled ‘Reasoning About a Rule’, Wason devised an experiment, now known as the Wason selection task, in order to investigate Karl Popper’s idea that all of science is grounded in hypothetico-deductive reasoning, in which the search for counter-examples (i.e. evidence contradicting a particular proposition) is a precondition. Wason’s project examined the plausibility of Popper’s claim – is “learning” just the establishment of hypotheses and the perpetual search for contradictory evidence? The selection task tested whether subjects have the ability to seek facts that violate a proposition, particularly in conditional ‘If P then Q’ hypotheses. Astonishingly (!), Wason’s findings appeared to suggest that humans are in fact incapable of completing a simple logic task.

 

Since Wason’s tragic discovery of our fundamental incapability, the literature on rationality has formed a basis for challenging the basic reasoning abilities of human beings. In the last few decades of the twentieth century, the debate was distinctly split between proponents of the “heuristics and biases” tradition and those of the “evolutionary rationality” tradition. The former interprets evidence from tests such as Wason’s selection task as conclusive proof that Homo sapiens is ultimately irrational, whilst the latter derives a much more optimistic view of human rationality. In recent years, much has been written to scrutinize and illuminate the foundations of both camps from a more objective, external perspective. It has become apparent that the rationality debate is, to a point, more complex than either camp assumed.

 

The goal of this essay is to review the matters encircling the rationality debate and to posit that, of the two camps, evolutionary rationality appears to be the more constructive approach to the interpretation of empirical psychological evidence. Nonetheless, we must be aware, as Stanovich and West have pointed out, that the evolutionary rationality approach does not escape one fundamental flaw: it neither clarifies nor resolves the problem of whether rationality should be seen as a process resulting in optimal results for the gene or for the individual. Towards the end of the essay, I shall elucidate Stanovich and West’s conclusion and the ramifications it may have for what lies ahead in the rationality debate.

 

2: The Evidence

 

To begin with, I shall present the evidence accumulated by empirical psychology in contemporary experiments, which will be interpreted and elucidated towards the end of this paper. I shall first introduce three experiments which have displayed general consistency in results when repeated on different human subjects. The three experiments are each presented as evidence for a specific fallacy that most human beings are disposed to commit in a rationality test, akin to the paradigmatic Wason selection task. Such rationality tests can be devised in various ways, but are generally equivalent to the following:

 

A human subject is presented with four cards on a table, with upper faces that display ‘A’, ‘D’, ‘3’ and ‘8’. He is told that on each card a letter is printed on one side and a number on the other. The subject is next provided with a piece of paper, which reads “If a card has a vowel on one side, then it has an even number on the other side”. The subject’s task is then to determine which cards need to be turned over in order to establish the truth of the statement conclusively.

 

According to the laws of classical logic, because the proposition is of the form P → Q, the sole contingency that could refute the statement is if one of the cards held the form P ∧ ¬Q, meaning if a card had a vowel written on one face but an odd number written on the other. Hence, only the cards showing either a vowel or an odd number need to be checked – in this case the “A” and “3” cards. Be that as it may, repeated experiment has shown that, in general, no more than 10% of people come up with the right answer at their first attempt to solve the puzzle. Most subjects select either only the “A” card, or both the “A” and the “8”.
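The falsification logic of the selection task can be sketched in a few lines of Python (the card labels match the example above; the helper name `must_turn` is my own):

```python
VOWELS = set("aeiou")

# The rule under test: "if a card has a vowel on one side (P), then it has
# an even number on the other side (Q)". Only a card of the form P and not-Q
# (vowel on one face, odd number on the other) can falsify it.
def must_turn(visible_face):
    """A card needs turning only if its hidden face could falsify the rule."""
    if visible_face.isalpha():
        # A visible vowel (P) could conceal an odd number (not-Q): check it.
        return visible_face.lower() in VOWELS
    # A visible odd number (not-Q) could conceal a vowel (P): check it.
    # A visible even number (Q) can never falsify the rule, whatever it hides.
    return int(visible_face) % 2 == 1

cards = ["A", "D", "3", "8"]
print([c for c in cards if must_turn(c)])  # → ['A', '3']
```

Note that the commonly chosen “8” card drops out: whatever is on its hidden face, it cannot contradict the conditional.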

 

The second example, known as the Linda Problem, is taken from Stanovich and West’s 2003 paper entitled Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality (the problem itself was originally devised by Tversky and Kahneman):

 

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Please rank the following statements by their probability, using 1 for the most probable and 8 for the least probable.

a. Linda is a teacher in an elementary school
b. Linda works in a bookstore and takes Yoga classes
c. Linda is active in the feminist movement
d. Linda is a psychiatric social worker
e. Linda is a member of the League of Women Voters
f. Linda is a bank teller
g. Linda is an insurance salesperson
h. Linda is a bank teller and is active in the feminist movement

(Stanovich and West 2003: 173-174)

 

 

Most participants in this experiment were shown to have ranked statement f somewhere below statement h. According to the laws of classical logic, however, this cannot be correct, because statement h is a conjunction of the form A ∧ B, which by the laws of probability theory cannot have a greater probability of being true than either A or B being true independently. Hence, although most participants rank statement f below statement h, it is, strictly speaking, incorrect to posit that statement f is any less probable than statement h. This finding has been interpreted to show that human subjects are prone to employ instances of faulty reasoning, in this case what is labelled the conjunction fallacy.
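The conjunction rule that participants violate can be illustrated numerically; the probabilities below are hypothetical values of my own, since the experiment itself assigns none:

```python
# Hypothetical probabilities for the Linda problem (chosen only for
# illustration; the experiment never specifies numeric values).
p_bank_teller = 0.05   # P(f): Linda is a bank teller
p_feminist = 0.80      # P(c): Linda is active in the feminist movement

# Probability theory bounds the conjunction: P(f and c) <= min(P(f), P(c)),
# whatever the dependence between the two statements.
p_conjunction_max = min(p_bank_teller, p_feminist)

# So statement h ("bank teller AND feminist") can never be more probable
# than statement f ("bank teller") -- ranking h above f is the conjunction
# fallacy.
print(p_conjunction_max)  # → 0.05
```

Whatever values are substituted for the two conjuncts, the bound holds, which is why the ranking f < h can never be correct.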

 


 

The third example, pertaining to the human analysis of statistical data, is found in Tversky and Kahneman’s 1982 paper Evidential Impact of Base Rates. Participants were presented with the following question:

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

(Casscells, Schoenberger and Grayboys 1978: 999, cited in Tversky and Kahneman 1982: 154)

 

Presuming that when a person is in fact a carrier of the disease the test will always be positive, the right answer to the question is approximately 2%, as simple statistical calculation shows. Of 1000 people, roughly 51 will test positive (1 true carrier plus about 50 false positives from the 5% false-positive rate), and among those 51 only about 1 person will in fact be a carrier; 1 as a percentage of 51 is roughly 2%. However, amongst the original participants of this test, who were staff and students at Harvard Medical School,

 [t]he most common response, given by almost half of the participants, was 95%. The average answer was 56%, and only 11 participants [out of 60] gave the appropriate response of 2%, assuming the test correctly diagnoses every person who has the disease.

(Tversky and Kahneman 1982: 154)

This type of test has been repeated many times with analogous results. The reasoning exercised by those whose answers are substantially off is known as the base-rate fallacy. These subjects neglect the fact that out of 1000 people, only 1 would be a carrier of the disease to begin with, so the aggregate number of false positives drastically outweighs the correctly identified carriers.
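
The base-rate calculation can be verified with a short Bayesian computation. The perfect-sensitivity assumption follows the text; everything else is taken directly from the problem statement:

```python
# A worked Bayes calculation for the medical-test problem.
prevalence = 1 / 1000        # P(disease)
false_positive_rate = 0.05   # P(positive | no disease)
sensitivity = 1.0            # P(positive | disease), assumed perfect as in the text

# Total probability of testing positive: true positives plus false positives.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(disease | positive).
p_disease_given_positive = sensitivity * prevalence / p_positive

print(round(p_disease_given_positive * 100, 1))  # 2.0 (percent)
```

The intuitive answer of 95% comes from reading off the test's accuracy while ignoring the prior; the base rate of 1 in 1000 is what drags the posterior down to about 2%.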

 

We have thus been presented with three paradigmatic examples of prevalent human violations of the laws of classical logic, probability theory, and statistical theory. How we should interpret such violations depends on our general understanding of human reasoning, and on our understanding of the classical theories themselves. We shall elucidate human reasoning in the next section, and the classical theories thereafter.

 

3: The Rationality Debate

 

In recent years, the rationality debate has seen various lines of thought emerge, advocating different strategies for interpreting the kinds of evidence against human rationality presented in the previous section. The two major groups established are the ‘heuristics and biases’ (H&B) and ‘evolutionary rationality’ traditions. Generally speaking, proponents of the H&B tradition interpret the evidence as grounds for pessimism about human rationality, whilst evolutionary rationalists leave much more room for defending it. In what follows we analyze the argumentative thrust of each tradition.

 

3.1: Heuristics and Biases

 

A heuristic is a general rule of thumb: a cognitive strategy people employ to make decisions in the face of data overload in problems of a common form. For example, a girl might use the heuristic “a boyfriend suddenly being extra caring means he is cheating” while deciding whether or not to dump him. A partygoer may choose which club to enter based on the heuristic that “the club with the longest queue outside must be the best”. Nonetheless, such heuristics evidently do not always work effectively. Often, heuristics entail systematic errors that can be experimentally isolated; hence the label ‘bias’. Biases are the assumptions fused into a system and taken for granted as part of the system’s operation.

 

The H&B camp of the rationality debate, as pioneered by Kahneman and Tversky, postulates that human subjects tend to perform poorly in rationality tests because they have only a limited set of heuristics with which to bring logical concepts to bear on problems of rationality. Additionally, human subjects innately hold biases that accommodate and assist the heuristics in use. The broad idea is that such heuristics are usually too blunt to be depended on in the rationality tests given by empirical psychologists, and often result in faulty reasoning in daily life as well.

 

Advocates of the H&B tradition postulate that the rationality underlying our reasoning in daily life (the form of reasoning that the experimenters in the second section were attempting to examine) diverges from the rationality prescribed by pure logic. Kahneman and Tversky’s extensive empirical work was ultimately aimed at demonstrating that people behave irrationally in many contexts involving decision making under uncertainty and statistical thinking. This happens because when human subjects are presented with a problem, they are psychologically disposed to draw on imperfect heuristics under particular biases, often generating answers that contradict “normative”[1] answers. At times, contradictory responses are also given under circumstantially different but logically equivalent formulations of the tests, giving rise to the phenomenon now known as the “framing effect”: the tendency of human subjects to form conclusions depending on the “framework” within which a situation is presented. How a problem is framed influences the subject’s subsequent choices. In Kahneman and Tversky’s work it has been found that people often accept frames as they are given and seldom reframe (and meanwhile rationalize) problems in their own words.

 

The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences

(Tversky and Kahneman 1986)

 

A classic example of how framing influences people’s choices is Edward Russo and Paul Schoemaker’s story about a Franciscan who seeks permission from a superior to smoke while praying. When the Franciscan asked “Can I be given permission to smoke while I pray?” his request was promptly denied. However, when he framed the question differently and asked “In moments of human weakness when I smoke, may I also pray?” the opposite response was elicited, showing the power of framing. Describing outcomes as either gains or losses also counts as framing, and often triggers “irrational” humans to make different choices in situations where the outcomes are actually identical but are described as gains rather than losses, or vice versa (e.g. saying that something has a 1 in 10 chance of winning instead of a 90% chance of losing).

 

Another example of a heuristic employed in human judgement formation is known as the availability heuristic. First reported by Kahneman and Tversky, this heuristic causes human subjects to project biased probabilities in favour of concepts they can think of more easily. The more easily a concept can be recalled to mind, the more ‘available’ it is, and the subject then believes that its occurrence is also more frequent. For example, the suicide rate of a given community may be judged to be higher than it actually is by somebody who personally knows of many specific cases of suicide in that particular community.

Representativeness is also a heuristic, one that biases judgment in favour of sample events that seem more representative of the wider range of events from which a sample has been taken. For example[2], in a test in which a die with 4 green faces and 2 red faces is to be rolled 20 times and the series of Gs and Rs recorded, participants were asked to bet on which of the following sequences would most likely appear: (1) RGRRR (2) GRGRRR (3) GRRRRR. Many of those unfamiliar[3] with probability theory tended to bet on sequence (2) GRGRRR over (1) RGRRR, even though sequence 1 in fact dominates sequence 2: sequence (1) is contained within sequence (2), so in any trial where (2) appears, (1) has necessarily appeared too. The reason people choose (2) is that it more strongly resembles the die (it has a larger proportion of green), and this is what is meant by the ‘representativeness’ heuristic. Human subjects often choose future outcomes that are strongly representative of pre-existing beliefs about the generating process, and neglect which outcome is actually most probable. This is explained by the fact that people tend to expect sample events that conform to the “norm”: if there are more green sides on the die, it seems more ‘normal’ for more Gs to appear in the sequence.
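
The dominance claim can be checked directly by computing the exact probability of each run for a die with P(G) = 2/3 and P(R) = 1/3. This is a minimal sketch of the arithmetic, not a reproduction of the original 20-roll experiment:

```python
# Exact probabilities of each candidate run on consecutive rolls of a die
# with four green faces (P(G) = 2/3) and two red faces (P(R) = 1/3).
from fractions import Fraction

P = {'G': Fraction(2, 3), 'R': Fraction(1, 3)}

def seq_prob(seq):
    """Probability of rolling exactly this sequence on consecutive throws."""
    prob = Fraction(1)
    for face in seq:
        prob *= P[face]
    return prob

p1 = seq_prob('RGRRR')   # (1/3)^4 * (2/3)   = 2/243
p2 = seq_prob('GRGRRR')  # (1/3)^4 * (2/3)^2 = 4/729
p3 = seq_prob('GRRRRR')  # (1/3)^5 * (2/3)   = 2/729

# Extending a run by one more roll can only lower its probability, so the
# shorter sequence (1), which sequence (2) contains, strictly dominates it.
assert p2 < p1 and p3 < p1
print(p1, p2, p3)  # 2/243 4/729 2/729
```

Sequence (2) looks more “representative” of the die because it contains more greens, yet it is strictly less probable than the shorter run it extends.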

 

Not only do people base (fallacious) judgments on pre-existing beliefs; it has also been found that people tend to seek confirmation of data that has already been accepted and actively (though perhaps unconsciously) find facts in support of it, via what is known as the “supporting evidence bias”, explicated by Hammond, Keeney and Raiffa in a 1998 Harvard Business Review article amongst a list of biases that people are prone to.

 

Suppose, for example, you are considering an investment to automate some business function. Your inclination is to call an acquaintance who has been boasting about the good results his organization obtained from doing the same. Isn’t it obvious that he will confirm your view that, “It’s the right choice”? What may be behind your desire to make the call is the likelihood of receiving emotional comfort, not the likelihood of obtaining useful information

(Hammond, Keeney and Raiffa 1998)

 

The supporting evidence bias is a confirmation bias that affects more than how people collect information: it also changes how data is interpreted. At times, it can fuel a form of irrationality that may even be considered delusional according to Alfred Mele’s deflationary theory of self-deception. People are often motivationally biased by psychological desires or by anxiety to bring conclusiveness to problems, even when that means neglecting potentially contradictory but truth-bearing evidence. On an atheist’s view, for example, confirmation bias drives much of the believer’s interpretation of facts in the world (e.g. in the Design and Cosmological arguments) in favour of the idea that an all-perfect God exists, ignoring evidence that potentially harms God’s reputation in order to fulfil the human creature’s desperate psychological need for a purposeful life with divine meaning (and perhaps also the perpetuation of life after death). This can, of course, apply in the opposite direction too, so long as one pays too much attention to supporting data and neglects conflicting data. Psychologists believe that what underlies our tendency towards such bias is our natural inclination to decide what we want to do before figuring out why, together with our tendency to prefer dwelling on things or concepts that we like.

 

Yet another form of rationality-altering bias identified by Hammond, Keeney and Raiffa is the “sunk cost bias”. This refers to the tendency to find it harder to let go of, or give up, things one has invested more in[4], even though, rationally speaking, sunk costs are investments made in the past that cannot be recovered in the future, and should bear no relevance to decision making.

Sunk costs are the same regardless of the course of action that we choose next. If we evaluate alternatives based solely on their merits, we should ignore sunk costs. Only incremental costs and benefits should influence future choices.

(Hammond, Keeney and Raiffa 1998)

This type of bias has been widely demonstrated in research on many aspects of life. For example, many people find that the longer they have waited for a bus, the harder it becomes to choose alternative transport (knowing, of course, that the bus will arrive at some point). This is because many people feel an innate sense that “wasting” any resource (time or money, for example) is bad. In waiting for a bus, people feel obliged to keep waiting (investing more time) because if they decide to take the train, the sunk cost (the time they have already spent waiting) will have been “wasted”. In economic terms, however, this is irrational thinking, since the time spent waiting for the bus is irrecoverable and should be irrelevant to the decision about what to do from the present moment onward in order to get to destination X.

 

Before we move on to the next section, I shall mention one last common bias: the “status quo” bias, also known as the “comfort zone” bias. Many people have a tendency to choose alternatives that preserve the status quo. The reasons are fairly apparent, for going against what is normally considered axiomatic often exposes individuals to social isolation or criticism. Adhering to the status quo is generally more emotionally comfortable, as it provokes less internal tension, but people often overvalue pre-existing axioms to the extent that they forsake individual rational judgment. Psychologists posit that the cause is that when major decisions must be made, potential losses loom larger than potential benefits. What we might lose are things we already possess and know, whereas gains are merely potential and hypothetical. Hence the impacts of loss form more vivid concepts in our minds, and since people mostly hope to avoid regret and strive to preserve their reputation, they often find clinging to the status quo the more secure option, even when boldly going against it opens opportunities for greater gains.

 

According to advocates of the H&B tradition, it is thus reasonable to posit that human beings are irrational. We move on to examine arguments from proponents of the opposing camp of Evolutionary Rationalists.

 

 

 

 

3.2: Evolutionary Rationality

 

Since William Harvey demonstrated that the purpose of the heart is to pump blood through the lungs and the body (ref. Harvey’s work on systemic circulation), the functional organization of the human body has been examined in great depth by physiologists throughout the world. Today it is generally accepted by biologists that the structure of our bodies serves our needs for survival and reproduction. A cornerstone of such research was laid by Charles Darwin, who in his 1859 book On the Origin of Species described how species evolve through the process of natural selection, developing traits favourable to survival and reproduction. Since then, psychological concepts of evolution (analogous to physical evolution) have entered the scene, and some psychologists have concluded that, like our bodies, our cognition also has structure. Thus emerged a group of evolutionary psychologists who postulate that the human cognitive structure was designed by natural selection to assist survival and reproduction, in ways analogous to physiological evolution.

 

The primary advocates of the evolutionary rationality theory are Gerd Gigerenzer and Peter M. Todd. In the Evolutionary Rationality tradition, the heuristics used during decision making are also examined, but interpreted in ways much friendlier to human rationality. Gigerenzer and Todd claim that our common heuristics have adapted through interaction with the “social, institutional or physical environment” to become useful decision-making tools. In their view, the heuristics we employ are genuinely good at what they do, and are not random tactics improvised merely for the sake of resolving whatever problem requires a decision.

 

“Ecological rationality” is the term Gigerenzer and Todd use to describe how well a decision mechanism has adapted to assist the human mind. Cosmides and Tooby have famously argued that most of the psychological traits differentiating human beings from other animals (including the more intelligent higher primates) developed during the Pleistocene era, which began approximately 1.8 million years ago and ended about 11,000 years ago.[5] During this era the human mind evolved into the particular form it holds today. The conditions driving this change are captured by the concept of the environment of evolutionary adaptedness (EEA), put forward initially by John Bowlby, who was noted for his pioneering work in attachment theory. The EEA describes how animals in different environments adapted to their surroundings when faced with local problems of reproduction. This explains the diversity of existing animals: they have faced different reproductive problems, which resulted in different adaptations. The EEA refers to the collection of problems that a particular species has faced through evolutionary time. It has been advanced that the selection pressures that shaped the human body are almost unquestionably those that humans faced during the Pleistocene era. Around 2 million years ago the genus Homo arose in Africa, and its members later spread to Asia. Whilst roaming and spreading across the lands of the earth, humans took up agricultural practices towards the end of the Pleistocene era and abandoned their previous habits of hunting and gathering. A few thousand years later (post-Pleistocene), people began establishing cities, and drastic cultural changes have occurred rapidly over the past 10,000 years. This is said[6] to have possibly exposed humans to new selection pressures whilst eradicating selection pressures that were significant earlier on.

Although the Pleistocene excludes this recent period of novel settlement, its significance in the genesis of the human genus is now recognized: it is identified as the “epoch which shaped human physiology and psychology”[7]. This idea is supported by the observation that an adaptation such as vision would have been eliminated over the 2-million-year span of the Pleistocene had it not been maintained by stabilizing selection[8]. The following extract explains the evolutionary reasoning clearly:

 

If the sun blinked out 2 million years ago, humans and all other animals with vision would have lost their visual capabilities. Mutations would inevitably have occurred in the genes underlying our visual system, degrading our visual abilities. Since there wasn’t any light, however, this degradation would have been inconsequential, and such mutations would not have been selected out of the population. 2 million years later, the visual system would be eradicated.

(Hagen 1999)

 

Thus “sunlight” is to be taken as a component of the human EEA, and the visual system we have today is the result of stabilizing selection for vision in the Pleistocene era. Since the majority of evolutionary change in human adaptations occurred before the end of the Pleistocene (or at least before the dawn of the earliest settled civilization), there is an inevitable disparity between the EEA and the present-day environment to which all of the human subjects in the rationality tests described earlier in this paper belong. This fact is fundamental to the objections against evolutionary rationality. We shall discuss the disputes advanced by Keith Stanovich and Richard West later in this paper (Part 5); in their view, the major discrepancies between the EEA and the current era undermine the arguments put forward by evolutionary rationalists.

 

Gigerenzer and Todd, on the other hand, endorse interpreting the empirical findings of psychology regarding the human mind and the external world the way Herbert Simon analogized: the mind and the world are like the “blades of a pair of scissors” and “must be well matched for effective behaviour to be produced - just looking at the cognitive blade will not explain how the scissors cut”. Gigerenzer and Todd object to the H&B tradition on the grounds that its assessment of the human mechanisms of rationality has been too confined, since it neither considers environmental factors nor accounts for the surroundings in which specific heuristics are appropriate. Issues connected to this point will be elucidated later in this paper (Part 4), along with Cosmides and Tooby’s observations of improved performance in rationality tests when problems are presented in structures more congenial to human cognition.

 

3.3: Contentions and Reconcilement

 

Since a mere map of the heuristics available to the modern human mind provides no hint as to how the brain came to acquire those features, a fully established theory of evolutionary rationality clearly needs to go beyond the heuristics-and-biases account as an explanatory hypothesis. Some scholars have, however, pointed out that the conceptual divergence between the two hypotheses is in fact illusory. As an attempt to reconcile the two traditions, we may instead pose evolutionary rationality as an informative complement to the H&B tradition: it can supply psychologists with ideas about the backgrounds and origins of the heuristics used in contemporary society.

   

The prime endorsers of reconciliation between the two opposing traditions are Samuels, Stich and Bishop. In Ending the Rationality Wars, they attempt to moderate the friction between the two traditions by distinguishing the core theses of each from their rhetorical flourishes[9]. They contend that

 

The fireworks generated by each side focusing on the rhetorical excesses of the other have distracted attention from what we claim is, in fact, an emerging consensus about the scope and limits of human rationality and about the cognitive architecture that supports it

(Samuels, Stich and Bishop, 2002)

 

In Samuels, Stich and Bishop’s view, controversy only arises when advocates of one hypothesis fail to realize that the central claims of the opposing hypothesis are not intended to be all-encompassing theories of the human mind. In their view, it is entirely plausible to posit simultaneously that humans regularly stray from appropriate norms of rationality and that at times they act in conformity with such norms. Likewise, as Gigerenzer postulated, it is not implausible to hypothesize that the heuristics detailed by researchers such as Kahneman and Tversky evolved as mechanisms to assist human problem solving and decision making in the EEA.

 

 

However, Gigerenzer contends that heuristics such as availability and representativeness are too obscure: they can pertain to any number of undefined cognitive processes and can “post hoc be used to explain almost everything”[10], and their interpretation hinges wholly upon the subjective orientation and personal perspective of the researcher. Terms like “representativeness” are atheoretical, and appealing to such heuristics as generators of biases does not provide a satisfactory explanation. Gigerenzer thus instead favours the recognition heuristic, which is employed to single out one object from a subset according to some criterion.[11] The recognition heuristic operates through the mechanism of ecological rationality that Gigerenzer and Todd put forward (that humans are innately capable of exploiting the structure of information in their natural surroundings) and is fundamentally based on the partial ignorance of human subjects: it can only be applied in situations where recognition is strongly correlated with the criterion in question. For example, suppose the criterion is “population of a city”. It is natural for human beings to assume that cities with more people are generally more salient to individuals, in which case recognition of a city’s name would correlate strongly with that city’s population. Gigerenzer and Hoffrage demonstrated this in a 2005 study by asking “Which has more inhabitants - San Diego or San Antonio?” of a group comprising Americans and Germans in equal numbers. Although the Germans in the group should be expected to be less familiar with U.S. city populations than the Americans, virtually all of them correctly answered that San Diego had the larger population, since it was the more recognizable of the two choices. The recognition heuristic allows subjects to determine the more populous city with considerable accuracy based simply on whether they recognize a city or not.

Daniel Oppenheimer, however, posits that findings of this sort cannot be interpreted as satisfactory validation of the recognition-heuristic account. For example, pre-existing knowledge of other aspects of San Diego, such as its size, may have contributed to a subject’s judgment that it is more populous, in addition to mere recognition of the city, contrary to what the recognition-heuristic account suggests: that subjects will choose a recognizable city as the more populous over an unrecognizable one even if they know that the recognizable one is small in size[12].
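
The recognition heuristic can be sketched as a simple decision rule. The city names and recognition flags below are illustrative assumptions, not Gigerenzer and Hoffrage’s actual materials:

```python
# A minimal sketch of the recognition heuristic as a one-cue decision rule.
# The recognition data is made up for illustration.
recognized = {'San Diego': True, 'San Antonio': False}

def recognition_choice(a, b, recognized):
    """If exactly one option is recognized, infer it scores higher on the
    criterion; otherwise the heuristic does not apply (a fuller model would
    fall back on other cues or guess)."""
    if recognized[a] and not recognized[b]:
        return a
    if recognized[b] and not recognized[a]:
        return b
    return None

print(recognition_choice('San Diego', 'San Antonio', recognized))  # San Diego
```

Note that the rule only discriminates under partial ignorance: a subject who recognizes both cities (as the American participants presumably did) gets no answer from recognition alone, which is why other knowledge, as Oppenheimer suggests, may be doing the real work.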

 

It appears evident that much work remains to be done in the conciliatory vein before any consensus is established on what psychologists ought to investigate. The rest of this paper will be dedicated to the evolutionary hypotheses regarding rationality. The empirical work carried out by Kahneman and Tversky now demands complementation with legitimate supporting theories concerning the hows and whys of heuristic development. A substantial portion of Gigerenzer’s work has been dedicated to the appropriateness of our evolved heuristics in the contemporary era: “The goal is to determine what environmental structures enable a given heuristic to be successful, and where it will fail”[13]. Thus, we must consider the deficiencies of supposedly universal heuristics when confronted with rationality tests, particularly when deviation from (apparently) rational norms appears to be systematic. In Part 4 I shall examine some explanations that have been put forward. Nonetheless, we must maintain a critical view of Gigerenzer’s notion of “success”. A substantial flaw in Gigerenzer’s evolutionary rationality account (and indeed in the accounts of most proponents of the same tradition), identified by Stanovich and West, is that it neglects the fundamentally crucial question of what actually is rational: should rationality be taken as a process producing optimal results for the gene, or for the individual? What promotes success for an individual often diverges from what is rational for their genes. For what, exactly, do we strive to attain success? In Stanovich and West’s words,

 

Definitions of rationality must coincide with the level of the entity whose optimization is at issue

(Stanovich and West 2003)

 

Their views shall be elucidated thoroughly in part 5.

 

4: Explanations offered for systematic deviation

 

Why do humans perform poorly in rationality tests? Proponents of the H&B tradition such as Kahneman and Tversky typically answer by accusing humans of being fundamentally irrational creatures. They believe that this conclusion can be drawn directly from their empirical studies.[14] This standpoint, however, has not escaped criticism, owing to its propensity to oversimplify the dynamics of human decision making. In particular, it has been accused of failing to consider the real constraints on the subject’s decision-making process. A significant proportion of what is understood as the H&B tradition as a whole depends on Kahneman and Tversky’s own interpretations of their findings. Thus, without a deeper elucidation of why human performance is substandard in rationality tests, the H&B tradition alone is explanatorily unsatisfactory.

 

Elliott Sober posits that humans’ poor performance in rationality tests need not reflect, nor be attributed to, a deficiency in their underlying rational competence. For example, we would expect an eloquent speaker of English to have grasped all the rules of grammar in the language, yet sometimes his ability will be hampered by disruptive factors, and we do not therefore say he does not speak English fluently. Hence it would be inconsistent for us to consider humans irrational beings merely on the grounds that they occasionally fail to respond correctly in rationality-measuring tests. Substandard performance can often be attributed to rationally disruptive factors such as bias.

 

Hilary Kornblith, on the other hand, does not find the conceptual distinction between general competence and performance sufficient for fending off accusations of human irrationality[15]. Even systematic and recurrent errors (e.g. the errors exhibited in Kahneman and Tversky’s rationality tests) can be elucidated through Kornblith’s conception of the distinction. He analogizes the question at hand to a blocked faucet:

 

When the blockage was removed, the faucet worked perfectly. Consider the workings of the faucet prior to removal. It does not work properly, and its failure is neither occasional nor unsystematic. But a natural way of describing the faucet would consist of a description of a mechanism which works perfectly – the faucet after the blockage has been removed – together with a description of an interfering factor.

(Kornblith 1992: 900)  

The distinction between the potential competence of the faucet and its actual performance is clear, and Kornblith points out that it is by describing the faucet in the manner cited above that we know what to do in order to make it work properly. In his view, pinpointing the blockage this way, bearing in mind the natural function of a faucet, is more useful than calling it just “another of the faucet’s many parts”. The faucet is compared to our “belief generating equipment”.

 

If our interest in epistemology is cognitive repair, that is, improving the manner in which we arrive at our beliefs, a description of our cognitive equipment which divides it into a perfectly working mechanism and a variety of interfering factors will serve our purposes admirably.

(Kornblith 1992: 900)  

 

The catch is that although it is reasonable to conceive of a perfectly working faucet obstructed by an interfering blockage, it is unclear whether our cognitive equipment can be described as a perfect reasoning mechanism plus interfering factors. For what is a perfect reasoning mechanism? There is no reason to posit that when we make errors in rationality tests, it is simply because our reasoning mechanisms are being interfered with by some external disruptive factor. Further, the competence/performance distinction does not absolve the faucet (or the human mind) so long as it does not work the way it ought. As soon as we establish a standard against which to measure rationality, conformity to that standard can only be described on the basis of concrete results, not on an idealized notion of what a perfect reasoning mechanism would be. Thus the measure of competence for the faucet is whether water comes out, and for the human it is whether positive results are persistently achieved in tests of rationality, which repeated empirical investigation shows not to be the case.

 

Nonetheless, various tests have been conducted that highlight the stark contrast in performance when human subjects are presented with problems in different circumstances or formats. Cosmides and Tooby, for example, have observed a radical increase in correct responses when probabilistic questions are posed in terms of frequencies instead of percentages or decimals. This led to speculation that humans naturally acquire and manipulate information more accurately when problems are presented in formats more reminiscent of how Homo sapiens are hypothesised to have made reasoned decisions back in the EEA. The general contention is that since Homo sapiens did not think in terms of percentages in the Pleistocene, our early ancestors were pushed to make probability judgments based on frequency, a form of information relatively simple to acquire and mentally recall. This idea has been backed by the observation that tests such as the Wason selection task yield a much higher percentage of correct responses when presented in a social context (e.g. in terms of violations of underage drinking laws) instead of the abstract context originally set up, even though the situations are logically identical. This has been traced back to a stronger disposition for social reasoning over abstract deliberation. After all, back in the Pleistocene, human survival was highly dependent on social interaction.
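
The logic of the Wason selection task can be sketched briefly. Assuming the classic abstract rule “if a card has a vowel on one side, it has an even number on the other”, only the cards that could falsify the rule need to be turned over, namely the vowel card and the odd-number card:

```python
# A minimal sketch of the normatively correct strategy in the Wason task.
# The rule assumed: "if vowel on one side, then even number on the other".
def must_turn(visible):
    """Return True if the visible face could conceal a counterexample."""
    if visible.isalpha():
        # A vowel might hide an odd number; a consonant cannot violate the rule.
        return visible.lower() in 'aeiou'
    # An odd number might hide a vowel; an even number cannot violate the rule.
    return int(visible) % 2 == 1

cards = ['E', 'K', '4', '7']
print([c for c in cards if must_turn(c)])  # ['E', '7']
```

Most subjects correctly turn the vowel card but wrongly turn the even-number card (which can confirm but never falsify the rule) while leaving the odd-number card alone; in the underage-drinking framing, by contrast, almost everyone checks the drinker’s age and the minor’s drink.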

 

Such observations go together with the apologetics made by advocates of evolutionary rationality against the H&B tradition. The general view is that since it is possible for us to conform to the rules that cognitive psychologists take as guidelines of rationality whenever information is presented appropriately, it is unfair and erroneous to characterize people as irrational. H&B advocates argue back that this does not in any way mitigate the findings of the original H&B experiments: the systematic human deviation from statistical and logical norms when information is introduced in an “incompatible”, yet prima facie equally legitimate, format withstands the evolutionary rationalist’s claims against inherent human irrationality.

 

Kornblith identifies implausibility and unfairness in two common ideals used as guidelines for judging human rationality. First, it appears to be the distinct domain specificity of heuristics that makes their limitations seem inherent.[16] The proper domain of a heuristic is the set of conditions under which it evolved adaptively to become functional, which for humans is the EEA; the actual domain, where rationality tests for our heuristics take place, is the modern world.[17] Kornblith maintains that every heuristic mechanism, in order to function effectively in its proper domain, must rest on fundamental presuppositions. Thus, if it is utilised in an actual domain where such presuppositions are incorrect, it will not function correctly and may therefore lead to irrational judgments. Hence it is unfair (and unrealistic) to expect humans to possess decision-making algorithms that are accurate in all domains. Kornblith poses the question:

 

Why shouldn’t we compare the ways in which people reason against some standard account of proper statistical inference?

(Kornblith 1992: 900)  

 

Here additional problems are unearthed, as Kornblith notes that such a standard is unattainable. Gilbert Harman, for example, has demonstrated that even the basic calculations required for humans to reach this ideal are computationally intractable as a result of the combinatorial explosion problem, explained thus:

 

If one is prepared for various possible conditionalizations, then for every proposition P one wants to update, one must already have assigned probabilities to various conjunctions of P together with one or more of the possible evidence propositions and/or their denials. This leads to combinatorial explosion as the number of such conjunctions is an exponential function of the number of possibly relevant evidence propositions.

(As cited in Kornblith 1992: 900)  

 

Because in reality human reasoning is limited by constraints of feasibility and practicality, attempting to operate generally through degrees of belief, i.e. probabilities, is too complex. To believe something to a degree is, on this picture, to hold an explicit belief about its probability. However, functioning in terms of explicit probabilities cannot always work, since the combinatorial explosion in the resources required limits the use of probabilistic conditionalization as a means of updating an individual’s degrees of belief. The real-time resources needed to follow such rules of rationality are therefore unavailable, and it is implausible to measure human rationality against such an ideal standard.
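Harman’s count can be illustrated with a few lines of arithmetic. In the quoted passage, each of the n possibly relevant evidence propositions may be omitted from a conjunction, included affirmed, or included denied, and at least one must appear; that gives 3^n − 1 conjunctions to pre-assign probabilities to. The function below is my own illustrative sketch of that count, not code from Kornblith or Harman:

```python
def required_conjunctions(n_evidence):
    """Number of conjunctions of P with one or more of n evidence
    propositions, each taken either affirmed or denied.
    Each proposition is omitted, affirmed, or denied (3 choices),
    minus the single empty conjunction."""
    return 3 ** n_evidence - 1

for n in (1, 5, 10, 20):
    print(n, required_conjunctions(n))
# prints: 1 2 / 5 242 / 10 59048 / 20 3486784400
```

Even twenty candidate evidence propositions already demand billions of pre-assigned probabilities, which is the sense in which the ideal is computationally intractable for a real-time reasoner.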

 

For explanatory purposes, accounts that specify standards against which rationality is judged are obviously superior to accounts that do not. However, those who have posited the laws of probability as ultimate standards have yet to provide a conclusive justification for why such laws should be accepted. In Part 5 we continue to explore further problems with unreservedly taking the rules of mathematics and logic as standards of rationality.

 

 

5: Appropriate Norms?

 

Participants in the rationality debate typically do not challenge the ideal of utility that people are expected to aspire to in decision making. General experience should also lead proponents of both the evolutionary and H&B traditions to agree that, when presented with information in “compatible” formats, humans can reason according to appropriate norms of rationality. Indeed, Gigerenzer endeavours to prove that in appropriate situations humans can in fact follow probabilistic norms such as Bayes’ theorem. However, here a deeper assumption is revealed: why should such logical and probabilistic theorems be accepted as unconditionally appropriate norms? It appears to be taken for granted that the ultimate goal of theoretical reasoning is true belief formation, and in the following I attempt to challenge this assumption.

To begin with, we identify two separate stances on rationality as put forward by Samuels, Stich and Faucher. The first of these is the deontologist view:

 

To the deontologist, what it is to reason correctly – what’s constitutive of good reasoning – is to reason in accord with some appropriate set of rules or principles.

(Samuels, Stich and Faucher, 2004)

Thus a norm such as ‘one should follow Bayes’ rule when calculating conditional probabilities’ is taken to be deontological, since it advocates conformity to a particular principle. The alternative stance is what they term consequentialism, also called ‘pragmatism’.

 

Consequentialism maintains that what it is to reason correctly, is to reason in such a way that you are likely to attain certain goals or outcomes.

(Samuels, Stich and Faucher, 2004: 166, emphasis original)

This view promotes the pragmatic objective of efficiently attaining an individual’s goals and desires as the criterion for what constitutes good reasoning.

 

The wall that divides these two views is, however, not completely stable. Bayes’ rule, for example, is accepted because of its mathematical soundness and its apparent reliability in the real world. Thus it cannot strictly be seen as ‘deontological’ in the previously defined sense and may instead be seen as a consequentialist rule. It would be trivial to posit a deontological theory that designates which norms should be followed without explaining why that should be the case. The norms we generally consider appropriate are those that mathematicians and logicians have validated as “true”. Thus the ultimate validation of rational norms is implicitly ascribed to the maximization of true belief formation. However, this amounts to a form of mathematical naturalism that cannot be left unchallenged: it is unclear whether standards of truth should be directly and entirely dependent on what is dictated by mathematical practice.

 

Regardless, our emphasis in what follows shall not be on whether true belief can be formed by conforming to mathematical norms; rather, we shall examine whether true belief formation itself can justifiably be considered the ultimate objective of rational, reasoned thought. Now, general consensus indicates that having maximally true beliefs allows an organism to achieve as many of its goals as it physically can. The maximization of true belief formation is thus tightly associated with the maximization of individual human utility, making it seem plausible to posit that true belief is in fact the ultimate objective of reasoned thought.

 

That said, Stanovich and West sharply point out that a crucial distinction, between the interests of an individual organism and the interests of its genes, has often been overlooked in the evolutionary rationality literature. When advocates of evolutionary rationality rush to tell us of the cleverly adapted heuristics in the human mind, they forget the distinction between heuristics that serve genes as agents of selection and heuristics that serve individual humans as the vehicles built by those genes.

 

Presuming it is self-evident that humans have a variety of different objectives and personal aims, the only ‘aims’ that can be attributed to genes are survival and reproduction. Stanovich and West posit that the majority of the heuristic mechanisms developed during the Pleistocene should be expected to serve such aims; goals serving the interests of individual humans are expected to be merely secondary or non-essential. It is only once the genes have developed far enough to build complex creatures like ourselves that we start to exhibit behaviour oriented towards personal utility, which can often diverge from or even contradict genetic utility.

 

It becomes apparent that we cannot disregard the disparity between any tenable model of the EEA and the contemporary world. The obvious aim of formal logic is to assist the formation of true belief; however, although this (apparently) aids organisms in their various endeavours, it is clear that the extent of logic we have developed is unnecessary for genetic purposes. Max Delbruck questions the fundamentals of evolutionary biology in “Mind from Matter?”[18]

 

Our concrete mental operations are indeed adaptations to the mode of life in which we had to compete for survival long before science. As such we are saddled with them, just as we are with our organs of locomotion and our eyes and ears. But in science we can transcend them, as electronics transcends our sense organs. Why, then, do the formal operations of the mind carry us so much further? Were those abilities not also matters of biological evolution? If they, too, evolved to let us get along in the cave, how can it be that they permit us to obtain deep insights into cosmology, elementary particles, molecular genetics, number theory?

(Delbruck, 1978)  

 

Delbruck’s conclusion was that he had no answers to such questions, and given the lack of conclusive evidence both for and against the evolutionary account it is not difficult to understand why. As Sober pointed out, many of the mental operations that accommodate scientific reasoning were neither used nor useful back “in the cave”, in the days when much of our genetic endowment was being shaped by natural selection. It is therefore unclear how our present information-processing techniques can be viewed as products of evolution if they were never selected for amongst a range of alternatives; rationality cannot be seen as an “adaptation” if it is not fundamentally beneficial to the genes.

 

Stanovich and West, in response to this problem, suggest a tentative theoretical division of the mind into two parts: ‘System 1’, which serves the goals of the genes, and ‘System 2’, an area of higher cognition that specifically serves the individual’s personal needs. Stanovich and West’s contention is that both systems can exist together despite being in conflict at times. Conflicts are triggered by the occasionally unsynchronized goals of the two systems: although it is generally taken that true belief formation is the ultimate end of rationality, the adaptive System 1 did not necessarily evolve in that direction, as the apparent diversity in values and views regarding reproduction also suggests.[19]

 

Given this division of the mind, Stanovich and West proceed to accuse evolutionary rationalists of neglecting the fundamental distinction between the two forms of rationality. They specifically criticize Gigerenzer’s notion of ecological rationality for straddling the divide between what is rational for the individual and what is rational for the gene, switching between the two forms of rationality inappropriately. The surfacing of this very distinction appears to reduce the argument from a normative one to a merely descriptive one. The areas of the human psyche that H&B advocates labelled “irrational” are, in Stanovich and West’s view, simply expressions of System 1, whilst behaviour conforming to the rules of formal logic and probability (and thus to true belief maximization) is attributed to the manifestation of System 2. Thus, whether we label an agent rational or irrational hinges upon the “level” of optimization in question: the individual level or the genetic level.

 

Although evolutionary rationalists have always accused proponents of the H&B tradition of being explanatorily dysfunctional, their own thesis, in Stanovich and West’s view, is far from irrefutable, because it neglects the crucial divide between individual and genetic rationality. One cannot obtain legitimate interpretations of evidence from empirical psychology without first establishing exactly what notion of rationality is to serve as the standard against which human performance is judged. Perhaps this is the fundamental question that we, as philosophers, need to contemplate in order to provide legitimate justification for our choices of standards.

 

6: Further Considerations

 

The role of emotion in human rationality and decision making has also been studied extensively by the evolutionary economist Robert H. Frank, who postulated that emotions may in fact be a complementary facet of the human rational process even when they appear irrational. In his book Passion within Reason, for example, Frank describes how the emotion of “love” accords extra value to long-term romantic commitment. If people chose partners strictly for rational reasons, they would likely abandon their partners for someone more desirable on “rational” criteria as soon as a better prospect appeared; this creates what Frank terms a “commitment problem”. Where an emotional attachment such as “love” is present, however, there is greater meaning in sustaining long-term relationships, and this is where Frank’s “commitment model” highlights problems that cannot be solved by rationality alone; hence the saying that “those sensible about love are incapable of it”.

Frank ultimately claims that if we look at emotions through the “longer lens of evolutionary theory”, much of what appears irrational in them in fact forms an “effective strategy for achieving agents’ goals and maximizing their reproductive success”. This, in a sense, muddies Stanovich and West’s distinction between individual and genetic aims, for the two appear to be equated in Frank’s thesis. Nonetheless, it seems plausible to attribute emotions to manifestations of System 1, explicable solely as facilitators of the genes, which may even impede instrumental rationality in some cases. Sripada and Stich,[20] for example, note that our genes, environment and culture (what they believe to be the three basic sources of value structure) can often produce “maladaptive” value structures that result in irrational emotions. Phobias, characterized by excessive feelings of fear, are one prime example. From an evolutionary perspective, Sripada and Stich note:

 

 

Plausible candidates for innate fears are those directed at recurrent threats faced by human ancestors. The underlying adaptive logic is that an innate and rapid fear response to a recurrent threat would have conferred a selective advantage on human ancestors who possessed such a trait.

 (Sripada and Stich, 2004)  

 

This causes problems when fears initially adaptively advantageous to our ancestors (e.g. fear of enclosed spaces, which called for alertness against the danger of being stuck) are inherited by people in the contemporary world, who experience pathological fear in enclosed spaces such as elevators, subways, and phone booths.[21] In such cases our emotions and value structures are maladaptive.

We may perhaps interpret phobias as a direct conflict between Systems 1 and 2 in Stanovich and West’s terms. While an otherwise rational human being may be completely certain that he will suffer no danger in using a phone booth, he may still possess a natural, atavistic aversion to using it, even if this prevents him from achieving his individual goals, e.g. making an important phone call. Such “atavistic” tendencies can be construed as the dominance of System 1 over System 2 in a decision-making process.

So it seems that evolutionary rationalists are still left with much to do in order to fully justify how and why the laws of logic are to be explained by evolutionary psychology. Further, even if we hypothesize the truth of their thesis, we must still wonder how the normative status of our laws of logic and probability is to be secured once they are reduced to descriptive psychological terms. If human rationality has surpassed the threshold demanded by the genes for mere survival, then we face an unavoidable normative schism and must establish standards of rationality on new premises. It is unclear whether such an endeavour can advance the rationality debate in any way.

 

 

 

 

7: Conclusion

 

It appears that the most common, and also the gravest, failure faced by proponents on all sides of the rationality debate is the inability to establish a definitive standard of rationality against which human performance is to be judged. Failure to attain uniformity of views in this crucial area means that we fail to identify a universal frame of reference within which we can legitimately interpret the broad account of rationality through the laws of logic and mathematics. We are warned (by Kornblith) against wrongfully measuring rationality against impossible standards, and the divergence of our genetic goals and individual goals has clearly been brought to light. How we are to establish norms, given the fundamentally descriptive nature of much of the evolutionary rationalist’s views on logical laws, is also something we must be cautious about. Explanatory gaps have yet to be filled with regard to how and why one, but not another, mental adaptation to our past environments gets recognized as adaptively advantageous. It also remains unclear why we have developed individualistic traits that are not necessarily beneficial to genetic aims such as reproduction. Nor can we begin to say whether humans inherently possess rationality at all without a robust definition of rationality. All of these considerations need further scrutiny if the debate is to progress towards greater insights.

 

 

 


[1] Logical, mathematical or statistical

[2] Tversky and Kahneman (1983) ‘Extensional versus intuitive Reasoning’ Psychological Review, 90, 2SB 315

[3] 65% of 125 undergraduates in Tversky and Kahneman’s test

[4] Be it in financial, emotional, or any other form

[5] Cosmides and Tooby (1994)

[6] Edward H. Hagen 2002

[7] Edward H. Hagen 1999-2002

[8] Maintenance of adaptations that have already evolved, as genetic diversity decreases after population stabilizes on specific value of a trait

[9] As defined in the actual work, this refers to claims that  (i) are not central to the research program, (ii) are not supported by the evidence offered, and (iii) are typically not endorsed by advocates of the program in question when they are being careful and reflective.

[10] Gigerenzer (2000)

[11] Gigerenzer and Goldstein (2002)

[12] Oppenheimer, ‘Not so fast! (And not so frugal): Rethinking the recognition heuristic’

[13] Todd and Gigerenzer (2007: 168)

[14] Kahneman and Tversky (1973)

[15] He admits however, that the distinction is useful for cognitive psychology

[16] Kornblith (1992: 907)

[17] As distinguished by Sperber (1994)

[18] Excerpt found in Elliot Sober, The Evolution of Rationality

[19] Kornblith postulated that the evolution of System 1 in line with rationality is actually impossible, whilst Gigerenzer has shown how successful use of heuristics approximates true belief.

[20] Sripada and Stich, Evolution, Culture, and Irrationality of Emotions

[21] Examples from Sripada and Stich, Evolution, Culture, and Irrationality of Emotions P.12

Religious Belief and Mental Disorder

0

“Religion is a mass neurosis” – Sigmund Freud

In Freud’s view, religion emerged as a by-product of the psychological distress caused by human emotional conflicts, which in turn result from our general inability to make sense of events such as the beginning of the world without resorting to random, unexplainable forces. Freud maintained that much of religion would become redundant if one were freed from such distress. His idea usefully raises the possibility that behind religion and the beliefs within it there may be hidden psychological motives. Can we really identify mass neurosis with religious belief in a hypothetical supernatural being and all its moral entailments? In this essay I would like to assess the similarities and differences between psychotic and religious (mystical) experiences. I should make clear that this is in no way an essay intended to disprove religion, but only a comparison of the belief mechanisms of the mentally pathological and the religious. Many psychiatrists have been skeptical about whether there are any genuine differences, and indeed there exists no definitive guideline for distinguishing between normal and pathological beliefs in clinical practice. Although at this moment religious belief is exempt from pathology in the DSM-IV, I feel it is at least plausible to posit that there may be no phenomenological difference between the two types of experience. Indeed, many religious beliefs and delusions stem from the same neurologic lesions and anomalous experiences, prompting the possibility that at least some religious beliefs may be pathological. Further, since religious belief is generally seen to exist outside the scientific domain, where the empirical reasoning used in daily life is largely irrelevant, it is easy for logical positivists with a strictly rational perspective to label religious belief as delusional. One may argue that in clinical practice the dimensional characteristics and functional impact of a belief may be more important considerations.

However, I believe that a clearer understanding of the distinction between what is and what is not pathological is crucial to creating a superior foundation for diagnostic measures in the clinic. Thorough elucidation of the problems presented in this essay should reveal some of the most pervasive problems concerning our understanding of what constitutes mental pathology and the flaccidity of our society’s notion of norms.

The core similarity between religious and psychotic experience is that both are reason-refuting phenomena. Citing Søren Kierkegaard, religious belief exists outside the boundaries of reason and defies rationality. Similarly, a lack of rational grounds characterizes psychotic beliefs in deluded individuals. The crux of our problem lies in establishing why the refutation of rational reasoning in some beliefs results in pathological labeling, whilst other such beliefs are seen as religious or sacred. What exactly exempts religious belief from pathology, being just as short of empirically verifiable evidence as any psychotic belief? The fact that religious faith means, by definition, believing in the divine (i.e. God) without logical proof or material evidence does not suffice as an explanation for such exemption, for it remains unclear why the kind of faith an apparently psychotic individual attributes to bizarre but secular notions should be distinguished from religious faith. Of course, a devout Christian might at this point argue that his beliefs do in fact rest on logical reasoning. The Design, Cosmological and Ontological arguments, for example, all attempt to “reason” their way to the truth of particular religious beliefs about God. Many of the most famous arguments of this kind come in the form of analogy, such as William Paley’s tale in which the universe is compared with a man-made watch: when one sees a man-made object which functions well and looks beautiful, one automatically knows that someone skillful must have designed it, and as such the orderly and beautiful universe must also have been designed by an intelligent being, which must be God. Other arguments appeal to the necessity of an uncaused cause, or the absurdity of a world without purpose. However, the abundance of atheists and agnostics signals that such presumably rational arguments still require an ultimate ‘leap of faith’ (ref. Kierkegaard) that would, for example, prompt one to conclude the existence of a divine metaphysical being from mere material evidence. Hence, divine belief still necessitates a certain faith that denies rational reasoning and thus remains hindered by its intangibility and inability to refer to empirical evidence.

Perhaps religious experience and psychosis/neuroticism are indeed phenomenologically similar. If this is true, it would mean that mental pathology amounts to aspects of experience other than the immediate content of the experiences. A person’s conviction, for example, or his preoccupation and its extension, may be taken into account in a sort of dimensional approach to psychotic thinking. Religious believers have pre-existing beliefs that shape the way they interpret extraordinary events, typically as “spiritual” events; these beliefs do not apply to non-believers, who would typically interpret the very same events as insanity. It is questionable whether the mystical/psychotic distinction should hinge on dependence on pre-existing theistic beliefs. This is partly because the reliability of such pre-existing theistic beliefs is itself highly dependent on the number of holders of the particular belief. If there were just one Christian, we would quite likely think that he was insane. But this leads into the discussion of what makes a religion deserve more social respect than a bizarre cult of fewer than twenty members. Most religious people would be reluctant to say that the only difference between cult and religion is size, although those decidedly unaffiliated with religion would be reluctant to give religious faith any identity more honorable than mere irrational preoccupation. The possibility of entire delusional subcultures means that pathology may well be ascribed simply where a belief is not shared by a large enough portion of a population. This, in my opinion, clearly shows how untenable a fully established definition of mental disorder is. If psychosis really amounted to the number of belief-sharers, Nicolaus Copernicus and Galileo Galilei would have been correctly considered insane for postulating a heliocentric rather than geocentric view of the universe. Based on experience, we know that idiosyncratic beliefs only become normalized when they are shared by others. But surely being more epistemically informed cannot be considered pathological just because the majority of the remaining population is deluded about particular facts.

Before we move on, one important point to make is that there are many varieties of psychotic experience and religious experience. I do not aim to address any particular “type”, and, as I have said, I do not wish to disregard religion on the basis that many of its characteristics resemble what some of us consider pathological disorders. Mike Jackson and K.W.M. Fulford, in their work concerning religious belief and pathology, were accused by Marek Marzanski and Mark Bratton of construing “too narrowly the meaning of spiritual experience”. Admittedly, it is in my interest to avoid the same accusation, but for the sake of precision I shall briefly define religious experiences as non-empirical occurrences or perceived supernatural events that change a person, perhaps leading them to be more religiously aware. General testimony seems to show that the experiences are usually personal and come in various forms. They can be “mystical”, where the individual feels a sense of union with the divine; they can come through “prayer”, an experience brought about by meditation and reflection; or via “conversion”, whereby the individual undergoes a life-changing religious experience. At times people describe their experience as “numinous” because they sense the presence of a greater power, yet still at a distance. The fact that religious experiences come in so many forms certainly makes it more complicated to form comparisons with mental disorders, for it is unclear whether all forms of religious experience can be categorized as part of the same phenomenon.

William James conveniently came up with a list of common characteristics of mystical experiences which may facilitate our comparison. The first is ineffability, the most easily recognized characteristic of religious experience: it is simply a feeling beyond words. This can easily be seen as a point of similarity with mental disorders in which patients are incapable of explaining why they feel particular ways, but it can also be dismissed for its apparent lack of informative significance. The second is noetic quality: an individual insight into normally unobtainable truths gained solely through intuition and perception, not intellect. This property makes a true opportunity to learn about God rather difficult to imagine, because the normal way of learning about something involves both perception and intellect. As we discussed regarding Kierkegaardian faith, there appears to be an element of irrationality here. The third characteristic is transience, where the significance and effects of the experience are out of proportion to its duration. For example, the religious experience may simply be an abrupt “vision” of God that lasted three seconds but with an impact so powerful that the person becomes a faithful Christian for life. This can be compared to the disproportionate reactions some patients have to what they take to be evidence for particular false beliefs. The last characteristic is passivity, meaning that during the experience the person feels they have lost control to a more powerful being. At times different personalities start speaking in different voices or languages. In cases like this, devout Christian fundamentalists may see “demonic possession”, but clearly, this would only be a case of schizophrenia in an atheist’s view. It becomes evident that even if the content of a mystical experience and a psychotic experience were the same, we describe them differently if they are experienced through different cultural meanings. For example, a person from a community of atheists or agnostics would not, upon witnessing a “miracle” such as the stigmata, insist that there must exist a supernatural being conjuring up the occurrence.

William P. Alston makes an important distinction between two groups’ mentalities by distinguishing people’s epistemic practices. Atheistic or agnostic people base their knowledge upon normal methods of perceiving the world and therefore have normal “perceptual practices”, whereas religious people (Christians) have Christian epistemic practices: they base their knowledge on beliefs that are more difficult to justify, such as the existence of an omnibenevolent, omnipotent and omniscient God. In the design argument, for example, they automatically designate God as the creator of the world’s beautiful and intricate details. In other cases (for example the Toronto Blessing, where members of a revival service spontaneously began laughing and rolling around on the floor), religious people interpret a scientifically unexplained event as a sign of God’s presence. Whilst non-religious people will be open to empirically verifiable explanations of extraordinary events, religious people identify God as the only possible cause. J.C. Popa, in his article entitled Psychoanalysis and Religion, explained that the religious tendency to believe in an anthropomorphic, almighty God stems from the human child’s feelings towards his own parents: having experienced vulnerability in society, children seek refuge with their parents, and as they grow older the aspects of life in which refuge is required develop so as to necessitate divine protection.

“Adults’ extended knowledge on nature and society does not shield them from anxiety; on the contrary: the more they know, the more they can realize the void of their knowledge. Hence the need for divine protection and the restoration of infantile relationships – endowed with religious significance – with parental imago” -Popa

An important implication of Popa’s explanation of religious belief is that there may be hidden psychological motives behind religious beliefs, motives that lead the human being to abandon reason. The regression of an adult to the childhood experience of a powerful father, and to the expectations that accompanied it, can be seen as neurotic. Of course, it remains debatable whether the religious believer’s radical faith in a single explanation for every scientifically unexplained event amounts to a form of delusion or merely to an underlying human desire to bring conclusiveness to every unresolved question, but awareness of the possible reasons people turn to divine conclusions is critical to our assessment of whether a person is religiously devout or mentally ill. One may argue that, after being able to “seek refuge” with a divine being, a person achieves personal growth over time, and that as such a religious belief cannot be pathological. One suggestion, posited by Jackson and Fulford in Spiritual Experience and Psychopathology, was to base pathology exclusively on the positive and negative consequences of the experience. In this consequentialist view, an experience followed by a positive outcome, what they call an “action enhancement”, would be considered spiritual, whilst a purely pathological psychotic experience would be one that leads to a negative outcome such as depression or destructive behavior. This view was, however, rejected by Marek Marzanski and Mark Bratton, who point out the lack of clarity in our understanding of what actually holds between negative medical values expressed in failure of action and spiritual values expressed in action enhancement.
Further, as in most cases of “value” talk, there is much vagueness in what we consider “action enhancement”; it seems impossible to give an objective answer without begging further questions, such as whether “good” should be defined solely as what is “beneficial”. And what of cases in which delusional concepts do not interfere with personal enhancement? An example is Simon’s case, as described by Fulford in Spiritual Experience and Psychopathology:

Simon (40) was a senior black American professional from a middle-class Baptist family who claimed to have psychic experiences. At one point he was troubled because a group of his colleagues brought a lawsuit against him. He responded to this crisis by praying at a small altar which he set up in his front room. He found that the wax from his candles dripped onto his bible, covering particular letters. Although the marked words had no explicit meaning, Simon felt that there was special significance in the occurrence – it was a direct communication from God. He continued to function well throughout the lawsuit and eventually became very successful.

In Fulford’s words, the belief clearly met the PSE definition of a “primary delusion” in the sense that it involved no element of rational reasoning and was based solely on sensory experience. It also involved Simon being “suddenly” convinced that there was divine significance in an arbitrary event. Fulford believed that medical students would have been inclined to diagnose Simon with schizophrenia, yet the fact that many of us hesitate to pathologize his beliefs once again exhibits the difficulty of systematizing mental illness.

Nevertheless, Freud, in the course of his clinical work, observed similarities between the behaviors of obsessional patients and religious adepts. Much truth can still be found in his claim today, and in the following I present a list of common similarities that Freud may have been referring to:

1. Similarity to Neurosis – Neurosis is generally defined as a functional disorder in which feelings of anxiety, obsessional thinking, compulsive acts and physical complaints without objective evidence of disease dominate one’s personality. This can easily be paralleled with the religious believer’s anxiety about their sins and obsession with what God desires. Some fundamentalist Muslims, for example, also show signs of compulsivity through excessive praying.

2. Similarity to Psychosis – Psychosis is a mental disorder characterized by symptoms such as delusions or hallucinations that indicate impaired contact with reality. Impaired contact with reality and delusions seem reminiscent of some biblical events. For example, Isaiah, a court prophet from Jerusalem (c. 740 BC), “saw” the Lord sitting upon a throne in the air, saying, “Go, and say to this people…”. A medical student would be quick to dismiss this sight as a hallucination.

3. Similarity to Delusion – Delusion is a fixed false belief that is resistant to reason or to confrontation with actual fact. In an atheist’s view, religious people are certainly deluded; this ties in with Mele’s deflationary theory of self-deception. The religious believer’s delusion is often fueled by phenomena such as confirmation bias (as seen in the design and cosmological arguments), where we are “motivationally biased” by psychological desires or anxiety to bring conclusiveness to our state of being.

4. Similarity to Schizophrenia – Schizophrenia is a state that is very often socially dysfunctional, typified by the co-existence of contradictory or incompatible elements and exacerbated by abnormalities in the perception or expression of reality. It most commonly manifests as auditory hallucination, paranoia, or disorganized speech and thinking. Atheistic communities often see the Christian belief in an all-loving God, whom one must worship in order not to be sent to burn in hell for eternity, as highly contradictory, and the perception of metaphysical, divine higher beings has often been considered a disturbed view of the physical world.

5. Similarity to Cognitive Deficit – This harsh comparison to a failure of thinking can be seen to manifest in the Christian’s resistance to any evidence that might shake their beliefs; to an extent we may see an excessive determination to preserve one’s beliefs at the expense of dismissing reason. This is a clear example of Mele’s selective evidence-gathering and selective focusing or attending: religious believers tend to count the hits and ignore the misses in their testimonies for the existence of God or the effectiveness of prayer. A person’s response to Nietzsche’s remark, “if you wish to strive for peace of soul and pleasure, then believe; if you wish to be a devotee of truth, then inquire”, might say a lot about this matter. Many Christians, upon being presented with this quote, heed only the first part and continue to believe in God whilst claiming the truth of God’s existence on the grounds that they wish to strive for peace and pleasure. The essence of the phrase, that belief diverges from truth, is almost always ignored.

6. Similarity to Grandiose Delusion – The grandiose delusion is a delusion of an inflated special relationship with God, of his power, knowledge, identity and great benevolence being tailored to a person. It is also unclear why religious believers, whilst informed about the finite nature of their being, believe they can comprehend any aspect of the infinite divine.

I do not intend to mount an ad hominem attack on theists, but one significant consequence of the similarities between religious belief and mental disorder is that they can complicate pathology. Although religious belief is not in practice considered pathological, since it does not entail maladaptive behavior or suffering, in an environment where religious beliefs are taken to be true many symptoms, delusions of grandeur for example, may be encouraged by religion, and this might do a disservice to someone who is actually suffering from a mental disorder. The problem lies in the fact that religious people may be more prone to fail to address disorders as a result of ignorance and superstition.

To a certain extent, I feel that (perhaps) the one big question that could tell us whether religious experiences are pathological is whether God REALLY exists. For if he did, then obviously there could be nothing pathological about believing what is true, even if it means believing in a magical being born of a virgin, who parted the sea and turned water into wine.

Ultimately, I am uncertain whether medical terminology is better suited than theological language to describing psychotic experiences, but I believe that closer examination of the similarities between religious experience and mental disorder will help us build a firmer foundation for diagnosis.

 

Bibliography

Brett, Caroline, “Spiritual Experience and Psychopathology: Dichotomy or Interaction?”, Philosophy, Psychiatry, & Psychology, Volume 9, Number 4, December 2002, pp. 373–380.

Jackson & Fulford, “Spiritual Experience and Psychopathology”, Philosophy, Psychiatry, & Psychology, Volume 4, Number 1, March 1997, pp. 41–65.

Marzanski, Marek & Bratton, Mark, “Psychopathological Symptoms and Religious Experience: A Critique of Jackson and Fulford”, Philosophy, Psychiatry, & Psychology, Volume 9, Number 4, December 2002, pp. 359–371.

Mele, Alfred, “Self-Deception and Delusions”, 2006.

Existential Criminology


This one’s not from me, from my buddy @ Cambs

By identifying the pivotal role played by emotional and sensual factors in the execution of a criminal event, American sociologist Jack Katz added a new dimension to the criminological debate. Investigation of the relationship between emotion and deviant behaviour has been largely unattended by criminologists, with many of the most influential perspectives focussing on the objective circumstances of the transgressor rather than his or her subjective experience. It is a curious omission: even the man on the Clapham omnibus is likely to consider anger, envy, pride or the sheer-thrill-of-it as motivation enough to commit crime.

Established theories of crime and deviance have evolved from the Enlightenment’s obsessive focus on man’s rationality, or lack of it (see, for example, classical criminology, control theory, rational choice theory, even situational crime prevention theory); until recently, the literature has concentrated on man’s reason, or focussed on background qualifications such as class, environment, gender and race, at the expense of determining the role played by his emotional self. The majority of the debate has centred on the causes, consequences and control of crime, but few insights have been offered into the forces that predominate at the moment of a crime’s commission. “Thoughts and ideas appear… as the most important and potent aspect of the way men steer themselves. And the unconscious impulses, the whole field of drive and affect structures, remain more or less in the dark.” (Elias, N. 1982, p.284)

Jack Katz argues that “inquiry should start with the foreground of crime, i.e. with a commitment to describe what always and uniquely occurs in the construction of different forms of deviant conduct.” (Katz 2002 p25) His theory is that these foreground factors, the visceral aspects of committing crime, can be used to provide a rational, alternative explanation for deviant acts, by showing that the appeal to emotional needs exerts a stronger force than the restraint of social laws. With its focus on the subjective, individualistic nature of behaviour, this essay looks at the existential being’s need for self-affirmation to make a meaningful interpretation of his life. Self-affirmation has a seductive power, but this essay will argue that the need to reinforce one’s ontological security over-arches the issue of whether an act is criminal or not.

The Project of Self-Affirmation

For centuries, it has been understood that social behaviour can be ordered by the regulation of individual emotions. But, as Elias describes, time and the civilizing process have wrought far-reaching effects on the way in which an individual might handle himself emotionally. “It is they, these relationships within man between the drives and affects controlled and the built-in controlling agencies, whose structure changes in the course of a civilizing process, in accordance with the changing structure of the relationships between individual human beings, in society at large. In the course of this process, to put it briefly and all too simply, ‘consciousness’ becomes less permeable by drives, and drives become less permeable by ‘consciousness’.” (Elias, N. 1982, p.286) To understand the modern relationship between an individual’s rational consciousness and his emotional drives and affects, we turn to the existentialist movement. Fathered by Søren Kierkegaard and nurtured by Friedrich Nietzsche, the existentialist doctrine placed its emphasis on the passions and anxieties of the individual man, and rejected the objective and mechanistic systems of thought that had become prevalent by the nineteenth century. Existentialism was a philosophy of subjectivity, a psychology of the spirit. It was a backlash against the predominant philosophers of the Enlightenment, who gave little consideration to the subject of the individual aside from the matter of his reason and the role that passion and imagination might play in emancipating reason’s powers.

Kierkegaard was the first to articulate that objectified knowledge was no answer to the real questions of an individual’s life. “Their resolutions emerge through conflicts and tumults in the soul, anxieties, agonies, perilous adventures of faith into unknown territories.” (Sartre 1966 Mairet p6) Truth is subjectivity, argued Kierkegaard. And, in the hands of the existentialists, individual man was given supreme responsibility for his own existence, a responsibility taken in past eras by gods, kings and science. The primary concern of the existentialist school was to explain how an individual consciousness makes sense of existence; in other words, to explain what it is like to be a human being. It was an aim shared by the phenomenologists, a contemporaneous discipline whose most famous son was Martin Heidegger, who explored the affective and emotive aspects of the mind by looking at its perceptual faculties. Both philosophies started from the same principles: that the world is not a physical entity separate from a man, but rather that the man’s reality (‘Dasein’) is his interaction with the world and the creation of individual perspectives that make use of what he has found in his situation. The true reality is interdependence. The world does not exist outside the man (a denial of much Enlightenment philosophy) and man does not exist outside of his relationship with the world (reflecting the inherent atheism of key authors such as Nietzsche, Heidegger and Sartre).

The existential atheist Jean-Paul Sartre pronounced that man’s existence precedes and commands his essence, distinguishing between the fact of being and the nature of being, or way of life: “man first of all exists, encounters himself, surges up in the world and defines himself afterwards.” (Sartre 1966 p28) Because man is a free agent, and his will is exercised without reference to universal standards, he can choose his essence: whether he will be a submissive or a dominant self, a generous or a miserly self. Because essence follows existence, it is clear that each and every man will be what he wishes himself to be. In other words, there can be no universal definition of a man. Or, as Sartre puts it, “there is no human nature, because there is no God to have a conception of it. Man simply is… Man is nothing else but that which he makes of himself.” (Sartre 1966 p28) This emphatically anti-determinist philosophy places the burden of responsibility squarely on each individual’s shoulders. “He cannot find anything to depend upon either within or outside himself. He discovers forthwith, that he is without excuse.” (Sartre 1966 p34) There is no predetermined human nature to blame for events; rather, each man must choose his own course of action and, in so doing, he chooses an image of what he believes all men should be. Not only is man responsible for his own image, he is indirectly responsible for fashioning the image of all other men. This is a frightening responsibility, and Sartre admits that many will suffer anxiety as a result of having to bear it. But no one can hide from the responsibility, for making no choice and taking no action is in itself a choice and an action.

 

As an alternative to anxiety, or angst, some men will try to convince themselves that they had no choice, that they were not responsible for their actions. By blaming the environment, their culture, their situation or their fellow men, they refuse to acknowledge the terrible dilemma of existence. This Sartre calls an act of bad faith; these men he calls cowards. Acting in good faith is acting in the name of freedom. Nietzsche and Sartre agreed that man must create his own values and meanings, but that this will be done by action rather than justified by reason; as a result, truth and morality will be determined more by personal experience and action than by an objective rationality. Action must, of course, have a goal, and to find one, man will invent purposes or ‘projects’, “which will themselves confer meaning upon himself and the world of objects all meaningless otherwise and in themselves. There is indeed no reason why a man should do this, and he gets nothing by it except the authentic knowledge that he exists; but that precisely is his great, his transcendent need and desire.” (Sartre 1966 Mairet p14) “The individual, then, may experience his own being as real, alive, whole; as differentiated from the rest of the world in ordinary circumstances so clearly that his identity and autonomy are never in question”, writes the celebrated psychiatrist R.D. Laing in his existential analysis of mental illness. Those who do not have such ontological security may find that the ordinary circumstances of everyday life “constitute a continual and deadly threat.” (Laing 1960 p43-44) In the most threatened, psychoses may develop.
Other risks include a perceived loss of one’s own subjectivity, and feeling oneself to be “no more than a thing in the world of the other… without any being for oneself.” (Laing 1960 p49) Laing argues that any individual who cannot take the identity of himself and others for granted “has to become absorbed in contriving ways of trying to be real.” In such a way, the development and maintenance of self-esteem can itself be represented as an existential project.

Existential Reading of Crime

If self-affirmation is the project, how far can acts of crime fulfil the brief? If crime is a social construct then, strictly speaking, it has no value in the existentialist perspective. If crime is a social construct, surely it is an act of bad faith to claim that one’s behaviour was constrained by it? Self-affirmation is achieved by acting authentically, and, outside of the social construct, there is equal value in choosing to have children or become an MP as in choosing to join a gang or pose as a ‘badass’. Conventional criminological theories, with their emphasis on objective causal forces, have generally failed to find a consistent explanation for crime.

The positivist approach in all its guises has proposed that criminals are formed by genetic and hereditary conditions, environmental and cultural influences, or psychological aberrations. It has not explained why some criminals exhibit none of these factors in their background, nor why many who fit the criminogenic profile do not in fact commit crime. Nor can it provide a convincing argument for areas of criminal behaviour such as non-materialistic crime, white-collar crime, or war crimes. Where objectivity has failed, maybe subjectivity has something to offer. Jack Katz directs us to look at “sensual dynamics”, “the ontological validity of passion” and the “genuine experiential creativity of crime” (Katz, J. 1988, pp 6-8). He writes of the “sneaky thrills” experienced by the casual shoplifter. The objective is not the acquisition of an item, but the taking of it. The knowledge of society’s response to the project invests it with significance. Each stage of the enterprise poses a challenge: the anticipation of the deviant act, the art of not drawing attention to oneself, the mastery of the technical wherewithal to acquire the object, the skill involved in getting through the checkout undetected. The many ordinary interactions are made extraordinary by the omission of the one: payment. In some cases, the object itself comes to life, acquiring an almost magical and magnetic power, pulling the shoplifter towards it: “a conventional object… becomes fascinating, seductively drawing the would-be shoplifter to it, only and just because she is playing with imposing a deviant project on the world.” (Katz, J. 1988, p.58) Katz makes a number of interpretations. Firstly, that committing a criminal act like shoplifting or vandalism “tests one’s ability to bound the authentic morality of the self from other’s perceptions” (Katz, J. 1988, p.66). The criminal learns that he can cross the boundaries into someone else’s world, take what he wants and get away with it.
In some ways, it can be seen as a game; there are two sides, there is always a winner and a loser. The ludic metaphor, as Katz terms it, is familiar from accounts of white-collar financial fraud, where offenders refer to their monetary gains as being unimportant except as a means of keeping score. There’s a strong sexual theme to the commission of the crime: “an element of seduction turning into irrational compulsion” (Katz, J. 1988, p.71) heightened by the rush of excitement at the moment of the act and the climax of getting away with it.

Finally there is the powerful and liberating knowledge that one has successfully violated and transcended moral constraint. Katz’s phenomenological analysis is portable across a wide range of deviant behaviours, across gender, social class and race. “The excitement, you know, that’s the part I like: I’m not the sort goes round shooting at random anyone I see. All of my killings they’ve all had a purpose… Firstly, I don’t have to justify myself, there’s no need. I guess the way I’d put it would be to say it’s like we are at war, me and society I mean. I see myself as a law enforcement officer: only my laws, not yours…I’ve chosen [a way] which I thoroughly enjoy; it’s plotting and scheming and working out a strategy, then putting it into action and seeing if it works…I’ve been successful a hundred times more often than I’ve ever been caught for, thas certainly a fact. We’re cleverer than we’re given credit for, people like me, we certainly are.” (Parker 1999 p 90) Another of Tony Parker’s interview subjects describes the search for artistic creativity in a more sedate and conventional setting, which hints at the likely script in high-stakes white-collar crime. “It must be a job with a certain amount of standing and prestige… in addition it must provide me with the opportunity to exercise my brains and ingenuity so that I can consistently fiddle for myself another two or three pounds a week on top of my salary…I want to be able to give expression to this little bent I have, this little quirk or twist that gives me the satisfaction of knowing that just in a minor and unimportant way I’m being cleverer than the accountants or the auditors. This is what gives spice to life as far as I’m concerned.” (Parker 1999 p109) Man’s project of self-affirmation determines how he will behave and appear to himself and others. He may face choices that prove uncontroversial in the eyes of society. Or he may choose to show open hostility to, and disavowal of, the societal norms. 
“Rebellion may be a clue, then, to the source of meaning for contemporary social man. Given the conscious realization of the inevitability of contradictions in his life, he can transcend what would, under these conditions, be a meaningless existence by rebelling against these very conditions.” (Goodwin, A. 1976 , p.838)

Although not all rebellion involves criminal behaviour, and not all crime is rebellious, the existential criminal can still be seen to be exerting what Nietzsche called the ‘will to power’, rejecting conventions and taking actions that may not be grounded in reason, but are personally authentic, given his situation. “From very young, my sexual orientation and desires have been only for young boys, and because I am what I am and who I am, it seems natural and normal when I express that in a physical way. But no one accepts that: yet I can’t feel any different, even if I wanted to, which I don’t. It’s part of my whole personality and nature, and a very important and solid part…society doesn’t agree with me…And, although I know it, I don’t see how I’m ever going to be able to do anything about it, because it would be like betraying myself.” (Parker 1999 p172-3)

The Moral Power of Emotions

Katz moves the debate on from the material and physical to discuss how an individual can rise to a challenge to his moral existence. “The criminal action itself is fundamentally an attempt to transcend a moral challenge faced by the criminal in the immediate situation.” (Vold 1998 p225) Although this is about as watertight as the positivist approach (not everybody faces down a moral challenge by resorting to criminal activity), some existential and phenomenological writers argue that nearly all crime can be seen as a response to a grave threat to the emotions. “The emotions of modernism – anxiety, alienation, self-destruction, radical isolation, anomie, private revolt, madness, hysteria, and neurosis – are not able to be subsumed into self-control.” (Morrison 1997 p380)

Matza talks of the moral nature of the interaction, “infraction”, and Morrison of the moral emotions that are at the centre of the crime experience. Katz theorises about humiliation as a sort of electrical current that runs through all deviant behaviour. In his chapter on ‘Righteous Slaughter’ (Katz, J. 1988, p.22), he links the opposing emotions of humiliation and rage using the essential stepping stone of righteousness: “The would-be assailant needs only the most fleeting encounter with the principle of moral reflection to move from humiliation to rage.” (p.24) Being ridiculed, demeaned, demoted, degraded: the state of humiliation makes one painfully aware of the exigencies of one’s existence. Righteousness and rage make one selectively blind, indifferent to the historical and the future moments while completely focussed on the here-and-now and the response required for moral self-defence, even to the extent of deadly assault. Transcending humiliation with righteous rage creates a moral framework that is completely coherent within its situational context: sometimes this is understood in wider society, as has become the case with women who attack their partners after years of domestic violence; more often it is not, as seen in Katz’s opening example of a father who beat his child to death for crying, which he saw as “purposive and offensive”. (Katz, J. 1988, p.12)

Humiliation is also highlighted by Keir Sothcott in an article on war atrocity (2002). Lt. Calley led the American platoon that infamously laid waste to the Vietnamese village of My Lai in 1968. Clean-cut GIs were subsequently accused of uncontrollable mayhem and violence, rape and murder of civilians, and Lt. Calley was put on trial for murder. Sothcott describes Calley’s response to his situation in Vietnam: “From imaginary fear and actual experience Calley concluded that the Vietnamese were mocking him, deriding his supposed omnipotence and refusing to comply with his romantic image of heroic war. …For Calley, this realisation that Vietnam was insidiously dangerous, became a deeply humiliating experience and a direct attack upon his self.” (Sothcott 2002 p 30) Far from responding to him and his men as saviours of the situation, heroes of the hour, the Vietnamese showed no gratitude to the Americans. This, in addition to the very real hostility and actual threat, contrived a context for Calley that, combined with the moral imperative of his orders from above to ‘go in and kill them all’, unleashed the sense of righteousness that transformed his humiliation into a righteous slaughter.

The robber who adds a sawn-off shotgun to his bag, the United States government led by George W. Bush which lobbies for war against Iraq: both are engaged in “a transcendent project to exploit the ultimate symbolic value of force to show that one ‘means it’.” (Katz, J. 1988, p.321) This becomes the primary goal, rather than robbing the bank or maintaining manageable relationships in the global supply of oil. As with the shoplifter, who more often than not pays for some goods at the checkout while stealing others, the project is not characterised by material gain. For the United States administration, as with Katz’s “badass” and “hardman”, transcendence can be achieved purely by having a presence, by being completely intimidating. “The hardman triumphs, after all, by inducing others to calculate the costs and benefits.” (Katz, J. 1988, p.235)

Criminological Theories

The impact of emotions is referenced in other dimensions of the criminal justice cycle. In victimology, the dominant affect is acknowledged in the discourse; we talk of the Fear of Crime, even if the study of this fear is largely actuarial statistics. And perhaps the greatest of all the emotional drives, revenge, is the principal player in the theatre of punishment. Anger and vengeance have a high degree of social acceptance when they are channelled into procedures such as the criminal justice system. “Emotions are the most toxic substance known to man”, said Dennis Nilsen (Masters 1995 p184). Emotions are certainly the most toxic substance known to criminologists, who have studiously avoided incorporating the seductions and repulsions of deviant behaviour into their work, although David Matza’s theory of techniques of neutralization can be compared to the existentialist vision of bad faith, in the criminal’s desire to hide his belief in conventional morality behind excuses rather than challenge it and take responsibility for his actions.
Symbolic interactionism looks at the self-image of the criminal, and how he perceives the relationship between himself and other members and institutions of society.

Labelling theorists also concern themselves with image, but in the context of how society will apply it within the formal and informal processes of social control. The solutions proposed by many criminologists appeal to man’s reason and/or self-control. To me, it is clear that most people have both, that there is a symbiotic relationship between the two, and that it is this very symbiosis that dictates when either reason or self-control, or both, should be used. As we read Katz’s accounts of righteous slaughter, offending behaviour can be seen contextually as not only moral but rational. To cut the mustard as a ‘badass’ clearly requires as much personal self-control as being a stockbroker. So the question is why the seductive appeal of illegitimate behaviours can be resisted by some people but not by others. Given an emotional cocktail of humiliation, moral righteousness and anger, some men turn into killers but others do not. The existentialist perspective of self-affirmation provides an understanding of, but maybe not an explanation for, criminal behaviour. Taking a phenomenological perspective, one can look behind the arbitrary legal definitions of ‘crime’ to see positive qualities of imaginativeness, sensuality, self-esteem and creativity being exercised in certain behaviours. We would accept the desire for a rush of excitement, the thrill of courting danger, as valid motivation for bungee-jumping, but not for public affray. We do not approve of the fact that someone injects heroin because he wants to feel good, yet it is highly fashionable to inject botulinum toxin for cosmetic benefit.

The matrix of licit and illicit behaviour becomes even more complex as one moves the context across time or space: activities which once were criminal are now legitimate, and behaviour that is proscribed in one society is accepted in another. Clearly the conditions in which people live will determine what possibilities they have, and this is perhaps the point when, like the orchestra coming in behind the soloist, the body of mainstream criminology can further the existential explanation, for example by examining the ways in which cultural, environmental and social factors might shape choices. In conclusion, a close reading of the work of Katz and other existentialist and phenomenologist writers suggests to me that crime is at its most seductive for individuals who generate their self-esteem from quick fixes: short-term projects with a rapid turnaround where stimulation and instant gratification are the priorities; the clue is present in the language we use – ‘taking control of the moment’. Projects of self-affirmation that can be sustained over a longer term can sidestep the technical transgression that is crime. “The most fundamental characteristic of man and consciousness is his ability to go beyond his situation. He is never identical with it, but rather exists as a relation to it. Thus he determines how he will live it and what its meaning is to be; he is not determined by it.” (Sartre 1963 Barnes pxix) Could this equally well be read as an existential commentary on how futile it is to try to contain the self within the artificial boundaries of a social construct like crime?
