
How should statistical evidence provided by empirical psychology be interpreted in assessment of arguments in the rationality debate?

 

1: Introduction

 

In a 1968 paper entitled Reasoning About a Rule, Wason devised an experiment known as the Wason selection task in order to investigate Karl Popper’s idea that all of science is grounded in hypothetico-deductive reasoning, in which seeking counter-examples (i.e. evidence contradicting a particular proposition) is a precondition. Wason’s project examined the plausibility of Popper’s claim: is “learning” just the establishment of hypotheses and a perpetual search for contradictory evidence? The selection task tested whether subjects are able to seek facts that violate a proposition, particularly conditional hypotheses of the form “If P then Q”. Astonishingly (!), Wason’s findings appeared to suggest that humans are in fact incapable of completing a simple logic task.

 

Since Wason’s tragic discovery of our fundamental incapability, the literature on rationality has formed a basis for challenging the basic reasoning abilities of human beings. In the last few decades of the twentieth century, the debate was distinctly split between proponents of the “heuristics and biases” tradition and those of the “evolutionary rationality” tradition. The former interprets evidence from tests such as Wason’s selection task as conclusive proof that Homo sapiens are ultimately irrational, whilst the latter takes a much more optimistic approach to human rationality. In recent years, much has been written to scrutinize and illuminate the foundations of both camps from a more objective, external perspective. It has become apparent that the rationality debate is, to a point, more complex than either camp assumed.

 

The goal of this essay is to review the matters surrounding the rationality debate and to posit that, of the two camps, evolutionary rationality appears to be the more constructive approach to the interpretation of empirical psychological evidence. Nonetheless, we must be aware, as Stanovich and West have pointed out, that as a philosophical position the evolutionary rationality approach does not escape one fundamental flaw: it neither clarifies nor resolves the problem of whether rationality should be seen as a process resulting in optimal results for the gene or for the individual. Towards the end of the essay, I shall elucidate Stanovich and West’s conclusion and the ramifications it may have for what lies ahead for the rationality debate.

 

2: The Evidence

 

To begin with, I shall present the evidence accumulated by empirical psychology in contemporary experiments, which will be interpreted and elucidated towards the end of this paper. I shall first introduce three experiments which have displayed general consistency of results when repeated on different human subjects. All three are presented as evidence for specific fallacies that most human beings are disposed to commit in rationality tests akin to the paradigmatic Wason selection task. Such rationality tests can be devised in various ways, but are generally equivalent to the following:

 

A human subject is presented with four cards on a table, with upper faces that display ‘A’, ‘D’, ‘3’ and ‘8’. He is told that on each card a letter is printed on one side and a number on the other. The subject is next provided with a piece of paper, which reads “If a card has a vowel on one side, then it has an even number on the other side”. The subject’s task is to determine which cards need to be turned over in order to establish the truth of the statement conclusively.

 

According to the laws of classical logic, because the proposition is of the form P → Q, the sole contingency that could refute the statement is a card of the form P ∧ ¬Q, meaning a card with a vowel on one face but an odd number on the other. Hence, only the cards showing either a vowel or an odd number need to be checked – in this case the “A” and “3” cards. Be that as it may, repeated experiments have shown that, in general, no more than 10% of people arrive at the right answer on their first attempt at the puzzle. Most subjects select either only the “A” card, or both the “A” and the “8”.
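The selection rule can be made explicit in code. The following is a minimal sketch (the function names are my own illustrations, not from the source): a card must be turned over exactly when its hidden side could falsify the conditional.

```python
# Sketch of the Wason selection logic for the rule:
# "if a card has a vowel on one side, it has an even number on the other".
# Each card shows one face; the other face is hidden.

def is_vowel(face):
    return face in "AEIOU"

def is_even_digit(face):
    return face.isdigit() and int(face) % 2 == 0

def must_turn(visible):
    """A card must be turned iff its hidden side could falsify the rule.
    A visible vowel may hide an odd number; a visible odd number may
    hide a vowel. Consonants and even numbers can never falsify it."""
    if is_vowel(visible):
        return True                                   # hidden side might be odd
    if visible.isdigit() and not is_even_digit(visible):
        return True                                   # hidden side might be a vowel
    return False

cards = ["A", "D", "3", "8"]
print([c for c in cards if must_turn(c)])             # ['A', '3']
```

Note that the “8” card drops out: whatever letter its hidden side carries, the conditional cannot be violated, which is exactly the point most subjects miss.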

 

The second example is presented by Stanovich and West in their 2003 paper entitled Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. The example is known as the Linda Problem:

 

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Please rank the following statements by their probability, using 1 for the most probable and 8 for the least probable.

a. Linda is a teacher in an elementary school

b. Linda works in a bookstore and takes Yoga classes

c. Linda is active in the feminist movement

d. Linda is a psychiatric social worker

e. Linda is a member of the League of Women Voters

f. Linda is a bank teller

g. Linda is an insurance salesperson

h. Linda is a bank teller and is active in the feminist movement

(Stanovich and West 2003: 173-174)

 

 

Most participants in this experiment ranked statement f below statement h. According to the laws of classical logic, however, this cannot be correct, because statement h is a conjunction of the form A ∧ B, which by the laws of probability theory cannot have a greater probability of being true than either A or B independently. Hence, although most participants rank statement f below statement h, it is, strictly speaking, incorrect to posit that statement f is any less probable than statement h. This finding has been interpreted to show that human subjects are prone to instances of faulty reasoning, in this case what is labelled the conjunction fallacy.
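The conjunction rule itself is easy to verify numerically. Below is a toy simulation (the probabilities 0.1 and 0.4 are arbitrary assumptions for illustration, not from the source) showing that the joint event can never be more probable than either conjunct alone:

```python
# Toy illustration of the conjunction rule: among any set of cases,
# those satisfying "bank teller AND feminist" are a subset of those
# satisfying "bank teller", so the conjunction cannot be more probable.

import random

random.seed(0)
population = [
    {"teller": random.random() < 0.1, "feminist": random.random() < 0.4}
    for _ in range(100_000)
]

p_teller = sum(p["teller"] for p in population) / len(population)
p_both = sum(p["teller"] and p["feminist"] for p in population) / len(population)

assert p_both <= p_teller   # holds for any two events whatsoever
print(f"P(teller) = {p_teller:.3f}, P(teller and feminist) = {p_both:.3f}")
```

The inequality holds regardless of how the two traits are correlated, which is why ranking h above f violates probability theory no matter what one believes about Linda.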

 


 

The third example, pertaining to the human analysis of statistical data, is found in Tversky and Kahneman’s 1982 paper Evidential Impact of Base Rates. Participants were presented with the following question:

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

(Casscells, Schoenberger and Grayboys 1978: 999, cited in Tversky and Kahneman 1982: 154)

 

Presuming that when a person is in fact a carrier of the disease the test will always be positive, the right answer is 2%, as simple statistical calculation shows: approximately 51 out of every 1000 people will test positive, and amongst these only about 1 will in fact be a carrier; 1 as a percentage of 51 is roughly 2%. However, amongst the original participants of this test, who were staff and students at Harvard Medical School,

 [t]he most common response, given by almost half of the participants, was 95%. The average answer was 56%, and only 11 participants [out of 60] gave the appropriate response of 2%, assuming the test correctly diagnoses every person who has the disease.

(Tversky and Kahneman 1982: 154)

Likewise, this type of test has been replicated many times with analogous results. The reasoning exercised by those whose answers are substantially off is known as the base-rate fallacy. These subjects neglect the fact that out of 1000 people only 1 would be a carrier of the disease to begin with, so the aggregate number of false positives drastically outweighs the correctly identified carriers.
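The 2% figure follows from a direct base-rate calculation. A short sketch, assuming perfect sensitivity just as the text does:

```python
# Base-rate calculation behind the 2% answer.
# Assumption (as in the text): every true carrier tests positive.

prevalence = 1 / 1000        # 1 carrier per 1000 people
false_positive_rate = 0.05   # 5% of non-carriers test positive
sensitivity = 1.0            # assumed: carriers always test positive

# Total probability of a positive result: true positives + false positives.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate

# Bayes' rule: probability of disease given a positive result.
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"P(positive) ≈ {p_positive:.4f}")
print(f"P(disease | positive) ≈ {p_disease_given_positive:.3f}")   # ≈ 0.020
```

About 51 in 1000 test positive (1 true carrier plus roughly 50 false positives), and 1/51 ≈ 2%, far from the intuitive answer of 95%.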

 

Thus we have been presented with three paradigmatic examples of prevalent human violation of the laws of classical logic, probability theory, and statistical theory. How we should interpret such violations depends on our general understanding of human reasoning, and on our understanding of the said classical theories themselves. We shall proceed to elucidate human reasoning in the next section, and the classical theories thereafter.

 

3: The Rationality Debate

 

In recent years, the rationality debate has seen various lines of thought emerge, advocating different strategies for interpreting the types of evidence against human rationality presented in the previous section. The two major groups established were the ‘heuristics and biases’ (H&B) and ‘evolutionary rationality’ traditions. Generally speaking, proponents of the H&B tradition interpret the said evidence pessimistically with respect to human rationality, whilst evolutionary rationalists leave much more room for affirming it. In what follows we analyze the argumentative thrust of each tradition.

 

3.1: Heuristics and Biases

 

A heuristic is a general rule of thumb: a cognitive strategy that people employ to make decisions in the face of data overload in problems of a common form. For example, a girl might use the heuristic “a boyfriend suddenly being extra caring means he is cheating” when deciding whether or not to dump him. A partygoer may choose which club to enter based on the heuristic that “the club with the longest queue outside must be the best”. Nonetheless, it is evident that such heuristics do not always work effectively. Often, heuristics entail systematic errors that can be experimentally isolated, hence the label ‘bias’. Biases are the assumptions built into a system that are taken for granted as part of the system’s operation.

 

The H&B camp of the rationality debate, as pioneered by Kahneman and Tversky, postulates that human subjects are prone to perform poorly in rationality tests because they have only a limited set of heuristics through which they can bring logical concepts to bear on problems of rationality. Additionally, human subjects innately hold biases that accommodate and assist the heuristics that are used. The broad idea is that, most of the time, such heuristics are too blunt to be depended on in the rationality tests given by empirical psychologists, and they often result in faulty reasoning in daily life.

 

Advocates of the H&B tradition postulate that the rationality underlying our reasoning in daily life (the form of reasoning that the experimenters in the second section were attempting to examine) diverges from the rationality prescribed by pure logic. Kahneman and Tversky’s extensive empirical work was ultimately aimed at demonstrating that people behave irrationally in many contexts involving decision making under uncertainty and statistical thinking. This happens because when human subjects are presented with a problem, they are psychologically disposed to draw on imperfect heuristics under particular biases, often generating answers that contradict “normative”[1] answers. At times, contradictory responses are also given under circumstantially different but logically equivalent formulations of the tests, giving rise to the phenomenon now known as the “framing effect”: human subjects have a tendency to form conclusions depending on the “frameworks” within which situations are presented. How a problem is framed influences the subject’s subsequent choices. In Kahneman and Tversky’s work it was found that people often accept frames as they are given and seldom reframe (and thereby rationalize) problems in their own words.

 

The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences

(Tversky and Kahneman 1986)

 

A classic example of how framing influences people’s choices is Edward Russo and Paul Schoemaker’s story about a Franciscan who seeks permission from a superior to smoke while praying. When the Franciscan asks “Can I be given permission to smoke while I pray?” his request is promptly denied. However, when he frames the question differently and asks “In moments of human weakness when I smoke, may I also pray?” the opposite response is elicited, showing the power of framing. Describing outcomes as either gains or losses also counts as framing, and often triggers “irrational” humans to make different choices in situations where the outcomes are actually identical but merely described as gains as opposed to losses, or vice versa (e.g. saying that something has a 1 in 10 chance of winning instead of a 90% chance of losing).

 

Another example of a heuristic employed by humans in judgement formation is known as the availability heuristic. First reported by Kahneman and Tversky, this heuristic causes human subjects to project biased probabilities in favour of concepts that they can bring to mind more easily. The more easily a concept can be recalled, the more ‘available’ it is, and the subject then believes that its occurrence is also more frequent. For example, the suicide rate of a given community may be judged to be higher than it actually is by somebody who personally knows of many specific cases of suicide in that community.

Representativeness is also a heuristic that biases judgment in favour of sample events that seem more representative of the wider range of events from which a sample has been taken. For example[2], in a test in which a die with 4 green faces and 2 red faces is rolled 20 times and the series of Gs and Rs recorded, participants were asked to bet on which of the following sequences would most likely appear: (1) RGRRR (2) GRGRRR (3) GRRRRR. Many of those unfamiliar[3] with probability theory tended to bet that such a die would more likely yield sequence (2) GRGRRR than (1) RGRRR, even though sequence (1) in fact dominates sequence (2): in any run where (2) appears, (1) has necessarily appeared too, since (2) contains (1). The reason people choose (2) is that it more strongly resembles the die (it has a larger proportion of green), hence the label ‘representativeness’ heuristic. Human subjects often choose future outcomes that are strongly representative of pre-existing beliefs about the generating process, neglecting which hypothesis is actually most probable. This is explained by the fact that people tend to expect sample events that conform to the ‘norm’ – if there are more green faces on the die, it is more ‘normal’ for there to be more Gs in the sequence.
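Both facts about the die example – that the shorter sequence is the more probable exact outcome, and that it is literally contained in the longer one – can be checked directly. A minimal sketch with exact fractions (the helper function is my own illustration):

```python
# Die with 4 green faces and 2 red faces: P(G) = 2/3, P(R) = 1/3.
from fractions import Fraction

P_G, P_R = Fraction(2, 3), Fraction(1, 3)

def seq_prob(seq):
    """Probability that consecutive rolls produce exactly this sequence."""
    prob = Fraction(1)
    for face in seq:
        prob *= P_G if face == "G" else P_R
    return prob

print(seq_prob("RGRRR"))                       # 2/243
print(seq_prob("GRGRRR"))                      # 4/729
print(seq_prob("RGRRR") > seq_prob("GRGRRR"))  # True

# Dominance: GRGRRR contains RGRRR, so any run of rolls that
# exhibits sequence (2) necessarily exhibits sequence (1) as well.
print("RGRRR" in "GRGRRR")                     # True
```

Since 2/243 equals 6/729, the shorter sequence is half again as probable as the longer one, yet the longer one “looks” more like a two-thirds-green die.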

 

Not only do people base (fallacious) judgment on pre-existing beliefs; it has also been found that people tend to seek confirmation of data that has already been accepted and actively (though perhaps unconsciously) find facts in support of that data, via what is known as the “supporting evidence bias”, as explicated by Hammond, Keeney and Raiffa in a 1998 Harvard Business Review article amongst a list of biases that people are prone to commit. The supporting evidence bias is a confirmation bias that affects more than how people collect information: it also changes how data is interpreted.

 

Suppose, for example, you are considering an investment to automate some business function. Your inclination is to call an acquaintance who has been boasting about the good results his organization obtained from doing the same. Isn’t it obvious that he will confirm your view that, “It’s the right choice”? What may be behind your desire to make the call is the likelihood of receiving emotional comfort, not the likelihood of obtaining useful information

(Hammond, Keeney and Raiffa 1998)

 

At times, the supporting evidence bias can fuel a form of irrationality that may even be considered delusional according to Alfred Mele’s deflationary theory of self-deception. People are often motivationally biased by psychological desires or anxiety to bring conclusiveness to problems, even when this means neglecting potentially contradictory but truth-bearing evidence. On an atheist’s view, for example, confirmation bias drives much of the believer’s interpretation of facts in the world (i.e. in the Design and Cosmological arguments) in favour of the idea that an all-perfect God exists, ignoring evidence that potentially harms God’s reputation in order to fulfil the human creature’s desperate psychological need for a purposeful life with divine meaning (and perhaps also the perpetuation of life after death). This, of course, can apply in reverse too, so long as one pays too much attention to supporting data and neglects conflicting data. Psychologists believe that what underlies our tendency towards such bias is our natural inclination to decide what we want to do before figuring out why, and our tendency to prefer dwelling on things or concepts that we like.

 

Yet another form of rationality-altering bias identified by Hammond, Keeney and Raiffa is the “sunk cost bias”. This refers to the idea that humans tend to find it harder to let go of or give up things they have invested more into[4], even though, rationally speaking, sunk costs are investments made in the past that cannot be recovered in the future and should bear no relevance to decision making.

Sunk costs are the same regardless of the course of action that we choose next. If we evaluate alternatives based solely on their merits, we should ignore sunk costs. Only incremental costs and benefits should influence future choices.

(Hammond, Keeney and Raiffa 1998)

This type of bias has been widely demonstrated through research in many aspects of life. For example, many people find that the longer they have waited for a bus to arrive, the harder it becomes to choose alternative transport (knowing, of course, that the bus will arrive at some point). This is because many people feel an innate sense that “wasting” any form of resource (time or money, for example) is bad. In waiting for a bus, people feel obliged to keep waiting (investing more time) because if they decide to take the train, then the sunk cost (the time they have already spent waiting) will have been “wasted”. Speaking in economic terms, however, this is irrational thinking, since the time used to wait for the bus is irrecoverable and should be irrelevant to a decision about what should be done from the present moment in order to get to destination X.

 

Before we move on to the next section, I shall mention one last common bias – what is termed the “status quo” bias, also known as the “comfort zone” bias. It has been postulated that many people have a tendency to choose alternatives that conserve the status quo. The reasons behind this are fairly apparent, for going against what is normally considered axiomatic often exposes individuals to social isolation or criticism. Adhering to the status quo is generally more emotionally comfortable as it provokes less internal tension, but often people overvalue pre-existing axioms to the extent that they forsake individual rational judgment. Psychologists posit that the cause of this is that when major decision making is necessary, potential losses loom larger than potential benefits. What we lose are things that we already possess and therefore know, whereas gains can only be entertained as potential and hypothetical. Hence, the impacts of loss form more vivid concepts in our minds, and since people mostly hope to avoid regret and strive to preserve reputation, they often find clinging to the status quo a more secure option even when boldly going against it opens opportunities for greater gains.

 

According to advocates of the H&B tradition, it is thus reasonable to posit that human beings are irrational. We move on to examine arguments from proponents of the opposing camp of Evolutionary Rationalists.

 

 

 

 

3.2: Evolutionary Rationality

 

Since William Harvey demonstrated that the purpose of the heart is to pump blood through the lungs and the body (see Harvey’s work on systemic circulation), the functional organization of the human body has been examined in great depth by physiologists throughout the world. Today it is generally accepted by biologists that the structure of our bodies attends to our needs for survival and reproduction. A cornerstone of such research was laid by Charles Darwin, who in his 1859 book On the Origin of Species described how organisms evolve through the process of natural selection, developing traits favourable to survival and reproduction. Since then, psychological concepts of evolution (analogous to physical evolution) have entered the scene, and some psychologists have concluded that, like our bodies, cognition also has structure. Thus emerged a group of evolutionary psychologists who postulate that the human cognitive structure was designed by natural selection to assist survival and reproduction, in ways analogous to physiological evolution.

 

The primary advocates of the evolutionary rationality theory are Gerd Gigerenzer and Peter M. Todd. In the evolutionary rationality tradition, the heuristics found to be used during rational decision making are also examined, but interpreted in ways much friendlier to human rationality. Gigerenzer and Todd claim that our common heuristics have adapted through interaction with the “social, institutional or physical environment” to become useful decision-making tools. In their view, the heuristics we employ are genuinely good at what they do, and are not random tactics scraped together just for the sake of resolving problems whenever decision making is required.

 

“Ecological rationality” is the term Gigerenzer and Todd use to describe how well a decision mechanism has adapted to assist the human mind. Cosmides and Tooby have famously argued that most psychological traits differentiating human beings from other animals (including the more intelligent higher primates) developed during the Pleistocene era, which began approximately 1.8 million years ago and ended about 11,000 years ago.[5] During this era, the human mind began evolving into the particular form it holds today. The environment in which this change took place is termed the environment of evolutionary adaptedness (EEA), a notion put forward initially by John Bowlby, who was noted for his pioneering work in attachment theory. The EEA captures how animals in different environments adapted to their surroundings when faced with local problems of reproduction. This explains the diversity of existing animals, for different species have faced different reproductive problems, which resulted in different adaptations. The EEA refers to the collection of problems that a particular species has faced through evolutionary time. It has been argued that the selection pressures that shaped the evolution of the human body are almost certainly those that humans faced during the Pleistocene. The human genus Homo arose in Africa roughly 2 million years ago and spread to Asia within a few hundred thousand years. Whilst spreading throughout the lands of the earth, humans took up agricultural practices and abandoned their previous habits of hunting and gathering towards the end of the Pleistocene era. A few thousand years later (post-Pleistocene), people began establishing cities, and drastic cultural changes have occurred rapidly over the last 10,000 years. This is said[6] to have possibly exposed humans to new selection pressures whilst eliminating selection pressures that were significant earlier on.

Although the Pleistocene excludes the recent period of settled life, its significance in the genesis of the human genus is now recognized: it is the “epoch which shaped human physiology and psychology”[7]. This idea is illustrated by the fact that the adaptation of vision, for example, would have been eliminated during the roughly 2-million-year span of the Pleistocene were it not maintained by stabilizing selection[8]. The following extract explains the evolutionary logic behind this:

 

If the sun blinked out 2 million years ago, humans and all other animals with vision would have lost their visual capabilities. Mutations would inevitably have occurred in the genes underlying our visual system, degrading our visual abilities. Since there wasn’t any light, however, this degradation would have been inconsequential, and such mutations would not have been selected out of the population. Two million years later, the visual system would have been eradicated.

(Hagen 1999)

 

Thus “sunlight” is to be taken as a component of the human EEA, and the visual system we have today is the result of stabilizing selection for vision in the Pleistocene era. Since the majority of evolutionary change in human adaptations to the environment occurred before the end of the Pleistocene (or at least before the dawn of the earliest settled civilizations), there is an inevitable disparity between the EEA and the present-day environment, to which the majority (in fact, all) of the human subjects in the rationality tests described earlier in this paper belong. This fact is fundamental to the objections against evolutionary rationality. We shall discuss the disputes advanced by Keith Stanovich and Richard West later in this paper (Part 5), where it is (in their view) shown that the major discrepancies between the EEA and the current era undermine the arguments put forward by evolutionary rationalists.

 

Gigerenzer and Todd, on the other hand, endorse interpreting the empirical findings in psychology regarding the human mind and the external world the way Herbert Simon’s analogy suggests – that the mind and the world are like the “blades of a pair of scissors” that “must be well matched for effective behaviour to be produced – just looking at the cognitive blade will not explain how the scissors cut”. Gigerenzer and Todd object to the H&B tradition on the grounds that its assessment of human mechanisms of rationality has been too confined, since it does not consider environmental factors or account for the surroundings in which specific heuristics are appropriate. Issues connected to this point will be elucidated later in this paper (Part 4), along with Cosmides and Tooby’s observations of improved performance in rationality tests when problems are presented in formats more congenial to human cognition.

 

3.3: Contentions and Reconcilement

 

Since a mere map of the heuristics available to the modern human mind will not provide hints as to how the brain acquired those features, it is obvious that a fully established theory of evolutionary rationality must surpass the opposing camp of heuristics and biases as an explanatory hypothesis. Some scholars have, however, pointed out that the conceptual divergence between the two hypotheses is in fact illusory. Rather, as an attempt to reconcile the two traditions, we may pose evolutionary rationality as an informative complement to the H&B tradition: it can supply psychologists with ideas about the backgrounds and origins of the heuristics used in contemporary society.

   

The prime endorsers of the reconciliation movement between the two opposing traditions are Samuels, Stich and Bishop. In Ending the Rationality Wars, they attempt to moderate the friction between the two traditions by distinguishing the core theses of each from their rhetorical flourishes[9]. They contend that

 

The fireworks generated by each side focusing on the rhetorical excesses of the other have distracted attention from what we claim is, in fact, an emerging consensus about the scope and limits of human rationality and about the cognitive architecture that supports it

(Samuels, Stich and Bishop, 2002)

 

In Samuels, Stich and Bishop’s view, controversy only arises when advocates of one hypothesis fail to realize that the central claims of the opposing hypothesis are not intended to be all-encompassing theories of the human mind. In their view, it is entirely plausible to posit simultaneously that humans regularly stray from appropriate norms of rationality and that at times they act in conformity with such norms. Likewise, as Gigerenzer postulated, it is not implausible to hypothesize that the heuristics detailed by researchers such as Kahneman and Tversky evolved as mechanisms to assist human problem solving and decision making in the EEA.

 

 

However, Gigerenzer objects that heuristics such as availability and representativeness are too vague and can pertain to any number of undefined cognitive processes that can “post hoc be used to explain almost everything”[10], and that their interpretation hinges wholly upon the subjective orientation and personal perspective of the researcher. Terms like “representativeness” are atheoretical, and appealing to such heuristics as generators of biases does not provide a satisfactory explanation. Gigerenzer thus instead favours the recognition heuristic, which is employed to single out one object amongst a subset according to some criterion.[11] The recognition heuristic operates through the mechanism of ecological rationality which Gigerenzer and Todd put forward (that humans are innately capable of exploiting the structure of information in their natural surroundings) and is fundamentally based on the partial ignorance of human subjects. The heuristic can only be applied in situations where recognition is strongly correlated with the criterion in question. For example, assume that the criterion is the population of a city. It is natural for human beings to assume that cities with more people are generally more salient to individuals, in which case recognition of a city’s name would correlate strongly with the city’s population. Gigerenzer and Hoffrage demonstrated this in a 2005 study by asking “Which has more inhabitants – San Diego or San Antonio?” of a group comprised of Americans and Germans in equal numbers. Although the Germans in the group should have been less familiar with U.S. city populations than the Americans, all of them correctly answered that San Diego had the larger population, since it was the more recognizable of the two choices. The recognition heuristic allows subjects to determine the more populous city with considerable accuracy based simply on whether they recognize a city or not.
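As a sketch, the decision rule described above can be written in a few lines (the function name and the recognition set are hypothetical illustrations, not from the source): the heuristic fires only when exactly one of the two objects is recognized.

```python
# Minimal sketch of the recognition heuristic for a two-object choice.
# If exactly one object is recognized, infer that it scores higher on
# the criterion; otherwise the heuristic does not apply.

def recognition_choice(a, b, recognized):
    if (a in recognized) != (b in recognized):    # exactly one recognized
        return a if a in recognized else b
    return None                                   # fall back on other cues

# Hypothetical German participant who has heard of San Diego only:
recognized = {"San Diego"}
print(recognition_choice("San Diego", "San Antonio", recognized))
# → San Diego (which is in fact the more populous city)
```

Note that when both or neither city is recognized the rule is silent, which is why the heuristic depends on partial ignorance: a participant who recognizes every city must fall back on other knowledge.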
Daniel Oppenheimer, however, posits that findings of this sort cannot be interpreted as satisfactory validations of the recognition-heuristic account. For example, pre-existing knowledge of other aspects of San Diego, such as its size, may have contributed to a subject’s judgment that it is more populous, in addition to mere recognition of the city – contrary to what the recognition heuristic account suggests, namely that subjects will choose a recognizable city as the more populous over an unrecognizable one even if they know that the recognizable one is small[12].

 

It appears evident that much work remains to be done in the conciliatory vein before an established consensus on what psychologists ought to investigate can be reached. The rest of this paper will be dedicated to the evolutionary hypotheses regarding rationality. The empirical work carried out by Kahneman and Tversky now demands complementation with legitimate supporting theories concerning the hows and whys of heuristic development. A substantial portion of Gigerenzer’s work has been dedicated to the appropriateness of our evolved heuristics in the contemporary era: “The goal is to determine what environmental structures enable a given heuristic to be successful, and where it will fail”[13]. Thus, we must consider the deficiencies of supposedly universal heuristics when bound by rationality tests, particularly when deviation from (apparently) rational norms (according to empirical psychology) appears to be systematic. In Part 4 I shall examine some explanations that have been put forward in the past. Nonetheless, we must maintain a critical view of Gigerenzer’s notion of “success”. A substantial flaw in Gigerenzer’s evolutionary rationality account (and indeed in the accounts of most proponents of the same tradition) is identified by Stanovich and West: it neglects the fundamentally crucial question of what actually is rational – should rationality be taken as a process resulting in optimal results for the gene or for the individual? What promotes success for an individual often diverges from what is optimal for their genes. What exactly do we strive to attain success for? In Stanovich and West’s words,

 

Definitions of rationality must coincide with the level of the entity whose optimization is at issue

(Stanovich and West 2003)

 

Their views shall be elucidated thoroughly in Part 5.

 

4: Explanations offered for systematic deviation

 

Why do humans perform poorly in rationality tests? Proponents of the H&B tradition such as Kahneman and Tversky typically answer that humans are fundamentally irrational creatures, a conclusion they believe can be drawn directly from their empirical studies.[14] This standpoint, however, has not escaped criticism for its propensity to oversimplify the dynamics of human decision making, and in particular for failing to consider the real constraints on a subject’s decision-making process. A significant proportion of what is taken to be the H&B tradition as a whole thus depends on Kahneman and Tversky’s own interpretations of their findings. Without a deeper elucidation of why human performance is substandard in rationality tests, the H&B tradition alone is explanatorily unsatisfactory.

 

Elliott Sober posits that humans’ poor performance in rationality tests should not be attributed to a defect in their underlying rational competence. We would expect an eloquent speaker of English to have grasped all the rules of the language’s grammar, yet their performance will sometimes be hampered by disruptive factors, and we do not on that account say they do not speak English fluently. Hence, it would be inconsistent to consider humans irrational beings merely on the grounds that they occasionally fail to respond correctly in rationality-measuring tests. Substandard performances can often be attributed to rationally disruptive factors such as bias.

 

Hilary Kornblith, on the other hand, does not find the conceptual distinction between general competence and performance sufficient for fending off accusations of human irrationality[15]. Even systematic and recurrent errors (i.e. the errors exhibited in Kahneman and Tversky’s rationality tests) can be elucidated through Kornblith’s conception of the distinction. He illustrates the point with the analogy of a blocked faucet:

 

When the blockage was removed, the faucet worked perfectly. Consider the workings of the faucet prior to removal. It does not work properly, and its failure is neither occasional nor unsystematic. But a natural way of describing the faucet would consist of a description of a mechanism which works perfectly – the faucet after the blockage has been removed – together with a description of an interfering factor.

(Kornblith 1992: 900)  

The distinction between the potential competence of the faucet and its actual performance is clear, and Kornblith points out that it is by describing the faucet in the manner cited above that we know what to do to make it work properly. In his view, pinpointing the blockage this way, with the faucet’s natural function in mind, is more useful than treating it as just “another of the faucet’s many parts”. The faucet is compared to our “belief generating equipment”.

 

If our interest in epistemology is cognitive repair, that is, improving the manner in which we arrive at our beliefs, a description of our cognitive equipment which divides it into a perfectly working mechanism and a variety of interfering factors will serve our purposes admirably.

(Kornblith 1992: 900)  

 

The catch is that although it is reasonable to conceive of a perfectly working faucet obstructed by an interfering blockage, it is unclear whether our cognitive equipment can be described as a perfect reasoning mechanism beset by interfering factors. For what is a perfect reasoning mechanism? There is no reason to posit that when we make errors in rationality tests, this is simply due to our reasoning mechanisms being interfered with by some external disruptive factor. Further, the competence/performance distinction does not absolve the faucet (or the human mind) so long as it does not work the way it ought. As soon as we establish a standard against which to measure rationality, conformity to that standard can only be assessed on the basis of concrete results, not against an idealized notion of what a perfect reasoning mechanism would be. The measure of competence for the faucet is thus whether water comes out, and for the human it is whether positive results are consistently achieved in tests of rationality – which repeated empirical investigation shows not to be the case.

 

Nonetheless, various tests have been conducted that highlight the stark contrast in performance when human subjects are presented with the same problems in different circumstances or formats. Cosmides and Tooby, for example, have observed a radical increase in positive responses when probabilistic questions are posed in terms of frequencies instead of percentages or decimals. This led to speculation that humans naturally acquire and manipulate information more accurately when problems are presented in formats reminiscent of how Homo sapiens are hypothesised to have made reasoned decisions in the environment of evolutionary adaptedness (EEA). The general contention is that since Homo sapiens did not think in terms of percentages in the Pleistocene, our early ancestors were pushed to make probability judgments based on frequencies, a form of information relatively simple to acquire and mentally recall. This idea is backed by the observation that tests such as the Wason selection task yield a much higher percentage of correct responses when presented in a social context (e.g. in terms of violations of underage drinking laws) instead of the abstract context originally set up, even though the two situations are logically identical. This has been traced to a stronger disposition for social reasoning over abstract deliberation: back in the Pleistocene, human survival was highly dependent on social interaction.
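The format effect can be made concrete with a small numerical sketch (the numbers are hypothetical, in the style of the medical-diagnosis problems used in this literature; both computations express the same Bayesian inference, but the frequency version can be read directly off the counts):

```python
# Probability format: prevalence 1%, hit rate 80%, false-positive rate 9.6%.
p_d, p_pos_given_d, p_pos_given_nd = 0.01, 0.80, 0.096

# Bayes' theorem: P(disease | positive test)
posterior = (p_d * p_pos_given_d) / (
    p_d * p_pos_given_d + (1 - p_d) * p_pos_given_nd
)

# Frequency format: of 1000 people, 10 have the disease; 8 of those 10
# test positive, as do roughly 95 of the 990 who do not.
freq_posterior = 8 / (8 + 95)

print(round(posterior, 2), round(freq_posterior, 2))  # both about 0.08
```

Subjects given the second framing need only compare two counts, which is the kind of bookkeeping the frequency hypothesis claims our evolved machinery handles well.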

 

Such observations fit with the defence that advocates of evolutionary rationality mount against the H&B tradition. The general view is that since we can conform to the rules cognitive psychologists take as guidelines of rationality whenever the information is presented appropriately, it is unfair and erroneous to characterize people as irrational. H&B advocates reply that this does not in any way mitigate the findings of the original H&B experiments: humans’ systematic deviation from statistical and logical norms when information is introduced in an “incompatible” yet prima facie equally legitimate format still stands against the evolutionary rationalist’s defence of human rationality.

 

Kornblith identifies implausibility and unfairness in two common ideals taken as guidelines for judging human rationality. Firstly, it appears to be the distinct domain specificity of heuristics that makes their limitations seem inherent.[16] A heuristic’s proper domain is the set of conditions under which it adaptively evolved to function, which for humans is the EEA. The actual domain, where rationality tests of our heuristics take place, is the modern world.[17] Kornblith maintains that all heuristic mechanisms, in order to function effectively in their proper domains, must embody fundamental presuppositions. If a heuristic is deployed in an actual domain where those presuppositions fail, it will not function correctly and may therefore lead to irrational judgments. Hence it is unfair (and unrealistic) to expect humans to possess decision-making algorithms that are accurate in all domains. Kornblith poses the question

 

Why shouldn’t we compare the ways in which people reason against some standard account of proper statistical inference?

(Kornblith 1992: 900)  

 

Here additional problems are unearthed, for Kornblith notes that such a standard is unattainable. Gilbert Harman, for example, has argued that even the basic calculations required for humans to reach this ideal are computationally intractable as a result of the combinatorial explosion problem, explained thus:

 

If one is prepared for various possible conditionalizations, then for every proposition P one wants to update, one must already have assigned probabilities to various conjunctions of P together with one or more of the possible evidence propositions and/or their denials. This leads to combinatorial explosion as the number of such conjunctions is an exponential function of the number of possibly relevant evidence propositions.

(As cited in Kornblith 1992: 900)  

 

Because human reasoning is in reality limited by constraints of feasibility and practicality, attempting to operate generally with degrees of belief or probabilities is too complex. To believe something to a degree is, on this picture, simply to have an explicit belief about its probability. But functioning in terms of explicit probabilities cannot always work, as the combinatorial explosion in the resources required rules out probabilistic conditionalization as a general means of updating an individual’s degrees of belief. The real-time resources needed to follow such rules of rationality are thus unattainable, and it is implausible to measure human rationality against so ideal a standard.
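Harman’s count can be made concrete with a short sketch (an illustration of the growth he describes, not his own formalism): even restricting attention to full conjunctions that fix the truth value of every evidence proposition, an agent tracking n evidence propositions must store 2**n probabilities.

```python
from itertools import product


def full_conjunction_count(n_evidence):
    """Each evidence proposition is either affirmed or denied, so the
    number of full conjunctions of P with the evidence is 2**n."""
    return 2 ** n_evidence


# Enumerating the truth-assignments for three evidence propositions:
assignments = list(product([True, False], repeat=3))
print(len(assignments))             # 8
print(full_conjunction_count(30))   # 1073741824 -- over a billion already
```

Thirty candidate pieces of evidence is a modest number for an ordinary agent, yet the table of prior conditional probabilities it would require is already astronomically large.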

 

For explanatory purposes, accounts that specify the standards against which rationality is judged are clearly superior to accounts that do not. However, those who have posited the laws of probability as the ultimate standard have yet to provide adequate justification for why such laws should be accepted. In Part 5 we continue to explore additional problems with unreservedly taking the rules of mathematics and logic as standards of rationality.

 

 

5: Appropriate Norms?

 

Participants in the rationality debate typically do not challenge the ideal of utility that people are expected to aspire to in decision making. General experience should also lead proponents of both the evolutionary and H&B traditions to agree that when presented with information in “compatible” formats, humans can reason according to appropriate norms of rationality. Indeed, Gigerenzer endeavours to show that in appropriate situations humans can in fact follow probabilistic norms such as Bayes’ theorem. Here, however, a deeper assumption is revealed: why should such logical, probabilistic theorems be accepted as unconditionally appropriate norms? It appears to be taken for granted that the ultimate goal of theoretical reasoning is true belief formation, and in what follows I attempt to challenge this assumption.

To begin with, we identify two separate stances on rationality as put forward by Samuels, Stich and Faucher. The first of these is the deontologist view

 

To the deontologist, what it is to reason correctly – what’s constitutive of good reasoning – is to reason in accord with some appropriate set of rules or principles.

(Samuels, Stich and Faucher, 2004)

Thus a norm such as ‘one should follow Bayes’ rule when calculating conditional probabilities’ is deontological, since it advocates conformity to a particular principle. The alternative stance is what they term consequentialism, or ‘pragmatism’.

 

Consequentialism maintains that what it is to reason correctly, is to reason in such a way that you are likely to attain certain goals or outcomes.

(Samuels, Stich and Faucher, 2004: 166, emphasis original)

This view promotes the pragmatic objective of efficiently attaining an individual’s goals and desires as the criterion for what constitutes good reasoning.

 

The wall that divides these two views is, however, not completely stable. Bayes’ rule, for example, is accepted because of its mathematical soundness and its apparent reliability in the real world. It therefore cannot strictly be seen as ‘deontological’ in the previously defined sense and may instead be seen as a consequentialist rule. It would be trivial to posit a deontological theory that designates which norms should be followed without explaining why that should be the case. The norms we generally consider appropriate are those that mathematicians and logicians have validated as “true”. Thus the ultimate validation of rational norms is implicitly ascribed to the maximization of true belief formation. This, however, amounts to a form of mathematical naturalism that cannot be left unchallenged – it is unclear whether standards of truth should be directly and entirely dependent on what is dictated by mathematical practice.

 

Regardless, our emphasis in what follows shall not be on whether true belief can be formed by adherence to mathematical norms – rather, we shall examine whether true belief formation itself can justifiably be considered the ultimate objective of rational, reasoned thought. Now, there is broad consensus that maximizing true beliefs allows an organism to achieve as many of its goals as it is physically able to achieve. The maximization of true belief formation is thus tightly associated with the maximization of individual human utility, making it plausible to posit that true belief is in fact the ultimate objective of reasoned thought.

 

That said, Stanovich and West sharply point out that a crucial distinction between the interests of an individual organism and the interests of his genes has often been overlooked in the evolutionary rationality literature. When advocates of evolutionary rationality rush to tell us of the cleverly adapted heuristics in the human mind, they forget the distinction between heuristics that serve genes as agents of selection and heuristics that serve individual humans as the vehicles built by those genes.

 

While humans self-evidently have a variety of different objectives and personal aims, the only ‘aims’ that can be attributed to genes are survival and reproduction. Stanovich and West posit that the majority of the heuristic mechanisms developed during the Pleistocene should be expected to serve those aims; goals serving the interests of individual humans are expected to be merely secondary or non-essential. Only once genes have developed so far as to build creatures as complex as ourselves do we begin to exhibit behaviour oriented towards personal utility, which can diverge from or even contradict genetic utility.

 

It becomes apparent that we cannot disregard the disparity between any tenable model of the EEA and the contemporary world. The obvious aim of formal logic is to assist the formation of true belief; however, although this (apparently) aids organisms in their various endeavours, it is clear that the extent of the logic we have developed is unnecessary for genetic purposes. Max Delbrück questions the fundamentals of evolutionary biology in “Mind from Matter?”[18]

 

Our concrete mental operations are indeed adaptations to the mode of life in which we had to compete for survival long before science. As such we are saddled with them, just as we are with our organs of locomotion and our eyes and ears. But in science we can transcend them, as electronics transcends our sense organs. Why, then, do the formal operations of the mind carry us so much further? Were those abilities not also matters of biological evolution? If they, too, evolved to let us get along in the cave, how can it be that they permit us to obtain deep insights into cosmology, elementary particles, molecular genetics, number theory?

(Delbruck, 1978)  

 

Delbrück’s conclusion was that he had no answers to such questions, and given the lack of conclusive evidence both for and against the evolutionary account, it is not difficult to understand why. As Sober pointed out, many of the mental operations that accommodate scientific reasoning were neither used nor useful back “in the cave”, in the days when much of our genetic makeup was being shaped by natural selection. It is therefore unclear how our present information-processing techniques can be viewed as products of evolution if they were not selected for from among a range of alternatives. Rationality cannot be seen as an “adaptation” if it is not fundamentally beneficial to the genes.

 

In response to this problem, Stanovich and West suggest a tentative theoretical division of the mind into two parts – ‘System 1’, which serves the goals of the genes, and ‘System 2’, an area of higher cognition that specifically serves the individual’s personal needs. Their contention is that both systems can coexist despite being in conflict at times. Conflicts are triggered by the occasionally unsynchronized goals of the two systems – although true belief formation is generally taken to be the ultimate end of rationality, the adaptive System 1 did not necessarily evolve in that direction, as the apparent diversity in values and views regarding reproduction also suggests.[19]

 

Given this division of the mind, Stanovich and West proceed to accuse evolutionary rationalists of neglecting the fundamental distinction between the two forms of rationality. They specifically criticize Gigerenzer’s notion of ecological rationality for straddling the divide between what is rational for the individual and what is rational for the gene, shifting between the two forms of rationality inappropriately. Bringing this distinction to the surface reduces the argument from a normative one to a merely descriptive one. The areas of the human psyche that H&B advocates labelled “irrational” are, in Stanovich and West’s view, simply expressions of System 1, whilst behaviour conforming to the rules of formal logic and probability (and thus to true belief maximization) is attributed to manifestations of System 2. Whether we label entities rational or irrational thus hinges upon the level at which optimization is assessed: that of the individual or that of the gene.

 

Although evolutionary rationalists have long accused proponents of the H&B tradition of being explanatorily deficient, their own thesis is, in Stanovich and West’s view, far from irrefutable, because it neglects the crucial distinction between individual and genetic rationality. One cannot obtain legitimate interpretations of evidence from empirical psychology without first establishing exactly what notion of rationality is to serve as the standard against which human performance is judged. Perhaps this is the fundamental question that we, as philosophers, need to contemplate in order to provide legitimate justification for our choice of standards.

 

6: Further Considerations

 

The role of emotion in human rationality and decision making has also been studied extensively by the evolutionary economist Robert H. Frank, who postulated that emotions may in fact be a complementary facet of rational human process even when they appear irrational. In Passions within Reason, for example, Frank describes how the emotion of “love” accords extra value to long-term romantic commitment. If people chose partners strictly for rational reasons, they would likely abandon their partners as soon as someone more desirable on “rational” criteria came along, creating what Frank terms a “commitment problem”. Where emotional attachment such as “love” is present, however, there is greater meaning in sustaining long-term relationships, and this is where Frank’s “commitment model” highlights problems that cannot be solved by rationality alone – hence the saying that “those sensible about love are incapable of it”. Frank ultimately claims that if we look at emotions through the “longer lens of evolutionary theory”, much of what appeared irrational in them in fact forms an “effective strategy for achieving agents’ goals and maximizing their reproductive success”. This, in a sense, blurs Stanovich and West’s distinction between individualistic and genetic aims, for the two appear to be equated in Frank’s thesis.

Nonetheless, it seems plausible to attribute emotions to manifestations of System 1, explicable solely as facilitators of the genes, which may even impede instrumental rationality in some cases. Sripada and Stich[20], for example, note that our genes, environment and culture (what they take to be the three basic sources of value structure) can often produce “maladaptive” value structures that result in irrational emotions. Phobias, characterized by excessive feelings of fear, are one prime example. From an evolutionary perspective, Sripada and Stich note

 

 

Plausible candidates for innate fears are those directed at recurrent threats faced by human ancestors. The underlying adaptive logic is that an innate and rapid fear response to a recurrent threat would have conferred a selective advantage on human ancestors who possessed such a trait.

 (Sripada and Stich, 2004)  

 

This causes problems when fears that were initially adaptive for our ancestors – e.g. the fear of enclosed spaces, which called for alertness to the danger of being trapped – are inherited by people in the contemporary world, who experience pathological fear in enclosed spaces such as elevators, subways, and phone booths.[21] In such cases our emotions and value structures are maladaptive.

We may perhaps interpret phobias as a direct conflict between Systems 1 and 2 in Stanovich and West’s terms. While an otherwise rational human being may be completely certain that he will suffer no danger in using a phone booth, he may still possess a natural, atavistic aversion to using it, even if this prevents him from achieving his individual goals, e.g. making an important phone call. Such atavistic tendencies can be construed as the dominance of System 1 over System 2 in the decision-making process.

So it seems that evolutionary rationalists are still left with much to do in order to justify fully how and why the laws of logic are to be explained by evolutionary psychology. Further, even if we hypothesize the truth of their thesis, we must still wonder how the normative status of our laws of logic and probability is to be secured once they are reduced to descriptive psychological terms. If human rationality has surpassed the threshold demanded by the genes for mere survival, then we face an unavoidable normative schism and must establish standards of rationality on new premises. It is unclear whether such an endeavour can advance the rationality debate in any way.

 

 

 

 

7: Conclusion

 

It appears that the most common, and also gravest, failure faced by proponents of all sides of the rationality debate is the inability to establish a definitive standard of rationality against which human performance is to be judged. Failure to attain uniformity of views in this crucial area means that we fail to identify a universal frame of reference within which we can legitimately interpret the broad account of rationality through the laws of logic and mathematics. We are warned (by Kornblith) against wrongfully measuring rationality against impossible standards, and the divergence of our genetic and individual goals has clearly been brought to light. How we are to establish norms, given the fundamentally descriptive nature of much of the evolutionary rationalist’s view of logical laws, is also something we must treat with caution. Explanatory gaps remain as to how and why one, but not another, mental adaptation to our past environments is recognized as adaptively advantageous. It also remains unclear why we have developed individualistic traits that are not necessarily beneficial to genetic aims such as reproduction. Nor can we begin to discuss whether humans inherently possess rationality at all without a robust definition of rationality. All of these considerations need further scrutiny if the debate is to progress towards greater insights.

 

 

 


[1] Logical, mathematical or statistical

[2] Tversky and Kahneman (1983) ‘Extensional versus Intuitive Reasoning’ Psychological Review, 90, 293–315

[3] 65% of 125 undergraduates in Tversky and Kahneman’s test

[4] Be it in financial, emotional, or any other form

[5] Cosmides and Tooby (1994)

[6] Edward H. Hagen 2002

[7] Edward H. Hagen 1999-2002

[8] Maintenance of adaptations that have already evolved, as genetic diversity decreases after population stabilizes on specific value of a trait

[9] As defined in the actual work, this refers to claims that  (i) are not central to the research program, (ii) are not supported by the evidence offered, and (iii) are typically not endorsed by advocates of the program in question when they are being careful and reflective.

[10] Gigerenzer (2000)

[11] Gigerenzer and Goldstein (2002)

[12] Oppenheimer, ‘Not So Fast! (and Not So Frugal!): Rethinking the Recognition Heuristic’

[13] Todd and Gigerenzer (2007: 168)

[14] Kahneman and Tversky (1973)

[15] He admits however, that the distinction is useful for cognitive psychology

[16] Kornblith (1992: 907)

[17] As distinguished by Sperber (1994)

[18] Excerpt found in Elliot Sober, The Evolution of Rationality

[19] Kornblith postulated that the evolution of System 1 in line with rationality is actually impossible, whilst Gigerenzer has shown how successful use of heuristics approximates true belief.

[20] Sripada and Stich, Evolution, Culture, and Irrationality of Emotions

[21] Examples from Sripada and Stich, Evolution, Culture, and the Irrationality of Emotions, p. 12
