Archive for January, 2003

Justice Paper 1 – Can Torture be Justified as a Last Resort?

Wednesday, January 1st, 2003
Jason Yeo, Oct 2005 


[NB: A bibliography apparently was not required for this piece of writing, so the references are not cited as thoroughly as I would have liked.  There are many little problems in the essay, of course, but the main flaw is the logical fallacy that the rightful powers of the state are always limited to the rights that citizens can delegate to the state based on natural rights.  For example, the state has the right to levy taxes, conduct (justified) search and seizure and incarcerate criminals, all of which individuals do not have the right to do in the Lockean “state of nature”.]

Can Torture be Justified as a Last Resort to Prevent an Imminent Terrorist Attack?

No, torture is not justified, even as a last resort to prevent an imminent terrorist attack.  In this paper I will begin by arguing that individual and state rights take moral precedence over utilitarian principles.  I will then draw a clear distinction between death (and capital punishment) and non-lethal torture to show why the difference is not “merely aesthetic,” as Dershowitz writes, but instead illuminates a clear moral principle against torture.  I will argue that because of its fundamentally cruel, debasing nature, torture cannot rightfully be used by a society that draws its legitimacy and moral force from the consent and will of its members.  At the same time I will refute the assertion that the acceptance of capital punishment provides similar grounds for accepting torture.  I will also draw on the moral idea underlying “innocent until proven guilty” to reject the reasoning that “imminent terrorists” deserve less than human treatment.  I will conclude by recalling the empirical case against torture before reiterating the more powerful argument that rights are prior to utility and torture can never rightfully be practiced.

In considering torture as a last resort to prevent an imminent terrorist attack, which is after all an action of the state against an individual, there are two key issues: the rightful powers of the state, and the rights of individuals within that state, including suspected terrorists.  While both considerations weigh firmly against torture under any circumstances, I will concentrate on the rightful powers of the state, although the same reasoning upholds the rights of the individual or suspected terrorist.

So why is a discussion of rights more relevant in this case than a discussion of pure utility, which Dershowitz believes is the central justification of torture as a last resort?  It is because without a system of rights the state cannot exist with any powers whatsoever; states are defined by their rightful powers and by how those powers are justified, before any considerations of utility.  A democratic state cannot rightfully act, even to maximize utility, if nobody agrees to the action in question or if everyone is opposed to it.  In response to a consideration of rights, utilitarian thinkers would postulate either that individual rights are immaterial and imaginary and cannot be raised above the principle of “the greatest happiness for the greatest number”, as Bentham would argue[1], or that individual rights are a reflection of rule utilitarianism, history having shown as a general rule that the recognition of these rights ultimately leads to maximal utility, an idea espoused by Mill.  Bentham and Mill do share with Locke the sound underlying assumption that human society is based on the belief that cooperation improves everyone’s utility above what they would have enjoyed in the pre-political condition Locke termed the “state of nature”.  However, Bentham and Mill are both mistaken in assuming that people would agree to anything and everything, including increased risk to their own liberty, in order to maximize the social sum of utility, and that individuals cannot possess and uphold moral beliefs and principles beyond utilitarianism when coming together to form a democratic society.  These moral beliefs can stem from religious convictions not justified by utilitarian concerns[2], such as Locke’s belief that individuals do not have the right to willful self-destruction because we are God’s creations.  Alternatively, these beliefs can be based on theories of natural rights, on moral intuition, on simple preferences or even on what Dershowitz disparagingly calls “aesthetics”.  The point is that the state only has as much rightful power as people choose to delegate to it, and people can make this choice (by majority or some other agreed-upon scheme) based on whatever criteria they choose.  Further, the rights of the state flow from the consent and delegated rights of the people, but individuals cannot rightfully authorize the state to perform acts that they would not have been morally permitted to perform themselves in a state of nature.  Thus the rightful powers of the state are constrained both by what the majority (however defined) of its members want, and also by what powers this majority can rightfully delegate to the state.

So what is it about torture that puts it beyond the rightful powers of the state, and why is it that individuals cannot, even by absolute consensus, morally authorize the state to carry out torture under any circumstances?  The answer is simple: torture is cruelty, and by definition torture in this case is the deliberate dismantling and crushing of another individual’s humanity and moral will.  When individuals are subjected to pain and torment beyond what they can bear mentally or physically, they are robbed of their ability to think, to reason, and to act morally.  Being tortured is equivalent to being reduced to a bestial state of reflexive self-preservation and desperate instinct.  No one is surprised by (and torturers actually capitalize on) the reality that under torture individuals will abandon their beliefs, their loved ones and their self-respect.  In short, the victim of torture can be considered to be in a coerced, inflicted state of insanity, intentionally deprived of the reason, morality and free will which comprise the essence of their humanity.  Beyond the degrading and devastating moral effect upon the victim, some would argue that performing torture even reduces the torturer to a lesser moral state, because they have resorted to bestial, inhuman means to coerce another against their will.[3]  In other words, torturers have abandoned the use of reason, the hallmark of humanity, in favor of coercive force, the domain of beasts and tyrants.  It is not necessary to go that far to see that torture is always a complete debasement of the victim’s humanity and hence is never within an individual’s rights under any reasonable scheme of morality.

It should be uncontroversial that no individual human being has the natural right to subject another person to cruelty or to strip them of their humanity, and society certainly cannot delegate this right to the state when its members do not individually possess it to start with.  Even if they could rightfully do so, it is unimaginable that people would consent to form a society in which their self-possession and humanity were even more in peril than in a state of nature.  This limit on state powers based on rights and consent traces itself to Locke, who would no doubt also argue that if self-possession is accepted, as he holds, as the basis of private property, then humanity is the most essential and inviolable of all our possessions.  Again, if no individual can claim this right over other humans – that is, the right to rob them of their humanity and coerce them into a bestial state of submission – then no individual or majority of individuals can authorize their state to do the same on their behalf.

Some might be tempted at this stage to argue that because it is currently considered acceptable in America to punish serious crimes with death, a society that supports capital punishment cannot logically oppose non-lethal, non-permanent torture as a last resort on moral grounds.  There are in fact several reasons why this is false.  As a preliminary point, the case for capital punishment is by no means settled in this country, given that many individual states have expressed their opposition, and most developed nations, including most of Europe, have already concluded that capital punishment is immoral and a violation of human rights.  It may well be that one generation from now capital punishment will be considered a regrettable past wrong, just as slavery or the denial of voting rights to women is now remembered in America.  More importantly, as may already be obvious, there is a distinct moral difference between torture and death which goes beyond aesthetics.  While death is a fact of human existence, and can be natural, dignified, peaceful and even beautiful, torture can never rightfully be any of these things.  It is impossible to view torture as anything other than state-sanctioned cruelty, designed to strip human beings of their humanity and to reduce them to their most bestial state of instincts.  It is a situation in which the victims are deprived of their capacity to reason and to interact with other people as moral equals.  It is a situation of moral coercion (which is an oxymoron, since morality is a question of free will and human agency and thus cannot be coerced).  Empirically, victims of torture have described this state as worse than death, with the painful psychological effects of their compromised humanity tormenting them for the rest of their lives.  Society does not condone torturous, cruel deaths for convicted criminals precisely because of the belief that even criminals have the right to their dignity and humanity, up to their final breaths.  The distinction is that torture treats individuals as less than human and contemptuously reduces them to that debased state, while capital punishment still respects the humanity of the convicted criminal.

A second critical difference between torture and capital punishment comes from the logical sequence of crime and punishment, and it also answers the argument that imminent terrorists deserve whatever they get and that torture can be considered in part a punishment for their actions.  American society is founded on the basis of equality before the law, of innocence until proven guilty, and of guilt proven beyond a reasonable doubt.  These maxims are meant to ensure that the rights of individuals are protected above utilitarian concerns.[4]  Only after these conditions have been satisfied (a fair trial proving guilt beyond a reasonable doubt) can the state pronounce a Lockean “state of war” against an individual and enact punishment.  Even ignoring the fact that cruelty can never be a rightful action of the state, we must recognize that none of the three listed criteria can ever be satisfied in the practical situations that proponents of torture as a last resort imagine may arise.  The truth is that there will always be uncertainty, and no time for even a semblance of a fair trial, in a situation where torture may be “useful”[5].  In the end, until the ticking bomb (the cataclysmic image of choice for supporters of torture) goes off, that act of terrorism has not yet been committed.  There is thus no case for any retributive element or argument from desert when considering the use of torture as a last resort.  Moreover, even convicted criminals and prisoners of war are not stripped completely of their rights, and we still expect a civilized, moral society to treat them with dignity and with respect for their humanity, a principle which torture clearly violates.

It is important to note that there remain many empirical questions surrounding the usefulness of torture under the relevant conditions of uncertainty and time-pressure, the ability of our judicial and police systems to administer and enforce a limited regime of torture in a transparent and legitimate fashion, and the negative externalities associated with the US condoning torture under any circumstances.  All of this casts doubt on the utilitarian argument for torture as a last resort.  Moreover, Dershowitz himself writes that the “symbolic setback” to the respect for human rights and the strong prohibition against torture for anything less than situations of last resort is the “strongest argument against any resort to torture,” citing empirical evidence that rule utilitarianism favors prohibiting torture under any circumstances.  While this is certainly another argument in favor of absolutely prohibiting torture, it is not the most important one.  The fact is that considerations of expedience do not change moral facts and principles.  Just because something ordinarily proscribed becomes desirable under extraordinary circumstances does not mean that it suddenly becomes morally permissible.  Few proponents of torture as a last resort have suggested permitting the torture of innocent people, such as a suspected terrorist’s mother, spouse or children, in the presence of the suspect.  Yet these tactics are likely to be the most efficient in extracting valid information in a timely manner.  Clearly there are, at least intuitively, moral limits to the utilitarian argument.[6]  These limits can be attributed in this case to the rights of individuals and the rights of states.

Dershowitz is most compelling when he says that the slippery slope presented by legitimizing torture obliges us to draw “a principled break” on the limits of what can be done.  The reality is that moral principles show us that this break must be a complete condemnation of torture, even if we might hypothesize optimistically that torture could bring about some immediate good and that two wrongs could make a right.  We have to accept that we cannot prevent all terrible events from occurring.  Police officers, soldiers, hostage negotiators and medical personnel often arrive too late to prevent tragedy.  In the case of a terrorist who has (presumably) successfully planned and initiated a terrorist act on US soil, interrogators trying to prevent the attack may be constrained by time, by the law or even by “aesthetics” in what they can do.  I propose that the actions of the state be constrained by the morality of our society, which is founded on the principles of human dignity, equality of rights and the presumption of innocence.  If the only possible reactions to an imminent terrorist attack are an abandonment of these fundamental principles, and with them the legitimacy of the state’s powers, then those actions are beyond what we should expect the relevant authorities to perform.  If some intelligence agent or police officer feels the need to employ torture, let them do so with the clear understanding that they will have to defend their actions in court against the weight of morality.

Bibliography

Bentham, Jeremy, Principles of Morals and Legislation, MR 22 Sourcebook, 2005
Dershowitz, Alan, “The Case for Torturing the Ticking Bomb Terrorist,” from Why Terrorism Works, pp. 142-149
Johnson, Robert, “Kant’s Moral Philosophy”, The Stanford Encyclopedia of Philosophy (Spring 2004 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/spr2004/entries/kant-moral/.
Locke, John, Second Treatise of Government, edited by C.B. Macpherson, Hackett Publishing Company, Inc. 1980
Mill, John Stuart, Utilitarianism, edited by George Sher, Hackett Publishing Company, Inc. 2001



[1] Although Bentham argued that only consequences matter and that conceptions of natural rights ultimately rest on the utility of the outcomes, his principle that no person’s utility is worth less than the next person’s implicitly invokes considerations of equality and human worth that are the starting point for many, if not all, rights-based moral theories.
[2] Mill argues that all religious morality can be shown to correspond to utilitarian principles, but whether or not this is true, the fact is that while a belief can be shown to correspond to any number of systems of ethics, the critical one is the system that the believer accepts as the basis of their beliefs.  As an illustrative analogy, economists can develop many compelling theories of why someone trusts a certain company or buys a certain product, but what matters is what the individual in question actually thinks or believes.  I suspect that few religious people would accept that their beliefs stem solely and ultimately from utilitarian concerns.
[3] On this point Kant would say that the victim is clearly being treated merely as a means to the torturer’s ends and that the victim’s humanity is being ignored or overridden in a disrespectful manner.  This would be a clear violation of the Humanity formulation of Kant’s Categorical Imperative, which is the basis of our moral duties in a Kantian system.
[4] For example, while utility may be maximized by incarcerating a violent murderer, if it is at the cost of incarcerating another innocent person, moral principles dictate that this is an unacceptable and unjustifiable bargain.
[5] It has also been argued that in reality torture usually achieves little in terms of generating useful information, because trained victims know how to resist and confound torturers for long enough that their secrets lose their relevance, while untrained victims will simply say anything to end their misery, which equally confounds interrogators.  All this puts serious strain on the utilitarian principle as a general justification for an institutional system of torture under any circumstances.
[6] This is something neither Bentham nor Mill would have recognized, although Mill would have argued that there can be greater utility in the long-run or in the larger sense as a result of accepting lower utility in individual, limited situations.

French 170, Position Paper 4, Walter Benjamin’s The Arcades Project

Wednesday, January 1st, 2003

Jason Yeo
French 170: The City
October 21, 2004
Position Paper 4

Walter Benjamin’s The Arcades Project –

Convolute E: Haussmannization, Barricade Fighting

Benjamin’s mammoth collection of notes, references and illustrations that comprise “The Arcades Project” captures for us a flavor of Paris as a city in flux, poised uncertainly across a half century of changes that marked the rise and fall of the passage and the major urban works directed by Baron Haussmann under the commission of Napoleon III.  While Benjamin deals with many major themes, including the historical significance of the arcades and the complex relationships between social movements, architecture, and mental constructions and representations of urban space, I will focus this position paper on convolute E, titled “Haussmannization, Barricade Fighting”, concentrating on the effects of Haussmann’s projects on the intrinsic experience of the city for the observer and the citizen, and paying particular attention to how these developments relate to the representations of the city in previously discussed writings by Mercier, Baudelaire and Balzac.

With the boulevardization of Paris, the labyrinthine city of Balzac, referred to as “musty and close” in this convolute, is in rapid decline.  While the Paris of Balzac’s Ferragus is a dark, winding maze filled with mysteries and dangers, a trap for hapless souls, in Haussmann’s Paris the long, wide avenues point the way straight out of the city, offering a means of escape.  Benjamin records that Haussmann’s work to open up the narrowest, most indigent quartiers to allow “the influx of better air” was a “battle against poverty and revolution”, echoing Balzac’s depiction of the sunless streets as havens for criminals and assassins, the decrepit and dangerous environment being both cause and effect.  Yet Benjamin’s research leads us to conclude that despite the outward improvement in conditions within the city, the directions of escape, previously so elusive, are now even more desired and resorted to by Parisians, owing to the increasing alienation of the citizen from the city.

The unrecognizable, monumental, artificially inorganic form of the embellished city, with its increasingly uniform architecture, iron constructions and flickering gas lamps, transforms it into a crossroads, a temporary resting place.  In many ways, the city is no longer a place for people, but rather for workers, visitors and speculators.  This is hardly surprising, given Haussmann’s dismissive attitude towards the populace and their needs: “Hundreds of thousands of families, who work in the center of the capital, sleep in the outskirts.  This movement resembles a tide: in the morning the workers stream into Paris, and in the evening the same wave of people flows out.  It is a melancholy image…”  This ebbing, impermanent way of inhabiting Paris both evokes and contrasts with the eternal city that Balzac and Mercier recognized and described, with its unchanging cyclical rhythms and its streets lined with ancient houses where people lived and died.

Paris is no longer a backdrop against which the people form the objects of interest for the flâneur’s gaze, like Baudelaire’s, but rather becomes a series of panoramas and prospects boasting embellishments and other features that now capture the observer’s gaze.  “He saw Paris… His gaze fixed itself most avidly on the space between the column in the Place Vendôme and the cupola of Les Invalides.”  Structures, palaces and vistas have replaced the widows, crones and laborers of Mercier and Baudelaire; the inorganic has triumphed over the organic.  The city has been punctured, ruptured and remodeled so radically, rapidly and violently that the life has escaped from its walls, some of it crushed by the demolitions and some of it escaping to the suburbs.

The Problem with Scholarship Bond-Breakers

Wednesday, January 1st, 2003
So here is what I think about the people who whine about their scholarship bonds – get over it already.  I simply do not accept the line of logic that goes, “Oh, but how can any 18 or 19-year-old be expected to make decisions about the next eleven years of their life?”  The truth is that everyone is faced with important decisions at that age, some with more lasting impact than others, and the onus is on would-be scholars to make a well-informed decision (for which they have many tools at their disposal).  At the end of the day, much of the griping demonstrates a desire to profit personally in the absence of personal risk and without concern for others.

Let us start with the idea that signing a scholarship bond is something an 18 or 19-year-old should not reasonably be expected to do.  On the face of it that is simply preposterous.  Unless the person making this claim also believes that 19-year-olds should not be allowed to marry, stand trial for criminal actions, enter the armed forces, go to medical school (which carries a very similar bond in Singapore) or basically make any major decisions (for which they must bear the consequences), then this position is untenable.  Throughout human civilization virtually all 19-year-olds have had to make major, life-altering or even life-or-death decisions, or have been expected to.


Of course, the fact that such decision-making is expected of and thrust upon many (if not all) adolescents may not mean that these expectations are reasonable or that the results are optimal; my first point is simply that these expectations are normal and well-established, in every time and place since recorded history began.  Whining, unhappy scholars are in no way unique in this respect and should not expect particular attention on this point, unless they wish in retrospect that their right to make such decisions at that age be removed entirely (for wider society as well).

Furthermore, it is reasonable to think that potential scholars are well placed to make good decisions with regard to the form, location and funding of their tertiary education.  It is not as if each scholar were forced to make this decision in a vacuum – every potential scholar is welcome to investigate their options, likely prospects within the funding organisation and so on, with the explicit support of the funding organisations, who want these decisions to be made in an optimal fashion for all involved.

Also, these scholars-to-be, whom we can reasonably assume to be as intelligent as, if not more intelligent than, the “average” university applicant, generally have access to a large support network of friends, family, teachers, seniors and so on who can answer their questions, help them focus on the important factors and weigh the costs and benefits of their decision, whatever it may be.  It is eminently reasonable to expect that the individuals signing scholarship bonds are well aware of what they are signing up for, have considered the likely outcome from a personal standpoint, and are happy with or at least willing to accept the terms.  In short, they are making a well-informed, thoughtful and deliberate decision that they believe is in their best interests.  If they later change their minds because of completely unforeseeable circumstances, that may be considered reasonable, but truly, how many scholars decide they are unhappy for unforeseeable reasons?  I have yet to personally hear any anecdotal complaint that sounded even remotely novel or unexpected.  They mostly run the tired gamut of “but I could have a better/more interesting/more glamorous job with colleagues/employers I like better, in a more liberal environment, with better compensation, overseas with my long-distance American/British significant other, where I could live more comfortably, pursuing a PhD which I can’t do here, in the research area I discovered a passion for in college…”  None of these reasons are trivial to those doing the grousing, of course (nor should they be), but neither are they surprising, unforeseeable developments.  The point is that these situations are ones which would-be scholars should have already considered when they agreed to accept the terms of the scholarship.  It is not hard to understand – in exchange for the cost of university education over a number of years, personal expenses, travel, in-house training, a guaranteed fast-track career, a salary (and whatever other perks there are), each scholar agrees to give up the freedom to pick their employer, job location and specific job type for a number of years.  That is the contract they have signed, and they should be prepared to honour it.


Some disgruntled bonded scholars will naturally say that if they are truly unhappy, they should be allowed to leave.  I would respond by saying that if they were indeed desperately, violently or depressively unhappy, then sure, they should be allowed to break their bonds (and my hunch is that this is the status quo).  However, this raises the question of how and why they could have become so extremely unhappy.  Again, I can accept completely unforeseeable or extraordinary (or even merely uncommon) circumstances as a legitimate cause, but virtually no foreseeable cause should result in such a drastic situation.  But to return to the idea of being “truly unhappy”: leaving aside clinical depression, how unhappy is “truly unhappy”?  The problem is that apart from the kind of major psychological problem just discussed, almost nothing can reasonably be considered “truly unhappy”.  From the perspective of the scholarship-granting organisation, the point is not whether the scholar feels that they could be happier elsewhere, but whether they can be happy and productive with the organisation for the period of their bond and beyond, as these scholars have formally agreed to at least try to be.  It would be unreasonable to expect that the government (or any scholarship-granting organisation) should be prepared to disburse large sums of money over many years while in practice being very lax in holding scholars to their bonds.  Thus, when it comes to foreseeable challenges, everything from mild dissatisfaction (“the culture in this office is so parochial sometimes, and my boss can be very uninformed”) to grudging grouses (“I had to turn down that lucrative private-sector overseas job offer”) are simply issues that scholars have agreed to work through, and scholarship-granting organisations should not be expected to relax their expectations.  (In other words: shut up, grin and bear it, like you said you would.)


This brings us to the next common complaint from unhappy scholars – that there should not be any moral stigma associated with bond-breaking, mainly because they have already paid back the money spent by the organisation on their university education.  And not just that, the aggrieved will emphasize, but paid back with interest!  Therefore, these (pre-/ex-)scholars reason, they no longer owe the organisation (or society) anything and have done nothing wrong.  This reasoning convinces lots of people (in my experience, often people considering breaking their bonds), but there are at least two glaring problems with that logic.  The first has to do with the terms of their scholarship, and the second has to do with the (moral) meaning of contracts.


Remember, these scholarship organisations were not acting as banks making educational loans.  In return for their investment of time, effort and money, these scholarship-granting organisations were expecting something other than money, something in some ways worth intangibly more than the time and money spent.  They were contracting to have in their service an educated graduate who would be familiar with and committed to the organisation for at least the bond period.  In other words, unless scholarship bond-breakers can give the organisation such a graduate, they have not, as they may want to believe, “paid back” their bond; and even if these ex-scholars could supply such a graduate, they would not have fulfilled the exact terms of their scholarship (which applied to them individually).  The money they have to pay is instead a penalty, or damages owed to the scholarship-granting organisation when a scholar reneges on their contract.  It is the organisation cutting its losses after a bad transaction.


Next, we come to the idea that “transaction” and “contract” are morally neutral business terms like “interest rate” and “delivery schedule”.  Simply put, this line of thought is just false.  Even in general, transactions and contracts often carry a moral value, such as when pharmaceutical companies set prices for HIV/AIDS medications, when Shell decides to transact with the internationally unrecognised and politically repressive Nigerian government, or when a construction firm contracts with the government to build elevated highways that conform to established building codes.  Clearly, it is the actual content of the transaction or contract (and its wider context and implications) that determines its association with morality.  When it comes to scholarships in particular, there is quite evidently the larger, societal issue of educational opportunity and meritocracy, which many scholars (including those who break their bonds) would recognise introduces a moral dimension.  The fact is that scholarships represent a zero-sum game, as do opportunities for university education, to a certain extent.  By taking up a scholarship, scholars have also taken up the social responsibility to honour the terms of that contract (just as the building contractor is bound to build power plants, office buildings or transport networks to specification); in this case that responsibility is to be educated for an agreed, specified purpose.  This is especially true for government scholars, where the scholarship board is selecting scholars to serve the people or even run the country.  Of course, the worst-case scenario morally is the scholar who pretends to be completely committed to serving the organisation in order to be offered the scholarship when this is not the case, right from the beginning.  This amounts to little more than fraud, which is intuitively morally wrong.  At this point it should be briefly mentioned that at least one way to understand the immorality (and not just the illegality) associated with the non-fulfilment of a contract (expressed intuitively by words like “untrustworthy” and “dishonourable”) is to consider that the honouring of contracts, and the reasonable expectation that contracts will be honoured, is what allows society to function effectively (or at all, Realist political scientists would say).  Working from this starting point, contracts and contract theory form the basis for an entire established branch of morality within the philosophical tradition.  No one should be surprised to find that scholarship bond-breakers are often judged as immoral if morality is so firmly rooted in the honouring of contracts.


At the end of the day, my guess is that many of the disgruntled (ex-)scholars I have in mind are well aware of the points I have just made (especially since many of them no doubt took university classes that extensively discussed law, philosophy and morality) but are willing to ignore or downplay them, while focusing on reasoning that revolves around their personal emotions, desires and opportunities.  Of course this kind of self-preoccupation (or selfishness, if you like) is neither unusual nor even necessarily worthy of any moral judgement.  However, in the case of scholars, who have deliberately and willingly committed themselves to a particular and important social or organisational role in exchange for the various benefits of their scholarship, such a willingness to ignore the moral (and wider) implications of their actions demonstrates a certain level of impatient callousness and overriding self-concern that is indeed shameful to observe, whether in scholars, politicians, teachers or medical practitioners (not to mention any other occupation).


As a final postscript, I will say that I would not be the least bit surprised if there are perfectly reasonable cases to be made for individual scholars who want to break their bonds.  They are welcome to argue, for example, that by instead doing (insert whatever it is they want to do after breaking their scholarship bond), they can contribute more to society or to the world (an often unprovable argument which still does not refute any of the facts about contracts and their association with morality).  Or, as I have consistently recognised, there may be extraordinary or unforeseeable circumstances affecting that particular scholar.  Nevertheless, as I see it there is simply no generalised argument to absolve all (or even most) scholars who wish to break, or have already broken, their scholarship bonds of the fact that they are acting badly and should be (a)shamed.  It is their individual duty to make a case out of their own unique circumstances.  If everyone did just that, it would probably be easier for those who truly should be allowed to break their bonds to do so, since they would not be confused with the run-of-the-mill disaffected and impatient scholar longing for greener pastures.

(Fall 2005)

NB: I am not (and have never been) bonded to any scholarship-granting organisation, something I decided quite quickly was best for me.


Singlish Conversations

Wednesday, January 1st, 2003

 Introduction:  The following are excerpts from two emails I sent to various Americans in response to the question: “What do Singaporeans Speak?”  My replies have been left unedited for style, grammar and spelling.

> but I guess I don’t read in whatever you speak in
> Singapore (don’t tell me it’s English ) )

There isn’t really a simple answer to the question: “What do people in Singapore speak?”, although if there was one, it would be – English )

Now this is amusing ) Almost every time I’ve been to the States (about a half-dozen times over the years), I get comments like: “Wow, where did you learn to speak English so well?”, and others like it. I always think, “Are they saying it sort of condescendingly? – As in ‘Look, the Asian speaks our language!’” Because the truth is, English is my ‘mother-tongue’ and it used to be odd for me to imagine that other Asians weren’t fluent in English… You see, English is the language of instruction in virtually all schools here (for all non-language classes) from kindergarten up, although everyone is required to take at least ten (count ‘em) years of another language from grade school till high school (parents’ decisions correspond largely to race, although not necessarily – generally, Indians learn Tamil, Punjabi, Sanskrit etc, Malays take Bahasa Malayu, Chinese take Mandarin, and Eurasians/Caucasians take French/German/Malay/Japanese…), and a goodly portion of the brighter kids take a third language for up to six years before college.

So what do we speak in Singapore, which has four official languages (English, Malay, Mandarin, Tamil)? Well, many of the Chinese speak Mandarin Chinese as their ‘mother-tongue’ (with which they are most comfortable) or any of a variety of Chinese dialects (and many people speak a number of these fluently, especially older Singaporeans), including Cantonese, Hokkien, Teochew, Hakka etc. People often babble in their own languages in a racially homogenous group, e.g., a group of Malays friends will mostly converse in Malay, with a sprinkling of English. But in a racially diverse setting, people will just naturally stick to English.

But speaking of speaking a mixture of languages, that brings me to the topic of ‘Singlish’, or our own colloquial strain/dialect of English, which has developed as a nationally understood blend of words, phrases and expressions borrowed and adapted from various languages, structured with a very truncated and mangled version of English grammar/syntax, spoken with the lilting, sing-song manner of Malay, characterised by a very staccato sort of pronunciation, and punctuated with Chinese-style exclamations (I think they are known as ‘particles’ to linguists). The government has spoken out against the use of Singlish a couple of times (using the pejorative term ‘broken-English’ interchangeably with ‘Singlish’), fearing that the pervasive use of Singlish would erode our economic competitive advantage (the fact that everyone understands/speaks English is certainly one of the things that help us regularly attract the highest per capita FDI in the world). At the same time, there is a vocal (pun not intended) portion of Singaporeans who think that we should be proud of our national identity, and celebrate what makes our culture unique.

In any case, the ‘legitimacy’ of Singlish has been growing, with a Japanese-Singlish dictionary being published (in Japan) in the last couple of years, and quite a number of linguists pointing out that Singlish has all the characteristics of a stable, full-fledged English dialect… In fact, some of my seniors at Harvard told me that they used to be invited to a “Dialects of English” lecture every term to just read something or converse in front of the entire class to ‘demonstrate’ Singlish. Personally, I wouldn’t mind having to do that, except I’ve heard that that professor has left the College. )

> I’d love to talk to you more about the way people interact in all the different languages,
> how they influence the culture, and everything.
> I’m especially interested to know whether or not english is the “main” language
> spoken out of basic necessity and commonness,
> and if the result is that there’s no particular “english singaporean” subculture.
What do you mean by “particular ‘english singaporean’ subculture”? If you’re referring to “Singlish”, a purist would note that there are different strains and degrees of ’singlish’, in the way its spoken and its similarity to “Queen’s English” (which is what we’re taught in school).

In fact, there were political reasons as to why English was chosen as the language of instruction/administration in Singapore after we gained independence from the British Empire (aside from the economic ones I mentioned previously). Being a multicultural nation – Chinese (76.5%); Malays (13.8%); Indians (8.1%); Others (1.6%), it was important for then-Prime Minister Lee Kuan Yew to make sure that no one could accuse the infant government of being unfair to any one racial group (e.g., forcing the majority Chinese population to speak Malay or the *indigenous* Malays to speak Mandarin), which might result in violent civil unrest. So English was the solution, since then everyone was put on a level playing field, linguistically, i.e., equally (dis)advantaged. Interestingly, ex-PM Lee himself is english-speaking, and was educated at Cambridge, England.

Today, ordinary Singaporeans tend to associate high proficiency in English with:

a) high intelligence (a vestige of colonial envy of the west?)
b) privileged background (albeit that most of Singapore is resoundingly middle class)
c) a bright future (in employment and otherwise)
d) all of the above

But bilingualism is even more highly valued (particularly Mandarin and English), and linguists (who speak several Chinese dialects, or Japanese or Malay) are both well-respected and in-demand, since being able to converse in a language that a customer/patient/client feels most comfortable in is potentially very valuable (particularly in a country with 7.6m tourists annually, compared to our 4.16m population).

Actually, two decades of national policy to encourage all Chinese to speak Mandarin has all but destroyed our rich national culture of Chinese dialects, to the extent that many young Singaporean Chinese are no longer able to understand Beijing Opera, Cantonese movies (from Hong Kong), or even their grandparents, who may speak little English or Mandarin. Again, the reasons for this move were political, as in earlier days the Chinese would be politically aligned by their dialect groups, and there would be bloody clashes and disputes (a bit like the situation within India, with the Sikhs and the Indians). Anyway, the government’s solution was to try and find common ground and forge stronger understanding amongst the Chinese by making everyone speak Mandarin, and radio and television shows in dialect were taken off the air, to be replaced with mandarin ones. (Interesting trivia: as a result of the reunification of the written Chinese script by Shih Huang Di, the first emperor of China, centuries ago, all Chinese dialects are written the same way, although they are pronounced very differently. So a 1960s radio news broadcast would be read by a number of different people in different dialects from the same written script, though none of the newsreaders could understand each other in conversation ))

I guess the moral is that people have to be able to talk to one another in a common language to forge a strong social compact?

Last thing to mention – there are now last-ditch efforts to revive and preserve the use of dialects in Singapore, but most people believe its a lost cause – once the chain of language use and transmission from mother to child is broken, a certain richness of the language will be lost, and presently there are few compelling economic/political/cultural reasons likely to motivate people to learn them… (same story with the thousands of other fading languages/dialects globally).

Jason S. Yeo, Feb-Apr 2003