The Irrational Origin of the Belief In Truth


The Genealogy of Morality begins with an articulation of human ignorance with respect to the self: “we remain of necessity strangers to ourselves, we do not understand ourselves, we must mistake ourselves…” (1). What is the necessity that demands this ignorance? What is the “good reason” that knowers cannot know themselves (1)? This provocative question, this apparent paradox, is never explicitly resolved. I argue that the requirement of self-ignorance does not spring forth from intellectual coercion or psychological aversion, but rather it is the expression of a tautology. For Nietzsche, in order to qualify as a “knower,” one must believe in truth; one must have faith that there are things that can be known — and to know oneself destroys this requisite belief. The language of the text confuses matters, eliding the distinction between knowing as an act of believing in truth and knowing as personal awareness, as knowing oneself, but they are distinct and Nietzsche believes that the latter precludes the former. To know oneself, to stare into the abyss, leads to awareness of the ascetic ideal and thus, ultimately, the understanding that all idols and prophets are false and cynical; that science is a charade; that the value of truth itself has no foundation. Thus, in order to know about the world — to believe in reason, calculation, causality — one cannot submit to the type of deep introspection required to truly know oneself, to be self-aware.

Now to examine the origins of the knower. Throughout the Genealogy, Nietzsche evokes them, including them as co-conspirators, as confederates in his effort to awaken man from his hibernation and inactivity. However, this is merely a rhetorical device. The knowers are not intentional participants in this project. They are those committed to a “naturalistic or scientific view of the universe” (120). Still, the knower, the scientist-philosopher, and the priest all function similarly and, at this stage, the will to truth compels them to listen. Thus, they contribute against their will. Nietzsche asserts that “all great things necessarily perish through themselves, through an act of self-cancellation” (117). Eventually the law is applied to the lawgiver himself: “Christian truthfulness” becomes conscious of itself, subjects itself to the rigorous scrutiny to which all else has been subjected, and asks the question “what does all will to truth mean?” (117). The knower’s will to truth ultimately negates itself by mandating the inquiry that reveals its irrational foundations.

To appreciate the principle of self-cancellation, it is necessary to understand how science arose from asceticism; how it was shaped and reinterpreted by the ideal. Nietzsche claims that science is “not the opposite of the ascetic ideal but rather its most recent and noblest form” (107). He sees modern science as a new manifestation of this millennia-old belief. The ascetic ideal is the product of humanity’s fundamental need for self-deception. It emerged to salve the inadequacy of social life to the defanged, declawed, defenestrated being — whose civilization robbed him of the ability to rely on instinct. It infused suffering with meaning and through this infusion filled the “enormous emptiness” that pressed upon man (118). Without the comfort it confers, the essential meaninglessness of existence would be overwhelming. However, asceticism is a peculiar salve. It sickens those who take it as medicine, yet they feel better for it. It does not alleviate suffering, but it provides the narrative scaffolding that makes the pain meaningful through “religious interpretation and ‘justification’” (101). It offers this sublime release by assigning culpability: forging a meaning for suffering in the nourishing admonishment that “you alone are to blame for yourself” (92). Yet from this simple mantra grows modernity. Religion, philosophy, science are all distinct apparitions of the same underlying phenomenon. And if one looks closely, it is not hard to find the threads that tie the ideals of science to the ascetic ideal.

The demeanor and pose of the scientist maps to that of the ascetic priest. The scientist’s “will to neutrality and objectivity” is grounded in a self-denying instinct (79). What is impartiality if not literal denial of the self, the intentional suppression of the subjective? Furthermore, the philosopher-scientist denies the senses, “demoting physicality to an illusion” (84). But this correspondence does not explain why the concept of truth relies on the ascetic archetype. How did knowers come to bear the mantle of asceticism?

Nietzsche suggests that “[c]ontemplation first appeared on earth in disguised form,” out of necessity (81). The “inactive, brooding, unwarriorlike” instincts of these original thinkers made them the objects of suspicion and mistrust (81). Thus, their continued existence required that they adopt “the previously established types of contemplative human beings —as priest, magician, soothsayer, as religious human generally” (82). And so, the prototypical philosopher-scientist was formed in the mold of those who came before. To sell the illusion required that they act their role, and to act well they had to believe it.
But, to reiterate Nietzschean methodology, the “cause of the genesis of a thing” is divorced from its final function (50). The two must be understood separately. If, in the beginning, science inherited its asceticism out of existential necessity, the ideal has since reinterpreted itself. The tenets of modern science, rather than just the outward appearance and behavior of its practitioners, have been infected by asceticism. The ascetic ideal, through science, as through religion before it, advocates the complete removal and denigration of the self in the name of ‘truth.’ Science is, as a result, directed by this “unconditional will to truth” which requires “the renunciation of all interpretation” (109). There is no space for sensuality or individuality: everything must be reduced to that which can be observed ‘objectively.’ But the demand for objectivity requires “[t]hat we think an eye that cannot be thought, an eye that must not have any direction” (85). Objective truth aims to remove any trace of the self from the act of knowing — the eye is “trained for an ever more impersonal appraisal” (50) — which Nietzsche sees as an “absurdity” (85). For him, there is “only perspectival seeing, only a perspectival knowing” (85).

Knowers, adherents to this cult of objective truth, “are mistrustful of every kind of believer now” as a strong faith “raises suspicions against that in which it believes” (108). So science ‘overcame’ God and exorcised what was “exoteric about [the ascetic] ideal,” but this only renewed its vitality. The ideal, now “stripped of its outworks,” was reduced to its core proposition: “its will to truth” (116). The habit of truthfulness, which originated in Christian confession, was surreptitiously “sublimated into scientific conscience, into intellectual cleanliness at any price” (116). And so, while belief in gods became fanciful, belief in truth, despite its divine origin, became “more firm and unconditional” (109).

Here lies the tautological contradiction that condemns “knowers to be unknown to themselves” (1). Modern science is predicated on a subtle metaphysics; so subtle, that those who claim to see the world as it is — the knowers, the scientists, the scholars, the “trumpeters of reality” — necessarily overlook it (107). And what is it that they are not permitted to see? That their belief in truth is derived from the apocryphal axiom that “God is truth, that truth is divine” (110). Despite the godlessness and materialism of modern thinkers, they “still take [their] fire from that great fire that was ignited by a thousand-year old belief” (110). They are blind to their role as heirs to a grand delusion; they “stand too close to themselves” and thus overlook the hollowness of the foundations that sustain their belief (109). In fact, they cannot even conceive of the need for any foundation at all: they think they float in the ethereal realm of absolute truth, and buoyed by that conviction, they overlook its divine and irrational origin. They are honest victims of the “dangerous old conceptual fabrication that posited … such contradictory concepts as ‘pure reason,’ ‘absolute spirituality,’ ‘knowledge in itself’” (85).

The concept of truth emerged from deception: “truth was posited as being, as God, as highest authority; because truth was simply not permitted to be a problem” (110). For a knower to know themselves (in other words, to be self-aware), they would be asked to do what they are “not permitted” to do: to “open their eyes towards themselves, [to] know how to distinguish between ‘true’ and ‘false’ in their own case” rather than kowtowing to the inherited proscription (100). Knowers would be forced to justify the will to truth — to prove the value of truth itself, now that it has been stripped of its divine authorization (110). And it is not clear that such a justification exists. This final introspection is the culmination of “a two thousand year discipline in truth, which in the end forbids itself the lie involved in the belief in God,” thus debasing itself entirely (116).

Yet, for Nietzsche, this event, this self-overcoming of Christian truthfulness, is a “hopeful” one (117). His call to self-awareness and his disdain for the ascetic ideal do not constitute an entreaty to understand reality more precisely, with more accuracy. He is instead concerned for the progress of mankind. Nietzsche views modernity as diseased and effeminate: dependent on meaning provided by the ascetic ideal, a self-denying delusion that constricts the strong and beatifies the weak. He fears that modern man, through his embrace of the ascetic ideal, lives comfortably “at the expense of the future” (5). Self-awareness hastens the end of this sickened state by moving truthfulness closer to its ‘self-overcoming,’ which will throw the world into disarray and, perhaps, make life “worthier… of living” (80). When the old systems of morality have been weakened, something new can replace them. With the constraints of ascetic morality abandoned, the strong are unshackled and once again able to stamp their “own functional meaning onto” reality (51). The pending self-sacrifice of truth may destabilize the world, destroy nations, impose a new suffering upon the many, yet these are the conditions under which Nietzsche believes improvement can occur. Remember: “the forfeiture of meaning and purposiveness … belongs to the conditions of true progress” (51).

Dependence, Freedom & the Structure of Obligation


If human beings are entirely dependent upon one another, as Adam Smith suggests, then how is it possible for them to be free?

Does the state of freedom preclude dependence? Can one be free while unable to exist in isolation? Rousseau and Smith clash on this point because of their distinct views on human nature. For Smith, freedom is a result of fortuitous accidents; it is historically contingent, a product of selfish market forces rather than high-minded political intention. This contrasts with the Rousseauvian vision of a highly intentional social contract, where freedom is built upon a deliberate and considered conversion from self-interested individuals into a body politic. Dependence, in its various forms, is central to the stories that both authors tell about freedom. Rousseau condemns it as humanity’s original sin, while Smith embraces it as, not only natural, but also beneficial. However, this apparent tension is, in fact, entirely superficial, stemming from the conflation of two distinct types of dependence, one centralized and the other distributed. Both types of dependence entail a reliance on others, yet the structure of these obligations is different. Centralized dependence, where multitudes are maintained by just a few, corrodes freedom and condemns men to slavery. On the other hand, distributed dependence, characterized by a complex web of mutual obligations between many distinct individuals, not only permits, but also promotes, freedom.
According to Rousseau, human beings, in their natural state, were free because, like animals, they relied upon no one but themselves. Each man, he envisioned, had no “greater need for another man than monkey or wolf has for another of its respective species” (Rousseau 60). Entirely independent of everyone, Rousseau’s savages were endowed with natural freedom, a primitive type of liberty where man was free to do as he wished, to give in to any passing temptation, to live by himself on the fruits of his own labor (Rousseau 167).
Rousseau, seeing freedom as a property particular to natural man and dependence as the fulcrum that lifts man from this state, believed dependence to be a corrupting force. He asserts that it leads to degeneration, pointing to the physical differences that can be observed between domesticated animals and their wild counterparts: “The horse, the cat, the bull, even the ass … have a more robust constitution … in the forest than in our homes. … [I]t might be said that all our efforts at feeding them and treating them well only end in their degeneration” (Rousseau 51). By providing a comfortable life for livestock, we diminish their natural vitality. And, he argues, the same must be true for human beings, if not to an even greater degree, as humans preserve comforts for themselves that they withhold from the animals that they domesticate (Rousseau 51).
Ultimately, though, these examples of dependence are analogies that Rousseau draws upon to make his political point: “The bonds of servitude are formed merely from the mutual dependence of men and the reciprocal needs that unite them, it is impossible to enslave a man without having first put him in the position of being incapable of doing without another” (Rousseau 68). No one can be compelled into slavery by another unless the alternative is to forfeit his life. But true dependence, where one is incapable of surviving without assistance, makes this type of coercion possible. Thus, dependence is the precondition of slavery.
Rousseau is not wrong in this assertion. However, his argument applies to a specific type of dependence. In The Wealth of Nations, Smith helps to draw the distinction between dependence that is “degenerative,” to use the Rousseauvian term, and that which is both necessary and beneficial. By Smith’s account, feudal Europe was subdivided into territories, each dominated by a “great proprietor” (Smith 440). By owning land, these proprietors controlled the entire surplus that it generated. Yet, without foreign commerce or finer manufactured goods, the bounty of the land could not be exchanged, only consumed: “If the surplus produced is sufficient to maintain a hundred men … [the proprietor] can make use of it in no other way” (Smith 440). The single use of the surplus meant that great proprietors were necessarily “surrounded with a multitude of retainers and dependents,” who were unable to provide anything in return (Smith 440). The proprietor’s land was already worked and additional labor was unnecessary. Thus, these dependents, maintained entirely by the proprietor’s largess, had to obey his command. Their state of dependence made them slaves to the lord who fed them. This is an example of the corrosive, centralized dependence articulated by Rousseau in his Discourse on the Origins of Inequality.
However, a shift from this state of subjugation, where all men exist either as tenants or dependents of a great proprietor, took place and produced commercial society: where a vast number of differentiated laborers are sustained by many unique customers. In other words, a transition from centralized dependence to distributed dependence. And, ultimately, this transformation can be traced back to the division of labor.
It is through humanity’s shared instinct to barter that the division of labor originally occurs (Smith 16). Talents are not equally distributed across a population, and so some individuals have greater “readiness and dexterity” than others (Smith 16). They are able to produce some output more efficiently than their peers. However, if each individual were entirely independent, their unique abilities would be underutilized. An individual can only consume a finite amount, so, if exchange were impossible, there would be no incentive to produce beyond what was required by a single person. The ability to exchange goods to mutual advantage gives rise to labor specialization or, as Smith writes: “the study of his own advantage naturally, or rather necessarily leads him to prefer that employment which is most advantageous” (Smith 482).
Smith opens The Wealth of Nations by outlining the steps required to produce a single pin. It is an anecdote which illustrates his understanding of the division of labor: what was once completed by the exertion of a single individual now requires the work of dozens. The labor required to manufacture a pin has been fragmented: “One man draws the wire, another straightens it, a third cuts it, a fourth points it, a fifth grinds it at the top” with the production line continuing ad infinitum (Smith 4). Each person who is required to produce a pin depends on the labor of every individual prior to his step in the process as well as all those who follow him; the former to furnish him with the raw material which he works upon and the latter to continue the process to its conclusion.
By dividing the process into steps, each worker’s task is reduced to “some one simple operation” (Smith 8). And as this single operation becomes the “sole employment of his life,” the worker unavoidably improves his ability (Smith 8). Many more men are required for this type of production, but the labor of each individual is less significant and the skill required by each step more trivial. Smith shows that pins can be produced more efficiently when the effort is divided; however, the efficiency gains come with a necessary corollary: dependence increases as well. Yet the resulting dependence is distributed evenly as each worker relies equally upon every other. This complex web of mutually advantageous dependence forms the basis of commercial society.
And commerce is what ultimately destroyed the feudal order. The great proprietors of Europe did not maintain their dependents out of kindness, but rather out of selfish opportunism. The surplus captured by proprietors could not be spent directly on their own betterment, so instead it was converted into power over other men. The emergence of foreign trade and fine manufacturing created an alternate outlet for surplus production, by which proprietors could “consum[e] the whole value of the rents themselves” (Smith 444). All men are naturally self-interested, and so upon finding a means of consuming the surplus without sharing it, proprietors did so. This gradually eroded the foundations of the feudal order as “for the gratification of the … most sordid of all vanities, [proprietors] bartered their whole power and authority” (Smith 444-5). By trading their surplus for expensive baubles instead of sharing it with their retainers, the great proprietors interrupted the cycle of dependence upon themselves.
In commercial society, unlike in the feudal order, dependence is highly distributed due to the division of labor. The wealthy remain wealthy, but their political power has decayed as now each worker “derives his subsistence from the employment, not of one, but of a hundred or a thousand different customers” (Smith 445). In commercial society as a whole, there is a huge degree of dependence in absolute terms, as the sustenance of every single worker might require transactions with hundreds of other individuals. But now even the wealthiest contribute “but a very small proportion… of [workers’] whole annual maintenance” (Smith 445). The centralized dependence that characterized feudal institutions has been replaced and, though a single wealthy individual might contribute to the livelihood of many more workers than before, “they are all more or less independent of him, because generally they can be maintained without him” (Smith 445).
So, if dependence can, in certain situations, be a wellspring of greater independence, is Rousseau’s understanding of the concept simply flawed? While Rousseau’s language doesn’t make the distinction between centralized dependence and distributed dependence explicit, the conceptual underpinnings of the social contract reveal that he accepted the division between the two.
In On the Social Contract, Rousseau states that there is a point when humanity as a whole can no longer exist without combining forces, and thus becoming mutually dependent. The obstacles standing in the way of progress become insurmountable if they are faced alone (Rousseau 163). So, as independent existence is no longer feasible, Rousseau attempts to formulate a “form of association” that minimizes the risk of degeneration that comes with dependence (Rousseau 164). This effort results in the social contract.
The social contract is an ingenious form of association that “defends and protects” every member while each “nevertheless obeys only himself and remains as free as before” (Rousseau 164). By surrendering one’s property and rights to the entire community, everyone is equal in condition, and because everyone has the same condition, “no one has an interest in making it more burdensome for the others” because they ultimately shoulder the increased load as well (Rousseau 164). In this process, man loses his natural freedom but gains “civil liberty and the proprietary ownership of all he possesses” (Rousseau 167). The ultimate effect is that the social contract aligns each individual’s self-interest with the public interest.
In his articulation of the social contract, Rousseau outlines the logic of freedom through distributed dependence, arguing that “in giving himself to all, each person gives himself to nobody” (Rousseau 164). Rousseau suggests that when the degree of interdependence is so extreme, it loses its coercive properties and becomes an engine of unification.
Rousseau and Smith both believe freedom to be attainable despite human beings’ dependence on one another and, in some sense, because of it. Dependence is not uniform; it comes in various shades and flavors based on the structure of obligations. And depending on the character of dependence, outcomes are different. Centralized dependence leads to slavery and degeneration, while distributed dependence promotes individual freedom and independence. Both social contracts and markets are mechanisms that foster distributed dependence. The first by contract: each individual cedes his rights and property to the collective and thus all become equally dependent upon every other. The second by self-interest: each individual, discovering his comparative advantage, rationally chooses to specialize his labor and, in doing so, becomes dependent on the multitude of other differentiated laborers who perform tasks better than he ever could. Each system of association dilutes the degree of reliance on any specific individual, instead spreading it in roughly equal proportion across society as a whole, each individual depending, reciprocally, upon every other. When dependence is well distributed, no man can be a slave. Though he relies upon a multitude of others — the butcher for his meat, the farmer for his grain — none of them can control him because they, in turn, depend on him.


A Reasonable Expectation of Privacy


If, without breaking any property laws, one person is able to easily observe another, the latter has no right to claim an expectation of privacy. When an individual can be seen by anyone, they should assume that they could be seen by everyone. A reasonable expectation of privacy can be assumed only when a physical transgression is required for information to be obtained. If someone must trespass or otherwise violate property rights in order to conduct surveillance, then a general expectation of privacy is reasonable. This is because, in the physical world, an underlying reason for privacy is a concern for bodily security. A stalker isn’t just a stalker; they are also a potential assailant or even murderer. It is a stalker’s inclination (if not actual ability) to penetrate spaces that we thought were ours alone that disturbs us. Privacy in the virtual realm lacks the same justification. Data collection on the Internet severs the connection between potential violence and the acquisition of information. By its nature, virtual surveillance is less physically invasive and thus warrants a more limited expectation of privacy.

Leaving the personal domain of the home has always carried with it the presumption that an individual could be seen by the public. Historically, this may have meant that a handful of neighbors or acquaintances might be aware of one’s movements. Today, it means that one might be under constant CCTV surveillance. Even though outcomes are different, privacy rights are no more infringed upon today than they were previously. In any public space, where one can be easily observed, the reasonable expectation of privacy evaporates. Any non-invasive (purely sensory) observation that is conducted in public cannot constitute the violation of the surveilled individual’s privacy. No one has a right to monopolize their own image, or information about themselves that has been acquired in a non-invasive manner.

However, surveillance in the real world often requires the use of more invasive measures because the ability to physically intrude on one’s privacy necessitates an uncomfortable degree of geographic proximity. Suppose someone discovers themselves in the background of a photo taken at Grand Central Station. This picture was taken without their consent or knowledge, yet they likely wouldn’t feel as though their privacy had been violated. Many photographs are taken every day in which strangers are unwitting subjects. Yet if they were to stumble upon another picture of themselves, taken as they slept in their bed, the reaction would likely be one of immense violation, if not outright fear. This photograph, unlike the other, implies physical vulnerability — the photographer was close enough that they either were trespassing or could have, had they chosen to. Invasions of privacy in the physical world carry with them the latent threat of physical violence. Thus a reasonable expectation of privacy exists in spaces where we are vulnerable or intimate and where access is restricted, such as the home or the bathroom.

Attempts to analogize the privacy protections that are warranted in the physical world to the digital realm necessarily lead to faulty conclusions. In the digital world, what constitutes a violation of privacy changes from physical intrusion and the specter of violence to the pure collection of information. Suppose a woman is browsing a website in the comfort of her own home. As she uses the site, her behavior is tracked. Usage information, like the type of device she is using, the amount of time she spends on the site and her most frequently visited pages, is recorded in the background. If this type of information had been acquired by a person surreptitiously lurking behind her, having broken into her home and hidden in her closet, we could comfortably say the woman’s privacy had been invaded. However, as this data is collected digitally, the collection process is less invasive — no direct physical threat is posed to the subject of the surveillance.

Though she is in her home, which in the case of physical surveillance would grant her a reasonable expectation of privacy, when she uses the Internet her behavior can no longer be thought of as taking place within the privileged space of her domicile, but rather on the unrestricted web. Browsing the Internet should be understood as equivalent to strolling across the public square. Just as in the physical world, where passive observation of an individual in a public space is not an invasion of privacy, the same holds true on the Internet. It is the means of acquisition that constitutes the violation, not the information itself that is collected.

In the real world, difficulty of access is what underpins a reasonable expectation of privacy and the delineation between the public and private sphere is simply the threshold of one’s house. Digital surveillance affords no easy distinction, and because data collection on the Internet lacks the threat of violence, it is less invasive. We need to recalibrate our expectations and acknowledge that the Internet is public by default.

Margaret Cavendish and the Epistemology of Imperfect Instruments



Margaret Cavendish is skeptical of the bold assertion that there is scientific relevance in an imperceptible world that is not directly visible to the naked eye. Though the empirical evidence of modern science has sided with Robert Hooke, the epistemic thrust of her criticisms remains unresolved — when do instruments augment, rather than distort, the senses? How can claims about the unobservable be verified when verification relies on ‘unnatural’ devices, which, themselves, are potentially deceiving? And finally, more broadly, what knowledge is relevant and worthwhile to acquire?

Cavendish opens by grounding her argument in an appeal to expertise. She discusses the observations of artists — craftsmen who are employed for their ability to accurately depict reality — who “confess” the multitude of conflicting perspectives produced by factors such as lighting and positioning.[1] She suggests that, because vision is a fallible sense, things may appear to contradict their true essence. So, if perception cannot be used to discriminate between truth and fiction, how can the actual form of objects be determined? Cavendish proceeds with a discussion of the utility of objects. She argues that because both knives and pins fulfill their function, their form must enable that function. The true shape of an object is revealed in its interactions with the world. A knife cuts because it holds an edge, just as a pin penetrates paper due to the sharpness of its point. For Cavendish, observations to the contrary provide a condemnation of tools rather than a refutation of her argument. She believes that the ‘imperfections’ made visible via instrumentation are introduced by the instruments themselves.

Cavendish is unconvinced by the tools that are supposed to augment the senses. She sees a “hermaphroditical” property in observations made via artificial means — the purity of nature perverted.[2] She poses the thought experiment of a woman drawn to the proportions seen through a microscope. The image of the woman is seen through a “glass” and is distorted by “various refraction and reflexion,” resulting in a “monstrous” and false perception of the woman’s form. Here, observation, mediated by an unnatural property, is deceiving.[3] This phenomenon leads Cavendish to believe that artificial aids cannot be used to understand nature. Only conclusions derived through the synthesis of un-aided observation and rational contemplation are credible. Though the transition to a more experimental science may seem inevitable in retrospect, during its nascent stages it was similarly predicated upon belief — if not in God or scripture, then in the devices that acted as portals to worlds beyond the senses.

Cavendish is a peculiar scientific figure for her time, in that she was a woman. Her husband, the Duke of Newcastle-upon-Tyne, was a member of the Royal Society.[4] He was supportive of her work and his position afforded her certain privileges, such as becoming the only woman to attend one of the Society’s meetings.[5] Despite the gendered nature of science in England at the time and her perceived eccentricity for engaging in such a male-dominated activity, Cavendish did not shy away from intellectual confrontation. She clashed directly with members of the Royal Society, notably Robert Hooke. While not referenced by name in this passage, Hooke and his promotion of the microscope as a relevant scientific tool are the principal foils for Cavendish’s Observations Upon Experimental Philosophy. The examples she uses, such as the fly’s eyes, the point of a pin and a knife’s edge, are all directly pulled from Hooke’s work. And when she continues, discussing the “unprofitable” science done with microscopes, it is a clear jab at Hooke, who works on the dime of patrons of the Royal Society.[6] Embedded in her criticism of Hooke is the sincere belief that “better arts and studies” were being spurned for fanciful distractions.[7] With the experimental philosophy promoted by Hooke and enabled, in part, by the use of microscopes, Cavendish believed that too much emphasis was placed on seeing “exterior” phenomena rather than on understanding the manner by which things actually function.[8]

So what criteria does Cavendish use to determine what makes knowledge worth acquiring? In the paragraph that follows the selected quotation, Cavendish enumerates the ways in which a science could be considered useful: from improving agriculture to increasing commerce, any tangible benefit to society is enough. Yet, from her perspective, no actionable information can be acquired with a microscope. Seeing what a louse looks like, for example, does not help the beggar rid himself of lice.[9] Thus, the knowledge is useless. This helps contextualize her closing characterization of the sights seen through microscopes as merely “superficial wonders.”[10] Whether or not instruments do, in fact, allow the natural world to be perceived in new ways — Cavendish clearly believed they did not — was irrelevant: even if the images seen through a microscope were accurate, Cavendish did not think they contributed to a productive society.

Cavendish’s critique of experimental philosophy demonstrates the type of epistemic tension that occurs when an old paradigm confronts a novel one. What instrumentation is credible? What types of knowledge are worth knowing? These questions are important. And even though Cavendish’s skepticism has proven unfounded, her arguments were, for their time, nuanced and provocative.

 

[1] Margaret Cavendish, Observations Upon Experimental Philosophy (Cambridge: HS100 Editions, 2017), p. 5.

[2] Ibid., p. 4.

[3] Ibid., p. 5.

[4] Alex Csiszar, HS100 Lecture 7 (Sept. 21, 2015), slide 13.

[5] Ibid.

[6] Cavendish, Observations Upon Experimental Philosophy, p. 4.

[7] Ibid., p. 5.

[8] Ibid.

[9] Ibid., p. 6.

[10] Ibid., p. 5.

Transculturation in 18th Century England: Oriental Influences in Handel’s Belshazzar


Written in 2013, for SFUHS’s Western Civilization course.

Transculturation in music is more than just novel sounds and unusual instruments. It mirrors the societal zeitgeist: the economic connections, political motivations, and social influences present at any moment. Late 18th century England was a small island nation in the process of building the largest empire the world has ever known. The country served as the hub of global trade and was tangled in a web of political alliances. In other words, it was a prime location for the exchange of cultural ideas. During this period, music on the British Isles was dominated by Italian operas and English oratorios. George Frideric Handel (1685–1759) is synonymous with British music of this era; a prolific, virtuoso composer, he wrote 42 operas and 29 oratorios as well as a plethora of smaller works.[1] One of his acclaimed works, the 1745 oratorio Belshazzar, taps into contemporary social and political influences and masterfully incorporates elements of Oriental culture. Composed during what Winton Dean, a prominent Handel historian, calls “the peak of Handel’s creative life,” the oratorio renders a biblical story as a novel and insightful piece.[2] The transculturation apparent in Belshazzar reflects the exchange of important cultural values. It demonstrates London’s growing fascination with the Orient due to expanding economic ties. It expresses the investment and mercantile interests of the directors of the Royal Academy of Music. Finally, it reveals Occidental society’s attempt to reinforce the tenets of Western culture by comparing them to negative stereotypes of the East.

Contemporary London’s fascination with the East, as reflected in Handel’s Orientalist operas (and in Belshazzar specifically), was the result of growing international trade and greater political interaction between East and West. During this time, the rise of joint-stock companies led to the emergence of a strong middle class in Britain. Joint-stock companies are organizations in which the investors’ capital is pooled in a common fund; this spreads commercial risk across multiple shareholders and limits the liability of any investor to the amount of his investment. This economic structure allowed for an explosion of commercial ventures, with the East India Company perhaps the most famous, but also including the Royal Academy of Music itself.[3] The economic growth sparked by these new companies fostered substantially greater global competition. During the 17th and early 18th centuries, economic rivalry grew between European nations as well as with nations within the Orient. In the West, European competition for a monopoly on trade came to a head in 1728 with the Ostend Incident. The Imperial Ostend Company was a Flemish private trading enterprise that competed fiercely with the traditional colonial trading companies such as the East India Company; in what became known as the Ostend Incident, Britain exerted political pressure on the Flemish government and ultimately caused the company to be dissolved.[4] In the East, the competition was intense as well: the efficiency of the European industrial system quickly reduced production costs below those of Oriental competitors. However, the cheap labor of the East negatively affected artisans of the West as well.[5] In the early 1700s, the British parliament attempted to impose a ban on imported materials such as wool and silk, but these measures could not restore prosperity to local workers.[6] All of these events resulted in an England intensely curious about the East.

The interests of the directors of the Royal Academy of Music also influenced the subject of Handel’s Orientalist operas. These interests were motivated by many factors, including the possibility of economic gain, the opportunity to combine professional and cultural interests, the elevation of social status, and the possibility of influencing political opinions about the East. By supporting the opera, the directors stood to gain economically, socially, and politically. Evidence suggests that many made their decisions largely on the basis of financial self-interest.[7] The directors were typically businessmen actively engaged in investment, government, and international trade (often with the East Indies), and the dramatic music of the time mirrored these interests. Several of the directors had invested in economic ventures in the Orient, supplying a possible motivation for the encouragement of Orientalist operas.[8] The directors’ social motivations played a factor as well: sitting on the board of such a company conferred higher social status through its connection with the arts and culture. And political interests shaped decisions, too.

Though the prospectus for founding the Royal Academy of Music promised financial rewards, in reality little profit was made, and by the time the company was shut down in 1728, the Academy was clearly a financial sinkhole. Yet during the Ostend Incident, the Academy remained open, even with extremely heavy losses, because of its ability to influence public opinion: librettos were carefully chosen to cultivate a specific image of the East and sway Parliament’s votes.[9] Clearly, the directors of the Academy had the means and motive to influence Handel’s operas and chose to exercise this power for their financial, social, and political benefit.

Handel conveys exoticism mainly through the subject matter but also through the musical elements of the oratorio. The biblical story on which the oratorio is based describes the king of Babylon, Belshazzar, enjoying sybaritic festivities while the city is besieged by the Persians.[10] Though the music of Belshazzar might not sound as exotic as the subject would suggest, Handel incorporates several distinct elements that emphasize the significance of the Orient. In his letters to Charles Jennens (1700–1773), Handel discusses his sentiments about the oratorio, saying, “It is indeed a noble piece, very grand and uncommon; it has furnished me with Expressions, and has given me Opportunity to some very particular Ideas…”[11] And indeed he includes many strange, novel elements. For example, voice is used as an arbitrarily exotic device.[12] The melismatic coloratura passages (rapid runs of notes sung on a single syllable) convey an ungainly and primitive effect, which contemporary listeners would associate with the subject matter. Belshazzar is a prime example of what Ralph Locke would call the “All the Music in Full Context” paradigm.[13] The exoticism is conveyed through the context and aided only slightly by musical cues.

The depraved Eastern tyrants depicted in Belshazzar and in many other Orientalist operas are symbolic of societies that have rejected Occidental tenets and mores. In contrast, the Oriental influences in Handel’s operas are used to convey the importance of the moral ideas and constructs of Western civilization. Belshazzar is portrayed as a self-indulgent tyrant; Winton Dean described his court as a “riot of oriental color – wives, concubines, astrologers and all.”[14] Operas and dramatic oratorios such as Belshazzar reinforced the contemporary ideological system. At the time in England, the growing middle class, including entrepreneurs made possible by joint-stock companies, was challenging traditional social structures. Musical works of the time strove to uphold duty, self-sacrifice, and other professed ideals. These “Occidental” values are contrasted in Belshazzar with the depraved values of the East. Handel emphasizes the cliché of a Middle Eastern sultan enjoying a life of excess and pleasure through the musical elements and lyrics of Belshazzar’s aria “Let the deep bowl thy praise confess.” The king imagines himself a god as he drinks excessively: “Another bowl! ‘Tis gen’rous wine, Exalts the human to divine.”[15] The rising vocal scale on “Exalts…” parallels Belshazzar’s claim of rising to godhood. His opening words in this aria (“Another bowl”) are a cry of overindulgence, repeated four times, with each reference echoed by the oboes.[16] These reiterating instrumental phrases allow the listener to visualize Belshazzar’s behavior: grabbing and guzzling one cup after another. Ultimately, Handel’s portrayal of Belshazzar not only reinforces Oriental stereotypes but also fortifies the importance of Occidental social, political, and religious traditions. This negative depiction reveals how the influences of the East were used to extol English (and Western) society.

Ultimately, Handel’s Belshazzar exemplifies the social, political, and economic relationships of its time. England’s intense interest in the East is portrayed through both the subject matter and the musical variations of the piece. And beyond the superficial biblical story, Handel conveys a subtler message of Western nationalism and identity by juxtaposing the values of the stereotyped East with those of the idealized West.[17] Europe’s interest in the Orient ultimately stemmed from its citizens’ desire to understand Western culture, which contemporary scholars explored in a binary way, comparing the Orient to the Occident. Responding to the desires of his audience, Handel makes clear in Belshazzar the superiority of the civilized West over the depraved and barbaric East.

 

[1] Anthony Hicks, “Handel, George Frideric, §10: Oratorios and musical dramas,” in Oxford Music Online, accessed March 27, 2013, http://www.oxfordmusiconline.com/subscri….

[2] Winton Dean, Handel’s Dramatic Oratorios and Masques (London, UK: Oxford University Press, 1959), 435.

 

[3] Judith Milhous and Robert D. Hume, “New Light on Handel and the Royal Academy of Music in 1720,” Theatre Journal 35, no. 2 (May 1983): 149-152.

[4] Gerald B. Hertz, “England and the Ostend Company,” The English Historical Review 22, no. 86 (April 1907): 255-260.

[5] John E. Orchard, “Oriental Competition in World Trade,” Foreign Affairs 15, no. 4 (July 1937): 708-710.

[6] Ralph Davis, “English Foreign Trade, 1700-1774,” The Economic History Review, n.s., 15, no. 2 (1962): 286.

[7] Milhous and Hume, “New Light on Handel,” 149-167.

[8] Elizabeth Gibson, The Royal Academy of Music (1719-1728): The Institution and Its Directors (New York, London: Garland, 1989), 24-33.

[9] Ellen T. Harris, “With Eyes on the East and Ears in the West: Handel’s Orientalist Operas,” The Journal of Interdisciplinary History 36, no. 3 (2006): 423.

[10] “Handel’s Oratorios,” Handel’s Music, accessed May 5, 2013, http://www.portlandhandelsociety.org/d_works1oratorio.html.

[11] Erich H. Muller, ed., The Letters and Writings of George Frederic Handel (London, UK: Butler and Tanner, 1935), 52.

[12] Ralph P. Locke, “A Broader View of Musical Exoticism,” The Journal of Musicology 24, no. 4 (2007): 501.

[13] Locke, “A Broader View of Musical Exoticism,” 478.

[14] Dean, Handel’s Dramatic Oratorios and Masques, 440.

[15] Charles Jennens, “Let the Deep Bowl Thy Praise Confess.”

[16] Ralph P. Locke, Musical Exoticism: Images and Reflections (Cambridge, MA: Cambridge University Press, 2009), 92-93.

[17] Elissa Hope Keck, “William Walton’s Belshazzar’s Feast: Orientalism and the Continuation of the English Oratorio” (master’s thesis, University of Tennessee, 2010), 5-7.

 

Towards a Utilitarian Metamorality


There are three major approaches that can be used to define a moral philosophy: virtue ethics, where moral good is derived through moral character; Kantian deontology, where moral good is arrived at procedurally, through complex duties and rules; and utilitarianism, where the consequences of actions, rather than the intent, determine what is morally good. Each of these moral philosophies has merits, yet utilitarianism appears best prepared to operate as a metamorality, that is, a guiding philosophy for a multi-tribal society.

Utilitarianism succinctly answers two central questions that confront any potential metamorality: who deserves moral consideration and what should the common currency of morality be? Utilitarians believe in maximizing happiness, impartially. So happiness, or more specifically the overall quality of experience, is the common moral denominator across different groups. And everyone deserves moral consideration, equally. Furthermore, utilitarianism is based around consequences — the outcomes of, rather than intentions behind, specific actions are what determine whether the actions themselves are justified. This is not how most people are accustomed to thinking of morality.

For many people, morality is defined by the values of their in-group or ‘tribe.’ This more traditional approach is the basis for religious morality and tribal morality more broadly. Virtue ethicists ask what a ‘good’ person would do. The moral philosophy is grounded in Aristotelian idealism: the idea that things are good when they conform to their natural purpose.[1] The limitations of virtue ethics as a foundation for metamorality are obvious. The definition of a good person is highly dependent upon tribal modes of thinking. For Aristotle, it meant looking at role models within a society and working to emulate those who excelled. Virtue ethics is a philosophical codification of innate tribal ideals. It defines “good” in terms of what is valued by a single group. While a philosophy of virtue ethics is good for inducing cooperation within a group, it cannot function effectively as a metamorality.

The third model for a normative metamorality comes from Immanuel Kant. Kantian ethics is highly humanistic, and initially appears as though it would provide a solid foundation for a type of universal morality. It is predicated on respect for humanity, the innate dignity of people, and individual autonomy. Kant derives these noble principles through pure reason alone. For Kant, a good person follows the “laws they give themself.” And all of these self-derived laws should follow the categorical imperative: that is, they must be unconditional moral obligations, binding in all circumstances; because they apply to everyone, they must be universalizable.[2] The core tenet of Kantian deontology is that people should not be used as means to an end. Yet the deontological theory of ethics that Kant promotes runs into limitations as a universal metamorality. Kant arrives at his conclusions through some impressive rhetorical acrobatics. He appears to rationalize intuitive feelings about morality rather than providing the foundation for a self-consistent moral system. His theory of self-derived universal rules and duties lacks the simple clarity of utilitarianism.

That’s not to say that utilitarianism is perfect. There are two categories of criticism that utilitarianism faces: (1) shallow, naive criticisms based on a facile understanding of the philosophy, and (2) deep criticisms that engage with the philosophy and hit upon edge cases that appear morally questionable. This paper will focus on the latter category. Criticisms that fall into it tend to be thought experiments in which maximizing ‘happiness’ seems to lead to a problematic conclusion. This paper will examine three of these thought experiments: Ursula Le Guin’s short story “The Ones Who Walk Away from Omelas,” Robert Nozick’s “utility monster,” and the “repugnant conclusion” arrived at by Derek Parfit.

In “The Ones Who Walk Away from Omelas,” the noted science fiction author Ursula Le Guin describes the fictional almost-utopia of Omelas, an idyllic city where its citizens’ happiness is maximized: “With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea. … Omelas sounds in my words like a city in a fairy tale, long ago and far away, once upon a time. Perhaps it would be best if you imagined it as your own fancy bids, assuming it will rise to the occasion, for certainly I cannot suit you all.”[3] However, this utopia is, for reasons that are left unclear, predicated on the suffering of a single, pitiful child who remains locked up, isolated, in the darkness of a cellar. The child has done nothing to deserve this treatment; it is not a punishment. And yet if the child were to be released, the city of Omelas would crumble within the hour.

This situation encapsulates what is, in many ways, the fundamental critique of utilitarianism: the discomfort of seeing an individual used as a means to an end. There is no justification for the child’s imprisonment beyond the supernatural supposition that releasing the child would result in the disintegration of the utopia. Everything about the situation feels wrong. Le Guin describes the child in great detail: “It could be a boy or a girl. It looks about six, but actually is nearly ten. It is feeble-minded. Perhaps it was born defective or perhaps it has become imbecile through fear, malnutrition, and neglect. It picks its nose and occasionally fumbles vaguely with its toes or genitals… They all know it is there, all the people of Omelas.”[4]

Le Guin asks the reader to weigh the suffering of a child against utopia — and she freely puts her finger on the scale. The vivid details she includes certainly speak to her skill as a writer, but perhaps not as a moral philosopher. The same moral question that her short story prompts could be stated more dryly: is it right for the prevention of a single person’s suffering to result in an increase in the suffering of many others? The framing of Omelas accentuates an emotional response to utilitarianism. When the narrative of a child’s torture is flattened to a neutral, intellectual question, Le Guin’s critique loses some of its sting.

It would be fair to respond that the child of Omelas might experience a degree of suffering that outweighs the happiness of all the other people. If this were indeed the case, the city of Omelas would be committing an immoral action; however, it would be immoral by the tenets of utilitarianism itself, in addition to violating Kant’s categorical imperative.

The second deep criticism is Robert Nozick’s “utility monster”: a hypothetical creature that derives such gratification from eating a person that it outweighs, by many orders of magnitude, all the enjoyment that person would experience over their entire life. Nozick writes in Anarchy, State, and Utopia that “[u]tilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater gains in utility from any sacrifice of others than these others lose.”[5] According to Nozick, given these conditions, the good utilitarian would sacrifice herself to this “monster’s maw.” After all, this maximizes the total level of happiness.
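Stripped of its imagery, the monster’s demand reduces to a comparison of aggregate utility. A minimal sketch, with invented magnitudes standing in for Nozick’s “orders of magnitude”:

```python
# Illustrative numbers only: the point is the comparison, not the magnitudes.
person_lifetime_utility = 100    # all the enjoyment of one human life
monster_multiplier = 10_000      # the monster's "enormously greater" gain per meal

# Aggregate utility if the person lives out their life untouched.
utility_spared = person_lifetime_utility

# Aggregate utility if the person is sacrificed: the person loses everything,
# but the monster gains orders of magnitude more than was lost.
utility_sacrificed = monster_multiplier * person_lifetime_utility - person_lifetime_utility

# A strict aggregate-maximizer is therefore committed to the sacrifice.
assert utility_sacrificed > utility_spared
```

By construction, no finite human loss can outweigh the monster’s gain, which is precisely what makes the hypothetical feel rigged.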

This obligation feels repellent. The individual asks herself, “Why should I be forced to sacrifice my life in order to satisfy this creature?” But isn’t it possible that sacrifice is the appropriate response? People are so grounded in the primacy of their own existence that it is easy to rationalize reasons why they shouldn’t be obligated to end it. Perhaps she would mention her right to life, or her desire not to be used as a means to this creature’s end. The scenario feels wrong because humans have evolved to value their own existence, and the monster triggers a primal desire for life.

When deconstructed, the core of Nozick’s thought experiment is the obligation of a single person to sacrifice themselves in order to produce a better outcome than what would happen if they did not. The utility monster is one example of this phenomenon, but so is a soldier throwing himself on a grenade to spare the lives of several of his comrades by shielding them from the blast. So is a starving mother giving the meager rations she receives to her hungry child.

The utility monster triggers outrage because the situation feels unfair and greedy; in fact, the thought experiment is intended to do exactly that. The hypothetical is designed to mask the true moral question, which, when rephrased, sounds decidedly more reasonable.

A third criticism of utilitarianism comes from Derek Parfit, in the form of the repugnant conclusion that he arrives at in Reasons and Persons. He believes that the inevitable, immoral corollary of blindly maximizing happiness is that morality is reduced to a ‘mere addition’ problem. Consider two possible worlds. In the first, everyone has a quality of life similar to what is experienced by those best off in our world today. In the second, people experience a quality of life that is precisely half of what is experienced by those in the first world. However, the second world holds twice as many people as the first, so the aggregate level of happiness is identical across both. Now add one more person to the second world. The aggregate level of happiness is now marginally higher in the second world.
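The ‘mere addition’ arithmetic can be made explicit. A sketch with hypothetical numbers, treating quality of life as an arbitrary utility score:

```python
# First world: high quality of life, smaller population.
quality_a, population_a = 100, 1_000

# Second world: half the quality of life, twice the population.
quality_b, population_b = quality_a / 2, 2 * population_a

total_a = quality_a * population_a   # aggregate happiness of the first world
total_b = quality_b * population_b   # aggregate happiness of the second world

assert total_a == total_b            # the two worlds tie exactly

# "Mere addition": one extra person tips the scales toward the second world.
total_b_plus_one = quality_b * (population_b + 1)
assert total_b_plus_one > total_a
```

Iterating this move (halve the quality, more than double the population) is what drives Parfit’s argument toward ever larger, ever less happy worlds.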

Which world would be better to live in? Most people would say the first. Yet if impartially maximizing happiness is what truly matters, then, according to Parfit, the second world is morally superior. This logic seems like it can lead to a bizarre supposition: does utilitarianism really advocate for endless slums over a small utopia?

But assuming that each world is unaware of the possible existence of the other, this conclusion isn’t actually that problematic. For all we know, our civilization could be the second world in this scenario. It is easy to imagine a hypothetical alternate world with a dramatically smaller but significantly happier population. That doesn’t condemn our society to unhappiness — we simply don’t know what we are missing out on!

Parfit has a response to this. He envisions a world of infinitely many mildly content rabbits. His critique is that, according to utilitarian philosophy, this hypothetical world populated solely by rabbits — and not even the happiest of rabbits, merely somewhat content ones — is morally superior to our contemporary society. This is an extreme scenario. And again, on an emotional level, it feels as though the logic of utilitarianism has resulted in a repugnant conclusion.

These critiques, for the most part, rely on hypotheticals — they are not real-world situations and likely never could be. But the innate reactions people have when faced with these moral dilemmas are enlightening in their own way. The common theme that ties these critiques together is their intention to trigger an emotional response: the defenseless, innocent child of Omelas being tortured; the greedy monster that one is obligated to satiate with the sacrifice of one’s own life; the mundanity of an infinite sea of rabbits replacing the vibrancy of human civilization.

Many of the traditional criticisms of utilitarianism are simply creatively phrased tradeoffs that trigger a negative emotional response. They feel wrong. Yet when the true moral question of the thought experiment is extracted from the language of the experiment itself, the moral disgust evaporates. The answers that utilitarianism provides are intellectually correct, yet difficult to reconcile with intuitive moral reactions.

When utilitarianism does run up against these uncomfortable conclusions, there are two responses. The first is accommodation: conceding that the outcomes, while they do ultimately follow from the basic principles of utilitarianism, are morally abhorrent. In accommodating these outcomes, the utilitarian implicitly acknowledges that blindly following utilitarianism is not always good. Nonetheless, utilitarianism remains a viable philosophy because these bad outcomes stem from unrealistic situations that would not occur in the real world.

The second response to criticism of utilitarianism is reform: not reform of the moral philosophy itself, but reform of society’s traditional conception of morality. This response asserts that though these outcomes intuitively feel wrong, a true metamorality shouldn’t be based on biologically derived emotional responses. In other words, utilitarianism isn’t producing immoral outcomes; rather, humans are just bad at judging what is moral and what is not.

Utilitarianism is valuable as a metamorality precisely because it arrives at unintuitive conclusions. It is an intellectual mechanism that forces people out of the comfort of their traditional moral beliefs — whether those beliefs are based on Kantian deontology or Aristotelian virtue. Perhaps discomfort with utilitarian conclusions says less about the moral system itself and more about how deeply tribal beliefs are ingrained in the way people intuitively think.

[1] “Virtue Ethics,” Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/ethics-virtue/.

[2] “Kant’s Moral Philosophy,” Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/kant-moral/.

[3] Le Guin, “The Ones Who Walk Away from Omelas,” 1-2.

[4] Ibid., 3.

[5] Nozick, Anarchy, State, and Utopia, 42.

Antibiotic Resistance and the Challenges of Global Commons Problems


There are few technologies that have had as significant an impact on the human condition as antibiotics. In a fundamental way, antibiotics enabled the modernization of healthcare. In the twenty years following the discovery of penicillin, dozens of other antibiotics were uncovered; in the last twenty years, however, the number of new antibiotics can be counted on a single hand. If the rate of evolved resistance in bacteria stays the same while the rate of discovery of new antibiotics continues to slow, we will soon find ourselves in a world where the field of medicine is thrown back to the barbarism of the 1800s, where any simple surgery is once again life-threatening. Antibiotic resistance is an example of a globalized commons problem. Unlike localized commons problems, which have been discussed extensively in the academic literature, globalized commons problems, which occur in domains outside the political reach of any one nation (global warming and antibiotic resistance, for example), are a relatively new phenomenon.

In “Revisiting the Commons: Local Lessons, Global Challenges,” Elinor Ostrom examines the nature of common-pool resources and suggests alternative institutions for their management, seeing the evolution of cooperative norms as a potential solution. Globalized commons problems are highly difficult to solve, whether through Ostrom’s proposed evolved norms or through traditional mechanisms such as increased regulation. In this paper, I will argue that antibiotic resistance is an example of a globalized commons problem, and that the inability to deal with it, either through state intervention or through the evolution of norms, signals that biologically derived principles of cooperation are insufficient and that new supranational organizational structures are needed to prevent such problems from metastasizing.

Antibiotic usage exhibits the two features Ostrom suggests characterize commons problems: (1) difficulty of exclusion and (2) subtractability, meaning that “exploitation by one user reduces resource availability for others.”[1] The difficulty of exclusion comes from antibiotics’ ubiquity. They are relied on in hospitals and in medical care more broadly, but also as a supplement in the diets of livestock. A ban on antibiotics would be highly controversial and would have far-reaching consequences. Though it could be done, the institutional challenges would be unprecedented and the pushback would be fierce.

The question of subtractability is more interesting. Antibiotics are not a finite resource in the classic sense; the active ingredients can be synthesized relatively easily. Some knowledge of how antibiotics work is required to understand why they should be thought of as subtractable. Antibiotics kill bacteria by targeting specific features, such as the structure of cell walls or the cellular machinery used to build proteins or copy DNA, that differ between bacteria (prokaryotic cells) and the eukaryotic cells that make up multicellular organisms like humans.[2] However, when exposed to antibiotics over an extended period of time, at a concentration that doesn’t kill the entire population outright, bacteria can evolve resistance. The bacteria that have a slight natural resistance survive and reproduce, while those that lack the adaptation perish. As a result, any time an antibiotic is used, its overall theoretical effectiveness decreases, as more exposure means more opportunity for a resistance-conferring mutation to occur. Antibiotics are thus a naturally occurring resource that is indirectly made scarcer through human consumption. Unlike with traditional natural resources, the usage of antibiotics doesn’t tax some finite supply; it does, however, potentially reduce the effectiveness of the drug globally.
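The selection dynamic described above can be sketched as a toy model. This is a deterministic illustration with invented parameters (survival rates, mutation rate), not real microbiology; it only shows how repeated sub-lethal exposure drives the resistant fraction of a bacterial population toward fixation:

```python
def next_generation(p, surv_resistant=0.9, surv_susceptible=0.3, mutation_rate=0.001):
    """One round of antibiotic exposure and regrowth for a resistant fraction p."""
    # Differential survival under the antibiotic skews the surviving
    # population toward the resistant strain.
    p_after_dose = (p * surv_resistant) / (
        p * surv_resistant + (1 - p) * surv_susceptible
    )
    # During regrowth, a small fraction of susceptible daughters mutate.
    return p_after_dose + (1 - p_after_dose) * mutation_rate

p = 0.0  # start with a fully susceptible population
for _ in range(30):
    p = next_generation(p)

print(f"resistant fraction after 30 generations: {p:.3f}")
```

With these invented numbers, the resistant fraction climbs from one in a thousand to essentially the whole population within a few dozen generations, which is the sense in which every dose “subtracts” from the drug’s future effectiveness.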

The case of antibiotic resistance demonstrates the limitations of biologically derived principles of cooperation. Evolved norms tend to be more effective on smaller scales and when dealing with lower stakes. At smaller scales, the currency of trust and reputation and the pull of an in-group can push individuals to act more altruistically and think on a longer time horizon, rather than optimize for short-term gains. However, when applied to a more abstract problem like antibiotic resistance, the principles of cooperation seem to incentivize short-term thinking. Kin selection is a prime example, showing how what is cooperative on a small scale is actually uncooperative on a larger scale: parents want their children to have access to antibiotics because (from an evolutionary perspective) they want to see their genes passed on, and keeping their children healthy is necessary for that to happen.

Furthermore, the notion of evolved norms implicitly assumes the failure of models that didn’t effectively deal with the problem. On a localized scale, this is acceptable. However, when a failure case has potentially global impacts, even a single error is one too many. To give a somewhat contrived example, nobody would suggest that evolved norms be applied to the safeguarding of nuclear weapons, as a single evolutionary dead-end would have cataclysmic effects. As the scope of the commons problem increases, the consequences of inaction or faulty action increase as well.

Traditional regulatory solutions, which require a centralized arbiter who can allocate the resource optimally, are also insufficient. Regulatory frameworks for localized commons problems are usually tied to sovereign nations; globalized commons problems, by contrast, are large enough that they fall outside the scope of traditional, unilateral government regulation. No single government can regulate a solution here. Even if the United States severely restricted the supply of antibiotics tomorrow, developing countries like India and China could continue their overuse. And when resistance develops anywhere, the interconnectedness of today’s society allows the mutation to spread rapidly.

The scope of the globalized commons dwarfs the regulatory scope of any single nation and thus requires extensive cooperation between multiple nations. Such problems require novel organizational structures that operate on the supranational level. Indeed, these challenges may serve as a critical impetus for breaking through to the next level of cooperation.

[1] E. Ostrom, “Revisiting the Commons: Local Lessons, Global Challenges,” Science 284, no. 5412 (1999): 278-279, doi:10.1126/science.284.5412.278.

 

[2] “What is an Antibiotic?” Accessed February 17, 2017. http://learn.genetics.utah.edu/content/m….

 

A Post-Work Proletariat? Marxist Thought and The End of Labor

4

 

In economics, things take longer to happen than you think they will, and then they happen faster than you thought they could. — Rüdiger “Rudi” Dornbusch

 

Technology fundamentally changes the relationship between labor and capital. As machines get better at producing the things that people need and want, humans may find it difficult to generate economic value from their work. In a future where this connection has been entirely severed, capitalism and economic self-interest cease to provide structure for society. New organizing principles are needed. Utilitarianism is well suited to fill this vacuum, and Marxist thought offers a pragmatic framework for implementing utilitarian impulses in the political and economic domains.

In his seminal work, The Communist Manifesto, Karl Marx popularized the notion of the proletariat as an impoverished class of industrial wage-laborers. By his definition, the class of people that compose the substrate of the proletariat has not existed in developed countries since the early 1900s. Furthermore, the potential for a worker’s revolution seems to be rendered inert if we posit that the end of labor itself is near.

However, the etymology of the word, rather than its Marxist usage, suggests something different. The word ‘proletariat’ can be traced back to the Latin proles, a term used in the Roman census to describe the lowest class: those whose only contribution to society was having children. In a future where human labor has been entirely divorced from economic productivity, most individuals in society would have no utility beyond passing their genes on to the next generation. Severing the link between economic productivity and human labor threatens to create an idle class, a new proletariat, incapable of providing economic value to society. This post-work proletariat will be defined not by wage-labor but by an idleness brought about through labor market inadequacies.

Will this idle class be destitute and penniless, abandoned by a system of resource allocation that made their labor an anachronism? It is not difficult to imagine, in such a society, an elite class that controls the means of production consolidating an unprecedented degree of power and wealth. This Luddite vision of the future conceives of automation as a profoundly destructive force, one that could transform inclusive democracy into a bourgeois oligarchy.

However, to others, automation is a panacea. These proponents of technological innovation envision a type of fully-automated “luxury communism” — the means of production owned collectively and operating autonomously — where every material desire can be made real (for free!) by an intelligent robot. For these people, the end of labor is not a harbinger of collapse, but rather a freedom so elusive that humanity had myopically believed it impossible. Should we dare to hope for such an outcome? What should be our aspirations for a society without work and what principles should guide us?

Outcomes are important. This is the lesson taught by utilitarian thinkers. Actions, individuals and societies should be judged upon their consequences, their outputs. Society should aspire to produce the best outcomes for the greatest number of its citizens. When viewed through this lens, success is merely a maximization problem. How should society allocate finite resources in a manner that maximizes the quality of life of individuals living in it?

Marx provides a utilitarian theory of allocation: communal ownership. Rather than following the capitalist model, in which certain individuals are entitled to the immense wealth spun off from their private enterprises, Marx contends that the profits of industry should be distributed “to each according to his need.” In The Utilitarianism of Marx and Engels, Derek Allen discusses the utilitarian underpinning to Marxist teleology:

Marx contends that, since wages and profits vary inversely, “the interests of capital and the interests of wage labour are diametrically opposed.” Whatever enriches the capitalist impoverishes the worker. … Whatever is in bourgeois interests is against the interests of the majority of society. To secure freedom for the majority wage labor must be abolished.

The nature of the accumulation of capital results in an expanding underclass of laborers and a shrinking bourgeois minority. The end point of capitalism, as Marx understood it, is extreme wealth inequality. When a vast majority create no economic value and are therefore incapable of providing for themselves, the system is broken. Utilitarianism does not privilege the rights of the minority at the expense of the majority. Thus, the utilitarian response to this inequality is to strive for a more equitable distribution of wealth.

However, there is no use in pining for a utopian society that can only exist in theory. Progress is path dependent; humanity’s future is a function of today’s conditions. Rather than imagining the elements of an ideal society, pragmatism suggests looking for sources to guide the development of an attainable one. The work of Karl Marx provides not only a theoretical optimum — communal ownership of the means of production — but also a realistic pathway to its realization. While his original theory of an industrial working class rising up against its capitalist oppressors has proven false, an updated teleology predicated on the end of labor regains intellectual vitality.

 

Automation and Society After Labor

Automation ultimately renders human labor obsolete and magnifies the return on capital. While vast swaths of workers face declining wages, a small class of capitalists captures the growing profits that previously were spread more broadly. The end of labor centralizes wealth while giving rise to an idle proletariat.

The age of automation became inevitable the day the first computer was created. The steady march of innovation has reduced the typical computer’s physical size, lowered its price, and simultaneously increased its computing power. For decades, this ongoing technological innovation complemented, rather than replaced, human labor. Computers could not perform the physical tasks done easily by humans, and basic intuitions about cause and effect were out of the reach of machines. In the words of Steven Pinker, “hard problems [were] easy and the easy problems [were] hard.” This paradox seemed to be an inviolable law of artificial intelligence. However, in recent years skills that were once considered deep within the domain of human expertise, such as vision and language processing, have been replicated by deep learning programs.

“The Great Decoupling” that Andrew McAfee and Erik Brynjolfsson describe in The Second Machine Age is the manifestation of the shift from technology that enhances human labor to technology that supplants it. While economic productivity continues to increase, wages stagnate. The wealth generated by artificial labor is captured by a tiny fraction of society, those who control capital, rather than the broad middle class that used to work for wages.

The dynamics that Marx witnessed during the industrial revolution now play out again with greater intensity. Marx, while wrong in many ways, was prescient in others. In Wage-Labor & Capital, he outlines the cyclical force of competition, the tension between the wage-laborer and the capitalist, and the teleology of capitalism. He describes the dynamics of automation: “Machinery produces the same effects [as competition between workers], but upon a much larger scale. … [W]here newly introduced, it throws workers upon the streets in great masses.” Automation is the process by which capital is substituted for labor. Automated machinery replaces human labor at a fraction of the cost, often with greater accuracy and speed. Human workers simply cannot compete. In discussing how machines reduce the wages of workers, Marx also explains how automation expands the size of the new proletariat:

In addition, the working class is also recruited from the higher strata of society; a mass of small business men and of people living upon the interest of their capitals is precipitated into the ranks of the working class, and they will have nothing else to do than to stretch out their arms alongside of the arms of the workers. Thus the forest of outstretched arms, begging for work, grows ever thicker, while the arms themselves grow ever leaner.

Automation devalues labor and multiplies capital. Economies of scale and winner-take-all effects sharply bifurcate society. The winners, who control the automated machinery, win big. Yet the losers, far greater in number, are left with virtually nothing. This includes the middle and upper-middle class that succeeded in a society where labor retained its value. The lawyers, the doctors, the civil engineers that composed the professional class will also join “the forest of outstretched arms, begging for work” as their jobs are automated.

Eventually society reaches an inflection point. Without new rules, the end result is dystopia. With new rules, utopia is possible. The outcome depends on whether society adopts inclusive, redistributionist policies or chooses to continue traditional practices of laissez faire capitalism. If the political economy can adjust to the realities of automation by providing for the idle class, the future will tend towards “luxury communism” rather than Luddite dystopia. However, if no changes take place, the gap between the richest and the poorest will continue to grow.

Marx would predict that the new proletariat, by nature of its majority, should be able to enact socialist and redistributionist policies. Though he foresaw the need for violent revolution, it is possible that an inclusive democracy might make such extremism unnecessary. If these changes are enacted, society might look radically different than it does today, but the outcomes would be broadly beneficial. The immense wealth generated by automation could be shared with the workers whose labor has been replaced. The means of production do not need to be seized, but the profits generated would be redistributed to those made idle.

From this perspective, the post-labor society should be judged by how effectively it implements utilitarian principles. To be sure, such redistribution infringes on deontological property rights and would be judged harshly by libertarians. However, a utilitarian would see that, while a minority is dissatisfied when their wealth is taxed, the benefits to society overall outweigh their concern.

 

Criticisms of Teleology & Marxism

Of course, it is necessary to defend any teleology against events that change fundamental assumptions. A teleology is merely an extrapolation from present trends that seems to lead inexorably to a singular outcome. Marx’s original teleology suggested that industrial manufacturing would be the final iteration of the capitalist system — he did not foresee the shift among western nations to a service-oriented economy or the massive wealth that would be unlocked by the Internet revolution. In presenting a similar, albeit updated, teleology, it’s important to outline the most important assumption necessary for its realization: human labor must become, for all intents and purposes, obsolete. If there were still ways for a critical mass of individuals to engage in economically productive behavior, transitioning away from capitalism would remain difficult.

It’s also important to address the criticisms of Marxism more generally. Marx’s revolutionary teleology has proven incorrect in many ways. The industrial collapse he envisioned failed to occur. His imagined legions of revolutionary workers never materialized. His ideology of revolution was coopted by professional revolutionaries rather than the workers for whom it was meant. Criticisms of Marxist thought tend to fixate on its inability to forecast the broad prosperity that would spring from capitalism.

This is a misunderstanding of Marx’s argument. The proletarian revolt is but a revolution deferred. Marx believed that the worker uprising would come at the peak of capitalism, as the system imploded — not that workers could never benefit under a capitalist system. Indeed, in Wage-Labor & Capital, he writes that “the rapid growth of capital is the most favorable condition for wage-labour,” as the growth of capital implies increasing employment, other externalities of capitalism notwithstanding. By replacing labor entirely with capital, automation will bring about both the peak and the end of capitalism. This is the critical moment when the capitalist system could evolve or be replaced. Whether this development takes the form of abrupt revolution or incremental change depends on how the transition is managed.

Beyond Marx’s failure as a prognosticator, further criticisms of Marxism attack the expropriation and redistribution that is inherent in the theory. This is the libertarian critique. Property rights are at the core of libertarianism. To philosophers like John Locke and Robert Nozick, the defense of such rights is the sole legitimate purpose of government action. A strong defense of property seems to preclude redistribution. Yet a reexamination of Locke’s Labor Theory of Property, under the assumption that human labor is economically irrelevant, shows that Locke’s and Marx’s views are quite compatible. The Labor Theory of Property, which provides the intellectual underpinning for the libertarian conception of property rights, states that property is derived through mixing personal labor with a natural resource:

The labour of his body, and the work of his hands, we may say, are properly his. Whatsoever then he removes out of the state that nature hath provided, and left it in, he hath mixed his labour with, and joined to it something that is his own, and thereby makes it his property.

Without labor, the libertarian understanding of property breaks down. In a future where robot automatons can produce any good, who owns their output? If no labor input was required, by what principle should the capitalist be the sole beneficiary? There is no obvious justification for property rights. Indeed, an elite bourgeois minority that captures all the economic output without mixing in their (or any) labor is an easy target for redistributionist efforts. Separating human labor from economic productivity debases the Lockean justification for property rights. In this context, the abolition of property rights is hard to criticize when the institution of private property itself has been rendered obsolete.

This paper is not intended as a broad defense of Marxism as it could exist in the world today, but rather an exploration of whether Marxist principles have anything to say about organizing society after the end of labor. Automation, by substituting capital for human labor, accelerates the centralized accumulation of wealth. Those who control capital stand to benefit disproportionately, while the workers they replace lose their income. The structure of society must change if it is to withstand the economic shock of the end of labor. Marxist thought is relevant because it suggests a vision for society, undergirded by sound utilitarian logic, that would be capable of doing so.

While “luxury communism” seems somewhat fantastical, I am cautiously optimistic that, in the short term, redistributionist policies such as a negative income tax or a universal basic income will ease the transition from a labor to a post-labor economy. A socialist society, where the economic benefits of automation are distributed more broadly, will be better equipped to manage and mitigate wealth inequality than a capitalist society that refuses to address the problem.

 

 

Bibliography

Allen, Derek P. H. “The Utilitarianism of Marx and Engels.” American Philosophical Quarterly 10, no. 3 (1973): 189-99. http://www.jstor.org/stable/20009494.

 

Merchant, Brian. “Fully Automated Luxury Communism.” The Guardian, March 18, 2015.

 

Brynjolfsson, Erik, and Andrew McAfee. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton, 2016.

 

Marx, Karl. Wage Labour and Capital. Peking: Foreign Languages Press, 1978.

 

Locke, John. The Second Treatise of Civil Government and A Letter Concerning Toleration. Oxford: B. Blackwell, 1948.

 

No Shortcuts to Democracy: A Refutation of Rapid Democratization

0

In “Economic Backwardness in Historical Perspective,” Alexander Gerschenkron suggests that there are certain advantages available to nations that industrialize comparatively late. Does delayed democratization have similar advantages? The notion that latecomers to a process enjoy advantages is seductive to modernization optimists, but it doesn’t hold water when applied generally. Democratization and industrialization are two very different processes, and the advantages afforded to Gerschenkron’s “backwards” nations don’t apply to democratization. Namely, unlike industrialization, democratization does not benefit from being sped up. It’s decidedly more difficult for countries to democratize successfully after major world powers have already undergone the process because pressures, primarily exogenous in nature, ensure that late democratization is rapid democratization. When rushed, the process of democratization doesn’t furnish enough time for necessary cultural transitions to take place, leading to the formation of competitive authoritarian regimes rather than the consolidation of democracies.

First, a note on terminology: for the purposes of this paper, democratization is just that: a process. The term does not presuppose its completion, that is, that a state transitions into an established democracy. When a regime is described as democratizing rapidly, this does not mean that the regime becomes a democracy more quickly, but rather that the various stages of the process are attempted more quickly than along the path taken by more established western democracies.

Stable democracies in the West have, historically, required time to mature; time for the norms of political competition to develop and for systems of mutual security to be worked out. For Robert Dahl, this process is fluid. In “Polyarchy,” Dahl outlines a matrix describing the different stages of democratization, one dimension being the level of public contestation that is acceptable and the other the level of participation that a regime allows. The extent of public contestation varies with the competitiveness of the regime, that is, the capability for dissenting political positions and opposing parties. Participation, on the other axis, is a proxy for suffrage. The inclusiveness of a regime increases as suffrage expands (Dahl 206).

Dahl goes on to define three main paths that can be taken from a state of closed hegemony to one of polyarchy, his term for an idealized democracy. This paper will focus on the gradual first path — the slowest and most common, approximating the developmental paths of England and Sweden — and then the perilous third path, the most rapid. On the first path, liberalization precedes inclusiveness or, in other words, competitive politics come before an expansion in participation. Political life is open, but only to a narrow minority. With the intensity of any conflicts tempered by similar ideologies and backgrounds, this ruling elite gradually develops the rules and practices of competitive politics. Dahl makes it clear that this search for a system of mutual guarantees is “likely to be complex and time consuming” but is entirely necessary for political conflict to be “safe to tolerate” (Dahl 221). Political norms thus already exist by the time members of a lower stratum of society are included in the democracy. This route is common among established, stable democracies.

Alternatively, the fastest path through this matrix is the third path — Dahl’s shortcut, in which “a closed hegemony is abruptly transformed into a polyarchy by a sudden grant of universal suffrage and rights of public contestation” (Dahl 219). France’s transition from monarchy to fledgling democracy marks the beginning of the modern struggle for European democracy, and Dahl cites France as one example of a democracy that took this third path to polyarchy and succeeded. The French Revolution was a discontinuity, a shock that left France politically inclusive and open to public contestation. Yet this was a brief moment of success. The bloodletting of the Terror shows how quickly the regime retreated from Liberté, Égalité, Fraternité. Less than a decade later, France would once again be under a monarch, followed later by a military dictatorship (Berman 288). Today, France is an established democracy, but the process of French democratization was neither linear nor rapid. It was a protracted, violent struggle that, from start to finish, took more than 150 years (Berman 292). Dahl is mistaken: his third path is a chimera, and an accelerated transition from authoritarianism to stable democracy is not possible.

Late democratization tends to be rapid democratization due to two external factors: (1) the loss of legitimacy of authoritarianism, and (2) the pressure produced by the international demonstration effect. First, in the post-Cold War environment the minimum standard for the legitimacy of a state was raised — blatant authoritarianism was no longer acceptable on the world stage if a regime wanted any degree of respectability (or foreign aid). In its theoretical form, espoused by Samuel Huntington, democratization is composed of three discrete steps: the end of an authoritarian regime, the installation of a democratic regime, and the consolidation of that regime. International pressure can assist countries with the first two steps. As blatant authoritarianism is no longer viable, authoritarian regimes often created superficially democratic institutions of their own accord, as “granting suffrage can clothe the hegemony with the symbols and some of the legitimacy of ‘democracy’ — at little cost” (Dahl 221). This, to an observer, looks like democratization.

Second, the international demonstration effect suggests that the expectation of democracy is elevated in countries that are geographically or culturally close to countries in which democratization has succeeded (Huntington 371). During England’s gradual march toward democracy, no such expectations existed. The average British citizen did not presume to have a say in the functioning of the state. This afforded the British elite time to reach systems of mutual security and develop political norms. For late democratizers, the fruits of democracy are known and widely desired, and there is less patience among the public to wait. “In Poland democratization took ten years, in Hungary ten months, in East Germany ten weeks, in Czechoslovakia ten days and in Romania ten hours” (Huntington 373). These periods are just blips compared to the centuries it took England to transition from monarchy to democracy. Democratizing at this rate “drastically shortens the time for learning complex skills and understandings and for arriving at what may be an extremely subtle system of mutual security” (Dahl 220). The cumulative effect of these pressures pushed many countries to democratize quickly. And, as could be expected, the outcomes were mixed.

To Gerschenkron, the key lesson of late industrialization was that “relatively backwards” countries could attain similar levels of development at an accelerated rate. The distinct trajectories of Russian industrialization and democratization demonstrate that simply maximizing the speed of these processes can lead to very different outcomes. The narrative of Russian industrialization is a story of unprecedented success. Russia industrialized faster than any nation in history. The process began late — after most other European countries — and still caught up within a span of decades. Russian industry was criticized as being “altogether imitative,” but the results speak for themselves (Gerschenkron 39). Gerschenkron highlights the case of the blast furnace. The Russian state did not start with light industry and progress from there accordingly; it entered heavy industry with the full force of the state behind the effort — capital was provided not by the private sector but from state coffers. The English, the first to industrialize, had blast furnaces used for iron and steel production. German furnaces outpaced their English counterparts, only to be surpassed in turn by the Russians (Gerschenkron 40). Rather than undertaking the slow, meandering path of innovation, Russia simply aped the most modern technologies and techniques and managed to “outstrip” its competition.

Gerschenkron’s theory describes Russia’s industrial dynamic, but falls short when applied to Russia’s attempted ‘democratization’ after the fall of the Soviet Union. Democratic structures, copied directly from established western democracies, were put into place. The Duma, the legislative branch of the Russian government, is an allochthonous structure — not native to Russia, but modeled instead on the British parliamentary system. Boris Yeltsin, the first elected president of Russia, on the other hand, was “of the soil.” He was a cultural product of a country that hadn’t yet developed mutual guarantees of security. He bombed the parliament when it disagreed with his executive actions (Levitsky, Week 5 Lecture). His reelection campaigns were rigged, with massive electoral fraud and millions of rubles’ worth of government bonds embezzled and siphoned off to his campaign. His successor, Vladimir Putin, further restricted public contestation by seizing control of major media outlets and jailing the owner of Russia’s largest oil company for supporting the opposition party (Levitsky 384). Under Putin’s rule, the opposition was suppressed to the point of nonexistence. Russia adopted the trappings of democracy — a federalist system, a parliament and an elected president — but that wasn’t enough. The norms of competitive politics could not be imported as easily as, say, a better blast furnace.

When applied to industrialization, this increase in rate is a decidedly positive outcome — the manifold benefits of modernization come into reach more quickly. However, the net effect of increasing the velocity of the democratization process is less clear. Industrial technology and know-how can be transplanted, but political conventions cannot. Rather than culminating in an established democracy sooner, the half measures and shortcuts taken often lead to an altogether different sort of regime.

For many late democratizers, democratization efforts began and then fell short of their intended goals. If not democracies, what are these transitional regimes that superficially liberalize and expand participation, but stop before giving up any real power? Levitsky provides the answer: competitive authoritarian regimes. These regimes look superficially like democracies (this is the “cheapest concession possible”) but are distinctly different (Dahl 221). They have courts, elections, and all the trappings of democracy, yet the substance is lacking. These regimes are systematically biased against the opposition and use the apparatus of the state to maintain power; through libel laws, tax audits and the courts, the opposition is driven underground (Levitsky 385). Competitive authoritarian regimes arise where assurances of mutual security didn’t have time to develop — where the precarious regime left by a hasty democratization process was displaced by hegemony. Regimes like this occur when nations are forced to adopt the structure of a democracy without also undergoing profound cultural shifts.

A key lesson is that getting rid of an authoritarian regime is comparatively easy; creating a stable democratic regime in its place is where most fledgling democracies stumble. There is no blueprint for democracy. States are different — separated from one another by innumerable different “critical junctures,” each with a distinct political culture. For a democratic regime to consolidate, cultural shifts have to occur and, unlike the technical skills required for industrialization, these changes can’t be imported, copied or “borrowed” from abroad. They require time, perseverance, luck and occasionally violence. Western democracies have an interesting penchant for forgetting their own tumultuous past. Why does the West expect modern undemocratic regimes to democratize successfully in a time period measured in mere years, when they, themselves, took centuries? Cultural change is gradual. And there are no shortcuts.


 

Economic Backwardness in Historical Perspective, Alexander Gerschenkron

Lessons from Europe, Sheri Berman

The Third Wave, Samuel Huntington

Competitive Authoritarianism, Steven Levitsky and Lucas A. Way

Polyarchy, Robert Dahl

Industrial Earthquakes

0

“Finally, in the same measure in which the capitalists are compelled, by the movement described above, to exploit the already existing gigantic means of production on an ever-increasing scale, and for this purpose to set in motion all the mainsprings of credit, in the same measure do they increase the industrial earthquakes, in the midst of which the commercial world can preserve itself only by sacrificing a portion of its wealth, its products, and even its forces of production, to the gods of the lower world – in short, the crises increase. They become more frequent and more violent, if for no other reason, than for this alone, that in the same measure in which the mass of products grows, and therefore the needs for extensive markets, in the same measure does the world market shrink ever more, and ever fewer markets remain to be exploited, since every previous crisis has subjected to the commerce of the world a hitherto unconquered or but superficially exploited market.

But capital not only lives upon labour. Like a master, at once distinguished and barbarous, it drags with it into its grave the corpses of its slaves, whole hecatombs of workers, who perish in the crises.” (Marx, Wage Labor and Capital)

In Wage Labor and Capital, Karl Marx examines what he views as the structural instability of capitalism: its reliance on ever-expanding exploitation. Marx presents capitalism as a teleology, marching along an inexorable path toward a singular end, a crisis. This treatise, written in 1847, predates his signature work, Das Kapital, but can be viewed as a significant precursor. In it, history is presented as a deterministic path and, though Marx appears not yet to have arrived at his vision of a proletarian revolution — that would be articulated nearly two decades later — he already believed that capitalism was heading toward a catastrophe.

The ending of Wage Labor and Capital does not present an optimistic vision for the future, but it perhaps comes closer to Marx’s actual understanding of capitalism at the time. He envisions capitalists caught in an indefatigable cycle of expansion, “compelled” to exploit the means of production and yet, by doing so, increasing the fragility of the very system that made them wealthy, leading to its ultimate destruction. At this point in his life, Marx believes that “capital not only live[s] upon labour” but will also die upon it. He sees no outcome for the workers other than “perish[ing] in the crises.” When he discusses the death of labor, Marx breaks from the analytical tone he has kept throughout the piece, where he asserts his beliefs methodically, as if writing a proof. Now capitalism is rendered in fiery language and metaphor, as if it were a living being, a master both “distinguished and barbarous” who drags the “corpses of its slaves” down with it to oblivion.

But what is the crisis that Marx sees as inevitable? The metaphors he uses are loose, primal in a way. He casts the crisis in terms that suggest the raw force of nature — “industrial earthquakes.” These earthquakes presumably allude to the labor shocks he thought would occur as industrialization made more and more workers obsolete. The tremors grow “more frequent and more violent” as capitalism begins to eat its own tail. Furthermore, his metaphors suggest a biblical, even pagan, aspect: the “gods of the lower world” demand a sacrifice of wealth, products, and even the means of production to be placated. And this sacrifice cannot suffice; it merely staves off the inevitable. Capitalism, for Marx, survives on a cycle of continuous and expanding exploitation. He writes that each previous crisis has “subjected to the commerce of the world a hitherto unconquered or but superficially exploited market.” When no unexploited markets remain, the system implodes.

Marx views markets in a striking manner. He suggests that as the mass of products grows, so too does the need for extensive markets. Yet Marx sees the increasing mass of products as inversely correlated with the size of the world market: each round of expansion consumes more of the finite territory left to exploit. This seems relevant today, given the glut of products brought about by globalization. As more nations industrialize, and fewer undeveloped countries remain to exploit for labor, capitalism has nowhere left to expand. When the American middle class can no longer take advantage of Chinese children, the system implodes. The cycle of exploitation must stop somewhere, but when it does, the outcome is not prosperity but collapse.
