
Who Controls the Media?


The Rationalization of the Attention Market

The phenomenon of mass communication is not coupled to any specific technology. Its essential feature is the capacity to articulate a message that saturates a social body. The mechanism of transmission matters only insofar as it enables this. Neither the printing press nor the transistor is a prerequisite. The vast, networked audiences of modernity are heirs to a much older tradition.

The history of mass communication can be characterized as a progression from meaning imposed coercively from above to an entertaining, if nihilistic, spectacle directed by the fascinations of the masses. In “The Implosion of Meaning in the Media,” Jean Baudrillard cryptically outlines this tension: “Are the mass media on the side of power in the manipulation of the masses, or are they on the side of the masses in the liquidation of meaning?” (Baudrillard 84). This paper seeks to destabilize the binary introduced by Baudrillard. The relationship between masses and media is not static. Through historical study, I will trace the series of transformations in the dominant medium of mass communication that resulted in the shift from the former to the latter. I argue that the degree of competition for attention determines whether the media can deliberately articulate a purposeful message to the masses or whether the fascinations of the masses – that is, what draws their attention – dictate the content that the media produces. The competitiveness of the attention market is shaped primarily by the logic inherent in the dominant technology of distribution: what Marshall McLuhan refers to when he says that “the medium is the message” (McLuhan 153).

The Medium becomes its Message

McLuhan is hasty in asserting an absolute identity between the medium and the message. Although a medium’s message is intrinsic to it, constitutive of it, generated out of it, the medium is not the message. Rather, the medium becomes the message, in a type of convergence. For McLuhan, the message is the medium’s ideal. It is an essential principle, the “integral idea of structure and configuration,” untethered from any specific content (McLuhan 155). Indeed, if the message referred merely to content distributed through a medium, McLuhan would be rendered unintelligible. After all, the essence of a medium of communication is that it can convey a multitude of meanings, contexts and disparate information. A medium that was only able to transmit a single impulse or idea would negate itself. Any interpretation of ‘message’ that reduces the medium to a singularity cannot be correct.

McLuhan employs the word, not as a signifier of discrete content, as in “I left him a message,” but instead to denote a significant point or central theme, an abstraction away from the content or, in his words, an “awareness of the whole” (McLuhan 155). The message should be understood as almost akin to a moral or highest principle, as in “the book’s message was reinforced on every page.” Much confusion has resulted from McLuhan’s subtle distinction between essence and content, between the singular and the plural. Messages, plural, are the insights and impressions that are transmitted through a medium: its content, the substance of communication. The Message, singular and for clarity capitalized from here on, is something else entirely. 

The medium becomes its Message, its ideal, its promise. The Message of a novel medium is not immediately apparent; though it is endogenous, it is not discernible right away. Like an embryo that from inception contains the promise of adulthood, the medium contains the germ of its Message. Yet, just as the fetus is not an adult, the medium cannot yet be said to be its Message, for neither ideal is embodied. The fetus must first gestate in its mother’s womb. It still must quicken and be born, and even then it does not know itself and it will not for many years. Similarly, every medium has its own history: the incremental, haphazard process by which its essence was ascertained and then embraced. Gutenberg printed a Bible, not the New York Times. It takes time for the substance of communication, the messages, to begin to grasp at and then conform to the medium’s internal logic, its Message. Any given technology comes laden with various constraints and presuppositions. A medium’s Message arises from these idiosyncrasies: that the telephone is useless without its network; that broadcast media, such as radio and television, is beamed out along a finite spectrum and is identical for everyone. Messages are gradually made becoming of the medium, in the sense that they are suitable or appropriate for their mode of transmission. Only then do the medium and the Message coincide. McLuhan’s aphorism categorically omits the crucial process of becoming. This paper will address his oversight.

Communicative Epochs & Slippage Between Them

“Is the Iliad possible at all when the printing press and even printing machines exist?”

Karl Marx, Grundrisse

If the invention of movable type in 1439 foretold the end of oral culture, its death knell took the better part of two centuries. Yet Marx’s question reinforces an important truth: specific technologies of mass distribution, like the printing press, alter what can be expressed, who can say it and whether it will be heard. But these technologies go further still. The dominant medium shapes the context of the media landscape, determining the degree of competition for attention. A medium can, for example, increase competition by reducing distribution costs and lowering barriers for new entrants. Or it can impose new constraints, such as introducing clear chokepoints for regulatory intervention. The inherent peculiarity of a new medium must erode the assumptions and practices formed under preexisting technologies. Yet these sedimented conventions, produced through extended familiarity with a dominant medium’s Message, exist beyond the medium itself. The dictates of a medium etch themselves onto the social order, in politics, commerce, and entertainment. The concrete instantiation of this nebulous constellation of ideals, political relations, and competitive logics gives form to a communicative epoch.

Each communicative epoch is delineated primarily on the basis of the dominant medium of distribution. This is an intentionally vague definition. Sharp divisions, like right angles, are an unnatural contrivance of the human mind. Boundaries are always fuzzy. And, correspondingly, the transitions from one communicative period to the next are never discrete. New technologies do not emerge in a vacuum. Every epoch, at least in the beginning, betrays its inheritance from the preceding order. While innovation might present a technological break from the past, there are no true social discontinuities. The application of technological progress is always initially backwards looking. For example, the printed newspaper received its form from the earlier manuscript avvisi, short, elite-oriented news dispatches written by hand (Pettegree 184). Though mechanically reproduced at scale, the earliest newspapers did not immediately embrace their mass market promise. Instead, they simply reproduced the form of their predecessors: no headlines or illustrations or contextualization. No affordances, either in content or layout, were introduced for new readers who “might not be so well versed in international political affairs as the narrow circle of courtiers and officials who had read the manuscript news letters” (Pettegree 184). While the shift from script to print held profound structural implications, the new realm of possibility would be left to future entrepreneurs. It takes time and experimentation for the new competitive logic, the medium’s Message, to assert itself against the received practices of the previous paradigm. That said, once the dictates of distribution become apparent, they are all but irrefutable, particularly when competition is fierce.

Oral Epoch

According to Marx, with the emergence of the press, the conditions necessary for epic poetry disappeared. But although the absence of the press may have been vital for the production of the Iliad, neither the mechanical reproduction of text nor widespread literacy was a necessary precondition for mass communication. The oral culture of early modern Europe was extensive, encompassing everything from tavern rumors to tawdry songs to royal decrees. However, spoken language itself is not the technology of distribution I intend to highlight, but rather the sovereign-religious apparatus that amplified speech, transforming it into a medium of mass communication.

The early modern European attention ecosystem was quite fragmented. Postal infrastructure was still in its infancy, so communication over large distances was both slow and expensive. Reports of international and domestic affairs were consumed primarily by the wealthy, generally as letters from acquaintances or commercial partners abroad. Eddies of public sentiment, when they formed, were short-lived and localized. While rumor and news could spread quickly, if inaccurately, attention held no explicit commercial value. Attention remained outside of the rationalized sphere of commerce and its direction was only relevant to the organs of social reproduction. During this communicative epoch, the scale of communication was that of conversation, either between individuals or among small groups congregating in the marketplace or tavern.

Logic of Orality

Space and time both constrain oral communication. Public speech takes on the characteristics of a mass medium only when an orator addresses large crowds. Therefore, effective mass distribution through an oral medium required the synchronized assembly of a localized audience. In other words, everyone had to be in the same place at the same time. Unlike later technologies of distribution, where the medium was untethered from the physical constraints of space and time, oral mass communication was intrinsically a shared experience as it demanded the simultaneous presence of many individuals. Communication needed to be communal if it were to succeed at spreading a message to the masses.

The Message of the oral medium is purposive spectacle, not entertaining, but didactic. The link between oral communication and visual extravagance might seem somewhat counterintuitive. It is explained by the fact that the foremost difficulty wasn’t holding attention, but the act of gathering it. The traditional order was faced with the challenge of creating a focal point for attention amidst a chaotic environment. It was not competing against other actors, but rather the background noise of social life (and, oftentimes, the literal noise of the market). The spectacle, a striking sensory display, was the solution as it overwhelmed all else. It produced a moment of order, an instant where all eyes converged, that then permitted an orator to fill the void with his speech.

The difficulty and expense of mounting a concerted public display by which to convey a message restricted mass communication to the most established institutions of the early modern period: the church and the state. The church and state each held separate claims to their subjects’ attention. The church promised hellfire for those who did not attend the weekly mass, while the state asserted its power directly on the flesh of its citizens. These joint stewards of the spiritual and physical realm upheld the social order, each reinforcing the influence of the other. The oral communicative regime was concerned with the production of meaning. The message imprinted in the minds of those listening was entirely functional. Its purpose was tied directly to attaining broad consensus for the existing social order, both in the communal norms espoused by Christianity and in the sovereign enunciation of new ordinances. Indeed, the church’s claim to attention was backed partially by the use of sovereign force: “while ministers were recalling their congregations on a Sunday, many European cities employed officers charged with patrolling the streets, ensuring that shops or taverns did no business” (Pettegree 138). And this relationship went both ways: weekly sermons played a crucial role in shaping the community’s understanding of changes of government, worship practices, declarations of war and peace, natural disaster and human catastrophe. Preaching helped to regulate and interpret the disordered and organic cacophony of rumor and gossip, preventing the public from spiraling out of control (Pettegree 137).

For those who could marshal the resources, attracting attention in early modern Europe was not challenging. So few institutions were capable of such concerted effort that doing so, in itself, drew attention. The spectacle of mass communication simultaneously communicated a symbolic message, as well as the overwhelming authority of the message’s originator – either the sovereign or God. And when inattention carried such brutal theological and corporeal punishments, the public listened. But if the presentation was spectacular, the content of the message itself was generally uninteresting: “proclamations could be long, and couched in formal legal language, complex and intricate” (Pettegree 120). People would go to church each weekend, only to sleep through the sermon because of the underwhelming rhetoric of the preacher. But fascination, as an additional quality of information, was unnecessary because there was no direct competition for the public’s attention. Demands on communal attention were reserved for the reproduction of the social order. The content of oral mass communiqués was purposeful, rather than interesting.

The discussion of the oral communicative epoch is presented mainly to show the contrast between the Message of oral mass communication and the Message of print. In the subsequent section, I will show how the communicative traditions, shaped and enabled by an oral culture, discordantly flowed into the print epoch. The print medium inherited an incongruous hierarchical regime, a relic of the oral Message. It would take nearly three centuries for the new medium to fully embrace its own Message.

Print Epoch

Embedded in the essence of the printing press is an almost teleological promise of cheap and efficient mass distribution. In McLuhan’s terminology, this is the Message of movable type. However, one would be mistaken to view this property as obvious to its early operators. The manual reproduction of text naturally preceded the press, which was invented in order to speed up the existing process of laboriously copying books “from dictation or a rented master copy” (Pettegree 58). The first generation of publishers who took up the technology were “remarkably conservative” in what they printed (Pettegree 59). Early staples of the press echoed established tastes of the preexisting market for manuscript books, such as liturgical works, legal texts on civil and canon law as well as medieval medical and scientific texts. The printing press’s early usage was backwards looking: it was used to print large, expensive books. The mass market potential of the machine went overlooked. However, the logic of mass distribution was present from the press’s inception. To print a single page requires a fixed amount of labor to lay out the content, while the cost of any additional copy is reduced to the cost of materials. In other words, it is dramatically cheaper to print the same book one hundred times than to print one hundred different books. Each subsequent copy sold amortizes the initial fixed layout cost. For the printer, a large volume of the same thing is always better.
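
The arithmetic behind this claim can be sketched with purely illustrative numbers; the costs below are hypothetical stand-ins, not historical figures drawn from Pettegree.

```python
# A minimal sketch of print-run amortization with hypothetical costs.
# The figures are illustrative only, not historical data.
def per_copy_cost(fixed_layout_cost: float, marginal_cost: float, copies: int) -> float:
    """Average cost per copy when one layout is spread across a print run."""
    return fixed_layout_cost / copies + marginal_cost

# Printing the same book one hundred times amortizes a single layout cost...
print(per_copy_cost(fixed_layout_cost=100.0, marginal_cost=1.0, copies=100))  # 2.0
# ...while one hundred different books each bear the full layout cost.
print(per_copy_cost(fixed_layout_cost=100.0, marginal_cost=1.0, copies=1))    # 101.0
```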

Although the printing press reduced the cost of distributing textual information, the addressable, literate audience was still a fraction of the population that was reachable through the oral apparatus. Nonetheless, the capacity to broadcast a message, to issue a command or spread an idea, which once had been exclusive to traditionalist institutions, took a democratic turn. A limited, but robust, culture of communication developed among the bourgeoisie. The distribution of printed books, pamphlets and newspapers, while still censored by the state and obstructed by widespread illiteracy, generated the ideal of mass communication unmediated by the sovereign or other coercive authority. Information could be disseminated through the “force of the better argument,” rather than at the whim of the sovereign or the church. The content distributed to the masses would no longer bear the mark of domination.

Censorship & Publication in France

The printing press and the ideal of mass communication that it promoted were recognized by the state as a potentially disruptive pairing. As a result, publishing in early modern Europe was subject to strict regulation. The domination of religious-sovereign authority, while now less directly tied to mass communication than under the oral epoch, persisted in shaping the views that could be articulated to the public. The dictates of the powerful, rather than the fascinations of the masses, still held sway, despite the shift to a mechanism of distribution that empowered individuals to present their own ideas publicly.

The publishing industry at the cusp of the French Revolution invites further inspection. In the decades leading up to the revolution of 1789, which would topple the ancien régime, the chafing between the order inherited from the old communicative epoch and the competitive logics of the new could already be witnessed. The transformation of the French publishing industry attests to the transition from the dominance of oral mass communication towards that of print. The specific features of this transition clarify the model of medium-driven communicative epochs: (1) an inherited social order, which betrays the logic of the previous medium, (2) a technically-induced change in the capacity for domination, which undermines received norms, and (3) competitive pressure as an impetus for the logic, or Message, of the new medium to be made apparent.

The Crown exerted a tight control over the dissemination of ideas in the old French literary system. Censorship of speech and writing was the official policy of the state. A large enforcement apparatus literally policed thought, surveilling printers and booksellers and censoring work deemed a threat to “received religion, established power, or accepted morality” (Darnton and Roche 14). Despite the printing press’s promise of mass communication freed from the domination of the sovereign-religious order, the state was still fully in control of distribution. In this new communicative era, the medium and the Message were not yet one and the same.

The implementation of this ideological control took two parallel forms: active censorship of works and symbiotic cooperation with publishers, attained through the granting of economic monopolies. All works that were to be printed in France were subject to review by agents of the state, under the Office of the Book Trade (Darnton and Roche 7). These government-employed censors were specialized by discipline and would review the works that fell under their domain of expertise. Approved works could be printed only by licensed publishers. An efficient ‘book police’ oversaw the production and distribution of all printed material. Publishers were subject to regular inspection, and even unannounced searches of their property. The police would account for “typographic equipment such as presses and type fonts” in order to prevent backroom print shops from escaping supervision and would examine inventory to ensure that no forbidden books were being produced (Darnton and Roche 20). Violations were met with strict penalties, from heavy fines all the way to imprisonment.

But, rather than enforce a purely punitive regime against publishers, the French state simultaneously granted them certain privileges. The book police, in their zealous prosecution, were not only looking for ‘bad books,’ but also counterfeits, pirated books and other foreign works that violated the monopoly granted to licensed publishers. The cooperation between publishers and the state sheds light on the relationship between competition in the attention market and the possibility of control over information. By weakening market forces through the enforcement of a publishing monopoly, the state could exert more ideological control. Publishers had more to lose from circumventing the censorship regime and were buffered from the impartial, autonomous power of competition. Rather than being directed by the masses through the logic of the mass market, the media remained subservient to the sovereign, still “on the side of power in [their] manipulation” (Baudrillard 84).

However, when the decisions of the censors diverged too far from public opinion, their legitimacy was called into question. As the culture of enlightenment permeated the expanding public sphere, the gap between the existing law and what the censors could expect to enforce grew larger. Furthermore, the administrators of the book trade, who, for the most part, were well-off bourgeois intellectuals, had no desire to stifle literary production and were torn between ideological and economic responsibilities. This fracture was felt even more acutely by publishers themselves: “control and commerce became increasingly uneasy bedfellows” (Darnton and Roche 25). The demand for ‘philosophical books’ – the terminology used by publishers for books that contravened the censors – soared in the years leading up to the Revolution. And publishers who stocked this unsanctioned inventory were well rewarded: the price of one of these ‘philosophical books’ was usually twice that of a comparable book. The massive public demand for forbidden content eventually overshadowed the threat of punishment by the sovereign. On the eve of the Revolution, the century-old edifice of press control, established under the ancien régime, simply evaporated. The dam broke: the fascination of the public directed the media, who gave in to market demand, disregarding the establishment’s wishes. The print epoch of mass communication finally came into its own, shedding the inheritance of the previous era. The print medium had discovered its Message.

There is an inverse relationship between the domination of those in power and the level of market competition the media is exposed to. As the old literary order of the ancien régime receded, many new entrants joined the newly deregulated publishing market. Publishers could no longer comfortably rely on monopoly profits from book sales and instead had to compete for mass market appeal. Without the protection of the state, the media would need to find a new patron: the world of commerce.

Habermas and the Emergence of the Public Sphere

The emergence of the bourgeois public sphere occurred at the same time as the development of early print media. The two are co-vital. From the beginning, the world of letters – the vast array of periodicals, journals and publicly distributed essays – was invigorated by the mercantilist demand for information on foreign affairs, commodity prices and other commercially relevant happenings. However, slowly there was a shift from publications “containing primarily information” towards those that offered “pedagogical instruction and even criticism and reviews” (Habermas 25). The ‘world of readers’ expanded from merchants exclusively focused on commercial information, forming the foundation for a critical public. Criticism of art, theater, literature and even politics, which began as conversation between individuals, expanded into print as “contact among these thousands of circles could only be maintained through a journal” (Habermas 42). Those who contributed felt themselves as part of a broader public discussion that “knew of no authority besides that of the better argument” (Habermas 41). Content was still purposeful, rather than merely entertaining. So new entrants to the ‘world of letters’ were brought up to the level of culture, rather than debasing it. During these early years of print media, publications were not intended to be commercially viable on their own. Fledgling newspapers were either backed by wealthy patricians or the independent initiatives of the well-educated. Early print media “violat[ed] all the rules of profitability” because the measure of its success was not the return on investment, but the publication of a message (Habermas 182).

The early print media, on Habermas’s account, largely lives up to the bourgeois ideal that generated it. The implicit physical and psychological coercion that sustained oral mass communication could be replaced. A new guiding criterion could replace the coercive commandments of social reproduction in selecting the information that should be spread. A new standard would indeed emerge; however, it would not conform to the bourgeois ideal of rational-critical debate.

Habermas on Advertising

In Habermas’s account of the public sphere, the introduction of advertising as a business model for the early press “put financial calculation on a whole new basis” (Habermas 184). Before this transition, culture was a commodity in form, but not content. The elements of culture were made more accessible economically because they were able to be mechanically reproduced at a low cost. Paperback books are the principal example of this phenomenon. While cheaper to produce, the content of paperbacks remained essentially unchanged from their previous, more expensive hardcover instantiation. While at first the expansion of the public sphere was facilitated exclusively by reducing costs, in the end, the press succumbed to the leveling demand of the mass market and a “commercially fostered consumer attitude” took hold (Habermas 169).

The possibility of turning an advertising-funded newspaper into a profitable investment proved to be an inflection point. Publishers, once less receptive to market forces, were simultaneously emboldened to act by the prospect of new profits and compelled to do so by competitive pressure, as newspapers that failed to maximize sales lost their influence in the long run. The assembly of the newspaper, which before had been a literary endeavor, took on a new profit-oriented motivation. The peculiarity of this relationship revealed itself only as the logic of the market began to “penetrate… the substance of the works themselves” (Habermas 165). This is the other function of the market, which Habermas labels psychological accessibility, where economic imperatives demand culture be consumed more easily, with fewer “stringent presuppositions” (Habermas 166). Commercial pressures that originally were restricted to the economic accessibility of the works started to change the essential character of the products themselves. When subjected to the incentives of the mass market, producers could more easily achieve “increased sales by adapting to the need for relaxation and entertainment” rather than through the promotion of “culture undamaged in its substance” (Habermas 165).

Mass Media Epoch

The emergence of radio in the early 1920s, followed by the transition to television in the 1950s, signaled the establishment of a new communicative epoch. The technical innovation that made both television and radio possible, that inaugurated the mass media era, was wireless broadcasting. Unlike the protracted transition from an oral culture to one structured around print, the logic of broadcast was rapidly adopted. Broadcast’s Message conformed to and, indeed, radicalized the preexisting ideal of mass distribution embedded in the printing press. Mass audiences could be reached through both media and monetized through advertising, yet broadcast was structurally more competitive. As new instruments measured audience engagement more systematically, attention became increasingly legible. The media knew exactly what fascinated the public, which in turn meant increased viewership and sponsorship revenue. The mass media’s response mirrors Habermas’s account of advertising’s effect on the early newspapers of the public sphere. The influx of commercial pressure changed the character of the content. The masses were now “directing the media into spectacle” (Baudrillard 84). And those ostensibly in control, the media executives, many of whom had been in the industry since the early days of radio, slowly realized they were powerless to resist. The dictates of the rationalized attention market could not be ignored.

Logic of Broadcast

With broadcast, the medium comprises two technologies: the transmitter and the receiver. Each shapes the Message in a different way. Analysis that focuses too narrowly on the physical presentation of content misses the important structural features of transmission. Yet the receiver cannot simply be ignored: to do so is to disregard a physical device that was present in millions of homes, the locus of mass culture, and that enabled participation in the distinctly American ritual in which, every night, “a handful of people speak, [while] the rest listen” (Mander 27). These dual technologies reflect the decoupling of distribution from physical instantiation. It is this separation that accounts for the intense competition that distinguishes the mass media epoch.

Walter Lippmann once said of the television networks, “It’s as though this nation had three mighty printing presses. Only three.” (Friendly 294). His analogy offers an accurate description of the mass media landscape, but no real explanation for its emergence. For that, look to the underlying technology. Television content was transmitted via the Very High Frequency (VHF) band of spectrum. And there was only a finite amount allocated by the Federal Communications Commission (FCC). This restriction exerted a centralizing force on networks. Furthermore, the broadcast format itself, a single, centralized source pushing out a message, imposed certain basic demands. As everyone received the same content, the mass media had to orient itself towards mass appeal. The content of mass media needed to make itself psychologically accessible, to use Habermas’ terminology.

As for the receiver itself, its most relevant feature is something taken almost for granted in our age of ubiquitous screens: that an infinite variety of content could be displayed by a single device. The device, the receiver, was a chameleon. Individuals who collected vast libraries would now only ever need a single television set. Unlike print media, which one purchased for its specific content, the television could communicate anything and everything. Yet, despite a theoretical capacity for infinite variety, cultural critics have noted that, in practice, the diversity of content was severely limited. Mass media was infected by uniformity.

The physical decoupling of the substance of content from its embodied form had profound implications. As all of the networks broadcast their content concurrently, yet a television set could only display a single channel at a time, the competition for attention within each time slot was zero-sum. Every show had to vie for viewership.

Peak Attention in American Television

The case of television in the United States from 1950 to 1980 explains the media’s descent into spectacle. In three decades, as a result of a highly competitive and newly legible attention market, the national television networks came to understand the Message. Though I will focus on television, much of this analysis applies to radio as well. 

Early on, programming, the mixing and matching of various types of content to maximize audience and, therefore, revenue, was more of an art than a science (Wu 101). The top executives at each corporation wielded enormous personal influence. They made the choices about what content would be shown on television. Of course, these media executives were still trying to increase the size of their audience, but this required a level of personal intuition, rather than a formulaic, by-the-numbers approach.

The three networks, ABC, CBS and NBC, were all engaged in a zero-sum competition for attention that took place every night. There were only a few hours of primetime, so the viewer was forced to actively choose between networks. And maintaining the viewers’ interest was essential, especially when an alternative was just the press of a button away. Attention was made rivalrous, in a way that it simply had not been in the print epoch. As David Halberstam would later write, “the competition was so fierce” yet it was fought “with weapons so inane” (Wu 417). The daily newspaper could be read at one’s leisure. It could be supplemented by a magazine or by a book. The choice was left to individual preference and had no impact on others’ consumption. From the perspective of the print publisher, once their product had been sold, whether it was subsidized by advertising or not, the reader’s particular habits were commercially irrelevant. Their circulation numbers, which determined how much they could charge advertisers, were based on sales and subscriptions. This was a sufficient proxy for attention given the structure of print media, but was nonsensical for television.

Yet, concrete insight into which network was actually winning the competition for attention was surprisingly hard to come by. Everyone knew that the primetime shows received a mass audience. They had to, because there were only three networks to choose from and everyone watched television. Advertisers were more than willing to pay for access to that audience. The difference between 30 million viewers and 40 million seemed irrelevant to an industry known for the adage: “half of the money spent on advertising is wasted; the trouble is nobody knows which half.” Statistical legibility into viewership remained spotty until the invention of the Nielsen ‘Peoplemeter’, a device which claimed to “scientifically measure human attention” (Wu 104).

Nielsen Ratings & The Legibility of the Attention Market

The Nielsen ratings changed television. “If you can put a number on it,” Arthur Nielsen was known to say, “then you know something” (Wu 104). Nielsen took the ebb and flow of attention and made it visible for all to see. The art and feel that once guided programming decisions were replaced with a science. Whereas before human intuition had moderated the pressure of the market, media companies now “marched to the beat of a distant drummer called ratings” (Friendly 168). The zero-sum competition intensified because networks and advertisers alike now knew which shows were ‘winners’ and which were ‘losers.’ Shows had to prove themselves objectively, to draw an audience or otherwise face cancellation (Wu 140). The content of television, while always oriented towards the mass market, seemed to descend to new lows as a result of the competition for ratings. Executives were “incapable of stopping the inexorable flight from quality” (Friendly 168). They were no longer responsible for holistically selecting content, but instead for optimizing a single number. The imperative was mass amusement, to be entertaining to everyone. The characteristics of content that met these requirements are described by media theorist Neil Postman: “bitesized is best, … complexity must be avoided, … nuances are dispensable, … qualifications impede a simple message, … visual stimulation is a substitute for thought, and … verbal precision is an anachronism” (Postman 105). To defy the commercial need of easy accessibility was to reject “television’s requirements” (Postman 106). The system, as it was now constituted, brooked no dissent. In the blind pursuit of profit, the reins of the media had slipped from the hands of the powerful into the outstretched arms of the masses.

Speculations on the Internet Epoch

The internet, like television, is a multifaceted medium that describes a constellation of interrelated technologies. The unified experience, familiar to over two billion users worldwide, is a carefully constructed illusion that appears when each component is working in concert. While ‘television’ as a phenomenon might be subdivided into transmitter and receiver, the internet’s proper level of analysis has yet to be determined. One could examine TCP/IP and the decentralization inherent in the system. The internet can trace its origins to a DARPA project which aimed to build a communication network that could survive a nuclear strike. Another layer would be the World Wide Web and the browser, which made the web accessible to a nontechnical audience. And one can continue further still to the centralized services that structure the internet’s petabytes of content. These services might host content themselves, as YouTube does for the three hundred hours of video that are uploaded every minute to its servers. Or they might merely direct attention to links hosted elsewhere on the web. For example, Google ingests a search query and spits out the links that are predicted to be most relevant. Some services, such as Facebook and Twitter, exist somewhere in between, combining external links with hosted content into an algorithmically personalized ‘news’ feed. The internet is an amalgam of distribution technologies, layered on top of each other and competing against each other, each suggesting its own competitive logic.

Logic of Digital Distribution

There is a further difference between the internet epoch and the others discussed. The internet as a medium is so novel that its Message remains incompletely articulated. It is still unclear which specific technology of distribution will win out. However, I will abstract away from any single service, attempt to highlight the threads common to the internet itself and offer a few speculative remarks.

The internet’s Message departs significantly from mass distribution, though it maintains the focus on the legibility of attention. The concept of market research is taken to an extreme. The segmentation of a target audience into ever narrower, more specific demographics is intensified until each exactingly precise category is populated by a lone individual. The radicalism of this achievement would have gone unnoticed while television was the dominant medium. Producers would never have contemplated chasing such small audiences. Given the constraints of mass broadcast, it made vastly more economic sense to appeal to the average. On the internet, this logic is inverted.

Individualized Media

The internet is the antithesis of mass communication. To return to the initial definition, mass communication required that a message reach, if not everyone, then a large portion of the community. As the organs of mass media are replaced with an algorithmically mediated feed, even when a YouTube video receives hundreds of thousands of views, it is still only watched by a fraction of the total user base. Where the mass media produced entertainment for “addicts of mediocrity,” individualized media delivers to each user precisely what fascinates them, personally (Friendly 274). By arranging the infinite content of the internet according to individual fascinations, internet platforms obviate the commercial pressure to pursue mass appeal. The potential audience is unimaginably vast. To entertain even a fraction is to become, for an instant, the focus of more attention than all the kings and prophets of history.

The rulers of the past commanded attention because they were powerful; those who receive it today do so because they are entertaining. To consistently channel attention on the internet is to constantly be competing for it. In the individualized feed, before a piece of content is presented, it is compared against every other article, video and tweet; if, according to the proprietary algorithm that models your interests, it is insufficiently fascinating to you, you will never see it. Even those who build up a following are still subject to these demands. Nothing uninteresting can be communicated, for the very property of being uninteresting precludes distribution on platforms that algorithmically match content to the interests of the individual. Mass media had to avoid being hated; individualized media must be loved. If the mass market required an average palatability, a perfectly legible attention market demands the extreme preference of a small niche.
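
To make the structural point concrete, here is a minimal sketch of the kind of scoring-and-ranking loop such a feed implies. The function names, the scoring model and the cutoff are hypothetical stand-ins, not any platform’s actual algorithm.

```python
# A toy sketch of individualized filtering, with hypothetical names throughout.
from typing import Callable, List

def build_feed(candidates: List[str],
               interest_score: Callable[[str], float],
               feed_length: int = 10) -> List[str]:
    """Rank every candidate item by one user's predicted interest; keep only the top few."""
    ranked = sorted(candidates, key=interest_score, reverse=True)
    return ranked[:feed_length]

# Anything the model scores as uninteresting for this user is not merely demoted;
# it falls below the cutoff and is excluded from distribution altogether.
```

The point is structural rather than technical: distribution is gated on a per-user prediction, so content of merely average appeal reaches no one.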

Competition & Technological Determinism

My reading and application of McLuhan is not intended as a bland regurgitation of technological determinism (though McLuhan himself might have fallen into this trap). The Message is the promise of a technology, its imperative. But importantly, this imperative is not teleological. Just as the medium is not the Message, it is not inevitable that the medium become its Message either. We must remind ourselves that individuals are not powerless and impotent. Engineers can resist. Producers can boycott. Journalists can inform, rather than entertain. The demands of the market can be rejected. Wikipedia exemplifies this. Assuming a modest CPM of five dollars, with ten lines of code, Wikipedia could monetize the eighteen billion pageviews it receives each month to the tune of ninety million dollars, give or take. In fact, every minute that Jimmy Wales holds off pushing that ten-line commit into production, he gives up two thousand dollars.
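
The back-of-envelope figures can be checked directly. A minimal sketch using the essay’s own assumptions (a five-dollar CPM and eighteen billion monthly pageviews), not audited traffic or ad-rate data:

```python
# Back-of-envelope check of the figures above; the inputs are the essay's
# assumptions, not actual Wikipedia traffic or advertising rates.
monthly_pageviews = 18_000_000_000   # eighteen billion pageviews per month (assumed)
cpm_dollars = 5.0                    # assumed revenue per thousand impressions

monthly_revenue = monthly_pageviews / 1000 * cpm_dollars   # 90,000,000.0
minutes_per_month = 30 * 24 * 60                           # 43,200

print(round(monthly_revenue))                      # ~$90 million per month
print(round(monthly_revenue / minutes_per_month))  # ~$2,083 forgone per minute
```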

So it is possible to resist, but we are not all Jimmy Wales. The pressure to submit is overwhelming. On the level of the employee, there is the specter of unemployment. To willfully reject the logic of the market is to be rightfully terminated. But the pressure to conform to its demands exists at every level. The executive, though better paid, is essentially in the same position as the janitor. To resist is to go. Even the founder, the inventor, the owner, the celebrity is precariously positioned. While they might be spared termination at the hands of management, in the long run, they too end up in the same place. In 1938, Bill Paley, the chief executive of CBS, presciently stated that “too often the machine runs away with itself” (Friendly 168).  To obstruct the machine is to be crushed by it. To refuse to go along is to be replaced. Everyone and everything is fungible.

When it comes to meaning in media, we are confronted with an unpalatable choice. Either stable meaning imposed through deliberative control by the few (as tyranny) or the autonomous, impersonal and invisible hand of the attention market, which, in the end, results in the “liquidation of meaning” (Baudrillard 84). Any point in between is an unstable equilibrium. And one can at least negotiate with a tyrant.

Bibliography

Baudrillard, Jean. Simulacra and Simulation. University of Michigan Press, 1994.

McLuhan, Marshall, et al. Essential McLuhan. Anansi, 1995.

Pettegree, Andrew. The Invention of News: How the World Came to Know about Itself. Yale University Press, 2014.

Friedland, Paul. Seeing Justice Done: The Age of Spectacular Capital Punishment in France. 1st ed., Oxford University Press, 2012.

Spierenburg, Petrus Cornelis. The Spectacle of Suffering: Executions and the Evolution of Repression: from a Preindustrial Metropolis to the European Experience. Cambridge University Press, 1984.

Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. 1st MIT Press pbk. ed., MIT Press, 1991.

Wu, Tim. The Attention Merchants: The Epic Scramble to Get Inside Our Heads. First Vintage Books ed., Vintage Books, 2017.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis. 7th ed., Cambridge University Press, 2007.

Friendly, Fred W. Due to Circumstances beyond Our Control. Random House, 1967.

Mander, Jerry. Four Arguments for the Elimination of Television. Morrow, 1978.

Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Viking, 1985.

The Foreclosure of Revolutionary Imagination


On the Concept of History is in many ways a prelude to Adorno and Horkheimer’s Dialectic of Enlightenment. Indeed, after receiving a manuscript of the work, Adorno wrote that it manifests “the idea of history as permanent catastrophe, the criticism of progress, the domination of nature and the attitude to culture” – all of which are themes that Adorno and Horkheimer later explore. These works are undoubtedly born of the same intellectual tradition. Each is undergirded by a deep skepticism: apparently neutral concepts are exposed as instruments which obstruct revolution and endorse the status quo. However, an examination of the theoretical implications reveals a radicalism in Dialectic of Enlightenment that separates the two. Benjamin’s treatment of history salvages its revolutionary aspects, while Adorno and Horkheimer raze the concept of enlightenment entirely. History can be appropriated and redeemed, but the use of reason forecloses the possibility of emancipation.

The opening essay of the Dialectic of Enlightenment is designated “The Concept of Enlightenment,” which evokes the title of Benjamin’s work. With this allusion Horkheimer and Adorno signal that their intention is to self-consciously mirror Benjamin’s analysis. Thus, we are presented with a broad parallel between history and enlightenment: both concepts are to be viewed suspiciously and brushed against the grain. In On the Concept of History, Benjamin dispels the traditional notion of history as “recognizing [the past] ‘the way it really was'” (OCH 391). This methodology, the uncritical “establishment of a causal nexus,” is denigrated as historicism (OCH 397). Again with enlightenment, Horkheimer and Adorno turn conventional understanding on its head. Enlightenment is popularly conceived of as an affirmative project, as an emergence from man’s self-imposed ignorance. This fantasy withers, giving way to a vision of the “wholly enlightened earth … radiant with triumphant calamity” (DE 1).

These two works share an animating impulse to expose the intellectual instruments that perpetuate domination. Historicism is one such tool, as it promotes history as “a sequence of events like beads on a rosary” (OCH 397). Through its solidarity with the “heirs of prior conquerors,” it produces a vision of progress as continuous, inevitable victory (OCH 391). The process is additive, without any “theoretical armature” (OCH 396). Historical happenings are strung together to fill the vast undifferentiated time that constitutes history. The narratives of those crushed beneath this triumphal procession have no audience. There is no possible catalyst for change; this notion of history abrades each moment until all are equally empty. No order exists beyond simple chronology, which ossifies into an “‘eternal’ image of the past” (OCH 396). History becomes objective, and in doing so can only offer an inherited and backwards-facing justification for the present. Its benefit to those who rule is the result of an entirely uncritical disposition, which seeks only to transmit the “document[s] of barbarism” from one generation to the next (OCH 392).

Similar to historicism, which sycophantically sympathizes “with the victor” and justifies his rule, enlightenment, according to Horkheimer and Adorno, also renders critical thought inert (OCH 391). The process of enlightenment has always “sought to report, to name, to tell of origins” (DE 5). This nominalistic tendency is a salve for man’s primordial fear of the unknown. It allows man to understand the world and control it. The terror of an ‘outside’ is confined first within myth, then metaphysics, and finally a positivistic schema. But by supplying a name, enlightenment also exerts a social power. In the universal symbol, man sees “the permanence of social compulsion” reflected back at him (DE 16). He intuits that “the entire logical order” is grounded in the current reality (DE 16). The coin of reason, which claims impartiality and objectivity, bears the indelible stamp of society.

Both historicism and enlightenment perform a similar intellectual function: a fortification of what exists, “a subsumption of the actual” and a rejection of revolutionary possibility (DE 21). Each concept feigns neutrality in the struggle between oppressed and oppressor. Indeed, it is in this false objectivity that their true allegiance manifests itself. By casting a given reality as natural and absolute, while asserting a lack of bias, the sympathy of enlightenment, like historicism, is clarified. Any confirmation of “the eternity of the actual,” whether “in the clarity of the scientific formula” or in stable, linear historical narratives, secretly aligns with the status quo (DE 20). The unfolding process of enlightenment, where “every definite theoretic view is subject to the annihilating criticism that it is only a belief,” culminates in an intellectual tradition where only the extant is legitimate (DE 7). There is no place for criticism, as critique presupposes a world different from the present: “revolutionary imagination feels shamed as utopianism” (DE 33). Thus, “[t]he actual is validated, knowledge confines itself to repeating it, thought makes itself mere tautology” (DE 16). In an entirely enlightened world where history is written exclusively by the victors, all that can be thought is the repetition of the actual: “the regression of the masses today lies in their inability to hear with their own ears what has not already been heard” (DE 28).

Notably, however, Benjamin takes a crucial step that distinguishes his theoretical approach from those inspired by him. Benjamin splits historical practice in two. He amputates historical materialism, allowing him to ‘disassociate’ it from the practice of historicism. This partition is fruitful, as it carves out a space for history to be used to further revolutionary action. Conversely, enlightenment, for Adorno and Horkheimer, is monolithic and totalizing. They are far more radical in their denunciation: “enlightenment, in the service of the present, is turning itself into an outright deception” (DE 34). Nothing in enlightenment can be redeemed, which means that the critique leveled against it is similarly absolute: no scrap of reason is left as a foundation for critical thought.

Benjamin views historical materialism as the proper mode of historical study. His denouncement of historicism allows him to maintain a historical impetus for revolution. Historical materialism offers this critical counterpoint: a deep empathy with the ruled. Benjamin’s project is no less than the declaration that history must be reinterpreted. His theses on history are littered with cryptic references to the necessary insights of a historical materialist. His inquiry demands a dramatic reconceptualization of the use of history.

Benjamin examines the source of revolutionary imagination: the messianic power of the past. Here, he breaks from traditional Marxist doctrine, which conceived of class struggle as the locus of human emancipation in history. Benjamin, via his allegory of the automaton, suggests that a hidden agency powers the outward operation of historical materialism. This secret, vitalizing “theology” requires some explication, for it is not an evocation of organized religion. Instead, Benjamin is referencing the “weak messianic power, on which the past has a claim” (OCH 390). Proper contemplation of the past forces a peculiar awareness in the present. It brings a realization that society has not always existed in its current mode. One cannot be envious of an unimaginable future, but the past permeates into each subsequent moment: “In the voices we hear, isn’t there an echo of now silent ones?” (OCH 390). The historical materialist understands that history is a library — a “secret index” — of unfulfilled prophecies (OCH 390). These muffled voices call out for their messiah to come at last. The call to completion, the redemption of this inheritance, is what Benjamin labels theological. These “cited” moments sustain the revolutionary impulse (OCH 395). Thus, history edifies the revolutionary just as the vision of ancient Rome galvanized Robespierre.

Benjamin, in a mere twenty theses, offers something that is not present in Adorno and Horkheimer’s work: he affirms the theoretical possibility of a revolutionary impulse, which is all but denied in Dialectic of Enlightenment. Through the distinction between historicism and historical materialism, Benjamin suggests that a discontinuous break from the present is attainable; that the past can still be redeemed. Benjamin is capable of an unflinching articulation of the challenge, without lapsing into the same fatalism that resigned Adorno and Horkheimer to the Grand Hotel Abyss.

A Chemist’s Eye for Difference


The Periodic Table can be viewed as an interrogation of difference. Primo Levi’s writing is animated by polarity. He recognizes division everywhere, questioning both the mechanisms by which it is enforced and the veracity of its claims. From the obvious division between Jews and gentiles, a number of other smaller dualisms emerge, either as allegory for this primordial separation or in opposition to it, destabilizing and complicating the narrative of estrangement.  Levi’s ambivalence towards difference is clear. However, he never openly advocates bridging the divide. For him, the will to unity is fascistic at its core. Throughout the book, there is a rhythmic oscillation between the general and the specific, similarity and difference, which never reaches a stable conclusion. This lack of finality is the point: to erase difference is to lapse unthinkingly into Fascism, but to elevate it is to reinforce traditional delineations, which demand a stereotyped similarity of their own. Ultimately, Levi delivers a hagiography of nuance against blind abstraction, finding a precarious balance that acknowledges the generalized differences across groups, while simultaneously preserving the individuality of the various members.

Judaism is the first difference; it precedes all others and casts its shadow over the rest. This division is introduced as timeless, as an ancient “wall of suspicion, of undefined hostility and mockery” that transcends any individual (4). The division is based on generalities and stereotypes. It is the work of the collective. So, an entire essay, “Argon,” is dedicated to committing to paper the idiosyncrasies of specific Piedmontese Jews relative to their gentile counterparts. In this opening explication of difference, Levi threatens the clarity of the separation. By examining its human element, he deflates the logic of religious tension. For example, when discussing his father, Levi characterizes him as “superstitious rather than religious,” describing in detail his weakness for prosciutto, a meat prohibited by Talmudic dietary restrictions (19). That the man’s love for prosciutto would so regularly overpower his adherence to the tenets of his faith humorously subverts the stereotyped division. While the ancient tension is felt in the world of Levi’s youth as “the myth of a god-killing people dies hard,” he reminds the reader that ‘the Jew’ is not a monolith, nor the stereotype (11). By instantiating the Jewish faith in individuals, he humanizes it.

Yet to destabilize stereotypes is not to suggest that differences do not exist. Indeed, for Levi, to valorize uniformity is Fascistic: “it wants everybody to be the same and you are not” (34). Levi constructs another duality which complicates the notion that differences are merely an anachronistic religious inheritance with no real meaning to individuals. Through experiments with zinc, he explores the relationship between purity, “which protects from impurity like a coat of mail” and impurity, “which gives rise to changes, in other words, to life” (34). Zinc will not react if it is homogeneous; it is rendered inert unless a foreign reagent is added. Thus, difference is necessary, Levi proposes, for it enables life to unfold in all of its complexity. The demand for purity, rather than a vitalizing force, as supposed by Fascist ideology, instead results in stasis. In this framing, Levi apparently throws his support behind ‘impurity.’ And he does to some extent, but not entirely.

Levi recoils from the reification of difference just as much as from the compression to uniformity. According to Levi, the Jews’ minority status has been assumed for so long that its mark has infiltrated the language itself. The dialect of his youth is a “crafty language meant to be employed when speaking about goyim in the presence of goyim” (8). Language, despite its usual communicative function, is here used as a tool to further the separation. The intentional obfuscation helps to build up “symmetrical barriers” of distrust (4). Levi condemns the mutual distancing that occurs as a result of this “atavistic terminology” as something incomprehensible, for those on either side of the barrier were not as different as they supposed (124). He returns to the theme of language as a mechanism of division when he is visited by a Piedmontese customer whose dialect puts him “ill at ease” (170). This misgiving does not reflect any animosity on Levi’s part, but rather the difficulty of constructing a response without suggesting a divide: “it is not good manners to reply in Italian to someone who speaks in dialect, it puts you immediately on the other side of a barrier” (170). Levi seeks to avoid building walls between himself and others. Simple signifiers, like language, offer purchase to old stereotypes. For example, responding in Italian to this customer would have placed Levi on the “side of the aristos, the respectable folk” (170). To answer in such a way would condemn him to be an abstraction in the mind of the other.

One must be careful in embracing generalities. Levi learned this through his mishap with potassium: seemingly similar elements can possess very different properties. The potassium explodes into flame, while sodium would not have: “the chemist’s trade consists in good part of being aware of these differences” (60). Awareness of individual difference is essential. Levi possesses a rare talent for seeing through abstractions, through cobwebbed stereotypes, to uncover what is genuinely there: the individual behind the type. When Levi writes that “[o]ne must mistrust the almost-the-same … the practically identical, the approximate, the or-even, all surrogates, and all patchwork,” he is taking aim at the simple delineations offered by religion or class or any other socially salient divide (60). As a chemist, he is sensitive to particularities, for small differences can lead to radically different outcomes. His hesitation to collapse the similar into the same is productive. Levi asserts the value of difference without allowing the cult of easy demarcation to reduce individuals to a stereotype.

Levi does not yield unflinchingly to abstraction, even in cases where it might offer an easy, moralistic simplification – yet he feels the pull. He truly grapples with the challenges posed by stereotyped abstractions when confronting his Nazi captor: Doktor L. Müller. Do “perfect Germans exist?” Levi asks, “Or perfect Jews?” (216). Just as one Jew is not (and cannot be!) interchangeable with any other, one German cannot represent all Germans. Levi is characteristically much more interested in the particular than the general: “when the interlocutor without contours, ghostly, takes shape before you, gradually or at a single blow” (216). Yet when Müller presents himself “with all his depths, his tics, his anomalies and incoherences,” Levi cannot help but be frustrated (216). Müller wants absolution and pins Auschwitz on “Man, without differentiation” (219). He retreats into abstraction, for it offers him an easy redemption. Müller slips into “stereotyped phrase” when looking for a means of overcoming the past (222). Levi is prepared to do the same. In his draft, he is ready to say that “every German must answer for Auschwitz, indeed every man,” parroting Müller’s line back at him (223). Yet before the letter can be mailed, Müller calls and asks to meet in person, a setting where he would be more “a man than … an opponent” (218). Levi agrees, but Müller dies before the meeting, and once again the tension between the general and the specific finds a dubious, unsettled equilibrium.

Levi writes with a keen eye toward difference; he eschews lazy abstraction, instead grappling with individuals as such, a much more challenging task. The Periodic Table scrutinizes the relationship between difference and abstraction, never definitively picking a side. Difference creates individuality, as when a person departs from the stereotype, but it also opens cleavages for division. Abstraction helps create useful categories but can also unfairly paper over the important distinctions between individuals. Levi is careful to balance abstraction with concreteness, difference with similarity, and specificity with generality, seeing people in their complexity rather than reducing them to a category or stereotype.

Montaigne and Modernity

ø

Montaigne’s essay, “On Vehicles,” contains, latent within it, a critique of fledgling modernity. In classic Montaignian rhythm, the work meanders hesitatingly between precepts for rulers, a recollection of the great Roman spectacles, and a brief interlude on the frailty of human understanding, all culminating in an exposition of the European conquest of the Americas. But with Montaigne, despite the eclecticism, there is always a thread; to find it, one simply needs to look for the frayed edge. While this essay hinges on an implicit comparison between the New World and the Old, one must interrogate the source of the tension that leads Montaigne to say, “I very much fear that we shall have greatly hastened the decline and ruin of this other hemisphere by our contact, and that we shall have made it pay very dearly for our arts” (277). Montaigne sees a toxic seed that has taken root in the European world. This informs both his antiquarianism and his Rousseauvian views on the people, and notably the monarchs, of the New World. The titular theme — vehicles — becomes symbolic of modernity, an outward manifestation of the distinction between the old and new.

The essay begins with a brief exposition on causes and the difficulty of settling on a single “fundamental” explanation given the myriad of potential factors (264). Quoting Lucretius, he writes: “It is not sufficient to state a cause, We should state many, one of which will prove to be true” (264). This provides our starting thread. Montaigne is in search of a cause; he seeks an explanation for the decline of the Old World through a comparison with the Americas. Nothing about his inquiry is methodical – perhaps why it is so easy to overlook. However, Montaigne is genuinely perplexed by the fallen state of the West, and this essay is his attempt to “pile up” causes to see if the reason can be found among them (264).

After a false start on Hungarian war vehicles, Montaigne transitions toward the concept of liberality, a willingness to give or spend freely. While a virtue among private citizens, the liberality of monarchs — despite convention — should not be considered a royal virtue. A monarch’s generosity with his subjects is false, for it comes at their expense. But the truly insidious nature of this type of royal largess can be seen when kings try to purchase loyalty through their beneficence. This transactional allegiance is entirely precarious. The monarch degrades his connection to his citizens; he “exhausts himself in giving” (271). The natural obligation of subjects to their sovereign is reduced to a purely commercial relationship: “do you want your subjects to look on you as their purse-bearer, not as their king?” (272).

Even half a millennium ago, Montaigne felt the tremors of the tremendous discontinuity to come. It is easy to compress history while reading Montaigne today. To the modern ear, his discussion of proper princely behavior is flattened; the chronology is muddied. Both the exemplary and the warning cases are shunted together from the modern perspective. The anecdote about Cyrus is equated, at least subconsciously, with the rulers of Montaigne’s era. After all, both are history to us today. This flattening deemphasizes Montaigne’s antiquarianism. However, he is drawing on the great rulers of the past to provide advice for the princes of his era. He writes to serve his contemporaries, “the kings of today” (271). He sees that something has been lost: the “inestimable treasure” of loyal subjects has been replaced by a false coin of “mercenary men” who do not hold the ruler in any special regard (272).

Though it is a subtle point that can be lost in Montaigne’s wandering prose, this conception of transactional commerce as a deracinating, brutalizing force appears again with respect to the European treatment of the New World. Indeed, if anything the motif is sharpened by the comparison: “So many towns razed to the ground, so many nations exterminated, so many millions put to the sword, and the richest fairest part of the world turned upside down for the benefit of the pearl and paper trades” (279). Here, Montaigne offers the first inchoate, yet simultaneously prototypical, indictment of modernity, and thus brings the concept into being — for the idea of modernity truly begins with its first critics. He observes the tremendous exertion of human energy and violence for such a mundane end. One cannot help but be struck by the sacrifices demanded in order to ensure “mere commercial victories” (271). Economic self-interest is substituted for natural obligation or loyalty: “nothing goes so naturally with greed as ingratitude” (271). And so, greed comes to replace virtue as the impetus for all human activity.

There is an implicit comparison between the European monarchs and those recently deposed rulers of the New World — and the latter come away looking much better. To Montaigne, the “last representatives of the two most powerful monarchies in that world” embody the virtuous, yet abandoned, precedent set by the Occident’s early rulers. The king of Peru is described by Montaigne as “a frank, generous, steadfast spirit, also of a clear and orderly mind” (280). And the story of the torture and execution of the Mexican king conveys a deep courage and nobility: even after being humiliated and subjected to sadistic treatment at the hands of his captors, he maintains his composure. The royal virtue of these kings is always exemplary. “The use of the coin being entirely unknown to them,” the loyalty of their subjects is not of the cheap and counterfeit kind that Montaigne sees prevalent in Europe (283).

Their extraordinary wealth — the motivation for the Spaniards’ brutality — was solely ornamental, not the wellspring of their power. It was only “an object for show and parade,” not an instrumental implement of commerce, “to be divide[d] and converte[d] into a thousand shapes,” as in Europe (283). These vast gold deposits were merely “piece[s] of furniture that had been preserved from father to son by many powerful kings” (284). This is a telling line because it recalls the earlier dictum offered by Isocrates: “be sumptuous on furniture … but avoid all such magnificence as would drop out of use and memory” (268). In this parallelism, Montaigne suggests the New World realizes the ancient wisdom that has fallen out of practice in the West. Yet this realization contains an element of tragedy: by surpassing the West in its purported virtues, “they ruined, sold, and betrayed themselves” (277).

From this tension between old and new, the horse takes on a symbolic significance in the work. The idea of the vehicle is affixed to Montaigne’s conception of modernity. Horses, being unknown and alien in the “infant world” of the Americas, demarcate the European conquistadors from the natives (277). The horse, the vehicle, becomes the emblem of modernity. The mode of a man’s transportation becomes an outward sign of his ‘historical age.’ When the Spaniards received their ransom from the last king of Peru, they ensured “their horses were never shod with anything but solid gold” (280). Here, quite literally, the instruments of a dominating modernity are venerated at the expense of the old ideals of virtue. The horse allowed the European, “these strangers mounted on great, unfamiliar monsters,” to impose himself on the New World (278).

And in the end, when the Peruvian king falls in battle, he is not brought down on equal terms. Sitting in his golden litter, he surveys the battle, his subjects willingly sacrificing their lives to keep their king abreast “by the sheer strength of their arms” (284). When the last great monarch, this exemplar of Montaignian virtue, is finally wrested from his litter, it is by a man on horseback. Locomotion becomes a manifestation of modern advantage, while simultaneously juxtaposing the old ideals against the new.

On Language and All That Is Lost in Transit

ø

Is there anything more than those ideas and thoughts that can be put into words? Can you think something that cannot be expressed, something that cannot be mechanically translated from the ‘pre-lingual’ notion in your head into a collectively understood syntax? Is there ‘thought’ before language? These are the questions that remain to be answered.

In a highly scientific age, such as our own, the habitual response is to make thought a universal feature of nature; to flatten thinking until it is ubiquitous. Clearly, the brain operates before language is acquired. Infants ‘think’ in this colloquial sense – that is, they exhibit the outer signs of life, they have some sensory experience. They smile back at the face of their mother. They cry when they feel pain. Animals could be said to ‘think’ in this way as well. The lion intends to hunt, for it is hungry. The sparrow builds its nest and feeds its young. The outworks of intent are present. These ‘thinking’ creatures respond to stimuli and mere “sensations are enough to guide them automatically” (Durkheim 338). Thought understood like this – as a feature of sentience, undifferentiated – is a reflection of our modern, scientific ethos, which urges us to imagine ourselves as indistinct from nature. Of course, the scientists are right. Each of us is part of nature. We are composed of the same material and share a physiology: we are animals. Yet, it seems to me that the drive to entirely untether thought from anything specifically human is misguided. Schopenhauer said somewhere that “the mere addition of thought gives rise to the vast and lofty structure of human happiness and misery from the same basic sensations of pain and pleasure, which are experienced by every animal” (Schopenhauer, 17). He captures a subtlety that has been lost in modern discourse. We can embrace our animal nature while simultaneously understanding mankind as somehow distinct, with thought as the demarcation. It seems, however, that we are still in desperate need of a definition of thought. To settle on one, we must seek out that which separates man from animal, because that is thought.

Vision once seemed to me to be the most objective sense. However, a recent visit to an optometrist disabused me of this notion. Regardless, I think the intuition is quite common. Unlike smell or taste, which are challenging to describe in language, the experience of sight seems so easily communicable. I am now convinced that, in actuality, vision is the most deceptive, precisely because of the apparent effortlessness of its transliteration into words. If I were to attempt to describe a sensation, I could convey the abstract concept, but none of the feeling: “it is impossible for me to pass a sensation of my consciousness along to someone else’s consciousness; it has the stamp of my body and my personality and cannot be detached from me” (Durkheim 329). Indeed, I have no way of knowing that the subjective experience of the other has any true correspondence to my own. The general concordance gives no reason to be suspicious, all the while the actual texture of the experience is abraded by language.

Thus, for accurate measurement to occur, one must be removed from reality and introduced to an entirely artificial environment: a well-lit, alabaster-white room. The depth and color of the world constitute the first sacrifice: a reduction of dimensionality and a desaturation of vividness. I sat, as one does, in the chair opposite the eye chart. The rows of letters descended into oblivion, marching down the wall until they were impossible to resolve. Through the precision of the vision test, I was reminded of just how much reality is sacrificed to construct perception. So much of everyday experience cannot be as it appears. If overlaying a convex pane of glass in front of my eyes suddenly sharpens the world and brings it into focus, then what reality was I living in before? It didn’t feel blurry. Few consider the particularity of their own perspective until it is scrutinized relative to others. The limits of perception are all but imperceptible to any single individual. We can only know ourselves through comparison, for “each is furthest from himself – with respect to ourselves, we are not ‘knowers'” (Nietzsche, Genealogy, 1). The artifice of seeing is taken for granted.

Conditions, pathologies, and abnormalities of all types have an interesting power to illuminate the unspoken definitions of normalcy. What does life look like when realized in bodies which betray the absence of things that others hold in common? Just as no one considers oxygen until they are short of breath, we need the counterfactual to recognize the function of the mundane and the ubiquitous. Otherwise, all these things held in common recede from view; they are so overtly present as to become part of the background.

Insanity has a revelatory character precisely because the extent of the normal is defined by what is considered deviation. I find that insanity is best understood as a breakdown in communication, where an individual slips out of mutual intelligibility and their thought processes take on an ‘irrational’ character. It is more a social label than a medical diagnosis. It is a decline from a previous state of mental health. I had two brushes with insanity over the summer. The first I no longer remember. The second prompted this reflection. “Who is more sick: the man who bellows out incoherent cries, which slip the bonds of language and communicate something more primal: anguish; or the man – the men, women and children – who listen to that anguish, that insensate shriek, and pretend to hear nothing? Who is mad in this exchange? One has lost his grip on reality, the other willfully rejects it. The former has nothing to hide; he could not lie even if he wished to mask his suffering. The latter, however, feigns ignorance – he pretends to an immaculate conscience. He is pierced by the scream, yet he does not react outwardly. Perhaps, at most, a head is lifted from a screen, a gaze averted, an eyebrow raised. But even this is unusual. The normal response is a peculiar type of unseeing. An incomplete blindness where one observes, but does not feel. A dispassionate affect that is most disconcerting, especially when I came to recognize it in myself. Everyone, for an instant, is an actor. Each is compelled to play the role, for if the mask were to slip, even if just for the briefest second, the illusion would disintegrate. The crowd – the cast of this lifeless tableau – would be forced to acknowledge the oh-so-tenuous order of our streets, of our relations with other men, and of the mind itself. Insanity is studiously ignored in order to foreclose this reckoning. We don’t avert our eyes and muzzle our reactions for their sake, but for the maintenance of a shared illusion.”

To return to my subject, it seems to me that genuine incoherence, an inability to communicate, is where thought ends. The contrapositive gives us a definition of thought: thinking begins with mutual coherence, the capacity to formulate an idea in such a way that it can be conceptually revitalized in the mind of another. To think is to enter communion with other minds. Durkheim says bluntly that, lacking this capacity, man “would be inseparable from animal.” True abnormality in thought is not the realm of heretics or contrarians who, despite their pretense of nonconformity, are indistinguishable from the normal in comparison with the unarticulable foreignness of someone who has never known language.

Languagelessness of this sort is not conjectural, particularly before the rise of mandatory childhood education, when it was not rare for the pre-lingually deaf to go unexposed to anyone besides their immediate family for decades. For instance, in 18th century France, Jean Massieu was without language until the age of seventeen (Sacks 35). Born deaf, his sole mode of communication until his teenage years was an invented form of sign language, so rudimentary that it lacked any sort of grammar. A human being is not mindless or mentally deficient without language, but he is severely restricted in the range of his ideas. “It is not that he lacked a mind, but that he was not using his mind fully” (Sacks 34). Language is more than a simple substrate. It is not just strings of symbols on a page or successive utterances, vibrations in the air produced in response to breath pushed out over the larynx. Rather, it exerts an active force on our thoughts. In a concrete way, Wittgenstein is entirely right in saying that “the limits of my language mean the limits of my world,” particularly with respect to language’s absence. The essential function of language is not merely to enable the mechanical act of communication, but to produce the shared conceptual understandings that make communication possible. Unexposed to language’s symbolic inheritance, people are largely restricted to a world of immediate sensation, devoid of any abstract, communal concepts. These isolated souls are perhaps the only true individuals.

Though I have no direct experience with deafness, I have always had the vague idea that others possessed an ability that I did not. At various times, I’ve attempted to analogize it as a lack of “social proprioception” – proprioception being the ability to sense the orientation of your body in your environment. It is another one of those sedimented capacities that is invisible until absent. It allows you to move quickly and freely without having to consciously think about where you are in space. Some people lose the ability or are born lacking it. They are still entirely mobile, yet each motion must be deliberate and conscious, which results in a wooden, uncanny movement.

While I can move my limbs intuitively, conversation has always demanded a concerted effort. My attempts are clumsy and awkward. I sound uncoordinated, as if my tongue were decoupled from the thoughts it articulates. My words lurch out either in a rapid staccato or a languid, hesitating drawl, interrupted by long pauses. I am fortunate to look the way that I do, and so my ineptitude is charitably mistaken for aloofness or disinterest. I wonder what I would have been diagnosed with if I appeared differently, or if I weren’t adept enough to compensate for my lack of social intuition with rote stories, each practiced until the artificiality of the performance had been erased.

I am excruciatingly aware that the concepts and grammars that sustain communication across distinct and fundamentally isolated consciousnesses are not without their costs. Damage had to be done to facilitate this communication. Nietzsche says that “the history of language is the history of a process of abbreviation.” (Nietzsche, Beyond, 216). I’ve described already the cost of maintaining the shared illusion, which is the lie that each of us can truly speak and be heard. We compress ourselves and make ourselves similar, common. Still, when we talk to each other, though we speak in the same language, it has been subtly particularized. If language is mere abbreviation, then the dictionary is imprecise – contingent upon various experiences and individual interpretations.   

I am struck by how much is lost in that vertiginous space between consciousnesses, an intellectual Sargasso littered with the wrecks of ill-fated voyages from one mind to another. Or perhaps this distance is better conveyed as a desert that must be traversed: a dry and barren plain whose unyielding sands overwhelm all but the best-equipped caravans that attempt a crossing. I imagine a vast expanse, like those depicted by Dali. Wastelands studded with heaps of broken images, jettisoned or abandoned in transit.

Language enables the journey, but the delicate, and entirely personal, structure of an idea is denatured in the transition from one mind to the next. The cost of thinking is the sacrifice of specificity. To convey an idea, one must express it in stable, universal concepts, which are the work of the collective. For my part, I prefer my ideas to remain in my head. Language has a way of dulling them. The damaging conversion to thought is one to be avoided if it can be helped. My attempts to communicate an idea are almost always an exercise in mediocrity. I feel that my language doesn’t have the expressive capacity to present anything beyond the most tenuous contours of an idea, the haziest depiction, always in an autumnal hue, never with the vibrance of a living thought. “We immortalize what can live and fly no longer – only weary, mellow things! And it is only your afternoon, you, my written and painted thoughts … but nobody will guess how you looked in the morning, you sudden sparks and wonders of my solitude.” (Nietzsche, Beyond, 327).

Bibliography

Durkheim, Emile. The Elementary Forms of Religious Life

Nietzsche, Friedrich. Beyond Good & Evil

Nietzsche, Friedrich. On the Genealogy of Morality

Schopenhauer, Arthur. Essays and Aphorisms

Sacks, Oliver. Seeing Voices

The Physiology of Politics

1

Friedrich Nietzsche binds the political to the physiological. His political intuitions emerge from a theory of breeding: that cultures and peoples are shaped by the conditions in which they are produced. Adverse conditions breed strength and conformity of type, while superabundance leads to variation, “whether as deviation (to something higher, subtler, rarer) or as degeneration and monstrosity” (211). In Beyond Good & Evil, he pushes against the false universalism that he sees as ascendant in Europe, and injects the countervailing idea that advancement of man-as-species requires the recognition of hierarchy, of distinctions between men. Nietzsche sees the political movement toward democracy as the outer-works of a “tremendous physiological process,” the leveling and mediocritization of biological man, which is taking place concurrently (176). He condemns this future of commonness and vulgarity, but suggests that it may yield, in the exceptional cases, strong “human beings of the most dangerous and attractive quality” — a new Nobility (176).

For Nietzsche, what is noble is “all that is rare, strange, privileged, … and the abundance of creative power and masterfulness” (139). Nietzsche emphasizes the elevation of the noble: “the higher man, the higher soul, the higher duty, the higher responsibility” (139). Noble qualities imply a hierarchy. If the noble soul “knows itself to be at a height,” this elevated status raises the question: higher than what? (215).

Nietzsche’s political theories are tinged with a proto-evolutionary logic. He argues that adversity in conditions produces a “fixed and strong” type of man (210). A type, a species, a culture learns to prevail because it must prevail, as failing to do so risks extermination. This “long fight with essentially constant unfavorable conditions” both fortifies and culls (210). Through conflict and conflagration, the essential qualities that allowed a people to “always triumph” reveal themselves more readily (210). Hardship clarifies the demands of necessity. These qualities are the “conditions of existence” and they alone are baptized as virtues and cultivated as such (211). Those who possess these common traits are valorized, while aberrant individuals are at a severe disadvantage. They will not survive or reproduce, as they “easily remain alone, succumb to accidents, being isolated, and rarely propagate” (217). Early aristocracies, such as the ancient Greek polis or the city state of Venice, were embedded in such hostile conditions and thus produced “a type with few but very strong traits” (210).

The emergence of the concept of nobility coincides with conquest and exploitation; the stronger, more barbaric type of man-as-species asserting dominance over “weaker, more civilized, more peaceful” types (201). The noble caste always began as the barbarian caste. It was a ruling group which reflexively determined what was ‘good’ by looking inward at itself and fixating on those qualities that “conferred distinction and determined the order of rank” (204). What was noble was that which distinguished the rulers from the ruled; the characteristics of the ruling class — “severe, warlike, prudently taciturn” — were forged in the fire of existential necessity (211).

Aristocracy and nobility begin with an initial act of domination; they owe their origin to the brute physicality of the barbarian overcoming a more civilized culture. However, this is not their end. Nietzsche believes that the enhancement of man-as-species stems from the structure of aristocratic society. The ingrained differences between a ruling caste and its conquered subjects birth a new urge, a desire for “the development of ever higher, rarer, more remote, further-stretching, more comprehensive states” (201). When Nietzsche asserts that “society must not exist for society’s sake,” he has this higher purpose in mind (202).

In order for enhancement to occur, a society must believe “in the long ladder of an order of rank and differences in value between man and man” (201). The mandate of political life is not to superimpose a false equality or to extend superficial rights, but to construct the “foundation and scaffolding” on which superior individuals can develop. Society is not produced by a collective for the improvement of all, as other political theorists suggest; instead it is justified by the betterment of an elite class. A healthy aristocracy experiences its own reproduction as the “meaning and highest justification” of political life (202). The ruling caste must willfully accept the sacrifice and reduction of “untold human beings” in order to sustain itself (202). The central conceit that “life is essentially appropriation” cannot be corrupted or abandoned (203). Nobles require a forceful belief in their own ‘goodness’ and right to rule, for if this requisite belief fades, the enhancing function of aristocracy decays in tandem.

The political movement toward democracy reveals the fragility of this belief in the modern context. The “ordinary consciousness” of Europeans resists the idea that society must be exploitative (203). Although Nietzsche rejects the “nonsense of the ‘greatest number'” — as noted earlier, for him, the end of political life is not the greatest good, but the production of a superior type — the opposing value is gaining momentum (117). And as the unfavorable conditions which maintained the virtues of the aristocracy no longer exist, the structure itself is debased. A democratic structure of governance can only result from a corrupted aristocracy, one that “sacrifices itself to the extravagance of its own moral feelings,” as did the French aristocracy before the revolution (202). The political agitation for democracy, Nietzsche argues, is not “only a form of the decay of political organization but a form of decay … of man,” reflecting European man’s impending mediocrity and diminution (117). He believes that “the democratization of Europe leads to the production of a type that is prepared for slavery in the subtlest sense” (176). The source of the corruption, according to Nietzsche, is, in some sense, biological.

European modernity severs the link between a type of man and the conditions and climate which produced him. Europeans are becoming homogenous. Peoples have become “more and more detached” from the conditions of their origin and thus more similar to each other (176). There is less that makes a people unique, fewer distinctions which can separate one type from another. Europeans are “increasingly independent of any determinate milieu that would like to inscribe itself for centuries” (176). The harsh — and specific — conditions that once enforced virtue and produced the noble type come to an end, and “the tremendous tension decreases” (211). The old requirements, which enabled existence under adversity, no longer appear necessary. Given conditions of abundance and adequate protection, variation becomes possible. The individual dares to be different. During this period, the variety of forms and modes of living explodes. Some will be improvements on the previous type of man, but many will merely be degenerate forms.

Nietzsche presents a highly physiological, and somewhat deterministic, theory of political life. He interrogates the origins of peoples and cultures, exploring how varied situational conditions can impact the political realm, and is preoccupied by the portent of degeneration that he sees. He suggests that society should not be constructed for the benefit of a degenerate majority, but instead for the development of a select few, the Noble. Nietzsche provocatively asks, “today—is greatness possible?” (139). A retrograde, backwards-looking greatness is not; the conception of the noble is the product of specific conditions which, under modernity, may no longer exist. However, while ‘modern ideas’ lead to a general mediocrity across the population and degrade previous manifestations of greatness into “an archaizing taste,” they involuntarily present a fertile ground for the cultivation of a new nobility (211).

 

Nietzsche, Friedrich. Beyond Good & Evil. Translated by Walter Kaufmann.

Reconciling Antinomy: Durkheim on Apriorism & Empiricism

ø

In The Elementary Forms of Religious Life, Emile Durkheim reconciles the apparent antinomy between apriorism and empiricism. In order to unify these two approaches to understanding, Durkheim explores the source of their contradictions: an “essential duality,” which goes by many names — sacred and profane, part and whole, specific and general, material and ideal — all of which, Durkheim suggests, are various incarnations of the same relationship, that between the individual and society (39). Durkheim succeeds in situating the external truth that characterizes apriorism within the constraints of the material world by explaining reason as a social phenomenon, constructed via collective thought and imposed by the moral authority of society. Sociology, the new science of man, is the product of this synthesis.

The classical understanding of knowledge can be divided into “two contrasting doctrines:” empiricism and apriorism (15). Each offers an apparently irreconcilable explanation for the source of the categories of understanding — the “notions of time, space, genus, number, cause, substance, [and] personality” — which enable rational thought and are a basic requirement for social life (11). Empiricism asserts that “these categories are constructed, made of bits and pieces, and it is the individual who forges this construction” (15). The individual is subjected to repeated sensations and experiences and, in order to make sense of the maelstrom of perceptual inputs, the mind categorizes. In contrast, apriorism maintains that these categories are “simple givens, irreducible and immanent in the human mind by virtue of its inherent make-up” (15). Reason — “the whole set of fundamental categories” — precedes experience; it is an essential feature, a precondition, of human cognition (15). Each of these explanations has its own merits. Empiricism reflects the reality of the individual biological being, while apriorism reflects that of the social being. Yet both also present difficulties: empiricism’s irrationality and apriorism’s unknowability. Durkheim sets out to reconcile the two.

Empiricism asserts that reality can be understood through the senses. All that is empirical is “essentially individual and subjective,” as sensation is tactile and specific (15). A sensation exists only for those who experience it and remains open to divergent interpretation. The reaction that one individual has to a specific stimulation has no bearing on how another will necessarily respond. For empirical theorists, the relative uniformity of the categories across individuals reflects “illusions that can be practically useful,” but no fundamental truth about reality itself (15). Indeed, this is one of the difficulties that Durkheim notes: that “to reduce reason to experience is to conjure it away” (16). While the material world may impose a sensation, the individual is master of how he conceives of it and of his response to it. The categories of understanding constitute “the common ground where all minds meet” (15). They are essential as a function of their universality; without this set of “homogenous” concepts there could be no “agreement between minds” (19). It is only because all men’s thoughts are framed by these notions that common life is possible. Therefore, the implications of empiricism are abrasive to this necessary universality: “[i]f reason is only a form of individual experience, there is no more reason” (17). This damage to reason explains Durkheim’s claim that “classical empiricism verges on irrationalism” (16).

Durkheim finds the alternative to be superior. He endorses the basic thesis of apriorism that “knowledge is formed from … two distinct and superimposed strata” — the phenomenal world of ideas and ideals layered onto the noumenal, material world (17). However, while he is amenable to the apriorists’ conclusions, he questions their grounding: to merely assert that reason “is inherent in the nature of human intelligence is not an explanation” (17). By supposing that categories are innate, Durkheim sees apriorists attributing a transcendental power to the human mind: the inclination to accept certain, necessary ideas without “previous examination” or “additional proof” (18). This unusual capacity goes unexplained and unjustified. Durkheim contends that apriorists rightfully reject the constraints of empiricism, allowing for an unimpoverished conception of reason. However, they subsequently consign the source of reason to inscrutability, placing it “beyond the boundaries of nature and science” (17).

For centuries, philosophers seeking to explain man’s “superior and specific faculties” were given this choice: either to ground reason purely in the material world and individual senses, which in effect extinguishes its claims to universality, or to attach it to a “supra-experiental reality that was postulated, but whose existence no observation could establish” (342). Durkheim seeks to maintain reason’s universal character while exposing its source to scientific interrogation. He believes that apriorist principles can be explained without resorting to obscurantism. His newly formulated theory of knowledge “grants reason its special power but accounts for it without leaving the observable world” (21).

Durkheim begins his inquiry with the “well-known formula” that “man is two-fold” (18). Man exists simultaneously as an “individual being that originates in the organism” and a social being that channels the moral and intellectual framework created by society (18). As an organism, man is unremarkable. Like all biological beings, individual humans are specific. They are oriented in time and space, situated in a physical body, and constrained by the limitations of the material world. The reality that individuals directly experience is purely empirical. This state is sufficient for isolated, biological beings to fulfill their needs: “sensations are adequate to guide them automatically” (338). Durkheim notes that animals are frequently observed to return to familiar places at the proper times without needing to invent categories. Man, without his sociality, is reduced to mere instinct. He is isolated, unable to communicate, to carry out the rich “intellectual commerce” that characterizes social life (329). He is confined to his own consciousness and, thus, “reduced to only individual perception, he [is] inseparable from animal” (334).

However, man in society is not like other animals. While animals participate in the world solely through momentary sensation, human beings perceive an additional layer superimposed onto the material world. It is this duality that separates man from beast. Human beings experience the world through the senses but, in addition, have “the capacity to conceive of the ideal and add it to the real” (316). The scholars of apriorism would suggest that the ideal is immanent in the universe and that, through reason, man uncovers it, while their empiricist counterparts would argue that experience leads to an individually constructed ideal. Durkheim proposes a third path, that of a socially constructed ideal. Durkheim describes how, when “collective life reaches a certain degree of intensity,” the individual undergoes a profound psychic change: his senses are overstimulated and something new is awakened within him (317). In this moment, he is overwhelmed with powerful sensations and, in order to account for them, this primitive man superimposes a new world upon empirical reality which “exists only in his thoughts”; this is the world of the ideal (317). Collective life simultaneously provides the basic substrate for society, a “consciousness of consciousnesses,” and gives rise to the ideal, thus endowing humanity with its dual nature (339).

Durkheim’s fundamental insight is that society should be thought of as more than a simple collection of individuals, more than a set of laws and institutions. Society is the “system of active forces” that manufactures the conditions for social life and could not exist without them (343). It makes man something other than himself. “[C]ollective thought is possible only by the grouping together of individuals,” and this grouping would be untenable without the production of the ideal (342). Organizing any collective activity, such as a feast or a hunt, demands that all individuals in the group have a shared understanding of time and its subdivisions. To cooperate at all generally requires agreement upon a shared goal and the means of attaining it. Thus society must enforce a “minimum of logical conformity” because the basic pursuits of civilization would be impossible without common beliefs (19). If every individual were free to construct their own set of categories, nothing would be common and collective life would be impossible. In other words, the creation of an ideal “is not an optional step” or a “finishing touch” — society is the “idea it fashions of itself” (317, 318).

So society extends this set of shared ideals; it imposes parameters on thought and creates a conceptual vocabulary which can be used to describe reality. However, society cannot exist without specific instantiation in the minds of individuals. It is real “only to the extent that it has a place in human consciousness” (257). It requires physical organisms to act as hosts. These hosts are ‘infected’ with the ideal and tether society to the material world. However, the essence of society cannot be understood through its origins in the amalgamation of individual biological beings. From this essentially particular substrate, something permanent and stable is produced. While society presupposes collective life, it is not defined by this prerequisite, but rather by the new and inherently different reality that it generates. The birth of society is inseparable from the birth of the ideal; society “has created [a] new world by constructing itself” (318).

The invention of this ideal world of “impersonal aims and truths,” which bears a striking resemblance to that conceived by the apriorists, remains grounded in the empirical world; it is a product of “cooperation [between] particular wills and sensibilities” (342). Society, situated “above individual and local contingencies,” introduces a new mentality (340). It compels individuals to think in an impersonal and stable manner, which subsequently developed into organized and collective thought. As social reality is external to any single individual, universal categories can still be formulated — though now as a social fact, rather than Kantian truth. Durkheim rightfully conceives of these categories as “artful instruments of thought” that are the products of centuries of intellectual labor (21). The truths generated by society are common to everyone and do not bear the mark of any individual’s consciousness. Such collective artifacts “present guarantees of objectivity” as a function of their persistence; if they did not correspond to a material nature, they would not continue to hold “extensive and prolonged” sway in people’s minds (333).

Durkheim’s new science of sociology is a compelling synthesis of apriorism and empiricism. Apriorism’s vision of universal reason before experience has been salvaged and, at last, its origins are accounted for. Impersonal reason is recast as “another name for collective thought” (341). The ideal becomes simply “a natural product of social life” (317). Furthermore, despite his affection for apriorism, Durkheim does not go as far as to entirely eschew the notion of any empirical reality. Indeed, he asserts that social reality will necessarily correspond to the nature of things, encapsulating an essential principle of empiricism. Durkheim, by examining the sources of the antagonism between empiricism and apriorism, effectively integrates the two.

Rousseau and Locke on Property and the State

ø

Jean-Jacques Rousseau and John Locke each explore the origins of the state, seeking its essential purpose and the source of its legitimacy. Their inquiry diverges over the question of property, specifically over whether property precedes the state. For Locke, property rights arise prior to the state as an element of natural law, whereas for Rousseau, a social contract is a necessary precondition for the creation and legitimacy of property rights. This subtle distinction metastasizes into a salient difference between Rousseau’s vision of the general will and Locke’s view of supreme power. The essential purpose of the state differs between them: the Rousseauvian contract fosters civil equality, while the Lockean compact preserves natural inequality.

Locke asserts that private property precedes the state; legitimate ownership is not created by contract, but derived instead from a natural right. For Locke, the origins of property can be traced to one’s undeniable ownership of one’s own physical body: “every Man has a Property in his own Person” (Second Treatise, Ch. V, 287). From this original ownership over the body, the Lockean understanding of property unfolds. Labor, the physical actions that constitute “the Work of [one’s own] hands,” mixes the sole thing that man can claim legitimate ownership over, his corporeal body, with raw, natural material that is common to all (Second Treatise, Ch. V, 288). This exertion removes the object of his labor from the “common state Nature placed it in,” annexing it as his own and excluding it from other men (Second Treatise, Ch. V, 288). By mixing his labor with some common resource, man ‘fixes’ within it something that is unequivocally his and thus “makes it his Property” (Second Treatise, Ch. V, 288). Notably, this conversion occurs without the “assignation or consent of any body” (Second Treatise, Ch. V, 289). Locke’s conception of a right to property relies directly on the axiomatic belief that man has incontrovertible possession of his own body. By exercising this sole object over which he has complete ownership, man can plant the same seed of ownership in other resources that are external to him and common to all. He affixes part of himself within them and thus can rightfully claim them as his own. For Locke, no collective agreement is necessary for the creation of private property, as reason itself vindicates and affirms this right. Labor endows property with its legitimacy.

Rousseau, on the other hand, finds nothing natural in the institution of private ownership. Property is a right that cannot exist before contract. It is not the product of reason or natural law, but rather the culmination of the “most thought-out project that ever entered the human mind,” carried out by a few ambitious men for their own profit (Second Discourse, Part II, 79). Property, for Rousseau, is merely the name given to an “adroit usurpation” that gained state sanction and was thereby converted into an “irrevocable right” (Second Discourse, Part II, 79). While Rousseau sketches out a familiar process by which the idea of property emerges—from the cultivation of land to its division, labor conferring the appearance of ownership—he refrains from granting this right any manner of true legitimacy. Rousseau splits the mere act of possession from any moral right. In the state of nature, each can lay claim to physical control over their holdings, yet given the constant specter of expropriation, this form of ownership is tenuous. One can state the empirical fact that one controls one’s property, yet these grounds are insufficient. Possession is decried as a “precarious and abusive right” lacking any justification beyond an appeal to brute force (Second Discourse, Part II, 78). As the right to property in the state of nature is derived through force alone, it could justifiably be superseded and appropriated by any greater power. Though individual labor coupled with continued possession provides an explanation for the idea of property, any right was implicitly sustained by strength.

For Locke, property is a natural right that precedes any collective agreement; thus, the creation of the state occurs later. Rousseau rejects this view, attributing the creation of property to “convention and human institution,” so it necessarily follows the formation of society (Second Discourse, Part II, 84). This subtle difference in sequencing dramatically alters each philosopher’s conception of the legitimate role of the civil state. The contours of the process by which a new state is formed are strikingly similar; however, the essential purpose of the state is distinct. Locke envisions a right secured by the state; Rousseau, a right created.

Locke sees “the preservation of Property being the end of Government”; that goal provides the impetus that drives men to join together and enter society (Second Treatise, Ch. XI, 360). For Locke, it is “obvious” that legitimate property exists before the state, yet “the Enjoyment of it is very uncertain” (Second Treatise, Ch. IX, 350). So, on Locke’s account, man joins society for the preservation of a preexisting right rather than the creation of a new one. As property rights originate in natural law, something innate and inalienable, the state’s ability to expropriate must be curtailed. Locke emphasizes the protection of property when enumerating the limits of the sovereign: “Supream power cannot take from any Man any part of his Property without his consent” (Second Treatise, Ch. XI, 360). The prominence given to this argument makes sense, as it would be an “absurdity” for men to submit themselves to the restrictions that society imposes without at least gaining the security over their holdings that was promised in the initial contract. However, if property is sacrosanct, then the differences that result from natural inequalities — as “different degrees of Industry were apt to give Men Possessions in different Proportions” — are legitimized by the state (Second Treatise, Ch. V, 301).

Rousseau believes that “it is utterly on the basis of … common interest that society ought to be governed” (Social Contract, Book II, Ch. I, 170). The sovereign should rule, in other words, in accordance with the general will, which favors equality. The general will can be ascertained by summing up all the individual wills and cancelling out any particular differences. While “the private will tends towards giving advantages to some and not others, … the general will will tend towards equality,” as it refuses to prioritize any one individual’s perspective (Social Contract, Book II, Ch. I, 170). For Rousseau, the needs of the community are always elevated above the preferences of individuals. For example, “[e]ach private individual’s right to his own land is always subordinate to the community’s right to all” (Social Contract, Book I, Ch. IX, 169). As Rousseau believes that property derives its standing solely from the authority of the collective, the collective is therefore empowered to determine how these rights should be allocated. Society acts with a “universal compulsory force to move and arrange each part in the manner best suited to the whole” (Social Contract, Book II, Ch. IV, 173). The goal of the social contract is not to preserve property but to create a new equality upon the substrate of an unequal reality. The social contract “substitutes a moral and legitimate equality for whatever physical inequality nature may have been able to impose upon men” (Social Contract, Book I, Ch. IX, 169). Men are made equal by society; the state is advantageous to men only insofar as they all have something and none of them has too much.

At a cursory reading, the respective societies proposed by Locke and Rousseau appear quite similar in structure; one can find many homologies between the two. However, in their essential roles, they could not be more different. For Locke, men enter society for the simple purpose of protecting existing rights; this is the central function of the state. As the source of these rights is outside of the purview of (and prior to) the state, the government is limited by them; there is a higher authority to which men can appeal. In contrast, there are no limitations on the power of the general will: “the social contract gives … an absolute power over all of its members” (Social Contract, Book II, Ch. IV, 173). All rights are constructed by the community and come from within it. As rights for Rousseau are a social creation, he is willing to grant society the power to transfigure itself radically in order to attain a new civil equality. For Locke, the preservation of existing rights is paramount, which, in effect, maintains natural inequalities.

The Irrational Origin of the Belief In Truth

ø

The Genealogy of Morality begins with an articulation of human ignorance with respect to the self: “we remain of necessity strangers to ourselves, we do not understand ourselves, we must mistake ourselves…” (1). What is the necessity that demands this ignorance? What is the “good reason” that knowers cannot know themselves (1)? This provocative question, this apparent paradox, is never explicitly resolved. I argue that the requirement of self-ignorance does not spring from intellectual coercion or psychological aversion, but rather is the expression of a tautology. For Nietzsche, in order to qualify as a “knower,” one must believe in truth; one must have faith that there are things that can be known — and to know oneself destroys this requisite belief. The language of the text confuses, eliding the distinction between knowing as an act of believing in truth and knowing as personal awareness, as knowing oneself. But the two are distinct, and Nietzsche believes that the latter precludes the former. To know oneself, to stare into the abyss, leads to awareness of the ascetic ideal and thus, ultimately, to the understanding that all idols and prophets are false and cynical; that science is a charade; that the value of truth itself has no foundation. Thus, in order to know about the world — to believe in reason, calculation, causality — one cannot submit to the type of deep introspection required to truly know oneself, to be self-aware.

Now to examine the origins of the knower. Throughout the Genealogy, Nietzsche evokes them, including them as co-conspirators, as confederates in his effort to awaken man from his hibernation and inactivity. However, this is merely a rhetorical device. The knowers are not intentional participants in this project. They are those committed to a “naturalistic or scientific view of the universe” (120). However, the knower, the scientist-philosopher, the priest, all function similarly and, at this stage, the will to truth compels them to listen. Thus, they contribute against their will. Nietzsche asserts that “all great things necessarily perish through themselves, through an act of self-cancellation” (117). Eventually the law is applied to the lawgiver himself: in this case, when “Christian truthfulness” becomes conscious of itself, applies the rigorous scrutiny to which all else has been subjected, and asks the question “what does all will to truth mean?” (117). The knower’s will to truth ultimately negates itself by mandating the inquiry that reveals its irrational foundations.

To appreciate the principle of self-cancellation, it is necessary to understand how science arose from asceticism; how it was shaped and reinterpreted by the ideal. Nietzsche claims that science is “not the opposite of the ascetic ideal but rather its most recent and noblest form” (107). He sees modern science as a new manifestation of this millennia-old belief. The ascetic ideal is the product of humanity’s fundamental need for self-deception. It emerged to salve the inadequacy of social life to the defanged, declawed, defenestrated being — whose civilization robbed him of the ability to rely on instinct. It infused suffering with meaning and through this infusion filled the “enormous emptiness” that pressed upon man (118). Without the comfort it confers, the essential meaninglessness of existence would be overwhelming. However, asceticism is a peculiar salve. It sickens those who take it as medicine, yet they feel better for it. It does not alleviate suffering, but it provides the narrative scaffolding that makes the pain meaningful through “religious interpretation and ‘justification’” (101). It offers this sublime release by assigning culpability: forging a meaning for suffering in the nourishing admonishment that “you alone are to blame for yourself” (92). Yet from this simple mantra grows modernity. Religion, philosophy, science are all distinct apparitions of the same underlying phenomenon. And if one looks closely, it is not hard to find the threads that tie the ideals of science to the ascetic ideal.

The demeanor and pose of the scientist map to those of the ascetic priest. The scientist’s “will to neutrality and objectivity” is grounded in a self-denying instinct (79). What is impartiality if not literal denial of the self, the intentional suppression of the subjective? Furthermore, the philosopher-scientist denies the senses, “demoting physicality to an illusion” (84). But this correspondence does not explain why the concept of truth relies on the ascetic archetype. How did knowers come to bear the mantle of asceticism?

Nietzsche suggests that “[c]ontemplation first appeared on earth in disguised form,” out of necessity (81). The “inactive, brooding, unwarriorlike” instincts of these original thinkers made them the objects of suspicion and mistrust (81). Thus, their continued existence required that they adopt “the previously established types of contemplative human beings —as priest, magician, soothsayer, as religious human generally” (82). And so, the prototypical philosopher-scientist was formed in the mold of those who came before. To sell the illusion required that they act their role, and to act well they had to believe it.
But, to reiterate Nietzschean methodology, the “cause of the genesis of a thing” is divorced from its final function (50). The two must be understood separately. If, in the beginning, science inherited its asceticism out of existential necessity, the ideal has since reinterpreted itself. The tenets of modern science, rather than just the outward appearance and behavior of its practitioners, have been infected by asceticism. The ascetic ideal, through science, as through religion before it, advocates the complete removal and denigration of the self in the name of ‘truth.’ Science is, as a result, directed by this “unconditional will to truth” which requires “the renunciation of all interpretation” (109). There is no space for sensuality or individuality: everything must be reduced to that which can be observed ‘objectively.’ But the demand for objectivity requires “[t]hat we think an eye that cannot be thought, an eye that must not have any direction” (85). Objective truth aims to remove any trace of the self from the act of knowing — the eye is “trained for an ever more impersonal appraisal” — which Nietzsche sees as an “absurdity” (50; 85). For him, there is “only perspectival seeing, only a perspectival knowing” (85).

Knowers, adherents to this cult of objective truth, “are mistrustful of every kind of believer now” as a strong faith “raises suspicions against that in which it believes” (108). So science ‘overcame’ God and exorcised what was “exoteric about [the ascetic] ideal” but this only renewed its vitality. The ideal, now “stripped of its outworks,” was reduced to its core proposition: “its will to truth” (116). The habit of truthfulness, which originated in Christian confession, was surreptitiously “sublimated into scientific conscience, into intellectual cleanliness at any price” (116). And so, while belief in gods became fanciful, belief in truth, despite its divine origin, became “more firm and unconditional” (109).

Here lies the tautological contradiction that condemns “knowers to be unknown to themselves” (1). Modern science is predicated on a subtle metaphysics; so subtle that those who claim to see the world as it is — the knowers, the scientists, the scholars, the “trumpeters of reality” — necessarily overlook it (107). And what is it that they are not permitted to see? That their belief in truth is derived from the apocryphal axiom that “God is truth, that truth is divine” (110). Despite the godlessness and materialism of modern thinkers, they “still take [their] fire from that great fire that was ignited by a thousand-year old belief” (110). They are blind to their role as heirs to a grand delusion; they “stand too close to themselves” and thus overlook the hollowness of the foundations that sustain their belief (109). In fact, they cannot even conceive of the need for any foundation at all: they think they float in the ethereal realm of absolute truth, and buoyed by that conviction, they overlook its divine and irrational origin. They are honest victims of the “dangerous old conceptual fabrication that posited … such contradictory concepts as ‘pure reason,’ ‘absolute spirituality,’ ‘knowledge in itself’” (85).

The concept of truth emerged from deception: “truth was posited as being, as God, as highest authority; because truth was simply not permitted to be a problem” (110). For a knower to know themselves (in other words, to be self-aware), they would be asked to do what they are “not permitted” to do: to “open their eyes towards themselves, [to] know how to distinguish between ‘true’ and ‘false’ in their own case” rather than kowtowing to the inherited proscription (100). Knowers would be forced to justify the will to truth — to prove the value of truth itself, now that it has been stripped of its divine authorization (110). And it is not clear that such a justification exists. This final introspection is the culmination of “a two thousand year discipline in truth, which in the end forbids itself the lie involved in the belief in God,” thus debasing itself entirely (116).

Yet, for Nietzsche, this event, this self-overcoming of Christian truthfulness, is a “hopeful” one (117). His call to self-awareness and his disdain for the ascetic ideal do not constitute an entreaty to understand reality more precisely, with more accuracy. He is instead concerned for the progress of mankind. Nietzsche views modernity as diseased and effeminate; dependent on meaning provided by the ascetic ideal, a self-denying delusion that constricts the strong and beatifies the weak. He fears that modern man, through his embrace of the ascetic ideal, lives comfortably “at the expense of the future” (5). Self-awareness hastens the end of this sickened state by moving truthfulness closer to its ‘self-overcoming,’ which will throw the world into disarray and, perhaps, make life “worthier… of living” (80). When the old systems of morality have been weakened, something new can replace them. With the constraints of ascetic morality abandoned, the strong are unshackled and once again able to stamp their “own functional meaning onto” reality (51). The pending self-sacrifice of truth may destabilize the world, destroy nations, impose a new suffering upon the many, yet these are the conditions under which Nietzsche believes improvement can occur. Remember: “the forfeiture of meaning and purposiveness … belongs to the conditions of true progress” (51).

Dependence, Freedom & the Structure of Obligation


If human beings are entirely dependent upon one another, as Adam Smith suggests, then how is it possible for them to be free?

Does the state of freedom preclude dependence? Can one be free while unable to exist in isolation? Rousseau and Smith clash on this point because of their distinct views on human nature. For Smith, freedom is a result of fortuitous accidents; it is historically contingent, a product of selfish market forces rather than high-minded political intention. This contrasts with the Rousseauvian vision of a highly intentional social contract, where freedom is built upon a deliberate and considered conversion from self-interested individuals into a body politic. Dependence, in its various forms, is central to the stories that both authors tell about freedom. Rousseau condemns it as humanity’s original sin, while Smith embraces it as not only natural but also beneficial. However, this apparent tension is, in fact, entirely superficial, stemming from the conflation of two distinct types of dependence, one centralized and the other distributed. Both types of dependence entail a reliance on others, yet the structure of these obligations is different. Centralized dependence, where multitudes are maintained by just a few, corrodes freedom and condemns men to slavery. On the other hand, distributed dependence, characterized by a complex web of mutual obligations between many distinct individuals, not only permits, but also promotes, freedom.
According to Rousseau, human beings, in their natural state, were free because, like animals, they relied upon no one but themselves. Each man, he envisioned, had no “greater need for another man than monkey or wolf has for another of its respective species” (Rousseau 60). Entirely independent of everyone, Rousseau’s savages were endowed with natural freedom, a primitive type of liberty where man was free to do as he wished, to give in to any passing temptation, to live by himself on the fruits of his own labor (Rousseau 167).
Rousseau, seeing freedom as a property particular to natural man and dependence as the fulcrum that lifts man from this state, believed the latter to be a corrupting force. He asserts that dependence leads to degeneration, pointing to the physical differences that can be observed between domesticated animals and their wild counterparts: “The horse, the cat, the bull, even the ass … have a more robust constitution … in the forest than in our homes. … [I]t might be said that all our efforts at feeding them and treating them well only end in their degeneration” (Rousseau 51). By providing a comfortable life for livestock, man diminishes their natural vitality. And, he argues, the same must be true for human beings, if not to an even greater degree, as humans preserve comforts for themselves that they withhold from the animals that they domesticate (Rousseau 51).
Ultimately, though, these examples of dependence are analogies that Rousseau draws upon to make his political point: “The bonds of servitude are formed merely from the mutual dependence of men and the reciprocal needs that unite them, it is impossible to enslave a man without having first put him in the position of being incapable of doing without another” (Rousseau 68). No one can be compelled into slavery by another unless the alternative is to forfeit his life. But true dependence, where one is incapable of surviving without assistance, makes this type of coercion possible. Thus, dependence is the precondition of slavery.
Rousseau is not wrong in this assertion. However, his argument applies to a specific type of dependence. In The Wealth of Nations, Smith helps to draw the distinction between dependence that is “degenerative,” to use the Rousseauvian term, and that which is both necessary and beneficial. By Smith’s account, feudal Europe was subdivided into territories, each dominated by a “great proprietor” (Smith 440). By owning land, these proprietors controlled the entire surplus that it generated. Yet, without foreign commerce or finer manufactured goods, the bounty of the land could not be exchanged, only consumed: “If the surplus produced is sufficient to maintain a hundred men … [the proprietor] can make use of it in no other way” (Smith 440). The single use of the surplus meant that great proprietors were necessarily “surrounded with a multitude of retainers and dependents,” who were unable to provide anything in return (Smith 440). The proprietor’s land was already worked and additional labor was unnecessary. Thus, these dependents, maintained entirely by the proprietor’s largess, had to obey his command. Their state of dependence made them slaves to the lord who fed them. This is an example of the corrosive, centralized dependence articulated by Rousseau in his Discourse on The Origins of Inequality.
However, a shift from this state of subjugation, where all men exist either as tenants or dependents of a great proprietor, took place and produced commercial society, in which a vast number of differentiated laborers are sustained by many unique customers. In other words, a transition occurred from centralized dependence to distributed dependence. And, ultimately, this transformation can be traced back to the division of labor.
It is through humanity’s shared instinct to barter that the division of labor originally occurs (Smith 16). Talents are not equally distributed across a population and so some individuals have greater “readiness and dexterity” than others (Smith 16). They are able to produce some output more efficiently than their peers. However, if each individual were entirely independent, their unique abilities would be underutilized. An individual can only consume a finite amount, so, if exchange were impossible, there would be no incentive to produce beyond what was required by a single person. The ability to exchange goods to mutual advantage gives rise to labor specialization or, as Smith writes: “the study of his own advantage naturally, or rather necessarily leads him to prefer that employment which is most advantageous” (Smith 482).
Smith opens The Wealth of Nations by outlining the steps required to produce a single pin. It is an anecdote that illustrates his understanding of the division of labor: what was once completed by the exertion of a single individual now requires the work of dozens. The labor required to manufacture a pin has been fragmented: “One man draws the wire, another straightens it, a third cuts it, a fourth points it, a fifth grinds it at the top” with the production line continuing ad infinitum (Smith 4). Each person who is required to produce a pin depends on the labor of every individual prior to his step in the process as well as all those who follow him; the former to furnish him with the raw material which he works upon and the latter to continue the process to its conclusion.
By dividing the process into steps, each worker’s task is reduced to “some one simple operation” (Smith 8). And as this single operation becomes the “sole employment of his life,” the worker unavoidably improves his ability (Smith 8). Many more men are required for this type of production, but the labor of each individual is less significant and the skill required by each step more trivial. Smith shows that pins can be produced more efficiently when the effort is divided, but the efficiency gains come with a necessary corollary: dependence increases as well. However, the resulting dependence is distributed evenly, as each worker relies equally upon every other. This complex web of mutually advantageous dependence forms the basis of commercial society.
And commerce is what ultimately destroyed the feudal order. The great proprietors of Europe did not maintain their dependents out of kindness, but rather out of selfish opportunism. The surplus captured by proprietors could not be spent directly on their own betterment, so instead it was converted into power over other men. The emergence of foreign trade and fine manufacturing created an alternate outlet for surplus production, by which proprietors could “consum[e] the whole value of the rents themselves” (Smith 444). All men are naturally self-interested, and so upon finding a means of consuming the surplus without sharing it, proprietors did so. This gradually eroded the foundations of the feudal order as “for the gratification of the … most sordid of all vanities, [proprietors] bartered their whole power and authority” (Smith 444-5). When proprietors traded their surplus for expensive baubles instead of sharing it with their retainers, the cycle of dependence on “great proprietors” was interrupted.
In commercial society, unlike in the feudal order, dependence is highly distributed due to the division of labor. The wealthy remain wealthy, but their political power has decayed as now each worker “derives his subsistence from the employment, not of one, but of a hundred or a thousand different customers” (Smith 445). In commercial society as a whole, there is a huge degree of dependence in absolute terms, as the sustenance of every single worker might require transactions with hundreds of other individuals. But now even the wealthiest contribute “but a very small proportion… of [workers’] whole annual maintenance” (Smith 445). The centralized dependence that characterized feudal institutions has been replaced and, though a single wealthy individual might contribute to the livelihood of many more workers than before, “they are all more or less independent of him, because generally they can be maintained without him” (Smith 445).
So, if dependence can, in certain situations, be a wellspring of greater independence, is Rousseau’s understanding of the concept simply flawed? While Rousseau’s language doesn’t make the distinction between centralized dependence and distributed dependence explicit, the conceptual underpinnings of the social contract reveal that he accepted the division between the two.
In On the Social Contract, Rousseau states that there is a point when humanity as a whole can no longer exist without combining forces, and thus becoming mutually dependent. The obstacles standing in the way of progress become insurmountable if they are faced alone (Rousseau 163). So, as independent existence is no longer feasible, Rousseau attempts to formulate a “form of association” that minimizes the risk of degeneration that comes with dependence (Rousseau 164). This effort results in the social contract.
The social contract is an ingenious form of association that “defends and protects” every member while each “nevertheless obeys only himself and remains as free as before” (Rousseau 164). By surrendering one’s property and rights to the entire community, everyone is equal in condition, and because everyone has the same condition, “no one has an interest in making it more burdensome for the others” because they ultimately shoulder the increased load as well (Rousseau 164). In this process, man loses his natural freedom but gains “civil liberty and the proprietary ownership of all he possesses” (Rousseau 167). The ultimate effect is that the social contract aligns each individual’s self-interest with the public interest.
In his articulation of the social contract, Rousseau outlines the logic of freedom through distributed dependence, arguing that “in giving himself to all, each person gives himself to nobody” (Rousseau 164). Rousseau suggests that when the degree of interdependence is so extreme, it loses its coercive properties and becomes an engine of unification.
Rousseau and Smith both believe freedom to be attainable despite human beings’ dependence on one another and, in some sense, because of it. Dependence is not uniform; it comes in various shades and flavors based on the structure of obligations. And depending on the character of dependence, outcomes differ. Centralized dependence leads to slavery and degeneration, while distributed dependence promotes individual freedom and independence. Both markets and social contracts are mechanisms that foster distributed dependence. The first by self-interest: each individual, discovering his comparative advantage, rationally chooses to specialize his labor and, in doing so, becomes dependent on the multitude of other differentiated laborers who perform tasks better than he ever could. The second by contract, where each individual cedes his rights and property to the collective and thus all become equally dependent upon every other. Each system of association dilutes the degree of reliance on any specific individual, instead spreading it in roughly equal proportion across society as a whole, each individual depending, reciprocally, upon every other. When dependence is well distributed, no man can be a slave. Though he relies upon a multitude of others — the butcher for his meat, the farmer for his grain — none of them can control him because they, in turn, depend on him.

 
