
OpenDNS and Firefox Search

07-Sep-08

Doc asks why opendns has broken the search-from-the-address-bar feature of his firefox. The problem is that the address bar’s failover to google relies on the dns request failing, but opendns requests never fail. Instead, if an opendns server gets a request for a non-existent address, it displays the opendns search/advertising page. That’s how they make their money. One of the many side effects of this behavior is that when you type ‘cotcaro’ into the firefox address bar, firefox first tries to look up ‘cotcaro.com’ and ‘cotcaro.org’. On a normal dns server, those lookups would fail, and firefox would then try a search for ‘cotcaro’. But with opendns, the name lookup never fails. Instead, it returns the address of the opendns search/advertising page, so firefox never gets the chance to fall through to the search.
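(For the curious, here’s a minimal sketch, in Python with only the standard library, of how you could check whether your own resolver hijacks failed lookups this way; the random domain name is just an illustration.)

```python
import socket
import uuid

# A hedged sketch: ask the configured resolver for a name that should not
# exist. An honest resolver returns NXDOMAIN (a lookup failure); a resolver
# like opendns "helpfully" answers with the address of its own
# search/advertising page, which is exactly what breaks firefox's fallback.
bogus_name = "nonexistent-%s.com" % uuid.uuid4().hex

try:
    addr = socket.gethostbyname(bogus_name)
    print("resolver answered %s with %s: NXDOMAIN is being hijacked" % (bogus_name, addr))
    print("firefox's address-bar search fallback will never fire")
except socket.gaierror:
    print("lookup for %s failed as expected; the browser can fall through to a search" % bogus_name)
```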

I’d advise Doc not to use opendns in any case. In addition to creating ugly side effects like the one described above by breaking the dns protocol, it makes its money by selling information about its clients to other companies in the way that all search / advertising companies do:

We are affiliated with a variety of businesses and work closely with them in order to provide our services to users. We will only share personal information with affiliates to the extent that is necessary for such affiliates to provide the services. For example, when a website visitor searches on OpenDNS, the IP address and query are shared with OpenDNS’s advertising partners.

The given example may describe what almost every search engine does to make its money (search on google, google displays some ads for other companies, and when you click on one of those ads, the company gets your ip address and the term you searched for). But the language in the clause allows opendns to sell any of your personal information to any of its customers “to the extent that is necessary .. to provide the services.”

Update: language edited slightly to make clear that opendns just seems to be selling information about its clients in the same way that other search / advertising companies do.

Midnight Piggybacking

14-Aug-08

So I’m sitting here at my excellent local Memphis honda repair shop getting Little Tokyo’s oil changed. In addition to being locally run, honest, and professional, the shop has wifi, so I can sit and work (or blog!) while getting my car fixed. The wifi wasn’t working today, so I asked the owner if he still offered it. The owner said that he does, but he only turns it on when asked now because someone has been “stealing” from him. Further questioning revealed that on three separate occasions, someone was working late at the shop and noticed a car idling outside the shop with a bright screen inside. Every time, when he turned off the wifi router, the car left.

Piggybacking someone else’s wifi is obviously nothing new. I’ve gone wardriving in a neighborhood in a pinch a few times (and even been accosted for sitting on a sidewalk in front of someone’s house once!). But in at least one case the car was idling in front of the shop from midnight until 5 in the morning. I’m struggling to think of a reason for sitting on the router for so long so late at night other than the need for anonymity for some illicit activity. I suppose it might be a group of teenagers just looking for some private place to access facebook away from prying (and possibly surveilling) parents, but that seems a stretch. Individual anecdotes are obviously dangerous to draw conclusions from, but the fact that this is happening in Memphis at my local car shop makes me wonder how common it is. Memphis is far from the cutting edge of Internet activity.

I keep my wireless network at home open on the principle that I don’t trust the network to be secure with or without transport layer security and that I’m happy to share access with anyone who wants to use it. I’ve always judged the risk of someone using the access to do something I could be liable for to be small enough not to worry about. This encounter makes me wonder whether I, like Bruce Schneier, should think harder about securing my home wireless network.

I also find it interesting that he was able to defend himself pretty effectively from what he viewed as an attack on his computers and network. As sophisticated as the wardriving attackers may have been, his simple defense of turning off the router until access is requested is pretty effective (though I strongly advised him to encrypt the network as well as turn it on only on demand). Even more effectively, the owner noted the license plate of the car on at least one occasion. So if the police show up at his door and try to arrest him for child pornography, he’ll have a license plate number to identify the users of his network. Midnight piggybacking as an anonymity technique could therefore in many cases be less effective than just showing up at a local coffee shop in the middle of the day: the worst scenario there would be a physical identification, which would in most cases be much more difficult to track back to a person than a license plate number.

Nigerian Searches for Spam

12-Aug-08

More google insights fun. Here’s the list of the top google searches from Nigeria:

Note that five of the top ten searches are for email extractor lite 1.4, a tool that pulls email addresses out of a block of text. In other words, it is useful for harvesting email addresses for spam. I won’t link to it for fear of google juicing it, but here’s a screen shot:

This agrees with the perception of Nigeria as the source of the ubiquitous Nigerian Scam spam, but it is surprising in that it seems to suggest that a very large proportion of Nigerian Internet users are involved in spam production. I’m having a hard time coming up with an alternative explanation of this finding. If some botnet were running email extraction on lots of Nigerian computers, it wouldn’t bother with a google search for the tool (it would just do the email extraction itself). One possible explanation is that email harvesting is contracted out to individuals who are left on their own to troll the Internet for pages with email addresses. Constant searches for the email extractor page would be consistent with not very technical folks getting paid for finding and harvesting email addresses.
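(For context, the core of an “email extractor” is trivial — essentially a regular expression run over a block of text. The sketch below is just an illustration of the technique, not the actual tool being searched for.)

```python
import re

# A minimal sketch of what an "email extractor" does: pull anything that
# looks like an address out of a block of text. This is an illustration
# of the technique, not the "email extractor lite 1.4" tool itself.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text):
    # Deduplicate while preserving the order the addresses appear in.
    seen = []
    for match in EMAIL_RE.findall(text):
        if match not in seen:
            seen.append(match)
    return seen

page = "Contact sales@example.com or support@example.org for details."
print(extract_emails(page))   # ['sales@example.com', 'support@example.org']
```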

Also note on the results page that the top rising search currently is Oceanic Bank, which seems to be a legitimate Nigerian Bank. But the web page for the bank includes a bright red Scam Alert that warns of widespread use of impostor Oceanic Bank sites for Nigerian scams.

Digital Cameras v. Nigeria

12-Aug-08

One of my guiding theories of the modern media / advertising landscape is that the extensive real-time surveillance of consumers by online advertisers and content providers encourages the growth of content about digital cameras (which is easily monetized) at the expense of hard news, especially international news about developing countries like Nigeria.

The following google insights chart of digital camera v. Nigeria searches over time strikes a blow against that theory:

Of course, this data says nothing about the amount of content produced about the respective topics, but the whole point of the google insights tool (which is targeted at advertisers) is to tell advertisers and content providers what sorts of content consumers are interested in. Content about digital cameras is likely still more profitable, since digital camera ad clicks presumably pay more than Nigeria ad clicks, but the decline in digital camera searches is still striking. It’s possible that this trend is merely the result of declining interest in digital cameras (which would itself be surprising), but the fact that searches about Nigeria have not decreased over time is interesting in itself. Quick checks of similar comparisons show that consumer product content is more popular than hard news content, but that there is no accelerating trend in that direction.

Google Insights: Newspaper v. Blog v. Magazine

12-Aug-08

I’ve been playing around with the new Google Insights for Search, which is targeted to advertisers but is terribly interesting for anyone interested in media issues. Here’s a comparison of searches for newspaper, blog, and magazine:

Worldwide

U.S.

Nigeria

Leaving aside the obvious qualifications about the limitations of this metric, the fact that blogs have become more popular relative to newspapers is obvious; it’s only interesting to see it visualized so clearly. But the slower rise of blogs in the U.S. vs. worldwide is not obvious to me, nor is the vastly higher (and growing) popularity of newspapers in Nigeria (one might guess, without any specific knowledge of Nigeria, that the technically sophisticated folks online there would be more likely to access blogs). It’s worth following the links to see further breakdowns of the data.

Much of my time will be lost (mostly productively) fiddling with this tool. Critically, the tool also includes csv exports for all searches, with terms of use that allow personal or research use, which will make it easy to mash this data up with other sources of data. One complaint is the lack of support for easily embedding the resulting graphs on other pages (I had to screen capture the above charts).
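As a rough example of the kind of mash-up the csv exports make possible, here’s a sketch that reads an export and computes a week-by-week ratio of blog to newspaper interest, which could then be joined against circulation figures, ad rates, or anything else. The file name and column names are my assumptions about the export format, not a documented schema:

```python
import csv

# A hedged sketch of mashing up a Google Insights CSV export.
# "insights_newspaper_blog_magazine.csv" and the column names
# ("Week", "newspaper", "blog") are assumptions for illustration.
with open("insights_newspaper_blog_magazine.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Ratio of "blog" to "newspaper" interest for each week.
for row in rows:
    newspaper = float(row["newspaper"])
    blog = float(row["blog"])
    if newspaper > 0:
        print(row["Week"], round(blog / newspaper, 2))
```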

FlyClear Data Breach

08-Aug-08

FlyClear, the company that runs express-lane security clearance at some U.S. airports, recently lost control of a laptop that contained personal data used to verify the identity of subscribers. The company has repeatedly pointed out that no social security numbers or credit card numbers were included in the data, as if that’s the only data that really matters:

The data in question on the laptop included a limited amount of the online applicant’s personal information, but did not include any credit information, including credit card numbers. And it did not include the applicant’s Social Security number.

Somehow, credit card numbers have become the standard for what constitutes identity theft. I would argue that stealing credit card numbers does not normally constitute identity theft in any meaningful sense — all the credit card number does is let the holder take money from a single account in a specific way. Calling credit card number theft identity theft is like calling physical key theft identity theft. The credit card is not used for generic identification but is instead only used for access to a specific resource, as is a house or car key.

Social security numbers are used for generic identification, though it’s a whole other conversation about how horrible they are for such a use (for instance, I’m constantly asked for my social security number as identity confirmation by organizations to whom I never gave the number in the first place). In any case, the breached data included “names, addresses and birthdates for people applying to the program, as well as driver’s license, passport and green card information,” the combination of which is certainly as valuable for identification purposes as a simple social security number.

In fact, the purpose of the data on the laptop was to allow confirmation of identity without access to the network, so without evidence to the contrary, we can assume that the compromised data would allow an attacker to masquerade as one of the compromised identities. This could be bad for the owner of the identity, but it seems much, much worse for the overall security of the security clearance process, allowing an attacker with the data to sail through the minimized security clearance process identified as one of the compromised identities. I can’t find reference to this vulnerability in any of the releases by TSA or FlyClear or in any of the news coverage, but to the degree that we take the air travel security clearance process seriously, this problem seems to be very serious.

In a letter sent to its subscribers (and, according to David Weinberger, folks who did not know they had subscribed) , the company claims that the data was not compromised because there were no logins on the compromised laptop while it was lost. This is a very deceptive (or ignorant) statement, because it assumes that the only way to access the data on the laptop was to start up the laptop. In fact, were I to want the data on a laptop, I would grab the laptop, take out the hard drive, copy an image of the hard drive, reassemble it, and then replace it where I found it, hopefully without anyone noticing that it was gone. In this case, the absence of the laptop was noticed, but the lack of logins to the laptop says nothing about whether the data on the hard drive was accessed.

The company also claims that the personal data was protected by “two separate passwords.” It’s not clear (so to speak) what systems used those two passwords. My guess is that at least one, if not both, of the passwords only protected access to the operating system login and not to the hard drive. Again, there’s no need to log in to the operating system to access the data, and in fact a smart attacker will avoid logging in to the operating system to avoid the risk of damaging the data. It could be that one or both of the referenced passwords were used to encrypt the data on the hard drive; in that case the data would be protected even when accessed from another computer. But the company admits that the data was not in fact encrypted, so it seems more likely that the data itself was in the clear and easily accessed simply by copying it off the drive.
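To make the point concrete, here’s a minimal sketch of why login passwords are irrelevant once an attacker has an image of an unencrypted drive: readable strings can be pulled straight out of the raw bytes without ever booting the machine. The image file name and the search term are hypothetical, purely for illustration:

```python
import re

# A minimal sketch of reading data off a raw image of an unencrypted drive.
# "laptop.img" is a hypothetical image file made with any imaging tool.
printable = re.compile(rb"[ -~]{8,}")   # runs of 8+ printable ASCII bytes

with open("laptop.img", "rb") as image:
    while True:
        chunk = image.read(1 << 20)     # scan a megabyte at a time
        if not chunk:
            break
        # (Strings spanning chunk boundaries are missed; fine for a sketch.)
        for run in printable.findall(chunk):
            text = run.decode("ascii")
            if "passport" in text.lower():   # or a name, address, etc.
                print(text)
```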

More generally, the response of FlyClear to the data breach takes the tone of most of the data breach announcements — that there’s much ado about mostly nothing but that the mere fact that FlyClear is making the announcement is evidence that you can trust them with your data:

We take the protection of your privacy extremely seriously at Clear. That’s why we announced on Tuesday that a laptop from our office at the San Francisco Airport containing a small part of some applicants’ pre-enrollment information (but not Social Security numbers or credit card information) recently went missing. … We are sorry that this theft of a computer containing a limited amount of applicant information occurred, and we apologize for the concern that the publicity surrounding our public announcement might have caused. But in an abundance of caution, both we and the Transportation Security Administration treated this unaccounted-for laptop as a serious potential breach.

Notice the emphasis on the small amount of data (though it seems to have contained data highly useful for identity theft), on the seriousness of their response despite that small amount of data, on the apology for the publicity, on the fact that their response to such a minor issue constitutes an “abundance of caution.” My reaction to reading a statement like this is that they do not in fact take data security seriously at all. If they did, they would not consider it an abundance of caution to send an announcement (two weeks after the fact) to folks whose data they had lost. If my friend lent me his driver’s license and I lost it, I don’t think he would consider me telling him about the loss an abundance of caution. In fact, if I waited two weeks to tell him, he’d justifiably be very upset and never trust me with the driver’s license (or likely anything at all) again. Doubly so if I claimed to him that he should take the loss of the license and the fact that I reported it to him just two weeks later as a sign of my abundant trustworthiness.

Ernst & Young audit overlooks Phorm’s violation of its own privacy policy

25-Jul-08

I’ve been looking at deep packet inspection / targeted advertising company Phorm for the past couple of days and have found a clear and simple case of Phorm violating its own privacy policy in contradiction to Ernst & Young’s audit of the company’s systems.

Phorm has been energetically defending itself against complaints about the privacy risks of their systems. As part of its campaign to legitimize itself, Phorm prominently links to an audit completed by Ernst & Young at the end of last year. I eagerly followed the link the first time I saw it hoping for a report full of technical details about Phorm’s system, only to find that Ernst & Young’s statement within the audit consists of a single page saying only that, in the opinion of Ernst & Young, Phorm follows its own privacy policy. No meaningful explanation of what tests they ran on the system. No technical information about the system at all. And certainly no discussion of whether the privacy policy addresses the larger privacy concerns of the community (as Phorm implies the audit does).

Even assuming that the scope of the audit is sufficient, what’s the point of producing such a document? I understand that the audit is produced and procured to reassure the large institutions (ISPs, government regulators, etc) with whom Phorm has to work, and that it has some weight for those actors. But it shouldn’t. Ernst & Young is paid (a presumably large amount of) money by Phorm to produce this letter of reassurance. It theoretically has to produce a truthful report so that other such reports will be trusted by the audience of future customers (in other words, so that Ernst & Young can produce the same report for NebuAd and that report will reassure NebuAd’s institutional constituencies). But the report is completely opaque, so all we have to rely on is Ernst & Young’s reputation. For that reputation to be valid, though, there has to be a strong feedback mechanism that discredits Ernst & Young when it produces a faulty report. In practice, what’s that pushback? Is there any history of such audits being disproved to the disparagement of the auditing firm? In the face of only a vague threat of some sort of reputation loss, the strong, direct incentive to produce positive reports to generate more business will win every time.

In fact, in a couple of hours of looking at the available technical information I found a significant breach of Phorm’s privacy policy missed by the audit: Phorm’s privacy policy claims that it will not disclose its Phorm IDs to any third parties, but a technical description of the system by Richard Clayton finds that Phorm does indeed share its IDs with web sites in a common usage scenario. Specifically, Phorm’s privacy policy claims that:

We will not disclose any randomly generated ID associated with a cookie to any third party, which means that none of this shared information can be used to identify individual users.

But in Richard Clayton’s excellent description of Phorm’s system, he finds that:

24. If, later on [after browsing from a Phorm enhanced ISP], the www.cnn.com website was to be visited via another ISP that was not using a Phorm system (or if subsequent accesses were made using the “https” protocol) then the cookie [with Phorm’s randomly generated id] would reach www.cnn.com.

25. Phorm believe that by placing their name (webwise) within the cookie they place within the www.cnn.com domain, no clash – or other bad effects – can occur.

In other words, if you browse cnn.com from home, where your ISP is Phorm enhanced, and then later from Starbucks, where it is not, cnn.com will be sent a cookie that includes your pseudonymous Phorm id. I assume that Phorm thinks this is not such an important data leak, since it considers its id to be completely anonymous. But that id serves as a single, global, unique identifier for your web browsing session. It will, in other words, identify you as the same person to all web sites that you visited first at the Phorm enhanced ISP and then later from a non-Phorm ISP. Regardless of whether this data leak is a significant privacy risk, Phorm’s privacy policy clearly says that it will never pass its id on to any third party. The second item (#25 above) is critical because it verifies that Phorm knows about the data leak but is concerned only with not polluting the cookie namespace of the hosting site.
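The mechanics here are just ordinary cookie scoping: a browser decides whether to send a cookie based on the cookie’s domain and path, never on which network or ISP the request happens to travel over. A minimal sketch (the cookie name follows Clayton’s description of Phorm’s “webwise” naming; the id value is made up):

```python
# A hedged sketch of why the Phorm id leaks: browsers decide whether to
# send a cookie based only on the cookie's domain and path, never on
# which ISP or network the request travels over.
def cookie_is_sent(cookie_domain, request_host):
    # Simplified domain-match rule: exact match or a dot-prefixed suffix.
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# Cookie set in the cnn.com domain while browsing from the Phorm-enhanced
# ISP at home. The name follows Clayton's description ("webwise"); the id
# value here is made up.
phorm_cookie = {"name": "webwise_id", "value": "a1b2c3", "domain": "cnn.com"}

# Later, the same browser visits cnn.com from Starbucks, on an ISP that
# knows nothing about Phorm. The cookie still matches, so it is sent,
# and cnn.com receives the Phorm id.
print(cookie_is_sent(phorm_cookie["domain"], "www.cnn.com"))   # True
```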

I credit Richard Clayton with finding and asking Phorm about this data leak (and especially with writing up his excellent report on the Phorm technology). But the policy violation is not an obscure border case. It will affect every Phorm tracked user who takes his laptop to Starbucks occasionally. I’m not the kind of uber-security-geek that Ernst & Young should be hiring for its audits, but this same question occurred to me while reading Clayton’s report — one of the core questions I was hoping the audit would help answer was what data Phorm was sending to the content publishers who host the Phorm served ads.

How did Ernst & Young not find this problem? I have a hard time accepting that an uber-security-geek would ever miss this sort of problem. Did they not test for this sort of vulnerability, concentrating instead on the process-oriented AICPA privacy list? Did they just walk through the code verifying that it intends to do what the privacy policy says it does (yup, there is no ‘ip address’ field in the database. next question …)? Do they have a vulnerability attack process that encourages members of their team to try to break the audited system? We don’t know, because they don’t tell us anything at all useful about how they conducted their audit.

For reference, here’s Ernst & Young’s entire contribution to the audit report:

We have examined Phorm, Inc.’s (“Phorm”) management assertion that during the period of June 1, 2007 through December 15, 2007 it:

  • Maintained effective controls over the privacy of personal information collected in its Phorm Service (Service) to provide reasonable assurance that the personal information was collected, used, retained, and disclosed in conformity with its commitments in its privacy policy and with criteria set forth in Generally Accepted Privacy Principles, issued by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA), and
  • Complied with its commitments in the privacy policy.

This assertion is the responsibility of Phorm’s management. Our responsibility is to express an opinion based on our examination.

Our examination was conducted in accordance with attestation standards established by the AICPA and, accordingly, included (1) obtaining an understanding of Phorm’s controls over the privacy of personal information collected in the Service, (2) testing and evaluating the operating effectiveness of the controls, (3) testing compliance with Phorm’s commitments in its privacy policy, and (4) performing such other procedures as we considered necessary in the circumstances. We believe that our examination provides a reasonable basis for our opinion.

Because of inherent limitations in controls, error or fraud may occur and not be detected. Furthermore, the projection of any conclusions, based on our findings, to future periods is subject to the risk that the validity of such conclusions may be altered because of changes made to the Service or controls, the failure to make needed changes to the Service or controls, or a deterioration in the degree of effectiveness of the controls.

In our opinion, Phorm’s management assertion referred to above is fairly stated, in all material respects, in conformity with Phorm’s privacy policy and with the criteria set forth in the AICPA Generally Accepted Privacy Principles.

The rest of the audit consists of a couple of letters from the Phorm management, a copy of Phorm’s privacy policy, and a listing of the “AICPA Generally Accepted Privacy Principles.”

Passport Security

04-Jul-08

The state department released the results of an audit yesterday that found that large numbers of government workers (meaning both employees and contractors) have been regularly accessing the passport files of celebrities:

The 192 million passport files maintained by the State Department contain individuals’ passport applications, which include data such as Social Security numbers, physical descriptions, and names and places of birth of the applicants’ parents. Otherwise, the files provide limited information; they do not contain records of overseas travel or visa stamps from previous passports.

To test the extent of the snooping, investigators assembled a list of 150 famous Americans and checked how many times their files were accessed over a 5 1/2 -year period. Investigators found that the records of 127, or 85 percent, had been searched a total of more than 4,100 times.

The report said that “although an 85 percent hit rate appears to be excessive, the Department currently lacks criteria to determine whether this is actually an inordinately high rate.”

85%! “excessive” indeed! If you look at the criteria for celebrities (including the Fortune 50), it’s likely that the 15% of folks who didn’t meet this threshold (and note that the threshold is lots and lots of accesses to the files, when even more than one should trigger an alert) are simply not of interest to the government employees.

What’s shocking about this breach is not so much the privacy of the celebrities (who have little privacy anyway), but the revelation that there seem to be no controls at all over the data other than very casual manual supervision. It’s almost certain that in addition to looking up celebrities, workers have been looking up information on other folks — friends, family, lovers, colleagues, bowling league rivals — who are more relevant to their lives than celebrities. And many of the folks who have access to the data and have been guilty of the breaches are contractors. Given the shockingly lax control over the data, we have to worry about those contractors and their employers accessing the data for all sorts of unsavory business reasons (looking up data on competitors, on government supervisors, etc).
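For illustration, here’s a sketch of the kind of trivial automated control whose absence is the real story here. The log format, field names, and the idea of tying each access to a case number are all my assumptions, not a description of any actual State Department system:

```python
# A minimal sketch of an automated control: flag any access to a
# watch-listed (e.g. high-profile) record that isn't tied to an open case.
# All record ids, employee ids, and fields below are made up.
WATCHLIST = {"record-celebrity-001", "record-celebrity-002"}

access_log = [
    {"employee": "e1001", "record": "record-celebrity-001", "case": None},
    {"employee": "e1002", "record": "record-ordinary-553", "case": "C-88123"},
    {"employee": "e1001", "record": "record-celebrity-002", "case": "C-90411"},
]

for entry in access_log:
    if entry["record"] in WATCHLIST and not entry["case"]:
        print("ALERT: %s accessed %s with no case attached"
              % (entry["employee"], entry["record"]))
```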

The larger lesson here is that valuable data collections like the passport database have potential value tremendously higher than their regulated value. That tension between potential value and regulated value makes it inevitable that data will leak out of even the best secured of them in one way or another. When a data collection has no serious controls at all (as the passport records seem not to), such breaches will be certain and frequent.

Google Adwords Category Exclusion

24-Jun-08

Google recently added category exclusion to its adwords system, allowing advertisers to choose not to support content that deals with topics such as “death & tragedy” and “military & international conflict”. The new category exclusion feature allows an advertiser to exclude from its content network any pages that belong to a specified set of topics or page types.

Here are descriptions of the exclusion topics:

Conflict & tragedy:

Crime, police & emergency: Police blotters, news stories on fires, and emergency services resources

Death & tragedy: Obituaries, bereavement services, accounts of natural disasters, and accidents

Military & international Conflict: News about war, terrorism, and sensitive international relations

Edgy content:

Juvenile, gross & bizarre Content: Jokes, weird pictures, and videos of stunts

Profanity & rough language: Moderate use of profane language

Sexually suggestive content: Provocative pictures and text

The topic exclusion is the most interesting, because it directly supports an argument I’m making (in a paper I’m writing now on surveillance, google, and botnets) that google’s adwords system is a primary driver of a move away from stories about “death & tragedy” in far away places and toward stories about digital cameras. Notice that “consumer electronics” is not one of the exclusion topics.

Here are the page types:

Network types:

Parked domains are sites in Google’s AdSense for domains network. Users are brought to parked domain sites when they enter the URL of an undeveloped webpage into a browser’s address bar. There, they’ll see ads relevant to the terminology in the URL they entered. The AdSense for domains network is encompassed by both the content network and the search network. If you exclude this page type, you’ll exclude all parked domain sites, including the ones on the search network. Learn more.

Error pages are part of Google’s AdSense for errors network. Certain users are brought to error pages when they enter a search query or unregistered URL in a browser’s address bar. There, they’ll see ads relevant to the search query or URL they entered. Learn more.

User-generated content:

Forums are websites devoted to open discussion of a topic.

Social networks are websites offering an interactive network of friends with personal profiles.

Image-sharing pages allow users to upload and view images.

Video-sharing pages allow users to view uploaded videos.

These page types are interesting as well because they exert control over the presentation as well as the substance of a given topic. If the commercial interests don’t like video sharing, then by gosh there will be less video sharing.

For both the page types and the topics, it would be helpful for Google to provide information about how specific pages are classified. Such knowledge could discourage content owners from publishing content that they know will trigger the “death & tragedy” topic, but the lack of knowledge about how the classification works could have the even worse effect of making content producers doubly careful not to produce any content that might be classified under an excluded topic. In either case, the topics are likely to have a strong effect on the kinds of content that get published.
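Absent any information from Google, publishers are left to guess. Here’s a hedged sketch of the kind of naive keyword classifier a nervous publisher might use to guess whether a story will land in an excluded topic; the keyword lists are made up and have nothing to do with how Google actually classifies pages:

```python
# A hedged sketch of the guesswork publishers are left to do: a naive
# keyword classifier for excluded topics. The keyword lists are invented
# for illustration and are not Google's classification method.
EXCLUDED_TOPICS = {
    "death & tragedy": {"obituary", "earthquake", "famine", "crash"},
    "military & international conflict": {"war", "insurgency", "airstrike"},
}

def likely_excluded_topics(story_text):
    words = set(story_text.lower().split())
    return [topic for topic, keywords in EXCLUDED_TOPICS.items()
            if words & keywords]

story = "Famine follows the war as aid convoys stall outside the capital"
print(likely_excluded_topics(story))
# ['death & tragedy', 'military & international conflict']
```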

Two Spectacles

28-Mar-08

I’ve been pondering how the concept of spectacle fits in with surveillance. In particular, I’ve been bouncing around two different concepts of the spectacle, one by Michel Foucault and the other by Scott Bukatman.

Here’s an execution spectacle in Michel Foucault’s Discipline & Punish:

‘Finally, he was quartered,’ recounts the Gazette d’Amsterdam of 1 April 1757. ‘This last operation was very long, because the horses used were not accustomed to drawing; consequently, instead of four, six were needed; and when that did not suffice, they were forced, in order to cut off the wretch’s thighs, to sever the sinews and hack at the joints… (Foucault 75)

Here’s a science fiction spectacle (William Burroughs’s death dwarf in Nova Express) in Scott Bukatman’s Terminal Identity:

“Images — millions of images — That what I eat — Cyclotron shit — Ever trying kicking that habit with apormorphine? — Now I got all the images of sex acts and torture ever took place anywhere and I can just blast it out and control you gooks right down to the molecule — I got orgasms — I got screams — I got all the images any hick poet ever shit out — My power’s coming — My power’s coming — My power’s coming. … And I got millions of images of Me, Me, Me meee.” (Bukatman 45)

Foucault’s Discipline & Punish describes an arc from disciplining society through the use of public executions as spectacle to disciplining society through the more complex interaction of an array of civilizing institutions (prisons, schools, hospitals, factories, etc) and their various agents (guards, judges, experts, teachers, etc). Foucault argues that the spectacles were effective insofar as they represented an imposition of the king’s body onto the body of the public, and at the same time provided space for the condemned to voice their frustrations against the monarchy. This system of disciplining the public began to break apart as the power of the people grew and consequently the king’s symbolic body lost power relative to the body of the people, which is to say that the increasingly powerful public was able to question the fairness of the terrifying executions and the one sided prosecutions that led to them.

What eventually replaced these spectacles, Foucault argues, was the modern set of institutions whose most important impact was to embed discipline into the social fabric itself, rather than imposing discipline bodily through bloody spectacle. Prisoners, pupils, patients, and workers (as examples) learned that they were being watched continuously and that their fates were judged by a set of scientific criteria which were themselves defined by the system of watching. Instead of investing all power in the prosecutor, the modern judicial system invests its power in the jury’s ability to objectively judge the truth of various pieces of witness and scientific testimony. The guilty man is condemned not because of the will of the king but because our objective system determines him to be guilty. If you don’t want to be judged guilty, you have to judge yourself by these objective criteria, rather than by the arbitrary decisions of the king. And if you want to succeed in school or at work, you have to measure up to objective criteria. But those objective criteria are themselves influenced by this process. Experts prune themselves for their work in court, social institutions design the tests that determine school success, and so on.

What’s interesting to note is that Foucault described in 1975 a move away from spectacle and toward this more complex and subtle integration of the scientific and the social as a means of social discipline. In 1994, Scott Bukatman’s Terminal Identity captured a widely held consensus that television and other mass media had once again made spectacle the dominant mode of social discipline. Bukatman pulls together a set of social theorists, media thinkers, and science fiction authors to describe a “terminal space” in which a flood of image, audio, and text “blips” constitutes a never ending spectacle through which society defines itself. This image of society as spectacle contrasts sharply with the image drawn just twenty years before by Foucault. And the spectacle has multiplied itself many times with the explosive growth of the Internet since 1994, including the growth of Internet pornography and YouTube beatings that make Burroughs’s death dwarf (above) look prescient.

To figure out what this move away from and then back to spectacle means, we first have to figure out what “spectacle” means and whether Foucault and Bukatman are referring to the same thing at all. Foucault’s spectacle of execution and Bukatman’s (and others’) spectacle of mass media both refer to a display of striking images. The public execution is striking largely because it is dramatically physical. One cannot watch an execution without a strong, physical reaction. Bukatman’s flood of images (and other media) is striking partly because each image is designed for emotional impact (buy this SUV if you want to dominate the road) but mostly because of the sheer number of images. It is striking to be shown many different images at once, even if each image is just a single solid color. Foucault’s spectacle is striking because it is so strongly physical, whereas the modern media spectacle is completely virtual — it is a spectacle because of the sheer flood of input that cannot be reproduced bodily. Bukatman argues that the lack of physicality actually defines the media spectacle as such: “pure spectacle … [is] … a proliferation of semiotic systems and simulations which increasingly serve to replace physical human experience and interaction.” (Bukatman 26) Most importantly, both of these spectacles are used as a source of control over society, and the impact of each form of spectacle relies on the dramatic impact of the spectacle itself — this need for dramatic impact is why, for instance, modern executions by injection serve nothing like the role of the spectacular public executions that Foucault describes. In contrast to the slow diffusion of social knowledge through institutions, spectacle derives its power from its ability to reach directly into the brain of its subjects and create an immediate reaction (who wants to argue over which textbook a school should use when you can just make a YouTube video and beam your truth directly into kids’ brains?).

Coming back to surveillance, both of these forms of spectacle (like institutional discipline) involve not just being watched, but watching as well. The spectacle of execution is an application of watching onto the public: not only does the execution enact the punishment of the king on the body of the people, but the process of prosecution applies the eye of the king onto the people. By watching and judging the condemned, the king is making clear that he is watching the public, both symbolically and through his state apparatus. The watching and being watched of this process are necessarily entangled: one cannot have a public execution without a prosecution, and the prosecution has no social impact if it is not in turn watched.

The modern media spectacle provides an even more tangled relationship between watching and being watched. Much of the modern spectacle is advertising, which is about pushing images to consumers to get them to watch them. But advertisements are only useful if the advertiser knows who is watching them. Television ratings and commercials are necessary complements, as are clicks and web advertisements. An advertisement (and any spectacle used as a social lever) is only useful insofar as its impact can be measured, and knowledge of who is watching an ad is necessary to measure that impact. This tangled relationship between watching and being watched applies not only to the advertiser and the consumer, but quickly spreads out into the whole range of different actors involved. The content provider watches which ads are most profitable and which content brings consumers to its ads, the ad brokers like google watch both consumers and advertisers to determine which ads are most valuable and profitable, the various participants in the botnet economy watch (or pretend to watch) the ads to game profits while also watching google to determine how to avoid its click fraud filtering, search engine optimization (SEO) agents of various levels of legitimacy watch google to improve their customers’ positions in the google index, security professionals of various sorts watch the botnets to learn how to protect against them and watch users to determine if they are infected or likely to get infected, and on and on. Every one of these actors is both watching and being watched, and each one is a necessary growth of the system that begins with the simple display of an image intended to leverage some sort of social control. It is impossible to say which actors are merely being watched and which are watching, just as it is impossible to point to any activity that does not involve both watching and being watched by multiple actors.

What’s different between execution as spectacle and media as spectacle is that executions are hard to repeat, whereas the little images that together make up the media spectacle are easy to produce. Today, production and distribution of these images on the Internet has become virtually free, so anyone can produce them, and they are omnipresent in the media (online, tv, etc). But this ease of production makes the effects of the spectacle much, much less clear. Foucault argues that the execution as discipline ended largely because it began to grow network effects that the king could not control, and that was just from a single producer of generally infrequent spectacles. The result of the proliferation of spectacle producers today is a hugely complex network, briefly sketched above, whose effects are mostly emergent and unpredictable. When anyone can leverage control through a spectacle, everyone does, but those effects all bounce off of one another in complex feedback loops. Foucault describes a similar effect in his institutional disciplinary society, but the effect in his case involves far fewer major players and is therefore much less complex. Today anyone can create a spectacle to influence society (buy a car, vote for my candidate, scream, laugh, cry).