How the cookie poisoned the Web

Have you ever wondered why you have to consent to terms required by the websites of the world, rather than the other way around? Or why you have no record of what you have accepted or agreed to?

Blame the cookie.

Have you wondered why you have no more privacy on the Web than what other parties grant you (which is none at all), and why you can only opt in or out of choices that others provide, while the only controls you have over your privacy are to skulk around like a criminal (thank you, Edward Snowden and Russell Brand, for that analogy) or to stay offline completely?

Blame the cookie.

And have you paused to wonder why Europe’s GDPR regards you as a mere “data subject” while assuming that the only parties qualified to be “data controllers” and “data processors” are the sites and services of the world, leaving you with little more agency than those sites and services allow or provide?

Blame the cookie.

Or why California’s CCPA regards you as a mere “consumer” (not a producer, much less a complete human being), and only gives you the right to ask the sites and services of the world to give back data they have gathered about you, or not to “sell” that personal data, whatever the hell that means?

Blame the cookie.

There are more examples, but you get the point: this situation has become so established that it’s hard to imagine any other way for the Web to operate.

Now here’s another point: it didn’t have to be that way.

The World Wide Web that Tim Berners-Lee invented didn’t have cookies. It also didn’t have websites. It had pages one could publish or read, at any distance across the Internet.

This original Web was simple and peer-to-peer. It was meant to be personal as well, meaning an individual could publish with a server or read with a browser. One could also write pages easily with an HTML editor, which was also easy to invent and deploy.

It should help to recall that the Apache Web server, which has published most of the world’s Web pages across most of the time the Web has been around, was meant originally to work as a personal server. That’s because the original design assumption was that anyone, from individuals to large enterprises, could have a server of their own, and publish whatever they wanted on it. The same went for people reading pages on the Web.

Back in the 90s my own website, searls.com, ran on a box under my desk. It could do that because, even though my connection was just dial-up speed, it was on full time over its own static IP address, which I easily rented from my ISP. In fact, I had sixteen of those addresses, so I could operate another server in my office for storing and transferring articles and columns I wrote for Linux Journal. Every night a cron utility would push what I wrote to the magazine itself. Both servers ran Apache. And none of this was especially geeky. (I’m not a programmer and the only code I know is Morse.)

My point here is that the Web back then was still peer-to-peer and welcoming to individuals who wished to operate at full agency. It even stayed that way through the Age of Blogs in the early ’00s.

But gradually a poison disabled personal agency. That poison was the cookie.

Technically a cookie is a token—a string of text—left by one computer program with another, to help the two remember each other. These are used for many purposes in computing.

But computing for the Web got a special kind of cookie called the HTTP cookie. This, Wikipedia says (at that link)

…is a small piece of data stored on the user’s computer by the web browser while browsing a website. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user’s browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember pieces of information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers.

It also says,

Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with.

This, however, was not the original idea, which Lou Montulli came up with in 1994. Lou’s idea was just for a server to remember the last state of a browser’s interaction with it. But that one move—a server putting a cookie inside every visiting browser—crossed a privacy threshold: a personal boundary that should have been clear from the start but was not.
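To make that one move concrete, here is a minimal sketch in Python, using nothing but the standard library. The handler class and the “session” cookie name are my own illustrations, not anything from Lou’s specification; the point is only that the server leaves a token with the browser (the Set-Cookie response header) and the browser hands it back on every later visit (the Cookie request header).

    # A hedged sketch of the original, state-remembering use of a cookie.
    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CookieDemo(BaseHTTPRequestHandler):
        def do_GET(self):
            returned = self.headers.get("Cookie", "")  # whatever the browser sends back
            self.send_response(200)
            if "session=" not in returned:
                # First visit: leave a token with the browser so the server
                # can recognize it next time.
                self.send_header("Set-Cookie", f"session={uuid.uuid4().hex}; Path=/")
                body = b"Hello, stranger. I just set a cookie to remember you.\n"
            else:
                body = f"Welcome back. Your browser sent: {returned}\n".encode()
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CookieDemo).serve_forever()

Point a browser at localhost:8000 twice and the second request arrives with the cookie already attached. That is all the original idea required.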

Once that boundary was crossed, and the number and variety of cookies increased, a snowball started rolling, and whatever chance we had to protect our privacy behind that boundary was lost.

Today that snowball is so large that nearly all personal agency on the Web happens within the separate silos of every website, and is compromised by whatever countless cookies and other tracking methods are used to keep track of, and to follow, the individual.
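In the same hedged spirit as the sketch above, here is roughly what the cross-site version of the trick looks like. A tracker on its own domain gets embedded, as a script or an invisible pixel, on thousands of unrelated sites; because its cookie is scoped to the tracker’s domain rather than theirs, the same identifier comes back to it from every one of them. The cookie value and lifetime below are invented for illustration.

    # Illustrative only: the attributes a third-party tracking cookie needs
    # in order to follow one browser across many different websites.
    THIRD_PARTY_COOKIE = (
        "uid=8c2f5e...;"       # the identifier that follows you around
        " Max-Age=63072000;"   # keep it for two years
        " SameSite=None;"      # allow it to be sent on cross-site requests
        " Secure;"             # required by modern browsers alongside SameSite=None
        " Path=/"
    )
    # Each request to the tracker also carries a Referer header, telling it
    # which page, on which site, this same uid was just looking at.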

This is why most of the great stuff you can do on the Web is by grace of Google, Apple, Facebook, Amazon, Twitter, WordPress and countless others, including those third parties.

Bruce Schneier calls this a feudal system:

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals.

Bruce wrote that in 2012, about the time we invested hope in Do Not Track, which was designed as a polite request one could turn on in a browser and that servers could obey.
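Do Not Track was nothing more than a single request header. With the setting switched on, the browser added DNT: 1 to every request it made, roughly equivalent to this sketch with Python’s urllib (the URL is just a placeholder).

    # A hedged sketch of what a DNT-enabled browser sent with each request.
    import urllib.request

    request = urllib.request.Request("https://example.com/", headers={"DNT": "1"})
    with urllib.request.urlopen(request) as response:
        page = response.read()
    # Whether anything changed was entirely up to the server: the header
    # carried a preference, not an enforcement mechanism.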

Alas, the tracking-based online advertising business and its dependents in publishing dismissed Do Not Track with contempt.

Starting in 2013, we serfs fought back, by the hundreds of millions, blocking ads and tracking: the biggest boycott in world history. This, however, did nothing to stop what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity.

Today our poisoned minds can hardly imagine having native capacities of our own that can operate at scale across all the world’s websites and services. To have that ability would also be at odds with the methods and imperatives of personally targeted advertising, which requires cookies and other tracking methods. One of those imperatives is making money: $Trillions of it.

The business itself (aka adtech) is extremely complex and deeply corrupt: filled with fraud, botnets and malware. Most of the money spent on adtech also goes to intermediaries and not to the media you (as they like to say) consume. It’s a freaking fecosystem, and every participant’s dependence on it is extreme.

Take, for example, Vizio TVs. As Samuel Axon puts it in Ars Technica (“Vizio TV buyers are becoming the product Vizio sells, not just its customers”), Vizio’s ads, streaming, and data business grew 133 percent year over year.

Without cookies and the cookie-like trackers by which Vizio and its third parties can target customers directly, that business wouldn’t be there.

As a measure of how far this poisoning has gone, dig this: FouAnalytics’ PageXray says the Ars Technica story above comes to your browser with all this spyware you don’t ask for or expect when you click on that link:

Adserver Requests: 786
Tracking Requests: 532
Other Requests: 112

I’m also betting that nobody reporting for a Condé Nast publication will touch that third rail, which I have been challenging journalists to do in 139 posts, essays, columns and articles, starting in 2008.

(Please prove me wrong, @SamuelAxon—or any reporter other than Farhad Manjoo, who so far is the only journalist from a major publication I know to have bitten the robotic hand that feeds them. I also note that the hand in his case is The New York Times’, and that it has backed off a great deal in the amount of tracking it does. Hats off for that.)

At this stage of the Web’s moral devolution, it is nearly impossible to think outside the cookie-based fecosystem. If we could, we would get back the agency we lost, and the regulations we’re writing would respect and encourage that agency as well.

But that’s not happening, in spite of all the positive privacy moves Apple, Brave, Mozilla, Consumer Reports, the EFF and others are making.

My hat’s off to all of them, but let’s face it: the poisoning is too far advanced. After fighting it for more than 22 years (dating from publishing The Cluetrain Manifesto in 1999), I’m moving on.

To here.


