
When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of Kim’s laws at KuppingerCole‘s EIC conference in Berlin. We’ll only have a few minutes each, however, so I’d like to visit the subject in a bit more depth here.

To understand why these laws are so important and effective, it will help to know where Kim was coming from in the first place. It wasn’t just his work as the top architect for identity at Microsoft (a position he came to when Microsoft acquired his company). Specifically, Kim was coming from two places. One was the physical world where we live and breathe, and identity is inherently personal. The other was the digital world where what we call identity is how we are known to databases. Kim believed the former should guide the latter, and that nothing like that had happened yet, but that we could and should work for it.

Kim’s The Laws of Identity paper alone is close to seven thousand words, and his IdentityBlog adds many thousands more. But his laws by themselves are short and sweet. Here they are, with additional commentary by me, in italics.

1. User Control and Consent

Technical identity systems must only reveal information identifying a user with the user’s consent.

Note that consent here goes in the opposite direction from all the consent “agreements” websites and services want us to click on. This matches the way identity works in the natural world, where each of us not only chooses how we wish to be known, but usually does so with an understanding of how that information might be used.

2. Minimum Disclosure for a Constrained Use

The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.

There is a reason we don’t walk down the street wearing name badges: the world doesn’t need to know any more about us than we wish to disclose. Even when we pay with a credit card, the other party really doesn’t need (or want) to know the name on the card.

3. Justifiable Parties

Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.

If this law had been applied back when Kim wrote it, we wouldn’t have the massive privacy losses that have become the norm, with unwanted tracking pretty much everywhere online—and increasingly offline as well.

4. Directed Identity

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

All brands, meaning all names of public entities, are “omni-directional.” They are also what Kim calls “beacons”: far from having something to hide, they want to be known. Individuals, however, are private first, and public only to the degrees they wish to be in different circumstances. Each of the first three laws is “unidirectional.”

5. Pluralism of Operators and Technologies

A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.

This law expresses lessons from Microsoft’s failed experiments with Passport and a project called “Hailstorm.” The idea with both was for Microsoft to become the primary or sole online identity provider for everyone. Kim’s work at Microsoft was all about making the company one among many working in the same broad industry.

6. Human Integration

The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.

As Kim put it in his 2019 (and final) talk at EIC, we need to turn the Web “right side up,” meaning putting the individual at the top rather than the bottom, with each of us in charge of our lives online, in distributed homes of our own. That’s what will integrate all the systems we deal with. (Joe Andrieu first explained this in 2007, here.)

7. Consistent Experience Across Contexts

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

So identity isn’t just about corporate systems getting along with each other. It’s about giving each of us scale across all the entities we deal with. Because it’s our experience that will make identity work right, finally, online. 

I expect to add more as the conference goes on; but I want to get this much out there to start with.

By the way, the photo above is from the first and only meeting of the Identity Gang, at Esther Dyson’s PC Forum in 2005. The next meeting of the Gang was the first Internet Identity Workshop, aka IIW, later that year. We’ve had 34 more since then, all with hundreds of participants, all with great influence on the development of code, standards, and businesses in digital identity and adjacent fields. And all guided by Kim’s Laws.

 

Going west

Long ago a person dear to me disappeared for what would become eight years. When this happened I was given comfort and perspective by a professor of history whose study concentrated on the American South after the Civil War.

“You know what the most common record of young men was, after the Civil War?” he asked.

“You mean census records?”

“Yes, and church records, family histories, all that.”

“I don’t know.”

“Two words: Went west.”

He then explained that, except for the natives here in the U.S., nearly all of our ancestors had gone west. Literally or metaphorically, voluntarily or not, they went west.

More importantly, most were not going back. Many, perhaps most, were hardly heard from again in the places they left. The break from the past in countless places was sadly complete for those left behind. All that remained were those two words: went west.

This fact, he said, is at the heart of American rootlessness.

“We are the least rooted civilization on Earth,” he said. “This is why we have the weakest family values in the world.”

This is also why he thought political talk about “family values” was especially ironic. We may have those values, but they tend not to keep us from going west anyway.

This comes to mind because I just heard Harry Chapin‘s “Cat’s in the Cradle” for the first time in years, and it hurt to hear it. (Give it a whack and try not to be moved. Especially if you also know that Harry—a great songwriter—died in a horrible accident while still a young father.)

You don’t need to grow up in an unhappy family to go west anyway. That happened for me. My family was a very happy one, and when I got out of high school I was eager to go somewhere else anyway. Eventually I went all the way west, from New Jersey, then North Carolina, then California. After that, also Boston, New York and Bloomington, Indiana. There was westering in all those moves.

Now I’m back in California for a bit, missing all those places, and people in them.

There are reasons for everything, but in most cases those are just explanations. Saul Bellow explains the difference in Mr. Sammler’s Planet:

You had to be a crank to insist on being right. Being right was largely a matter of explanations. Intellectual man had become an explaining creature. Fathers to children, wives to husbands, lecturers to listeners, experts to laymen, colleagues to colleagues, doctors to patients, man to his own soul, explained. The roots of this, the causes of the other, the source of events, the history, the structure, the reasons why. For the most part, in one ear out the other. The soul wanted what it wanted. It had its own natural knowledge. It sat unhappily on superstructures of explanation, poor bird, not knowing which way to fly.

What explains the human diaspora better than our westering tendencies? That we tend to otherize and fight each other? That we are relentlessly ambulatory? Those are surely involved. But maybe there is nothing more human than to say “I gotta go,” without needing a reason beyond the urge alone.

The Web is a haystack.

This isn’t what Tim Berners-Lee had in mind when he invented the Web. Nor is it what Jerry Yang and David Filo had in mind when they invented Jerry and David’s Guide to the World Wide Web, which later became Yahoo. Jerry and David’s model for the Web was a library, and Yahoo was to be the first catalog for it. This made sense, given the prevailing conceptual frames for the Web at the time: real estate and publishing.

Both of those are still with us today. We frame the Web as real estate when we speak of “sites” with “locations” in “domains” with “addresses” you can “visit” and “browse”—then shift to publishing when we speak of “files” and “pages,” that we “author,” “edit,” “post,” “publish,” “syndicate” and store in “folders” within a “directory.” Both frames suggest durability, if not permanence. Again, kind of like a library.

But once we added personal movement (“surf,” “browse”) and a vehicle for it (the browser), the Web became a World Wide Free-for-all. Literally. Anyone could publish, change and remove whatever they pleased, whenever they pleased. The same went for organizations of every kind, all over the world. And everyone with a browser could find their way to and through all of those spaces and places, and enjoy whatever “content” publishers chose to put there. Thus the Web grew into billions of sites, pages, images, databases, videos, and other stuff, with most of it changing constantly.

The result was a heaving heap of fuck-all.*

How big is it? According to WorldWebSize.com, Google currently indexes about 41 billion pages, and Bing about 9 billion. They also peaked together at about 68 billion pages in late 2019. The Web is surely larger than that, but that’s the practical limit because search engines are the practical way to find pieces of straw in that thing. Will the haystack be less of one when approached by other search engines, such as the new ad-less (subscription-funded) Neeva? Nope. Search engines do not give the Web a card catalog. They certify its nature as a haystack.

So that’s one practical limit. There are others, but they’re hard to see when the level of optionality on the Web is almost indescribably vast. But we can see a few limits by asking some questions:

  1. Why do you always have to accept websites’ terms? And why do you have no record of your own of what you accepted, or when, or anything?
  2. Why do you have no way to proffer your own terms, to which websites can agree?
  3. Why did Do Not Track, which was never more than a polite request not to be tracked off a website, get no respect from 99.x% of the world’s websites? And how the hell did Do Not Track turn into the Tracking Preference Expression at the W3C, where the standard never did get fully baked?
  4. Why, after Do Not Track failed, did hundreds of millions—or perhaps billions—of people start blocking ads, tracking or both, on the Web, amounting to the biggest boycott in world history? And then why did the advertising world, including nearly all advertisers, their agents, and their dependents in publishing, treat this as a problem rather than a clear and gigantic message from the marketplace?
  5. Why are the choices presented to you by websites called your choices, when all those choices are provided by them? And why don’t you give them choices?
  6. Why would Apple’s way of making you private on your phone be to “Ask App Not to Track,” rather than “Tell App Not to Track,” or “Prevent App From Tracking You“?
  7. Why does the GDPR call people “data subjects” rather than people, or human beings, and then assign the roles “data controller” and “data processor” only to other parties? (Yes, it does say a “data controller” can be a “natural person,” but more as a technicality than as a call for the development of agency on behalf of that person.)
  8. Why are nearly all of the billion results in a search for GDPR+compliance about how companies can obey the letter of that law while violating its spirit by continuing to track people through the giant loophole you see in every cookie notice?
  9. Why does the CCPA give you the right to ask to have back personal data others have gathered about you on the Web, rather than forbid its collection in the first place? (Imagine a law that assumes that all farmers’ horses are gone from their barns, but gives those farmers a right to demand horses back from those who took them. It’s kinda like that.)
  10. Why, 22 years after The Cluetrain Manifesto said, we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it. —is that statement (one I helped write!) still not true?
  11. Why, 9 years after Harvard Business Review Press published The Intention Economy: When Customers Take Charge, has that not happened? (Really, what are you in charge of in the marketplace that isn’t inside companies’ silos and platforms?)
  12. And, to sum up all the above, why does “free market” on the Web mean your choice of captor?

It’s easy to blame the cookie, which Lou Montulli invented in 1994 as a way for sites to remember their visitors by planting reminder files—cookies—in visitors’ browsers. Cookies also gave visitors a way to remember where they were when they last visited. For sites that require logins, cookies take care of that as well.

What matters, however, is not the cookie. What matters is why the cookie was necessary in the first place: the Web’s architecture. It’s called client-server.

[Diagram: the client-server model]

This architecture was born in the era of centralized mainframes, which “users” accessed through client devices called “dumb terminals.”

On the Web, as it was in the old mainframe world, we clients—mere users—are as subordinate to servers as are calves to cows.

(In fact I’ve been told that client-server was originally a euphemism for “slave-master.” Whether true or not, it makes sense.)

In the client-server paradigm, our agency—our ability to act with effect in the world—is restricted to what servers allow or provide for us. Our choices are what they provide. We are independent only to the degree that we can also be clients to other servers. In this paradigm, a free market is “your choice of captor.”

Want privacy? You have to ask for it. And, if you go to the trouble of doing that—which you have to do separately with every site and service you encounter (each a mainframe of its own)—your client doesn’t keep a record of what you “agreed” to. The server does. Good luck finding whatever it is the server or its third parties remember about that agreement.

Want to control how your data (or data about you) gets processed by the servers of the world? Good luck with that too. Again, Europe’s GDPR says “natural persons” are just “data subjects,” while “data controllers” and “data processors” are roles reserved for servers.

Want a shopping cart of your own to take from site to site? My wife asked for that in 1995. It’s still barely thinkable in 2021. Want a dashboard for your life where you can gather all your expenses, investments, property records, health information, calendars, contacts, and other personal information? She asked for that too, and we still don’t have it, except to the degree that large server operators (e.g. Google, Apple, Microsoft) give us pieces of it, hosted in their clouds, and rigged to keep you captive to their systems.

That’s why we don’t yet have an Internet of Things (IoT), but rather an Apple of Things, a Google of Things, and an Amazon of Things.

Is it possible to do stuff on the Web that isn’t client-server? Perhaps some techies among us can provide examples, but practically speaking, here’s what matters: If it’s not thinkable by the owners of the servers we depend on, it doesn’t get made.

From our position at the bottom of the Web’s haystack, it’s hard to imagine there might be a world where it’s possible for us to have full agency: to not be just users of clients enslaved to as many servers as we deal with every day.

But that world exists. It’s called the Internet. And it can support a helluva lot more than the Web—with many ways to interact other than those possible through client-server alone.

Digital technology as we know it has only been around for a few decades, and the Internet for maybe half that time. Mobile computers that run apps and presume connectivity everywhere have only been with us for a decade or less. And all of those will be with us for many decades, centuries, or millennia to come. We are not going to stop living digital lives, any more than we are going to stop speaking, writing, or using mathematics. Digital technology and the Internet are granted wishes that won’t go back into the genie’s bottle.

Credit where due: the Web is excellent, but not boundlessly so. It has limits. Thanks to the client-server model, full personal agency is not a grace of life on the Web. Not until we have servers or agents of our own. (Yes, we could have our own servers back in Web1 days—my own Web and email servers lived under my desk and had their own static IP addresses from roughly 1995 until 2003—and a few alpha geeks still do. But since then we’ve mostly needed to live as digital serfs, by the graces of corporate overlords.)

So now it’s time to think and build outside the haystack.

Models for that do exist, and some have been around for a long time. Email is one example. While you can look at your email on the Web, or use a Web-based email service (such as Gmail), email itself is independent of those. My own searls.com email has been at servers in my home, on racks elsewhere, and in a hired cloud. I can move it anywhere I want. You can move yours as well, because the services we hire to host our personal email are substitutable. That’s just one way we can enjoy full agency on the Internet.
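To make that substitutability concrete: where a domain’s mail gets delivered is just a DNS setting—the domain’s MX records—under the domain owner’s control, and those records can be repointed at a different host at any time. Here is a minimal sketch of looking that up; it assumes the third-party dnspython package, and the domain shown is only a placeholder:

```python
# A minimal sketch of email-host substitutability: mail for a domain goes
# wherever its MX records point, and the domain's owner can repoint those
# records to move to a different provider at any time.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def mail_hosts(domain: str) -> list[tuple[int, str]]:
    """Return (preference, mail server) pairs from the domain's MX records."""
    answers = dns.resolver.resolve(domain, "MX")
    return sorted((r.preference, str(r.exchange)) for r in answers)

if __name__ == "__main__":
    # "example.com" is a stand-in; substitute a domain whose mail you control.
    for preference, host in mail_hosts("example.com"):
        print(preference, host)
```

Change those records and your mail moves with you; your correspondents never need to know or care.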

Some work toward the next Web, or beyond it, is happening at places such as DWeb Camp and Unfinished. My own work is happening right now in three overlapping places:

  1. ProjectVRM, which I started as a fellow of the Berkman Klein Center at Harvard in 2006, and which is graciously still hosted (with this blog) by the Center there. Our mailing list currently has more than 550 members. We also meet twice a year with the Internet Identity Workshop, which I co-founded with Kaliya Young and Phil Windley in 2005, and still co-organize. Immodestly speaking, IIW is the most leveraged conference I know.
  2. Customer Commons, where we are currently working on building out what’s called the Byway. Go there and follow along as we work toward better answers to the questions above than you’ll get from inside the haystack. Customer Commons is a 501(c)3 nonprofit spun out of ProjectVRM.
  3. The Ostrom Workshop at Indiana University, where Joyce (my wife and fellow founder and board member of Customer Commons) and I are both visiting scholars. It is in that capacity that we are working on the Byway and leading a salon series titled Beyond the Web. Go to that link and sign up to attend. I look forward to seeing and talking with you there.

[Later…] More on the Web as a haystack is in FILE NOT FOUND: A generation that grew up with Google is forcing professors to rethink their lesson plans, by Monica Chin (@mcsquared96) in The Verge, and Students don’t know what files and folders are, professors say, by Jody MacGregor in PC Gamer, which sources Monica’s report.


*I originally had “heaving haystack of fuck-all” here, but some remember it as the more alliterative “heaving heap of fuck-all.” So I decided to swap them. If comments actually worked here†, I’d ask for a vote. But feel free to write me instead, at my first name at my last name dot com.

†Now they do. Thanks for your patience, everybody.

 

Since I’m done with fighting in the red ocean of the surveillance-dominated Web, I’ve decided, while busy working in the blue ocean (on what for now we’re calling i-commerce), to bring back, in this blog, some of the hundreds of things I’ve written over the last 30+ years. I’m calling it the Redux series. To qualify, these should still ring true today, or at least provide some history. This early one is still on the Web, here at BuzzPhraser.com. I’ve made only two small edits, regarding dates. (And thanks to Denise Caruso for reminding me that this thing started out on paper, very long ago.)


The original BuzzPhraser was created in 1990, or perhaps earlier, as a spreadsheet, then a HyperCard stack; and it quickly became one of the most-downloaded files on AOL and Compuserve. For years after that it languished, mostly because I didn’t want to re-write the software. But when the Web came along, I knew I had to find a way to re-create it. The means didn’t find that end, however, until Charles Roth grabbed the buzzwords by their serifs and made it happen, using a bit of clever Javascript. Once you start having fun with the new BuzzPhraser, I’m sure you’ll thank him as much as I do.

The story that follows was written for the original BuzzPhraser. I thought it would be fun to publish it unchanged.

—Doc, sometime in the late ’90s

BuzzPhrases are built with TechnoLatin, a non-language that replaces plain English nouns with vague but precise-sounding substitutes.  In TechnoLatin, a disk drive is a “data management solution.”  A network is a “workgroup productivity platform.”  A phone is a “telecommunications device”.

The virtue of TechnoLatin is that it describes just about anything technical.  The vice of TechnoLatin is that it really doesn’t mean anything.  This is because TechnoLatin is comprised of words that are either meaningless or have been reduced to that state by frequent use.  Like the blank tiles in Scrabble, you can put them anywhere, but they have no value.  The real value of TechnoLatin is that it sounds precise while what it says is vague as air.  And as easily inflated.

Thanks to TechnoLatin, today’s technology companies no longer make chips, boards, computers, monitors or printers.  They don’t even make products.  Today everybody makes “solutions” that are described as “interoperable,” “committed,” “architected,” “seamless” or whatever.  While these words sound specific, they describe almost nothing.  But where they fail as description they succeed as camouflage: they conceal meaning, vanish into surroundings and tend to go unnoticed.

Take the most over-used word in TechnoLatin today: solution.  What the hell does “solution” really mean?  Well, if you lift the camouflage, you see it usually means “product.”  Try this: every time you run across “solution” in a technology context, substitute “product.”  Note that the two are completely interchangeable.  The difference is, “product” actually means something, while “solution” does not.  In fact, the popularity of “solution” owes to its lack of specificity.  While it presumably suggests the relief of some “problem,” it really serves only to distance what it labels from the most frightening risk of specificity: the clarity of actual limits.

The fact is, most vendors of technology products don’t like to admit that their creations are limited in any way.  Surely, a new spreadsheet — the labor of many nerd/years — is something more than “just a spreadsheet.”  But what?  Lacking an available noun, it’s easy to build a suitable substitute with TechnoLatin.  Call it an “executive information matrix.”  Or a “productivity enhancement engine.”  In all seriousness, many companies spend months at this exercise.  Or even years.  It’s incredible.

There is also a narcotic appeal to buzzphrasing in TechnoLatin.  It makes the abuser feel as if he or she is really saying something, while in fact the practice only mystifies the listener or reader.  And since buzzphrasing is so popular, it gives the abuser a soothing sense of conformity, like teenagers get when they speak slang.  But, like slang, TechnoLatin feels better than it looks.  In truth, it looks suspicious.  And with good reason.  TechnoLatin often does not mean what it says, because the elaborate buzzphrases it builds are still only approximations.

But who cares? Buzzphrasing is epidemic.  You can’t get away from it.  Everybody does it.  There is one nice thing about Everybody, however: they’re a big market.

So, after studying this disease for many years, I decided, like any self-respecting doctor, to profit from the problem.  And, like any self-respecting Silicon Valley entrepreneur, I decided to do this with a new product for which there was absolutely no proven need, in complete faith that people would buy it.  Such is the nature of marketing in the technology business.

But, lacking the investment capital required to generate demand where none exists, I decided on a more generous approach: to give it away, in hope that even if I failed to halt the epidemic, at least I could get people to talk about it.

With this altruistic but slightly commercial goal in mind, I joined farces with Ray Miller of Turtlelips Services to create a product that would encourage and support the narcotic practice of buzzphrasing.  Being the brilliant programmer he is, Ray hacked it into a stack in less time than it took for me to write this prose.  And now here it is, free as flu, catching on all over the damn place.

What made BuzzPhraser possible as a product is that the practice of buzzphrasing actually has rules.  Like English, TechnoLatin is built around nouns.  It has adjectives to modify those nouns.  And adverbs to modify the adjectives.  It also has a class of nouns that modify other nouns — we call them “adnouns.”  And it has a nice assortment of hyphenated prefixes and suffixes (such as “multi-” and “-driven”) that we call “hyphixes.”

Since the TechnoLatin lexicon is filled with meaningless words in all those categories, the words that comprise TechnoLatin buzzphrases can be assembled in just about any number or order, held together as if by velcro.  These are the rules:

  • adverbs modify adjectives
  • adjectives modify adnouns, nouns or each other
  • adnouns modify nouns or other adnouns
  • nouns are modified by adnouns or adjectives
  • prefixes modify all adjectives
  • suffixes qualify all adnouns

[Diagram: how the rules work together]

As with English, there are many exceptions.  But, as with programming, we don’t make any.  So cope with it.
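If you’d like to see the rules in action without the original stack, here is a minimal sketch of a generator that follows them. The word lists are tiny made-up samples, not the actual BuzzPhraser lexicon, and the shape of each phrase is picked at random:

```python
# A minimal sketch of the buzzphrasing rules above. The word lists are small
# made-up samples standing in for the real TechnoLatin lexicon.
import random

ADVERBS    = ["backwardly", "incrementally", "evidently", "primarily"]
ADJECTIVES = ["architected", "interoperable", "seamless", "intelligent"]
ADNOUNS    = ["workgroup", "inference", "leverage", "productivity"]  # nouns that modify nouns
NOUNS      = ["solution", "platform", "environment", "vendor", "module"]
PREFIXES   = ["hyper-", "multi-"]        # hyphixes that bolt onto adjectives
SUFFIXES   = ["-driven", "-capable"]     # hyphixes that qualify adnouns

def buzzphrase() -> str:
    words = []
    if random.random() < 0.5:                      # adverbs modify adjectives
        words.append(random.choice(ADVERBS))
    adjective = random.choice(ADJECTIVES)
    if random.random() < 0.3:                      # prefixes modify adjectives
        adjective = random.choice(PREFIXES) + adjective
    words.append(adjective)
    for _ in range(random.randint(1, 2)):          # adnouns modify nouns or other adnouns
        adnoun = random.choice(ADNOUNS)
        if random.random() < 0.2:                  # suffixes qualify adnouns
            adnoun += random.choice(SUFFIXES)
        words.append(adnoun)
    words.append(random.choice(NOUNS))             # the noun everything else modifies
    return " ".join(words)

if __name__ == "__main__":
    for _ in range(5):                             # keep hitting the button
        print(buzzphrase())
```

Run it a few times and the output lands close to the examples that follow.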

With one adverb, one adjective, two adnouns, a noun and a prefix, you get “backwardly architected hyper-intelligent analysis inference leader.”  With an adjective and two nouns, you get “interactive leverage module.”  Put together buzzphrases of almost any shape and length:

  • “Breakthrough-capable technology market”
  • “Primarily distinguished optional contingency philosophy control power environment”
  • “Executive inference server”
  • “Evidently complete key business manipulation capacity method”
  • “Incrementally intelligent workgroup process topology vendor”

The amazing thing is that all of these sound, as we say in TechnoLatin, “virtually credible.”  And one nice thing about the computer business is — thanks largely to the brain-softening results of prolonged TechnoLatin abuse — “virtually credible” is exactly what it means in plain English: close enough.

BuzzPhraser makes “close enough” easy to reach by substituting guesswork for thinking.  Just keep hitting the button until the right buzzphrase comes along.  Then use that buzzphrase in faith that at least it sounds like you know what you’re saying.  And hey, in this business, isn’t that virtually credible?

Acknowledgements

Thanks to:

Stewart Alsop II, who published “Random Strings of TechnoLatin” along with the original Generic Description Table in both the Preceedings and Proceedings of Agenda 90; and who would like an e-mail front end that automatically discards any message with too many TechnoLatin words and buzzphrases.

Spencer F. Katt of PC Week, who devoted parts of two consecutive rumor columns to the Table, and posted it on the magazine’s CompuServe bulletin board, from which so many people copied it that I thought there might be something going on here.

Guy Kawasaki, who told me “this needs to be a product.”

Bob LeVitus, who told me “you ought to get this hacked into a stack.”

And Ray Miller, who did it.  Beautifully.

Doc Searls
Palo Alto, California
March 7, 1991

Have you ever wondered why you have to consent to terms required by the websites of the world, rather than the other way around? Or why you have no record of what you have accepted or agreed to?

Blame the cookie.

Have you wondered why you have no more privacy on the Web than what other parties grant you (which is none at all), and that you can only opt in or out of choices that others provide—while the only controls you have over your privacy are to skulk around like a criminal (thank you, Edward Snowden and Russell Brand, for that analogy) or to stay offline completely?

Blame the cookie.

And have you paused to wonder why Europe’s GDPR regards you as a mere “data subject” while assuming that the only parties qualified to be “data controllers” and “data processors” are the sites and services of the world, leaving you with little more agency than those sites and services allow, or provide you?

Blame the cookie.

Or why California’s CCPA regards you as a mere “consumer” (not a producer, much less a complete human being), and only gives you the right to ask the sites and services of the world to give back data they have gathered about you, or not to “sell” that personal data, whatever the hell that means?

Blame the cookie.

There are more examples, but you get the point: this situation has become so established that it’s hard to imagine any other way for the Web to operate.

Now here’s another point: it didn’t have to be that way.

The World Wide Web that Tim Berners-Lee invented didn’t have cookies. It also didn’t have websites. It had pages one could publish or read, at any distance across the Internet.

This original Web was simple and peer-to-peer. It was meant to be personal as well, meaning an individual could publish with a server or read with a browser. One could also write pages easily with an HTML editor, which was also easy to invent and deploy.

It should help to recall that the Apache Web server, which has published most of the world’s Web pages across most of the time the Web has been around, was meant originally to work as a personal server. That’s because the original design assumption was that anyone, from individuals to large enterprises, could have a server of their own, and publish whatever they wanted on it. The same went for people reading pages on the Web.

Back in the 90s my own website, searls.com, ran on a box under my desk. It could do that because, even though my connection was just dial-up speed, it was on full time over its own static IP address, which I easily rented from my ISP. In fact, I had sixteen of those addresses, so I could operate another server in my office for storing and transferring articles and columns I wrote to Linux Journal. Every night a cron utility would push what I wrote to the magazine itself. Both servers ran Apache. And none of this was especially geeky. (I’m not a programmer and the only code I know is Morse.)

My point here is that the Web back then was still peer-to-peer and welcoming to individuals who wished to operate at full agency. It even stayed that way through the Age of Blogs in the early ’00s.

But gradually a poison disabled personal agency. That poison was the cookie.

Technically a cookie is a token—a string of text—left by one computer program with another, to help the two remember each other. These are used for many purposes in computing.

But computing for the Web got a special kind of cookie called the HTTP cookie. This, Wikipedia says (at that link)

…is a small piece of data stored on the user‘s computer by the web browser while browsing a website. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user’s browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember pieces of information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers.

It also says,

Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with.

This, however, was not the original idea, which Lou Montulli came up with in 1994. Lou’s idea was just for a server to remember the last state of a browser’s interaction with it. But that one move—a server putting a cookie inside every visiting browser—crossed a privacy threshold: a personal boundary that should have been clear from the start but was not.
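To make the mechanics concrete, here is a minimal sketch—standard-library Python only, with a made-up cookie name and port—of a server planting a cookie in each visiting browser and reading it back on the next visit, which is roughly the kind of remembered state Lou had in mind:

```python
# A minimal sketch of how an HTTP cookie works, using only Python's standard
# library. The cookie name ("visits") and the port are arbitrary choices for
# illustration, not anything from the original post.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

class CookieDemo(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read whatever cookie the browser sends back from its last visit.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        visits = int(cookies["visits"].value) if "visits" in cookies else 0

        self.send_response(200)
        # Set-Cookie is the server planting its reminder in the browser;
        # the browser will return it on every subsequent request.
        self.send_header("Set-Cookie", f"visits={visits + 1}; Path=/")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"You have been here {visits} time(s) before.\n".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CookieDemo).serve_forever()
```

Note where the memory lives: the token rides in the browser, but the server decides what gets set and what it means.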

Once that boundary was crossed, and the number and variety of cookies increased, a snowball started rolling, and whatever chance we had to protect our privacy behind that boundary was lost.

Today that snowball is so large that nearly all personal agency on the Web happens within the separate silos of every website, and is compromised by the countless cookies and other tracking methods used to keep track of, and to follow, the individual.

This is why most of the great stuff you can do on the Web is by grace of Google, Apple, Facebook, Amazon, Twitter, WordPress and countless others, including those third parties.

Bruce Schneier calls this a feudal system:

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals.

Bruce wrote that in 2012, about the time we invested hope in Do Not Track, which was designed as a polite request one could turn on in a browser, and servers could obey.
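It helps to remember how small that mechanism was: Do Not Track was a single HTTP request header, nothing more. A minimal sketch, in standard-library Python, with a placeholder URL:

```python
# Do Not Track was nothing more than a header the browser attached to each
# request. Whether anything honored it was entirely up to the server.
# The URL below is a placeholder, not a site from the post.
from urllib.request import Request, urlopen

req = Request("https://example.com/", headers={"DNT": "1"})  # "1" = please don't track me
with urlopen(req) as resp:
    print(resp.status)  # the response says nothing about whether the request was honored
```

Nothing in the protocol obliged the receiving server to honor it, or even to acknowledge it.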

Alas, the tracking-based online advertising business and its dependents in publishing dismissed Do Not Track with contempt.

Starting in 2013, we serfs fought back, by the hundreds of millions, blocking ads and tracking: the biggest boycott in world history. This, however, did nothing to stop what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity.

Today our poisoned minds can hardly imagine having native capacities of our own that can operate at scale across all the world’s websites and services. To have that ability would also be at odds with the methods and imperatives of personally targeted advertising, which requires cookies and other tracking methods. One of those imperatives is making money: $Trillions of it.

The business itself (aka adtech) is extremely complex and deeply corrupt: filled with fraud, botnets and malware. Most of the money spent on adtech also goes to intermediaries and not to the media you (as they like to say) consume. It’s a freaking fecosystem, and every participant’s dependence on it is extreme.

Take, for example, Vizio TVs. As Samuel Axon puts it in Ars Technica, “Vizio TV buyers are becoming the product Vizio sells, not just its customers”: Vizio’s ads, streaming, and data business grew 133 percent year over year.

Without cookies and the cookie-like trackers by which Vizio and its third parties can target customers directly, that business wouldn’t be there.

As a measure of how far this poisoning has gone, dig this: FouAnalytics’ PageXray says the Ars Technica story above comes to your browser with all this spyware you don’t ask for or expect when you click on that link:

Adserver Requests: 786
Tracking Requests: 532
Other Requests: 112

I’m also betting that nobody reporting for a Condé Nast publication will touch that third rail, which I have been challenging journalists to do in 139 posts, essays, columns and articles, starting in 2008.

(Please prove me wrong, @SamuelAxon—or any reporter other than Farhad Manjoo, who so far is the only journalist from a major publication I know to have bitten the robotic hand that feeds them. I also note that the hand in his case is The New York Times‘, and that it has backed off a great deal in the amount of tracking it does. Hats off for that.)

At this stage of the Web’s moral devolution, it is nearly impossible to think outside the cookie-based fecosystem. If we could, we would get back the agency we lost, and the regulations we’re writing would respect and encourage that agency as well.

But that’s not happening, in spite of all the positive privacy moves Apple, Brave, Mozilla, Consumer Reports, the EFF and others are making.

My hat’s off to all of them, but let’s face it: the poisoning is too far advanced. After fighting it for more than 22 years (dating from publishing The Cluetrain Manifesto in 1999), I’m moving on.

To here.

Historic milestones don’t always line up with large round numbers on our calendars. For example, I suggest that the 1950s ended with the assassination of JFK in late 1963, and the rise of British Rock, led by the Beatles, in 1964. I also suggest that the 1960s didn’t end until Nixon resigned, and disco took off, in 1974.

It has likewise been suggested that the 20th century actually began with the assassination of Archduke Ferdinand and the start of WWI, in 1914. While that and my other claims might be arguable, you might at least agree that there’s no need for historic shifts to align with two or more zeros on a calendar—and that in most cases they don’t.

So I’m here to suggest that the 21st century began in 2020 with the Covid-19 pandemic and the fall of Donald Trump. (And I mean that literally. Social media platforms were the man’s stage, and the whole of them dropped him, as if through a trap door, on the occasion of the storming of the U.S. Capitol by his supporters on January 6, 2021. Whether you liked that or not is beside the facticity of it.)

Things are not the same now. For example, over the coming years, we may never hug, shake hands, or comfortably sit next to strangers again.

But I’m bringing this up for another reason: I think the future we wrote about in The Cluetrain Manifesto, in World of Ends, in The Intention Economy, and in other optimistic expressions during the first two decades of the 21st Century may finally be ready to arrive.

At least that’s the feeling I get when I listen to an interview I did with Christian Einfeldt (@einfeldt) at a San Diego tech conference in April, 2004—and that I just discovered recently in the Internet Archive. The interview was for a film to be called “Digital Tipping Point.” Here are its eleven parts, all just a few minutes long:

01 https://archive.org/details/e-dv038_doc_…
02 https://archive.org/details/e-dv039_doc_…
03 https://archive.org/details/e-dv038_doc_…
04 https://archive.org/details/e-dv038_doc_…
05 https://archive.org/details/e-dv038_doc_…
06 https://archive.org/details/e-dv038_doc_…
07 https://archive.org/details/e-dv038_doc_…
08 https://archive.org/details/e-dv038_doc_…
09 https://archive.org/details/e-dv038_doc_…
10 https://archive.org/details/e-dv039_doc_…
11 https://archive.org/details/e-dv039_doc_…

The title is a riff on Malcolm Gladwell‘s book The Tipping Point, which came out in 2000, same year as The Cluetrain Manifesto. The tipping point I sensed four years later was, I now believe, a foreshadow of now, and only suggested by the successes of the open source movement and independent personal publishing in the form of blogs, both of which I was high on at the time.

What followed in the decade after the interview were the rise of social networks, of smart mobile phones and of what we now call Big Tech. While I don’t expect those to end in 2021, I do expect that we will finally see  the rise of personal agency and of constructive social movements, which I felt swelling in 2004.

Of course, I could be wrong about that. But I am sure that we are now experiencing the millennial shift we expected when civilization’s odometer rolled past 2000.

Northern Red-Tail Hawk

On Quora the question went, If you went from an IQ of 135+ to 100, how would it feel?

Here’s how I answered:

I went through that as a kid, and it was no fun.

In Kindergarten, my IQ score was at the top of the bell curve, and they put me in the smart kid class. By 8th grade my IQ score was down at the middle of the bell curve, my grades sucked, and my other standardized test scores (e.g. the Iowa) were terrible. So the school system shunted me from the “academic” track (aimed at college) to the “general” one (aimed at “trades”).

To the school I was a failure. Not a complete one, but enough of one for the school to give up on aiming me toward college. So, instead of sending me on to a normal high school, they wanted to send me to a “vocational-technical” school where boys learned to operate machinery and girls learned “secretarial” skills.

But in fact the school failed me, as it did countless other kids who adapted poorly to industrialized education: the same industrial system that still has people believing IQ tests are a measure of anything other than how well somebody answers a bunch of puzzle questions on a given day.

Fortunately, my parents believed in me, even though the school had given up. I also believed in myself, no matter what the school thought. Like Walt Whitman, I believed “I was never measured, and never will be measured.” Walt also gifted everyone with these perfect lines (from Song of Myself):

I know I am solid and sound.
To me the converging objects of the universe
perpetually flow.

All are written to me,
and I must get what the writing means…
I know this orbit of mine cannot be swept
by a carpenter’s compass,

I know that I am august,
I do not trouble my spirit to vindicate itself
or be understood.
I see that the elementary laws never apologize.

Whitman argued for the genius in each of us that moves in its own orbit and cannot be encompassed by industrial measures, such as standardized tests that serve an institution that would rather treat students like rats in their mazes than support the boundless appetite for knowledge with which each of us is born—and that we keep if it doesn’t get hammered out of us by normalizing systems.

It amazes me that half a century since I escaped from compulsory schooling’s dehumanizing wringer, the system is largely unchanged. It might even be worse. (“Study says standardized testing is overwhelming nation’s public schools,” writes The Washington Post.)

To detox ourselves from belief in industrialized education, the great teacher John Taylor Gatto gives us The Seven Lesson Schoolteacher, which summarizes what he was actually paid to teach:

  1. Confusion — “Everything I teach is out of context. I teach the un-relating of everything. I teach disconnections. I teach too much: the orbiting of planets, the law of large numbers, slavery, adjectives, architectural drawing, dance, gymnasium, choral singing, assemblies, surprise guests, fire drills, computer languages, parents’ nights, staff-development days, pull-out programs, guidance with strangers my students may never see again, standardized tests, age-segregation unlike anything seen in the outside world….What do any of these things have to do with each other?”
  2. Class position — “I teach that students must stay in the class where they belong. I don’t know who decides my kids belong there but that’s not my business. The children are numbered so that if any get away they can be returned to the right class. Over the years the variety of ways children are numbered by schools has increased dramatically, until it is hard to see the human beings plainly under the weight of numbers they carry. Numbering children is a big and very profitable undertaking, though what the strategy is designed to accomplish is elusive. I don’t even know why parents would, without a fight, allow it to be done to their kids. In any case, again, that’s not my business. My job is to make them like it, being locked in together with children who bear numbers like their own.”
  3. Indifference — “I teach children not to care about anything too much, even though they want to make it appear that they do. How I do this is very subtle. I do it by demanding that they become totally involved in my lessons, jumping up and down in their seats with anticipation, competing vigorously with each other for my favor. It’s heartwarming when they do that; it impresses everyone, even me. When I’m at my best I plan lessons very carefully in order to produce this show of enthusiasm. But when the bell rings I insist that they stop whatever it is that we’ve been working on and proceed quickly to the next work station. They must turn on and off like a light switch. Nothing important is ever finished in my class, nor in any other class I know of. Students never have a complete experience except on the installment plan. Indeed, the lesson of the bells is that no work is worth finishing, so why care too deeply about anything?
  4. Emotional dependency — “By stars and red checks, smiles and frowns, prizes, honors and disgraces I teach kids to surrender their will to the predestined chain of command. Rights may be granted or withheld by any authority without appeal, because rights do not exist inside a school — not even the right of free speech, as the Supreme Court has ruled — unless school authorities say they do. As a schoolteacher, I intervene in many personal decisions, issuing a pass for those I deem legitimate, or initiating a disciplinary confrontation for behavior that threatens my control. Individuality is constantly trying to assert itself among children and teenagers, so my judgments come thick and fast. Individuality is a contradiction of class theory, a curse to all systems of classification.”
  5. Intellectual dependency — “Good people wait for a teacher to tell them what to do. It is the most important lesson, that we must wait for other people, better trained than ourselves, to make the meanings of our lives. The expert makes all the important choices; only I, the teacher, can determine what you must study, or rather, only the people who pay me can make those decisions which I then enforce… This power to control what children will think lets me separate successful students from failures very easily.
  6. Provisional self-esteem — “Our world wouldn’t survive a flood of confident people very long, so I teach that your self-respect should depend on expert opinion. My kids are constantly evaluated and judged. A monthly report, impressive in its provision, is sent into students’ homes to signal approval or to mark exactly, down to a single percentage point, how dissatisfied with their children parents should be. The ecology of “good” schooling depends upon perpetuating dissatisfaction just as much as the commercial economy depends on the same fertilizer.
  7. No place to hide — “I teach children they are always watched, that each is under constant surveillance by myself and my colleagues. There are no private spaces for children, there is no private time. Class change lasts three hundred seconds to keep promiscuous fraternization at low levels. Students are encouraged to tattle on each other or even to tattle on their own parents. Of course, I encourage parents to file their own child’s waywardness too. A family trained to snitch on itself isn’t likely to conceal any dangerous secrets. I assign a type of extended schooling called “homework,” so that the effect of surveillance, if not that surveillance itself, travels into private households, where students might otherwise use free time to learn something unauthorized from a father or mother, by exploration, or by apprenticing to some wise person in the neighborhood. Disloyalty to the idea of schooling is a Devil always ready to find work for idle hands. The meaning of constant surveillance and denial of privacy is that no one can be trusted, that privacy is not legitimate.”

Gatto won multiple teaching awards because he refused to teach any of those lessons. I succeeded in life by refusing to learn them as well.

All of us can succeed by forgetting those seven lessons—especially the one teaching that your own intelligence can be measured by anything other than what you do with it.

You are not a number. You are a person like no other. Be that, and refuse to contain your soul inside any institutional framework.

More Whitman:

Long enough have you dreamed contemptible dreams.
Now I wash the gum from your eyes.
You must habit yourself to the dazzle of the light and of every moment of your life.

Long have you timidly waited,
holding a plank by the shore.
Now I will you to be a bold swimmer,
To jump off in the midst of the sea, and rise again,
and nod to me and shout,
and laughingly dash your hair.

I am the teacher of athletes.
He that by me spreads a wider breast than my own
proves the width of my own.
He most honors my style
who learns under it to destroy the teacher.

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

I concentrate toward them that are nigh.
I wait on the door-slab.

Who has done his day’s work
and will soonest be through with his supper?
Who wishes to walk with me.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

Be that hawk.

For many decades, one of the landmark radio stations in Washington, DC was WMAL-AM (now re-branded WSPN), at 630 on (what in pre-digital times we called) the dial. As AM listening faded, so did WMAL, which moved its talk format to 105.9 FM in Woodbridge and its signal to a less ideal location, far out to the northwest of town.

They made the latter move because the 75 acres of land under the station’s four towers in Bethesda had become far more valuable than the signal. So, like many other station owners with valuable real estate under legacy transmitter sites, Cumulus Media sold the old site for $74 million. Nice haul.

I’ve written at some length about this here and here in 2015, and here in 2016. I’ve also covered the whole topic of radio and its decline here and elsewhere.

I only bring the whole mess up today because it’s a five-year story that ended this morning, when WMAL’s towers were demolished. The Washington Post wrote about it here, and provided the video from which I pulled the screen-grab above. Pedestrians.org also has a much more complete video on YouTube, here. WRC-TV, channel 4, has a chopper view (best I’ve seen yet) here. Spake the Post,

When the four orange and white steel towers first soared over Bethesda in 1941, they stood in a field surrounded by sparse suburbs emerging just north of where the Capital Beltway didn’t yet exist. Reaching 400 feet, they beamed the voices of WMAL 630 AM talk radio across the nation’s capital for 77 years.

As the area grew, the 75 acres of open land surrounding the towers became a de facto park for runners, dog owners and generations of teenagers who recall sneaking smokes and beer at “field parties.”

Shortly after 9 a.m. Wednesday, the towers came down in four quick controlled explosions to make way for a new subdivision of 309 homes, taking with them a remarkably large piece of privately owned — but publicly accessible — green space. The developer, Toll Brothers, said construction is scheduled to begin in 2021.

Local radio buffs say the Washington region will lose a piece of history. Residents say they’ll lose a public play space that close-in suburbs have too little of.

After seeing those towers fall, I posted this to a private discussion among broadcast engineers (a role I once played, briefly and inexpertly, many years ago):

It’s like watching a public execution.

I’m sure that’s how many of us who have spent our lives looking at and maintaining these things feel at a sight like this.

It doesn’t matter that the AM band is a century old, and that nearly all listening today is to other media. We know how these towers make waves that spread like ripples across the land and echo off invisible mirrors in the night sky. We know from experience how the inverse square law works, how nulls and lobes are formed, how oceans and prairie soils make small signals large and how rocky mountains and crappy soils are like mud to a strong signal’s wheels. We know how and why it is good to know these things, because we can see an invisible world where other people only hear songs, talk and noise.

We also know that, in time, all these towers are going away, or repurposed to hold up antennas sending and receiving radio frequencies better suited for carrying data.

We know that everything ends, and in that respect AM radio is no different than any other medium.

What matters isn’t whether it ends with a bang (such as here with WMAL’s classic towers) or with a whimper (as with so many other stations going dark or shrinking away in lesser facilities). It’s that there’s still some good work and fun in the time this old friend still has left.

In the library of Earth’s history, there are missing books, and within books there are missing chapters, written in rock that is now gone. John Wesley Powell recorded the greatest example of gone rock in 1869, on his expedition by boat through the Grand Canyon. Floating down the Colorado River, he saw the canyon’s mile-thick layers of reddish sedimentary rock resting on a basement of gray non-sedimentary rock, the layers of which were cocked at an angle from the flatnesses of every layer above. Observing this, he correctly assumed that the upper layers did not continue from the bottom one, because time had clearly passed between when the basement rock was beveled flat, against its own grain, and when the floors of rock above it were successively laid down. He didn’t know how much time had passed between basement and flooring, and could hardly guess.

The answer turned out to be more than a billion years. The walls of the Grand Canyon say nothing about what happened during that time. Geology calls that nothing an unconformity.

In the decades since Powell made his notes, the same gap has been found all over the world and is now called the Great Unconformity. Because of that unconformity, geology knows close to nothing about what happened in the world through stretches of time up to 1.6 billion years long.

All of those absent records end abruptly with the Cambrian Explosion, which began about 541 million years ago. That’s when the Cambrian period arrived and with it an amplitude of history, written in stone.

Many theories attempt to explain what erased such a large span of Earth’s history, but the prevailing guess is perhaps best expressed in “Neoproterozoic glacial origin of the Great Unconformity”, published on the last day of 2018 by nine geologists writing for the National Academy of Sciences. Put simply, they blame snow. Lots of it: enough to turn the planet into one giant snowball, informally called Snowball Earth. A more accurate name for this time would be Glacierball Earth, because glaciers, all formed from accumulated snow, apparently covered most or all of Earth’s land during the Great Unconformity—and most or all of the seas as well. Every continent was a Greenland or an Antarctica.

The relevant fact about glaciers is that they don’t sit still. They push immensities of accumulated ice down on landscapes and then spread sideways, pulverizing and scraping against adjacent landscapes, bulldozing their ways seaward through mountains and across hills and plains. In this manner, glaciers scraped a vastness of geological history off the Earth’s continents and sideways into ocean basins, where plate tectonics could hide the evidence. (A fact little known outside of geology is that nearly all the world’s ocean floors are young: born in spreading centers and killed by subduction under continents or piled up as debris on continental edges here and there. Example: the Bay Area of California is an ocean floor that wasn’t subducted into a trench.) As a result, the stories of Earth’s missing history are partly told by younger rock that remembers only that a layer of moving ice had erased pretty much everything other than a signature on its work.

I bring all this up because I see something analogous to Glacierball Earth happening right now, right here, across our new worldwide digital sphere. A snowstorm of bits is falling on the virtual surface of our virtual sphere, which itself is made of bits even more provisional and temporary than the glaciers that once covered the physical Earth. Nearly all of this digital storm, vivid and present at every moment, is doomed to vanish because it lacks even a glacier’s talent for accumulation.

The World Wide Web is also the World Wide Whiteboard.

Think about it: there is nothing about a bit that lends itself to persistence, other than the media it is written on. Form follows function; and most digital functions, even those we call “storage”, are temporary. The largest commercial facilities for storing digital goods are what we fittingly call “clouds”. By design, these are built to remember no more of what they once contained than does an empty closet. Stop paying for cloud storage, and away goes your stuff, leaving no fossil imprints. Old hard drives, CDs, and DVDs might persist in landfills, but people in the far future may look at a CD or a DVD the way a geologist today looks at Cambrian zircons: as hints of digital activities that may have happened during an interval about which nothing can ever be known. If those fossils speak of what’s happening now at all, it will be of a self-erasing Digital Earth that was born in the late 20th century.

This theory actually comes from my wife, who has long claimed that future historians will look at our digital age as an invisible one because it sucks so royally at archiving itself.

Credit where due: the Internet Archive is doing its best to make sure that some stuff will survive. But what will keep that archive alive, when all the media we have for recalling bits—from spinning platters to solid-state memory—are volatile by nature?

My own future unconformity is announced by the stack of books on my desk, propping up the laptop on which I am writing. Two of those books are self-published compilations of essays I wrote about technology in the mid-1980s, mostly for publications that are long gone. The originals are on floppy disks that can only be read by PCs and apps of that time, some of which are buried in lower strata of boxes in my garage. I just found a floppy with some of those essays. (It’s the one with a blue edge in the wood case near the right end of the photo above.) If those still retain readable files, I am sure there are ways to recover at least the raw ASCII text. But I’m still betting the paper copies of the books under this laptop will live a lot longer than will these floppies or my mothballed PCs, all of which are likely bricked by decades of un-use.
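If those disks can still be imaged at all (a raw dump with dd from a USB floppy drive, say), pulling the plain text back out is the easy part. Here is a minimal sketch in Python, doing roughly what the Unix strings utility already does; the filenames are hypothetical:

import re
import sys

# Pull runs of printable ASCII (four characters or more) out of a raw
# floppy image. This ignores whatever filesystem or word-processor
# format the disk used and just recovers the readable text.
PRINTABLE = re.compile(rb"[\x20-\x7e\r\n\t]{4,}")

def recover_text(image_path):
    with open(image_path, "rb") as f:
        raw = f.read()
    for run in PRINTABLE.findall(raw):
        yield run.decode("ascii")

if __name__ == "__main__":
    # Usage (hypothetical names): python recover_text.py floppy1986.img > essays.txt
    for chunk in recover_text(sys.argv[1]):
        print(chunk)

What it can’t recover, of course, is anything the magnetic coating has already let go of.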

As for other media, the prospect isn’t any better.

At the base of my video collection is a stratum of VHS videotapes, atop which are strata of MiniDV and Hi8 tapes, and then one of digital stuff burned onto CDs and stored in hard drives, most of which have been disconnected for years. Some of those drives have interfaces and connections (e.g. FireWire) no longer supported by any computers being made today. Although I’ve saved machines to play all of them, none I’ve checked still work. One choked to death on a CD I stuck in it. That was a failure that stopped me from making Christmas presents of family memories recorded on old tapes and DVDs. I meant to renew the project sometime before the following Christmas, but that didn’t happen. Next Christmas? The one after that? I still hope, but the odds are against it.

Then there are my parents’ 8mm and 16mm movies filmed between the 1930s and the 1960s. In 1989, my sister and I had all of those copied over to VHS tape. We then recorded our mother annotating the tapes onto companion cassette tapes while we all watched the show. I still have the original film in a box somewhere, but I haven’t found any of the tapes. Mom died in 2003 at age 90, and her whole generation is now gone.

The base stratum of my audio past is a few dozen open reel tapes recorded in the 1950s and 1960s. Above those are cassette and micro-cassette tapes, plus many Sony MiniDiscs recorded in ATRAC, a proprietary compression algorithm now used by nobody, including Sony. Although I do have ways to play some (but not all) of those, I’m cautious about converting any of them to digital formats (Ogg, MPEG, or whatever), because all digital storage media are likely to become obsolete, dead, or both—as will formats, algorithms, and codecs. Already I have dozens of dead external hard drives in boxes and drawers. And, since no commercial cloud service is committed to digital preservation in the absence of payment, my files saved in clouds are sure to be flushed once my heirs and I stop paying for their preservation. I assume my old open reel and cassette tapes are okay, but I can’t tell right now because both my Sony TCWE-475 cassette deck (high end in its day) and my Akai 202D-SS open-reel deck (a quadraphonic model from the early ’70s) are in need of work, since some of their rubber parts have rotted.
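If I ever do take that plunge, the least risky sketch is also the dullest one: capture to WAV, convert to a lossless and openly documented format such as FLAC, and keep a checksum alongside so later copies can be verified. Something like this, where the ffmpeg call is the plainest possible one and the filename is hypothetical:

import hashlib
import subprocess
from pathlib import Path

def archive_capture(wav_path: Path) -> None:
    """Convert a captured WAV to FLAC (lossless and openly documented)
    and write a SHA-256 checksum so future copies can be verified."""
    flac_path = wav_path.with_suffix(".flac")
    # ffmpeg picks the FLAC encoder from the output extension; no exotic flags.
    subprocess.run(["ffmpeg", "-i", str(wav_path), str(flac_path)], check=True)
    digest = hashlib.sha256(flac_path.read_bytes()).hexdigest()
    Path(str(flac_path) + ".sha256").write_text(f"{digest}  {flac_path.name}\n")

if __name__ == "__main__":
    archive_capture(Path("reel_1962_family.wav"))  # hypothetical capture file

Lossless rather than Ogg or MP3 because nothing gets thrown away that a later format migration might want back.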

The same goes for my photographs. My printed photos—countless thousands of them dating from the late 1800s to 2004—are stored in boxes and albums, along with negatives and Kodak slide carousels. My digital photos are spread across a mess of duplicated backup drives totaling many terabytes, plus a handful of CDs. About 60,000 photos are exposed to the world on Flickr’s cloud, where I maintain two Pro accounts (here and here) for $50/year apiece. More are in the Berkman Klein Center’s pro account (here) and Linux Journal‘s (here). I doubt any of those will survive after those entities stop getting paid their yearly fees. SmugMug, which now owns Flickr, has said some encouraging things about photos such as mine, all of which are Creative Commons-licensed to encourage re-use. But, as Geoffrey West tells us, companies are mortal. All of them die.

As for my digital works as a whole (or anybody’s), there is great promise in what the Internet Archive and Wikimedia Commons do, but there is no guarantee that either will last for decades more, much less for centuries or millennia. And neither is able to archive everything that matters (much as they might like to).

It should also be sobering to recognize that nobody truly “owns” a domain on the internet. All those “sites” with “domains” at “locations” and “addresses” are rented. We pay a sum to a registrar for the right to use a domain name for a finite period of time. There are no permanent domain names or IP addresses. In the digital world, finitude rules.

So the historic progression I see, and try to illustrate in the photo at the top of this post, is from hard physical records through digital ones we hold for ourselves, and then up into clouds… that go away. Everything digital is snow falling and disappearing on the waters of time.

Will there ever be a way to save for the very long term what we ironically call our digital “assets”? Or is all of it doomed by its own nature to disappear, leaving little more evidence of its passage than a Great Digital Unconformity, when everything was forgotten?

I can’t think of any technical questions more serious than those two.


The original version of this post appeared in the March 2019 issue of Linux Journal.

In The Web and the New Reality, which I posted on December 1, 1995 (and again a few days ago), I called that date “Reality 1.995.12,” and made twelve predictions. In this post I’ll visit how those have played out over the quarter century since then.

1. As more customers come into direct contact with suppliers, markets for suppliers will change from target populations to conversations.

Well, both. While there are many more direct conversations between demand and supply than there were in the pre-Internet world, we are more targeted than ever, now personally and not just as populations. This has turned into a gigantic problem that many of us have been talking about for a decade or more, to sadly insufficient effect.

2. Travel, ticket, advertising and PR agencies will all find new ways to add value, or they will be subtracted from market relationships that no longer require them.

I don’t recall why I grouped those four things, so let’s break them apart:

  • Little travel agencies went to hell. Giant Net-based ones thrived. See here.
  • Tickets are now almost all digital. I don’t know what a modern ticket agency does, if any exist.
  • Advertising agencies went digital and became malignant. I’ve written about that a lot, here. All of those writings could be compressed to a pull quote from Separating Advertising’s Wheat and Chaff: “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.”
  • PR agencies, as far as I know (and I haven’t looked very far), are about the same.

3. Within companies, marketing communications will change from peripheral activities to core competencies. New media will flourish on the Web, and old media will learn to live with the Web and take advantage of it.

If we count the ascendance of the Chief Marketing Officer (CMO) as a success, this was a bulls-eye. However, most CMOs are all about “digital,” by which they generally mean direct response marketing. And if you didn’t skip to this item you know what I think about that.

4. Retail space will complement cyber space. Customer and technical service will change dramatically, as 800 numbers yield to URLs and hard copy documents yield to soft copy versions of the same thing… but in browsable, searchable forms.

Yep. All that happened.

5. Shipping services of all kinds will bloom. So will fulfillment services. So will ticket and entertainment sales services.

That too.

The web’s search engines will become the new yellow pages for the whole world. Your fingers will still do the walking, but they won’t get stained with ink. Same goes for the white pages. Also the blue ones.

And that.

6. The scope of the first person plural will enlarge to include the whole world. “We” may mean everybody on the globe, or any coherent group that inhabits it, regardless of location. Each of us will swing from group to group like monkeys through trees.

Oh yeah.

7. National borders will change from barricades and toll booths into speed bumps and welcome mats.

Mixed success. When I wrote this, nearly all Internet access was through telcos, so getting online away from home still required a local phone number. That’s pretty much gone. But the Internet itself is being broken into pieces. See here.

8. The game will be over for what teacher John Taylor Gatto labels “the narcotic we call television.” Also for the industrial relic of compulsory education. Both will be as dead as the mainframe business. In other words: still trucking, but not as the anchoring norms they used to be.

That hasn’t happened; but self-education, home-schooling and online study of all kinds are thriving.

9. Big Business will become as anachronistic as Big Government, because institutional mass will lose leverage without losing inertia.

Well, this happened. So, no.

10. Domination will fail where partnering succeeds, simply because partners with positive sums will combine to outproduce winners and losers with zero sums.

Here’s what I meant by that. I think more has happened than hasn’t. But visiting the particulars requires a whole ’nuther post.

11. Right will make might.

Nope. And this one might never happen. Hey, in 25 years one tends to become wiser.

12. And might will be mighty different.

That’s true, and in some ways that depresses me.

So, on the whole, not bad.
