The choice above is one I pose at the end of a 20-minute audioblog I recorded for today. Here it is in .mp4:
And, if that fails, here it is in .mp3:
The graphic represents the metaphor I use to frame that choice.
There is latency to everything. Pain, for example. Nerve impulses from pain sensors travel at about two feet per second. That’s why we wait for the pain when we stub a toe. The crack of a bat on a playing field takes half a second before we hear it in the watching crowd. The sunlight we see on Earth is eight minutes old. Most of this doesn’t matter to us, or if it does we adjust to it.
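The figures above are easy to sanity-check. Here is a quick back-of-envelope sketch in Python; the speeds and distances are standard rounded values, used only for illustration:

```python
# Rough checks for the latencies above. All constants are rounded
# textbook values, not measurements.

SPEED_OF_SOUND_M_S = 343          # in air at about 20 °C
SPEED_OF_LIGHT_M_S = 299_792_458
EARTH_SUN_DISTANCE_M = 1.496e11   # one astronomical unit

# A bat crack heard half a second later puts the crowd roughly this far away:
distance_to_bat_m = SPEED_OF_SOUND_M_S * 0.5                          # ~170 m

# Sunlight's travel time from Sun to Earth:
sunlight_delay_min = EARTH_SUN_DISTANCE_M / SPEED_OF_LIGHT_M_S / 60   # ~8.3 min

print(f"{distance_to_bat_m:.0f} m, {sunlight_delay_min:.1f} min")
```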
Likewise with how we adjust to the inverse square law. That law is why the farther away something is, the smaller it looks or the fainter it sounds. How much smaller or fainter is something we intuit more than we calculate. What matters is that we understand the law with our bodies. In fact we understand pretty much everything with our bodies.
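The law itself fits in a few lines: a point source's output spreads over a sphere, so apparent intensity falls with the square of distance. A generic illustration (the numbers are arbitrary examples):

```python
import math

def apparent_intensity(source_power: float, distance: float) -> float:
    """Power per unit area at `distance` from a point source,
    spread over a sphere of that radius."""
    return source_power / (4 * math.pi * distance ** 2)

near = apparent_intensity(100.0, 1.0)
far = apparent_intensity(100.0, 2.0)

# Twice as far away, one quarter as bright:
print(round(near / far, 6))  # 4.0
```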
All our deepest, most unconscious metaphors start with our bodies. That’s why we grasp, catch, toss around, or throw away an idea. It’s also why nearly all our prepositions pertain to location or movement. Over, under, around, through, with, beside, within, alongside, on, off, above and below only make sense to us because we have experienced them with our bodies.
So: How are we to make full sense of the Web, or the Internet, where we are hardly embodied at all?
We may say we are on the Web, because we need it to make sense to us as embodied beings. Yet we are only looking at a manifestation of it.
The “it” is the hypertext protocol (http) that Tim Berners-Lee thought up in 1990 so high energy physicists, scattered about the world, could look at documents together. That protocol ran on another one: TCP/IP. Together they were mannered talk among computers about how to show the same document across any connection over any collection of networks between any two end points, regardless of who owned or controlled those networks. In doing so, Tim rubbed a bottle of the world’s disparate networks. Out popped the genie we call the Web, ready to grant boundless wishes that only began with document sharing.
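What that "mannered talk among computers" looks like at the bottom is surprisingly plain. A minimal sketch, speaking HTTP/1.1 directly over a TCP connection using only Python's standard library (the host name is an example, and real-world fetching needs far more care than this):

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    """Compose a bare HTTP/1.1 GET request: the mannered talk, as text."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def http_get(host: str, path: str = "/") -> bytes:
    """Fetch a document by sending that request over TCP, port 80."""
    with socket.create_connection((host, 80), timeout=10) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

# response = http_get("example.com")  # status line, headers, then the document
```

The request is just text; TCP/IP carries it across any collection of networks, regardless of who owns them, which is the whole point.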
This was a miracle beyond the scale of loaves and fish: one so new and so odd that the movie Blade Runner, which imagined in 1982 that Los Angeles in 2019 would feature floating cars, off-world colonies and human replicants, failed to foresee a future when anyone could meet with anyone else, or any group, anywhere in the world, on wish-granting slabs they could put on their desks, laps, walls or hold in their hands. (Instead Blade Runner imagined there would still be pay phones and computers with vacuum tubes for screens.)
This week I attended Web Science 20 on my personal slab in California, instead of at the University of Southampton in the UK, as originally planned. It was still a conference, but now a virtual one, comprised of many people on many slabs, all over the world, each with no sense of distance any more meaningful than those imposed by the inconvenience of time zones.
Joyce (my wife, who is also the source of much wisdom for which her husband gets the credit) says our experience on the Web is one of absent distance and gravity—and that this experience is still so new to us that we have only begun to make full sense of it as embodied creatures. We’ll adjust, she says, much as astronauts adjust to the absence of gravity; but it will take more time than we’ve had so far. We may become expert at using the likes of Zoom, but that doesn’t mean we operate in full comprehension of the new digital environment we co-occupy.
My own part in WebSci20 was talking with five good people, plus others asking questions in a chat, during the closing panel of the conference. (That’s us, at the top of this post.) The title of our session was The Future of Web Science. To prep for that session I wrote the first draft of what follows: a series of thoughts I hoped to bring up in the session, and some of which I actually did.
The first thought is the one I just introduced: The Web, like the Net it runs on, is both new and utterly vexing toward understanding in terms we’ve developed for making sense of embodied existence.
Here are some more.
The Web is a whiteboard.
In the beginning we thought of the Web as something of a library, mostly because it was comprised of sites with addresses and pages that were authored, published, syndicated, browsed and read. A universal resource locator, better known as a URL, would lead us through what an operating system calls a path or a directory, much as a card catalog did before library systems went digital. It also helped that we understood the Web as real estate, with sites and domains that one owned and others could visit.
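The card-catalog resemblance is visible in the URL itself. Python's standard library pulls one apart into the same parts the library metaphor maps onto (the URL below is made up for illustration):

```python
from urllib.parse import urlparse

# A hypothetical URL, split into scheme, host, and path.
parts = urlparse("https://example.edu/stacks/philosophy/bullshit.html")

print(parts.scheme)  # 'https': the protocol spoken
print(parts.netloc)  # 'example.edu': the "real estate"
print(parts.path)    # '/stacks/philosophy/bullshit.html': the directory path
```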
The metaphor of the Web as a library, though useful, also misdirects our attention and understanding away from its nature as a collection of temporary manifestations. Because, for all we attempt to give the Web a sense of permanence, it is evanescent, temporary, ephemeral. We write and publish there as we might on snow, sand or a whiteboard. Even the websites we are said to “own” are in fact only rented. Fail to pay the registrar and off they go.
The Web is not what’s on it.
It is not Google, or Facebook, dot-anything or dot-anybody. It is the manifestation of documents and other non-stuff we call “content,” presented to us in browsers and whatever else we invent to see and deal with what the hypertext protocol makes possible. Here is how David Weinberger and I put it in World of Ends, more than seventeen years ago:
1. The Internet isn’t complicated
2. The Internet isn’t a thing. It’s an agreement.
3. The Internet is stupid.
4. Adding value to the Internet lowers its value.
5. All the Internet’s value grows on its edges.
6. Money moves to the suburbs.
7. The end of the world? Nah, the world of ends.
8. The Internet’s three virtues:
a. No one owns it
b. Everyone can use it
c. Anyone can improve it
9. If the Internet is so simple, why have so many been so boneheaded about it?
10. Some mistakes we can stop making already
That was a follow-up of sorts to The Cluetrain Manifesto, which we co-wrote with two other guys four years earlier. We followed up both five years ago with an appendix to Cluetrain called New Clues. While I doubt we’d say any of that stuff the same ways today, the heart of it beats the same.
The Web is free.
The online advertising industry likes to claim the “free Internet” is a grace of advertising that is “relevant,” “personalized,” “interest-based,” “interactive” and other adjectives that misdirect us away from what those forms of advertising actually do, which is track us like marked animals.
That claim, of course, is bullshit. Here’s what Harry Frankfurt says about that in his canonical work, On Bullshit (Princeton University Press, 2005): “The realms of advertising and public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.” Boiled down, bullshit is what Wikipedia (at the moment, itself being evanescent) calls “speech intended to persuade without regard for truth.” Another distinction: “The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care if what they say is true or false, but rather only cares whether their listener is persuaded.”
Consider for a moment Win Bigly: Persuasion in a World Where Facts Don’t Matter, a 2017 book by Scott Adams that explains, among other things, how a certain U.S. tycoon got his ass elected President. The world Scott talks about is the Web.
Nothing in the history of invention is more supportive of bullshit than the Web. Nor is anything more supportive of truth-telling, education and damned near everything else one can do in the civilized world. And we’re only beginning to discover and make sense of all those possibilities.
We’re all digital now.
Meaning not just physical. This is what’s new, not just to human experience, but to human existence.
Marshall McLuhan called our technologies, including our media, extensions of our bodily selves. Consider how, when you ride a bike or drive a car, those are your wheels and your brakes. Our senses extend outward to suffuse our tools and other technologies, making them parts of our larger selves. Michael Polanyi called this process indwelling.
Think about how, although we are not really on or through the Web, we do dwell in it when we read, write, speak, watch and perform there. That is what I am doing right now, while I type what I see on a screen in San Marino, California, as a machine, presumably in Cambridge, Massachusetts, records my keystrokes and presents them back to me, and now you are reading it, somewhere else in (or on, or choose your preposition) the world. Dwell may be the best verb for what each of us is doing in the non-here we all co-occupy in this novel (to the physical world) non-place and time.
McLuhan also said media revolutions are formal causes. Meaning that they form us. (He got that one from Aristotle.) In different ways we were formed and re-formed by speech, writing, printing, and radio and television broadcasting.
I submit that we are far more formed by digital technologies, and especially by the Internet and the Web, than by any other prior technical revolution. (A friend calls our current revolution “the biggest thing since oxygenation.”)
But this is hard to see because, as McLuhan puts it, every one of these major revolutions becomes a ground on which everything else dances as figures. But it is essential to recognize that the figures are not the ground. This, I suggest, is the biggest challenge for Web Science.
It’s damned hard to study ground-level formal causes such as digital tech, the Net and the Web. Because what they are technically is not what they do formally. They are rising tides that float all boats, oblivious to the boats themselves.
I could say more, and I’m sure I will; but I want to get this much out there before the panel.
I posted this essay in my own pre-blog, Reality 2.0, on December 1, 1995. I think maybe now, in this long moment after we’ve hit a pause button on our future, we can start working on making good the unfulfilled promises that first gleamed in our future a quarter century ago.
The import of the Internet is so obvious and extreme that it actually defies valuation: witness the stock market, which values Netscape so far above that company’s real assets and earnings that its P/E ratio verges on the infinite.
Whatever we’re driving toward, it is very different from anchoring certainties that have grounded us for generations, if not for the duration of our species. It seems we are on the cusp of a new and radically different reality. Let’s call it Reality 2.0.
The label has a millennial quality, and a technical one as well. If Reality 2.0 is Reality 2.000, this month we’re in Reality 1.995.12.
With only a few revisions left before Reality 2.0 arrives, we’re in a good position to start seeing what awaits. Here are just a few of the things this writer is starting to see…
The Web is the board for a new game Phil Salin called “Polyopoly.” As Phil described it, Polyopoly is the opposite of Monopoly. The idea is not to win a fight over scarce real estate, but to create a farmer’s market for the boundless fruits of the human mind.
It’s too bad Phil didn’t live to see the web become what he (before anyone, I believe) hoped to create with AMIX: “the first efficient marketplace for information.” The result of such a marketplace, Phil said, would be polyopoly.
In Monopoly, what mattered were the three Ls of real estate: “location, location and location.”
On the web, location means almost squat.
What matters on the web are the three Cs: content, connections and convenience. These are what make your home page a door the world beats a path to when it looks for the better mouse trap that only you sell. They give your webfront estate its real value.
If commercial interests have their way with the Web, we can also add a fourth C: cost. But how high can costs go in a polyopolistic economy? Not very. Because polyopoly creates…
The goods of Polyopoly and Monopoly are as different as love and lug nuts. Information is made by minds, not factories; and it tends to make itself abundant, not scarce. Moreover, scarce information tends to be worthless information.
Information may be bankable, but traditional banking, which secures and contains scarce commodities (or their numerical representations) does not respect the nature of information.
Because information abhors scarcity. It loves to reproduce, to travel, to multiply. Its natural habitats are wires and airwaves and disks and CDs and forums and books and magazines and web pages and hot links and chats over cappuccinos at Starbucks. This nature lends itself to polyopoly.
Polyopoly’s rules are hard to figure because the economy we are building with it is still new, and our vocabulary for describing it is sparse.
This is why we march into the Information Age hobbled by industrial metaphors. The “information highway” is one example. Here we use the language of freight forwarding to describe the movement of music, love, gossip, jokes, ideas and other communicable forms of knowledge that grow and change as they move from mind to mind.
We can at least say that knowledge, even in its communicable forms, is not reducible to data. Nor is the stuff we call “intellectual property.” A song and a bank account do not propagate the same ways. But we are inclined to say they do (and should), because we describe both with the same industrial terms.
All of which is why there is no more important work in this new economy than coining the new terms we use to describe it.
The best place to start looking for help is at the dawn of the Industrial Age. Because this was when the Age of Reason began. Nobody knew more about the polyopoly game — or played it — better than those champions of reason from whose thinking our modern republics are derived: Thomas Paine, Thomas Jefferson and Benjamin Franklin.
As Jon Katz says in “The Age of Paine” (Wired, May 1995), Thomas Paine was the “moral father of the Internet.” Paine said “my country is the world,” and sought as little compensation as possible for his work, because he wanted it to be inexpensive and widely read. Paine’s thinking still shapes the politics of the U.S., England and France, all of which he called home.
Thomas Jefferson wrote the first rule of Polyopoly: “He who receives an idea from me receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”
He also left a live bomb for modern intellectual property law: “Inventions then cannot, in nature, be a subject of property.” The best look at the burning fuse is John Perry Barlow’s excellent essay “The Economy of Ideas,” in the March 1994 issue of Wired. (I see that Jon Katz repeats it in his paean to Paine. Hey, if someone puts it to song, who gets the rights?)
If Paine was the moral father of the Internet, Ben Franklin’s paternity is apparent in Silicon Valley. Today he’d fit right in, inventing hot products, surfing the Web and spreading his wit and wisdom like a Johnny Cyberseed. Hell, he even has the right haircut.
Franklin left school at 10 and was barely 15 when he ran his brother’s newspaper, writing most of its content and getting quoted all over Boston. He was a self-taught scientist and inventor while still working as a writer and publisher. He also found time to discover electricity, create the world’s first postal service, invent a heap of handy products and serve as a politician and diplomat.
Franklin’s biggest obsession was time. He scheduled and planned constantly. He even wrote his famous epitaph when he was 22, six decades before he died. “The work shall not be lost,” it reads, “for it will (as he believed) appear once more in a new and more elegant edition, revised and edited by the author.”
One feels the ghost of Franklin today, editing the web.
Combine Jefferson and Franklin and you get the two magnetic poles that tug at every polyopoly player: information that only gets more abundant, and time that only gets more scarce.
As Alain Couder of Groupe Bull puts it, “we treat time as a constant in all these formulas — revolutions per minute, instructions per second — yet we experience time as something that constantly decreases.”
After all, we’re born with an unknown sum of time, and we need to spend it all before we die. The notion of “saving” it is absurd. Time can only be spent.
So: to play Polyopoly well, we need to waste as little time as possible. This is not easy in a world where the sum of information verges on the infinite.
Which is why I think Esther Dyson might be our best polyopoly player.
“There’s too much noise out there anyway,” she says in ‘Esther Dyson on DaveNet’ (12/1/94). “The new wave is not value added, it’s garbage-subtracted.”
Here’s a measure of how much garbage she subtracts from her own life: her apartment doesn’t even have a phone.
Can she play this game, or what?
I wouldn’t bother to ask Esther if she watches television, or listens to the radio. I wouldn’t ask my wife, either. To her, television is exactly what Fred Allen called it forty years ago: “chewing gum for the eyes.” Ours heats up only for natural disasters and San Jose Sharks games.
Dean Landsman, a sharp media observer from the broadcast industry, tells me that John Grisham books are cutting into time that readers would otherwise spend watching television. And that’s just the beginning of a tide that will swell as every medium’s clients weigh more carefully what they do with their time.
Which is why it won’t be long before those clients wad up their television time and stick it under their computer. “Media will eat media,” Dean says.
The computer is looking a lot hungrier than the rest of the devices out there. Next to connected computing, television is AM radio.
Fasten your seat belts.
Think of the Industrial world — the world of Big Business and Big Government — as a modern Roman Empire.
Now think of Bill Gates as Attila the Hun.
Because that’s exactly how Bill looks to the Romans who still see the web, and everything else in the world, as a monopoly board. No wonder Bill doesn’t have a senator in his pocket (as Mark Stahlman told us in ‘Off to the Slaughter House,’ DaveNet, 3/14/94).
Sadly for the Romans, their empire is inhabited almost entirely by Huns, all working away on their PCs. Most of those Huns don’t have a problem with Bill. After all, Bill does a fine job of empowering his people, and they keep electing him with their checkbooks, credit cards and purchase orders.
Which is why, when they go forth to tame the web, these tough-talking Captains of Industry and Leaders of Government look like animated mannequins in Armani Suits: clothes with no emperor. Their content is emulation. They drone about serving customers and building architectures and setting standards and being open and competing on level playing fields. But their game is still control, no matter what else they call it.
Bill may be our emperor, but ruling Huns is not the same as ruling Romans. You have to be naked as a fetus and nearly as innocent. Because polyopoly does not reward the dark tricks that used to work for industry, government and organized crime. Those tricks worked in a world where darkness had leverage, where you could fool some of the people some of the time, and that was enough.
But polyopoly is a positive-sum game. Its goods are not produced by huge industries that control the world, but by smart industries that enable the world’s inhabitants. Like the PC business that thrives on it, information grows up from individuals, not down from institutions. Its economy thrives on abundance rather than scarcity. Success goes to enablers, not controllers. And you don’t enable people by fooling them. Or by manipulating them. Or by muscling them.
In fact, you don’t even play to win. As Craig Burton of The Burton Group puts it, “the goal isn’t win/win, it’s play/play.”
This is why Bill does not “control” his Huns the way IBM controlled its Romans. Microsoft plays by winning support, where IBM won by dominating the play. Just because Microsoft now holds a controlling position does not mean that a controlling mentality got them there. What I’ve seen from IBM and Apple looks far more Monopoly-minded and controlling than anything I’ve seen from Microsoft.
Does this mean that Bill’s manners aren’t a bit Roman at times? No. Just that the support Microsoft enjoys is a lot more voluntary on the part of its customers, users and partners. It also means that Microsoft has succeeded by playing Polyopoly extremely well. When it tries to play Monopoly instead, the Huns don’t like it. Bill doesn’t need the Feds to tell him when that happens. The Huns tell him soon enough.
No matter how Roman Bill’s fantasies might become, he knows his position is hardly more substantial than a conversation. In fact, it IS a conversation.
I would bet that Microsoft is engaged in more conversations, more of the time, with more customers and partners, than any other company in the world. Like or hate their work, the company connects. I submit that this, as much as anything else, accounts for its success.
In the Industrial Age, a market was a target population. Goods rolled down a “value chain” that worked like a conveyor belt. Raw materials rolled into one end and finished products rolled out the other. Customers bought the product or didn’t, and customer feedback was limited mostly to the money it spent.
To encourage customer spending, “messages” were “targeted” at populations, through advertising, PR and other activities. The main purpose of these one-way communications was to stimulate sales. That model is obsolete. What works best today is what Normann & Ramirez (Harvard Business Review, June/July 1993) call a “value constellation” of relationships that include customers, partners, suppliers, resellers, consultants, contractors and all kinds of people.
The Web is the star field within which constellations of companies, products and markets gather themselves. And what binds them together, in each case, are conversations.
What we’re creating here is a new economy — an information economy.
Behind the marble columns of big business and big government, this new economy stands in the lobby like a big black slab. The primates who work behind those columns don’t know what this thing is, but they do know it’s important and good to own. The problem is, they can’t own it. Nobody can. Because it defies the core value in all economies based on physical goods: scarcity.
Scarcity ruled the stone hearts and metal souls of every zero-sum value system that ever worked — usually by producing equal quantities of gold and gore. And for dozens of millennia, we suffered with it. If Tribe A crushed Tribe B, it was too bad for Tribe B. Victors got the spoils.
This win/lose model has been in decline for some time. Victors who used to get spoils now just get responsibilities. Cooperation and partnership are now more productive than competition and domination. Why bomb your enemy when you can get him on the phone and do business with him? Why take sides when the members of “us” and “them” constantly change?
The hard evidence is starting to come in. A recent Wharton Impact report said, “Firms which specified their objectives as ‘beating our competitors’ or ‘gaining market share’ earned substantially lower profits over the period.” We’re reading stories about women-owned businesses doing better, on the whole, because women are better at communicating and less inclined to waste energy by playing sports and war games in their marketplaces.
From the customer’s perspective, what we call “competition” is really a form of cooperation that produces abundant choices. Markets are created by addition and multiplication, not just by subtraction and division.
In my old Mac IIci, I can see chips and components from at least 11 different companies and 8 different countries. Is this evidence of war among Apple’s suppliers? Do component vendors succeed by killing each other and limiting choices for their customers? Did Apple’s engineers say, “Gee, let’s help Hitachi kill Philips on this one?” Were they cheering for one “side” or another? The answer should be obvious.
But it isn’t, for two reasons. One is that the “Dominator Model,” as anthropologist (and holocaust survivor) Riane Eisler calls it, has been around for 20,000 years, and until recently has reliably produced spoils for victors. The other is that conflict always makes great copy. To see how seductive conflict-based thinking is, try to find a hot business story that isn’t filled with sports and war metaphors. It isn’t easy.
Bound by the language of conflict, most of us still believe that free enterprise runs on competition between “sides” driven by urges to dominate, and that the interests of those “sides” are naturally opposed.
To get to the truth here, just ask this: which has produced more — the U.S. vs. Japan, or the U.S. + Japan? One produced World War II and a lot of bad news. The other produced countless marvels — from cars to consumer electronics — on which the whole world depends.
Now ask this: which has produced more — Apple vs. Microsoft or Apple + Microsoft? One profited nobody but the lawyers, and the other gave us personal computing as we know it today.
What brings us to Reality 2.0 is the Plus Paradigm.
The Plus Paradigm says that our world is a positive construction, and that the best games produce positive sums for everybody. It recognizes the power of information and the value of abundance. (Think about it: the best information may have the highest power to abound, and its value may vary as the inverse of its scarcity.)
Over the last several years, mostly through discussions with client companies that are struggling with changes that invalidate long-held assumptions, I have built a table of old (Reality 1.0) vs. new (Reality 2.0) paradigms. The difference between these two realities, one client remarked, is that the paradigm on the right is starting to work better than the paradigm on the left.
| Paradigm | Reality 1.0 | Reality 2.0 |
|---|---|---|
| Means to ends | Domination | Partnership |
| Cause of progress | Competition | Collaboration |
| Center of interest | Personal | Social |
| Concept of systems | Closed | Open |
| Source of leverage | Monopoly | Polyopoly |
| Scope of self-interest | Self/Nation | Self/World |
| Source of power | Might | Right |
| Source of value | Scarcity | Abundance |
| Stage of growth | Child (selfish) | Adult (social) |
| Reference valuables | Metal, Money | Life, Time |
| Purpose of boundaries | Protection | Limitation |
Changes across the paradigms show up as positive “reality shifts.” The shift is from OR logic to AND logic, from Vs. to +:
| Reality 1.0 | Reality 2.0 |
|---|---|
| Man vs nature | Man + nature |
| Labor vs management | Labor + management |
| Public vs private | Public + private |
| Men vs women | Men + women |
| Us vs them | Us + them |
| Majority vs minority | Majority + minority |
| Party vs party | Party + party |
| Urban vs rural | Urban + rural |
| Black vs white | Black + white |
| Business vs govt. | Business + govt. |
The Plus Paradigm comprehends the world as a positive construction, and sees that the best games produce positive sums for everybody.
For more about this whole way of thinking, see Bernie DeKoven’s ideas about “the ME/WE” at his “virtual playground.”
This may sound sappy, but information works like love: when you give it away, you still get to keep it. And when you give it back, it grows.
Which has always been the case. But in Reality 2.0, it should become a lot more obvious.
This is the Ostrom Memorial Lecture I gave on 9 October of last year for the Ostrom Workshop at Indiana University. Here is the video. (The intro starts at 8 minutes in, and my part starts just after 11 minutes in.) I usually speak off the cuff, but this time I wrote it out, originally in outline form*, which is germane to my current collaborations with Dave Winer, father of outlining software (and, in related ways, of blogging and podcasting). So here ya go.
The movie Blade Runner was released in 1982; and was set in a future Los Angeles. Anyone here know when in the future Blade Runner is set? I mean, exactly?
The year was 2019. More precisely, next month: November.
In Blade Runner’s 2019, Los Angeles is a dark and rainy hellscape with buildings the size of mountains, flying cars, and human replicants working on off-world colonies. It also has pay phones and low-def computer screens that are vacuum tubes.
Missing is a communication system that can put everyone in the world at zero distance from everyone else, in disembodied form, at almost no cost—a system that lives on little slabs in people’s pockets and purses, and on laptop computers far more powerful than any computer, of any size, from 1982.
In other words, this communication system—the Internet—was less thinkable in 1982 than flying cars, replicants and off-world colonies. Rewind the world to 1982, and the future Internet would appear a miracle dwarfing the likes of loaves and fish.
In economic terms, the Internet is a common pool resource; but non-rivalrous and non-excludable to such an extreme that to call it a pool or a resource is to insult what makes it common: that it is the simplest possible way for anyone and anything in the world to be present with anyone and anything else in the world, at costs that can round to zero.
As a commons, the Internet encircles every person, every institution, every business, every university, every government, every thing you can name. It is no less exhaustible than presence itself. By nature and design, it can’t be tragic, any more than the Universe can be tragic.
There is also only one of it. As with the universe, it has no other examples.
As a source of abundance, the closest thing to an example the Internet might have is the periodic table. And the Internet might be even more elemental than that: so elemental that it is easy to overlook the simple fact that it is the largest goose ever to lay golden eggs.
It can, however, be misunderstood, and that’s why it’s in trouble.
The trouble it’s in is with human nature: the one that sees more value in the goose’s eggs than in the goose itself.
See, the Internet is designed to support every possible use, every possible institution, and—alas—every possible restriction, which is why enclosure is possible. People, institutions and possibilities of all kinds can be trapped inside enclosures on the Internet. I’ll describe nine of them.
The first enclosure is service provisioning, for example with asymmetric connection speeds. On cable connections you may have up to 400 megabits per second downstream, but still only 10 megabits per second—one fortieth of that—upstream. (By the way this is exactly what Spectrum, formerly Time Warner Cable, provides with its most expensive home service to customers in New York City.)
They do that to maximize consumption while minimizing production by those customers. You can consume all the video you want, and think you’re getting great service. But meanwhile this asymmetrical provisioning prevents production at your end. Want to put out a broadcast or a podcast from your house, to run your own email server, or to store your own video or other personal data in your own personal “cloud”? Forget it.
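The arithmetic behind that asymmetry is stark. A quick sketch, using the 400/10 megabit figures above and a hypothetical 5-gigabyte video (the file size is my example, not from the original):

```python
# Time to move a 5 GB video over the asymmetric cable plan described
# above: 400 Mbps downstream, 10 Mbps upstream. Protocol overhead is
# ignored for simplicity.

FILE_BITS = 5 * 8 * 10**9                 # 5 gigabytes, as bits

download_s = FILE_BITS / (400 * 10**6)    # receiving: 100 seconds
upload_s = FILE_BITS / (10 * 10**6)       # sending: 4000 seconds (~67 minutes)

print(download_s, upload_s, upload_s / download_s)  # the ratio is the 40x above
```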
The Internet was designed to support infinite production by anybody of anything. But cable TV companies don’t want you to have that power. So you don’t. The home Internet you get from your cable company is nice to have, but it’s not the whole Internet. It’s an enclosed subset of capabilities biased by and for the cable company and large upstream producers of “content.”
So, it’s golden eggs for them, but none for you. Also missing are all the golden eggs you might make possible for those companies as an active producer rather than as a passive consumer.
The second enclosure is through 5G wireless service, currently promoted by phone companies as a new generation of Internet service. The companies deploying 5G promise greater speeds and lower lag times over wireless connections; but it is also clear that they want to build in as many choke points as they like, all so you can be billed for as many uses as possible.
You want gaming? Here’s our gaming package. You want cloud storage? Here’s our cloud storage package. Each of these uses will carry terms and conditions that allow some uses and prevent others. Again, this is a phone company enclosure. No cable companies are deploying 5G. They’re fine with their own enclosure.
The third enclosure is government censorship. The most familiar example is China’s. In China’s closed Internet you will find no Google, Facebook, Twitter, Instagram or Reddit. No Pandora, Spotify, Slack or Dropbox. What you will find is pervasive surveillance of everyone and everything—and ranking of people in its Social Credit System.
By March of this year, China had already punished 23 million people with low social credit scores by banning them from traveling. Control of speech has also spread to U.S. companies such as the NBA and ESPN, which are now censoring themselves as well, bowing to the wishes of the Chinese government and their own captive business partners.
The fourth enclosure is the advertising-supported commercial Internet. This is led by Google and Facebook, but also includes all the websites and services that depend on tracking-based advertising. This form of advertising, known as adtech, has in the last decade become pretty much the only kind of advertising online.
Today there are very few major websites left that don’t participate in what Shoshana Zuboff calls surveillance capitalism, and what Brett Frischmann and Evan Selinger, in their book by that title, call re-engineering humanity. Surveillance of individuals online is now so deep and widespread that nearly every news organization is either unaware of it or afraid to talk about it—in part because the advertising they run is aimed by it.
That’s why you’ll read endless stories about how bad Facebook and Google are, and how awful it is that we’re all being tracked everywhere like marked animals; but almost nothing about how the sites publishing stories about tracking also participate in exactly the same business—and far more surreptitiously. Reporting on their own involvement in the surveillance business is a third rail they won’t grab.
I know of only one magazine that took and shook that third rail, especially in the last year and a half. That magazine was Linux Journal, where I worked for 24 years and was serving as editor-in-chief when it was killed by its owner in August. At least indirectly that was because we didn’t participate in the surveillance economy.
The fifth enclosure is protectionism. In Europe, for example, your privacy is protected by laws meant to restrict personal data use by companies online. As a result in Europe, you won’t see the Los Angeles Times or the Washington Post in your browsers, because those publishers don’t want to cope with what’s required by the EU’s laws.
While they are partly to blame—because they wish to remain in the reader-tracking business—the laws are themselves terribly flawed—for example by urging every website to put up a “cookie notice” on pages greeting readers. In most cases clicking “accept” to the site’s cookies only gives the site permission to continue doing exactly the kind of tracking the laws are meant to prevent.
So, while the purpose of these laws is to make the Internet safer, in effect they also make its useful space smaller.
The sixth enclosure is what The Guardian calls “digital colonialism.” The biggest example of that is Facebook.org, originally called “Free Basics” and “Internet.org.”
This is a China-like subset of the Internet, offered for free by Facebook in less developed parts of the world. It consists of a fully enclosed Web, only a few dozen sites wide, each hand-picked by Facebook. The rest of the Internet isn’t there.
The seventh enclosure is the forgotten past. Today the World Wide Web, which began as a kind of growing archive—a public set of published goods we could browse as if it were a library—is being lost. Forgotten. That’s because search engines are increasingly biased to index and find pages from the present and recent past, and by following the tracks of monitored browsers. It’s forgetting what’s old. Archival goods are starting to disappear, like snow on the water.
Why? Ask the algorithm.
Of course, you can’t. That brings us to our eighth enclosure: algorithmic opacity.
Consider for a moment how important power plants are, and how carefully governed they are as well. Every solar, wind, nuclear, hydro and fossil fuel power production system in the world is subject to inspection by whole classes of degreed and trained professionals.
There is nothing of the sort for the giant search engines and social networks of the world. Google and Facebook both operate dozens of data centers, each the size of many Walmart stores. Yet the inner workings of those data centers are nearly absent of government oversight.
This owes partly to the speed of change in what these centers do, but more to the simple fact that what they do is unknowable, by design. You can’t look at rows of computers with blinking lights in many acres of racks and have the first idea of what’s going on in there.
I would love to see research, for example, on that last enclosure I listed: on how well search engines continue to index old websites, or how well they do anything else. The whole business is as opaque as a bowling ball with no holes.
I’m not even sure you can find anyone at Google who can explain exactly why its index does one thing or another, for any one person or another. In fact, I doubt Facebook is capable of explaining why any given individual sees any given ad. They aren’t designed for that. And the algorithm itself isn’t designed to explain itself, perhaps even to the employees responsible for it.
Or so I suppose.
In the interest of moving forward with research on these topics, I invite anyone at Google, Facebook, Bing or Amazon to help researchers at institutions such as the Ostrom Workshop, and to explain exactly what’s going on inside their systems, and to provide testable and verifiable ways to research those goings-on.
The ninth and worst enclosure is the one inside our heads. Because, if we think the Internet is something we use by grace of Apple, Amazon, Facebook, Google and “providers” such as phone and cable companies, we’re only helping all those companies contain the Internet’s usefulness inside their walled gardens.
Not understanding the Internet can result in problems similar to ones we suffer by not understanding common pool resources such as the atmosphere, the oceans, and the Earth itself.
But there is a difference between common pool resources in the natural world, and the uncommon commons we have with the Internet.
See, while we all know that common-pool resources are in fact not limitless—even when they seem that way—we don’t have the same knowledge of the Internet, because its nature as a limitless non-thing is non-obvious.
For example, we know common pool resources in the natural world risk tragic outcomes if our use of them is ungoverned, either by good sense or governance systems with global reach. But we don’t know that the Internet is limitless by design, or that the only thing potentially tragic about it is how we restrict access to it and use of it, by enclosures such as the nine I just listed.
So my thesis here is this: if we can deeply and fully understand what the Internet is, why it is so important, and why it is in danger of enclosure, we can also understand why, ten years after Lin Ostrom won a Nobel prize for her work on the commons, that work may be exactly what we need to save the Internet as a boundless commons that can support countless others.
We’ll begin with what makes the Internet possible: a protocol.
A protocol is a code of etiquette for diplomatic exchanges between computers. A form of handshake.
What the Internet’s protocol does is give all the world’s digital devices and networks a handshake agreement about how to share data between any point A and any point B in the world, across any intermediary networks.
When you send an email, or look at a website, anywhere in the world, the route the shared data takes can run through any number of networks between the two. You might connect from Bloomington to Denver through Chicago, Tokyo and Mexico City. Then, two minutes later, through Toronto and Miami. Some packets within your data flows may also be dropped along the way, but the whole session will flow just fine because the errors get noticed and the data re-sent and re-assembled on the fly.
Oddly, none of this is especially complicated at the technical level, because what I just described is pretty much all the Internet does. It doesn’t concern itself with what’s inside the data traffic it routes, who is at the ends of the connections, or what their purposes are—any more than gravity cares about what it attracts.
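The reliability trick just described (dropped packets get noticed, re-sent, and reassembled on the fly) can be sketched in a few lines of Python. This is a toy model, not real TCP, which uses sequence numbers, acknowledgments, timers, and congestion control; but it shows the essential idea: the sender keeps re-sending each numbered packet until it arrives, so the receiving application sees a complete, ordered stream no matter how lossy the path was. The function name and the drop-rate model are invented here for illustration only.

```python
import random

def send_with_retries(packets, drop_rate=0.3, seed=42):
    """Toy model of reliable delivery over a lossy path.

    Each numbered packet may be 'dropped' in transit; the sender
    simply re-sends until it gets through. Real TCP is far more
    sophisticated, but the effect is the same: the application
    receives a complete, ordered stream.
    """
    rng = random.Random(seed)   # deterministic "network weather"
    received = {}               # seq number -> payload, whatever the arrival order
    attempts = 0
    for seq, payload in enumerate(packets):
        while seq not in received:        # re-send until this packet lands
            attempts += 1
            if rng.random() > drop_rate:  # packet survived the trip
                received[seq] = payload
    # Reassemble in sequence order, regardless of losses along the way.
    message = "".join(received[i] for i in sorted(received))
    return message, attempts

msg, tries = send_with_retries(list("hello, world"))
print(msg)     # the full message arrives intact
print(tries)   # total send attempts, at least one per packet
```

The point of the sketch is the indifference of the mechanism: nothing in it inspects the payload or cares who the endpoints are, which is the neutrality described above.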
Beyond the sunk costs of its physical infrastructure, and the operational costs of keeping the networks themselves standing up, the Internet has no first costs at its protocol level, and it adds no costs along the way. It also has no billing system.
In all these ways the Internet is, literally, neutral. It also doesn’t need regulators or lawmakers to make it neutral. That’s just its nature.
The Internet’s protocol is called TCP/IP, and by using it, all the networks of the world subordinate their own selfish purposes to a shared one.
This is what makes the Internet’s protocol generous and supportive to an absolute degree toward every purpose to which it is put. It is a rising tide that lifts all boats.
In retrospect we might say the big networks within the Internet—those run by phone and cable companies, governments and universities—agreed to participate in the Internet because it was so obviously useful that there was no reason not to.
But the rising-tide nature of the Internet was not obvious to all of them at first. In retrospect, they didn’t realize that the Internet was a Trojan Horse, wheeled through their gates by geeks who looked harmless but in fact were bringing the world a technical miracle.
I can support that claim by noting that even though phone and cable companies of the world now make trillions of dollars because of it, they never would have invented it.
Two reasons for that. One is because it was too damn simple. The other is because they would have started with billing. And not just billing you and me. They would have wanted to bill each other, and not use something invented by another company.
A measure of the Internet’s miraculous nature is that actually billing each other would have been so costly and complicated that what they do with each other, to facilitate the movement of data to, from, and across their networks, is called peering. In other words, they charge each other nothing.
Even today it is hard for the world’s phone and cable companies—and even its governments, which have always been partners of a sort—to realize that the Internet became the world-wide way to communicate because it didn’t start with billing.
Again, all TCP/IP says is that this is a way for computers, networks, and everything connected to them, to get along. And it succeeded, producing instant worldwide peace among otherwise competing providers of networks and services. It made every network operator involved win a vast positive-sum game almost none of them knew they were playing. And most of them still don’t.
You know that old joke in which the big fish says to the little fish, “Hi guys, how’s the water?” and one of the little fish says to the other “What’s water?” In 2005, David Foster Wallace gave a legendary commencement address at Kenyon College that I highly recommend, titled “This is water.”
I suspect that, if Wallace were around today, he’d address his point to our digital world.
Those of you who already know me are aware that my wife Joyce is as much a companion and collaborator of mine as Vincent Ostrom was of Lin. I bring this up because much of this talk is hers, including this pair of insights about the Internet: that it has no distance, and also no gravity.
Think about it: when you are on the Internet with another person—for example if you are in a chat or an online conference—there is no functional distance between you and the other person. One of you may be in Chicago and the other in Bangalore. But if the Internet is working, distance is gone. Gravity is also gone. Your face may be right-side-up on the other person’s screen, but it is absent of gravity. The space you both occupy is the other person’s two-dimensional rectangle. Even if we come up with holographic representations of ourselves, we are still incorporeal “on” the Internet. (I say “on” because we need prepositions to make sense of how things are positioned in the world. Yet our limited set of physical-world prepositions—over, under, around, through, beside, within and the rest—misdirects our attention away from our disembodied state in the digital one.)
Familiar as that disembodied state may be to all of us by now, it is still new to human experience and inadequately informed by our experience as embodied creatures. It is also hard for us to see both what our limitations are, and how limitless we are at the same time.
Joyce points out that we are also highly adaptive creatures, meaning that eventually we’ll figure out what it means to live where there is no distance or gravity, much as astronauts learn to live as weightless beings in space.
But in the meantime, we’re having a hard time seeing the nature and limits of what’s good and what’s bad in this new environment. And that has to do, at least in part, with forms of enclosure in that world—and how we are exploited within private spaces where we hardly know we are trapped.
In The Medium is the Massage, Marshall McLuhan says every new medium, every new technology, “works us over completely.” Those are his words: works us over completely. Such as now, with digital technology, and the Internet.
I was talking recently with a friend about where our current digital transition ranks among all the other transitions in history that each have a formal cause. Was becoming digital the biggest thing since the industrial revolution? Since movable type? Writing? Speech?
No, he said. “It’s the biggest thing since oxygenation.”
In case you weren’t there, or weren’t paying attention in geology class, oxygenation happened about 2.5 billion years ago. Which brings us to our next topic:
Journalism is just one example of a trusted institution that is highly troubled in the digital world.
It worked fine in a physical world where truth-tellers who dug into topics and reported on them with minimized prejudice were relatively scarce yet easy to find, and to trust. But in a world flooded with information and opinion—a world where everyone can be a reporter, a publisher, a producer, a broadcaster; where the “news cycle” has the lifespan of a joke; and where news and gossip have become almost indistinguishable while being routed algorithmically to amplify prejudice and homophily—journalism has become an anachronism: still important, but all but drowning in a flood of biased “content” paid for by surveillance-led adtech.
People are still hungry for good information, of course, but our appetites are too easily fed by browsing through the surfeit of “content” on the Internet, which we can easily share by text, email or social media. Even if we do the best we can to share trustworthy facts and other substances that sound like truth, we remain suspended in a techno-social environment we mostly generate and re-generate ourselves. Kind of like our ancestral life forms made sense of the seas they oxygenated, long ago.
The academy is another institution that’s troubled in our digital time. After all, education on the Internet is easy to find. Good educational materials are easy to produce and share. For example, take Khan Academy, which started with one guy tutoring his cousin through online videos.
Authority must still be earned, but there are now countless non-institutional ways to earn it. Credentials still matter, but less than they used to, and not in the same ways. Ad hoc education works in ways that can be cheap or free, while institutions of higher education remain very expensive. What happens when the market for knowledge and know-how starts moving past requirements for advanced degrees that might take students decades of their lives to pay off?
For one example of that risk already at work, take computer programming.
Which do you think matters more to a potential employer of programmers—a degree in computer science or a short but productive track record? For example, by contributing code to the Linux operating system?
To put this in perspective, Linux and operating systems like it are inside nearly every smart thing that connects to the Internet, including TVs, door locks, the world’s search engines, social networks, laptops and mobile phones. Nothing could be more essential to computing life.
At the heart of Linux is what’s called the kernel. For code to get into the kernel, it has to pass muster with other programmers who have already proven their worth, and then through testing and debugging. If you’re looking for a terrific programmer, everyone contributing to the Linux kernel is well-proven. And there are thousands of them.
Now here’s the thing. It not only doesn’t matter whether those people have degrees in computer science; it doesn’t even matter whether they’ve had any formal training. What matters, for our purposes here, is that, to a remarkable degree, many of them, perhaps most, have neither.
I know a little about this because, in the course of my work at Linux Journal, I would sometimes ask groups of alpha Linux programmers where they learned to code. Almost none told me “school.” Most were self-taught or learned from each other.
My point here is that the degree to which the world’s most essential and consequential operating system depends on the formal education of its makers is roughly zero.
See, the problem for educational institutions in the digital world is that most were built to leverage scarcity: scarce authority, scarce materials, scarce workspace, scarce time, scarce credentials, scarce reputation, scarce anchors of trust. To a highly functional degree we still need and depend on what only educational institutions can provide, but that degree is a lot lower than it used to be, a lot more varied among disciplines, and it risks continuing to decline as time goes on.
It might help at this point to see gravity in some ways as a problem the Internet solves. Because gravity is top-down. It fosters hierarchy and bell curves, sometimes where we need neither.
In the first decade of our new millennium, Elinor Ostrom and Charlotte Hess—already operating in our new digital age—extended the commons category to include knowledge, calling it a complex ecosystem that operates as a commons: a shared resource subject to social dilemmas.
They looked at ease of access to digital forms of knowledge and easy new ways to store, access and share knowledge as a commons. They also looked at the nature of knowledge and its qualities of non-rivalry and non-excludability, which were both unlike what characterizes a natural commons, with its scarcities of rivalrous and excludable goods.
A knowledge commons, they said, is characterized by abundance. This is one way what Yochai Benkler calls Commons Based Peer Production on the Internet is both easy and rampant, giving us, among many other things, both the free software and open source movements in code development and sharing, plus the Internet and the Web.
Commons Based Peer Production also demonstrates how collaboration and non-material incentives can produce better quality products, and less social friction in the course of production.
I’ve given Linux as one example of Commons Based Peer Production. Others are Wikipedia and the Internet Archive. We’re also seeing it within the academy, for example with Indiana University’s own open archives, making research more accessible and scholarship more rich and productive.
Every one of those examples comports with Lin Ostrom’s design principles:
But there is also a crisis with Commons Based Peer Production on the Internet today.
Programmers who ten or fifteen years ago would not participate in enclosing their own environments are doing exactly that, for example with 5G, which is designed to put the phone companies in charge of what we can do on the Internet.
The 5G-enclosed Internet might be faster and more handy in many ways, but the range of freedoms for each of us there will be bounded by the commercial interests of the phone companies and their partners, and subject to none of Lin’s rules for governing a commons.
Consider this: every one of the nine enclosures I listed at the beginning of this talk is enabled by programmers who either forgot or never learned about the freedom and openness that made the free and open Internet possible. They are employed in the golden egg gathering business—not in one that appreciates the goose that lays those eggs, and which their predecessors gave to us all.
But this isn’t the end of the world. We’re still at the beginning. And a good model for how to begin is—
It is significant that all the commons the Ostroms and their colleagues researched in depth were local. Their work established beyond any doubt the importance of local knowledge and local control.
I believe demonstrating this in the digital world is our best chance of saving our digital world from the nine forms of enclosure I listed at the top of this talk.
It’s our best chance because there is no substitute for reality. We may be digital beings now, as well as physical ones. There are great advantages, even in the digital world, to operating in the here-and-now physical world, where all our prepositions still work, and our metaphors still apply.
Back to Joyce again.
In the mid ‘90s, when the Internet was freshly manifest on our home computers, I was mansplaining to Joyce how this Internet thing was finally the global village long promised by tech.
Her response was, “The sweet spot of the Internet is local.” She said that’s because local is where the physical and the virtual intersect. It’s where you can’t fake reality, because you can see and feel and shake hands with it.
She also said the first thing the Internet would obsolesce would be classified ads in newspapers. That’s because the Internet would be a better place than classifieds for parents to find a crib some neighbor down the street might have for sale. Then Craigslist came along and did exactly that.
We had an instructive experience with how the real world and the Internet work together helpfully at the local level about a year and a half ago. That’s when a giant rainstorm fell on the mountains behind Santa Barbara, where we live, and the town next door, called Montecito. This was also right after the Thomas Fire—largest at the time in recorded California history—had burned all the vegetation away, and there was a maximum risk of what geologists call a “debris flow.”
The result was the biggest debris flow in the history of the region: a flash flood of rock and mud that flowed across Montecito like lava from a volcano. Nearly two hundred homes were destroyed, and twenty-three people were killed. Two of them were never found, because it’s hard to find victims buried under what turned out to be at least twenty thousand truckloads of boulders and mud.
Right afterwards, all of Montecito was evacuated, and very little news got out while emergency and rescue workers did their jobs. Our local news media did an excellent job of covering this event as a story. But I also noticed that not much was being said about the geology involved.
So, since I was familiar with debris flows out of the mountains above Los Angeles, where they have infrastructure that’s ready to handle this kind of thing, I put up a post on my blog titled “Making sense of what happened to Montecito.” In that post I shared facts about the geology involved, and also published the only list on the Web of all the addresses of homes that had been destroyed. Visits to my blog jumped from dozens a day to dozens of thousands. Lots of readers also helped improve what I wrote and re-wrote.
All of this happened over the Internet, but it pertained to a real-world local crisis.
Now here’s the thing. What I did there wasn’t writing a story. I didn’t do it for the money, and my blog is a noncommercial one anyway. I did it to help my neighbors. I did it by not being a bystander.
I also did it in the context of a knowledge commons.
Specifically, I was respectful of boundaries of responsibility; notably those of local authorities—rescue workers, law enforcement, reporters from local media, city and county workers preparing reports, and so on. I gave much credit where it was due and didn’t step on the toes of others helping out as well.
An interesting fact about journalism there at the time was the absence of fake news. Sure, there was plenty of finger-pointing in blog comments and in social media. But it was marginalized away from the fact-reporting that mattered most. There was a very productive ecosystem of information, made possible by the Internet in everyone’s midst. And by everyone, I mean lots of very different people.
We are learning creatures by nature. We can’t help it. And we don’t learn by freight forwarding.
By that, I mean what I am doing here, and what we do with each other when we talk or teach, is not delivering a commodity called information, as if we were forwarding freight. Something much more transformational is taking place, and this is profoundly relevant to the knowledge commons we share.
Consider the word information. It’s a noun derived from the verb to inform, which in turn is derived from the verb to form. When you tell me something I don’t know, you don’t just deliver a sum of information to me. You form me. As a walking sum of all I know, I am changed by that.
This means we are all authors of each other.
In that sense, the word authority belongs to the right we give others to author us: to form us.
Now look at how much more of that can happen on our planet, thanks to the Internet, with its absence of distance and gravity.
And think about how that changes every commons we participate in, as both physical and digital beings. And how much we need guidance to keep from screwing up the commons we have, and to form the ones we don’t have yet but might in the future.
A rule in technology is that what can be done will be done—until we find out what shouldn’t be done. Humans have done this with every new technology and practice from speech to stone tools to nuclear power.
We are there now with the Internet. In fact, many of those enclosures I listed are well-intended efforts to limit dangerous uses of the Internet.
And now we are at a point where some of those too are a danger.
What might be the best way to look at the Internet and its uses most sensibly?
I think the answer is governance predicated on the realization that the Internet is perhaps the ultimate commons, and subject to both research and guidance informed by Lin Ostrom’s rules.
And I hope that guides our study.
There is so much to work on: expansion of agency, sensibility around license and copyright, freedom to benefit individuals and society alike, protections that don’t foreclose opportunity, saving journalism, modernizing the academy, creating and sharing wealth without victims, de-financializing our economies… the list is very long. And I look forward to working with many of us here on answers to these and many other questions.
Ostrom, Elinor. Governing the Commons. Cambridge University Press, 1990.
Ostrom, Elinor, and Charlotte Hess, editors. Understanding Knowledge as a Commons: From Theory to Practice. MIT Press, 2011. Full text online: https://wtf.tw/ref/hess_ostrom_2007.pdf
Aligica, Paul D., and Vlad Tarko. “Polycentricity: From Polanyi to Ostrom, and Beyond.” https://asp.mercatus.org/system/files/Polycentricity.pdf
Ostrom, Elinor. “Coping With Tragedies of the Commons,” 1998. https://pdfs.semanticscholar.org/7c6e/92906bcf0e590e6541eaa41ad0cd92e13671.pdf
Fennell, Lee Anne. “Ostrom’s Law: Property Rights in the Commons,” March 3, 2011.
Savage, Christopher W. “Managing the Ambient Trust Commons: The Economics of Online Consumer Information Privacy.” Stanford Law School, 2019. https://law.stanford.edu/wp-content/uploads/2019/01/Savage_20190129-1.pdf
*I wrote it using—or struggling in—the godawful Outline view in Word. Since I succeeded (most don’t, because they can’t or won’t, with good reason), I’ll brag on succeeding at the subhead level:
As I’m writing this, in February, 2020, Dave Winer is working on what he calls writing on rails. That’s what he gave the pre-Internet world with MORE several decades ago, and I’m helping him now with the Internet-native kind, as a user. He explains that here. (MORE was, for me, like writing on rails. It’ll be great to go back—or forward—to that again.)
Just before it started, the geology meeting at the Santa Barbara Central Library on Thursday looked like this from the front of the room (where I also tweeted the same pano):
As a geology freak, I know how easily terms like “debris flow,” “fanglomerate” and “alluvial fan” can clear a room. But this gig was SRO. That’s because around 3:15 in the morning of January 9th, debris flowed out of canyons and deposited fresh fanglomerate across the alluvial fan that comprises most of Montecito, destroying (by my count on the map below) 178 buildings, damaging more than twice that many, and killing 23 people. Two of those—a 2-year-old girl and a 17-year-old boy—are still interred in the fresh fanglomerate and sought by cadaver dogs.* The whole thing is beyond sad and awful.
The town was evacuated after the disaster so rescue and recovery work could proceed without interference, and infrastructure could be found and repaired: a job that required removing twenty thousand truckloads of mud and rocks. That work continues while evacuation orders are gradually lifted, allowing the town to repopulate itself to the very limited degree it can.
I talked today with a friend whose business is cleaning houses. Besides grieving the dead, some of whom were friends or customers, she reports that the cleaning work is some of the most difficult she has ever faced, even in homes that were spared the mud and rocks. Refrigerators and freezers, sitting closed and without electricity for weeks, reek of death and rot. Other customers won’t be back because their houses are gone.
Highway 101, one of just two freeways connecting Northern and Southern California, runs through town near the coast, more than two miles from the mountain front. Three debris flows converged on the highway and used it as a catch basin, filling its deep parts to the height of at least one bridge before spilling over its far side and continuing to the edge of the sea. It took two weeks of constant excavation and repair work before traffic could move again. Most exits remain closed. Coast Village Road, Montecito’s Main Street, is open for employees of stores there, but little is open for customers yet, since infrastructural graces such as water are not fully restored. (I saw the Honor Bar operating with its own water tank, and a water truck nearby.) Opening Upper Village will take longer. Some landmark institutions, such as San Ysidro Ranch and La Casa Santa Maria, will take years to restore. From what I gather, San Ysidro Ranch, arguably the nicest hotel in the world, was nearly destroyed. Its website thanks firefighters for salvation from the Thomas Fire. But nothing, I gather, could have saved it from the huge debris flow that wiped out nearly everything on the flanks of San Ysidro Creek. (All the top red dots along San Ysidro Creek in the map below mark lost buildings at the Ranch.)
Here is a map with final damage assessments. I’ve augmented it with labels for the canyons and creeks (with one exception: a parallel creek west of Toro Canyon Creek):
Click on the map for a closer view, or click here to view the original. On that one you can click on every dot and read details about it.
I should pause to note that Montecito is no ordinary town. Demographically, it’s Beverly Hills draped over a prettier landscape and attractive to people who would rather not live in Beverly Hills. (In fact the notable persons Wikipedia lists for Montecito outnumber those it lists for Beverly Hills, 77 to 71.) Culturally, it’s a village. Last Monday in The New Yorker, one of those notable villagers, T. Coraghessan Boyle, unpacked some other differences:
I moved here twenty-five years ago, attracted by the natural beauty and semirural ambience, the short walk to the beach and the Lower Village, and the enveloping views of the Santa Ynez Mountains, which rise abruptly from the coastal plain to hold the community in a stony embrace. We have no sidewalks here, if you except the business districts of the Upper and Lower Villages—if we want sidewalks, we can take the five-minute drive into Santa Barbara or, more ambitiously, fight traffic all the way down the coast to Los Angeles. But we don’t want sidewalks. We want nature, we want dirt, trees, flowers, the chaparral that did its best to green the slopes and declivities of the mountains until last month, when the biggest wildfire in California history reduced it all to ash.
Fire is a prerequisite for debris flows, our geologists explained. So is unusually heavy rain in a steep mountain watershed. There are five named canyons, each its own watershed, above Montecito, as we see on the map above. There are more to the east, above Summerland and Carpinteria, the next two towns down the coast. Those towns also took some damage, though less than Montecito.
Ed Keller put up this slide to explain conditions that trigger debris flows, and how they work:
Ed and Larry were emphatic about this: debris flows are not landslides, nor do many start that way (though one did in Rattlesnake Canyon 1100 years ago). They are also not mudslides, so we should stop calling them that. (Though we won’t.)
Debris flows require sloped soils left bare and hydrophobic—resistant to water—after a recent wildfire has burned off the chaparral that normally (as geologists say) “hairs over” the landscape. For a good look at what those soil surfaces are like, and how they are likely to respond to rain, look at the smooth slopes on the uphill side of 101 east of La Conchita. Notice how the surface is not only a smooth brown or gray, but has a crust on it. In a way, the soil surface has turned to glass. That’s why water runs off of it so rapidly.
Wildfires are common, and chaparral is adapted to them, becoming fuel for the next fire as it regenerates and matures. But rainfalls as intense as this one are not common. In just five minutes alone, more than half an inch of rain fell in the steep and funnel-like watersheds above Montecito. This happens about once every few hundred years, or about as often as a tsunami.
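To put that burst in perspective, here is the arithmetic as a sustained rate (my own back-of-the-envelope figuring, not from the geologists’ slides):

```python
# Back-of-the-envelope: convert a short rain burst into an hourly intensity.
rain_inches = 0.5   # at least this much rain fell in the burst
duration_min = 5    # length of the burst, in minutes

intensity_in_per_hr = rain_inches * (60 / duration_min)
print(intensity_in_per_hr)  # 6.0 — a six-inch-per-hour rate, sustained
```

Six inches per hour, even for five minutes, on crusted hydrophobic slopes: that is the kind of runoff that mobilizes debris.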
It’s hard to generalize about the combination of factors required, but Ed has worked hard to do that, and this slide of his is one way of illustrating how debris flows happen eventually in places like Montecito and Santa Barbara:
From bottom to top, here’s what it says:
About this set of debris flows in particular:
For those who caught (or are about to catch) Ellen’s Facetime with Oprah visiting neighbors, that happened among the red dots at the bottom end of the upper destruction area along San Ysidro Creek, just south of East Valley Road. Oprah’s own place is in the green area beside it on the left, looking a bit like Versailles. (Credit where due, though: Oprah’s was a good and compassionate report.)
Big question: did these debris flows clear out the canyon floors? We (meaning our geologists, sedimentologists, hydrologists and other specialists) won’t know until they trek back into the canyons to see how it all looks. Meanwhile, we do have clues. For example, here are after-and-before photos of Montecito, shot from space. And here is my close-up of the “after” photo, shot one day after the event, when everything was still bare streambeds in the mountains and fresh muck in town:
See the white lines fanning back into the mountains through the canyons (Cold Spring, San Ysidro, Romero, Toro) above Montecito? Ed explained that these appear to be the washed out beds of creeks feeding into those canyons. Here is his slide showing Cold Spring Creek before and after the event:
Looking back at Ed’s basin threshold graphic above, one might say that there isn’t much sediment left for stream beds to yield, and that those in the floors of the canyons have returned to stability, meaning there’s little debris left to flow.
But that photo was of just one spot. There are many miles of creek beds to examine back in those canyons.
Still, one might hope that Montecito has now had its required 200-year event, and a couple more centuries will pass before we have another one.
Ed and Larry caution against such conclusions, emphasizing that most of Montecito’s and Santa Barbara’s inhabited parts gain their existence, beauty or both by grace of debris flows. If your property features boulders, Ed said, a debris flow put them there, and did that not long ago in geologic time.
For an example of boulders as landscape features, here are some we quarried out of our yard more than a decade ago, when we were building a house dug into a hillside:
This is deep in the heart of Santa Barbara.
The matrix mud we now call soil here is likely a mix of Juncal and Cozy Dell shale, Ed explained. Both are poorly lithified silt and erode easily. The boulders are a mix of Matilija and Coldwater sandstone, which comprise the hardest and most vertical parts of the Santa Ynez mountains. The two are so similar that only a trained eye can tell them apart.
All four of those geological formations were established long after dinosaurs vanished. All also accumulated originally as sediments, mostly on ocean floors, probably not far from the equator.
To illustrate one chapter in the story of how those rocks and sediments got here, UCSB has a terrific animation of how the transverse (east-west) Santa Ynez Mountains came to be where they are. Here are three frames in that movie:
What it shows is how, when the Pacific Plate was grinding its way northwest about eighteen million years ago, a hunk of that plate about a hundred miles long and the shape of a bread loaf broke off. At the top end was the future Malibu hills and at the bottom end was the future Point Conception, then situated south of what’s now Tijuana. The future Santa Barbara was west of the future Newport Beach. Then, when the Malibu end of this loaf got jammed at the future Los Angeles, the bottom end of the loaf swept out, clockwise and intact. At the start it pointed at 5 o’clock; by now (the rotation isn’t over) it points at 9 o’clock. This was, and remains, a sideshow off the main event: the continuing crash of the Pacific Plate and the North American one.
Here is an image that helps, from that same link:
Find more geology, with lots of links, in Making sense of what happened to Montecito. I put that post up on the 15th and have been updating it since then. It’s the most popular post in the history of this blog, which I started in 2007. There are also 58 comments, so far.
I’ll be adding more to this post after I visit as much as I can of Montecito (exclusion zones permitting). Meanwhile, I hope this proves useful. Again, corrections and improvements are invited.
6 April, 2020
*I was told later, by a rescue worker who was on the case, that it was possible that both victims’ bodies had washed all the way to the ocean, and thus will never be found.
In this Edhat story, Ed Keller visits a recently found prior debris flow. An excerpt:
The mud and boulders from a prehistoric debris flow, the second-to-last major flow in Montecito, have been discovered by a UCSB geologist at the Bonnymede condominiums and Hammond’s Meadow, just east of the Coral Casino.
The flow may have occurred between 1,000 and 2,000 years ago, said Ed Keller, a professor of earth science at the university. He’s calling it the “penultimate event.” It came down a channel of Montecito Creek and was likely larger on that creek than during the disaster of Jan. 9, 2018, Keller said. Of 23 people who perished on Jan. 9, 17 died along Montecito Creek.
The long interval between the two events means that the probability of another catastrophic debris flow occurring in Montecito in the next 1,000 years is very low, Keller said.
“It’s reassuring,” he said, “They’re still pretty rare events, if you consider you need a wildfire first and then an intense rainfall. But smaller debris flows could occur, and you could still get a big flash flood. If people are given a warning to evacuate, they should heed it.”
In The Adpocalypse: What it Means, Vlogbrother Hank Green issues a humorous lament on the impending demise of online advertising. Please devote the next 3:54 of your life to watching that video, so you catch all his points and I don’t need to repeat them here.
Got them? Good.
All of Hank’s points are well-argued and make complete sense. They are also valid mostly inside the bowels of the Google beast where his video work has thrived for the duration, as well as inside the broadcast model that Google sort-of emulates. (That’s the one where “content creators” and “brands” live in some kind of partly-real and partly-imagined symbiosis.)
While I like and respect what the brothers are trying to do commercially inside Google’s belly, I also expect that they, and countless other “content creators,” will get partly or completely expelled after Google finishes digesting that market and obeys its appetite for lucrative new markets that obsolesce its current one.
We can see that appetite at work now that Google Contributor screams agreement with ad blockers (which Google is also joining) and their half-billion human operators that advertising has negative value. This is at odds with the business model that has long sustained both YouTube and “content creators” who make money there.
So it now appears that being a B2B creature that sells eyeballs to advertisers is Google’s larval stage, and that Google intends to emerge from its chrysalis as a B2C creature that sells content directly to human customers. (And stays hedged with search advertising, which is really more about query-based notifications than advertising, and doesn’t require unwelcome surveillance that will get whacked by the GDPR anyway a year from now.)
Google will do this two ways: 1) through Contributor (an “ad removal pass” you buy) and 2) through subscriptions to YouTube TV (a $35/month cable TV replacement) and/or YouTube Red ($9.99/month for “uninterrupted music, ad-free videos, and more”).
Contributor is a way for Google to raise its share of the adtech duopoly it comprises with Facebook. The two paid video offerings are ways for Google to maximize its wedge of a subscription pie also sliced up by Apple, Amazon, Netflix, HBO, ShowTime, all the ISPs and every publication you can name—and to do that before we all hit Peak Subscription. (Which I’m sure most of us can see coming. I haven’t written about it yet, but I have touched hard on it here and here.)
I hope the Vlogbrothers make money from YouTube Red once they’re behind that paywall. Or that they can sell their inventory outside all the silos, like some other creators do. Maybe they’ll luck out if EmanciPay or some other new and open customer-based way of paying for creative goods works out. Whether or not that happens, one or more of the new blockchain/distributed ledger/token systems will provide countless new ways that stuff will get offered and paid for in the world’s markets. Brave Payments is already pioneering in that space. (Get the Brave browser and give it a try.)
It helps to recognize that the larger context (in fact the largest one) is the Internet, not the Web (which sits on top of the Net), and not apps (which are all basically on loan from their makers and the distribution systems of Apple and Google). The Internet cannot be contained in, or reduced to, the feudal castles of Facebook and Google, which mostly live on the Web. Those are all provisional and temporary. Money made by and within them is an evanescent grace.
All the Net does is connect end points and pass data between them through any available path. This locates us on a second world alongside the physical one, where the distance between everything it connects rounds to zero. This is new to human experience and at least as transformative as language, writing, printing and electricity—and no less essential than any of those, meaning it isn’t going to go away, no matter how well the ISPs, governments and corporate giants succeed in gobbling up and sphinctering business and populations inside their digestive tracts.
The Net is any-to-any, by any means, by design of its base protocols. This opens countless possibilities we have barely begun to explore, much less build out. It is also an experience for humanity that is not going to get un-experienced if some other base protocols replace the ones we have now.
I am convinced that we will find new ways in our connected environment to pay for goods and services, and to signal each other much more securely, efficiently and effectively than we do now. I am also convinced we will do all that in a two-party way rather than in the three-party ways that require platforms and bureaucracies. If this sounds like anarchy, well, maybe: yeah. I dunno. We already have something like that in many disrupted industries. (Some wise stuff got written about this by David Graeber in The Utopia of Rules.)
Not a day goes by that my mind isn’t blown by the new things happening that have not yet cohered into an ecosystem but still look like they can create and sustain many forms of economic and social life, new and old. I haven’t seen anything like this in tech since the late ’90s. And if that sounds like another bubble starting to form, yes it is. You see it clearly in the ICO market right now. (Look at what’s lined up so far. Wholly shit.)
But this one is bigger. It’s also going to bring down everybody whose business is guesswork filled with fraud and malware.
If you’re betting on which giants survive, hold Amazon and Apple. Short those other two.
The NYTimes says the Mandarins of language are demoting the Internet to a common noun. It is to be just “internet” from now on. Reasons:
Thomas Kent, The A.P.’s standards editor, said the change mirrored the way the word was used in dictionaries, newspapers, tech publications and everyday life.
“In our view, it’s become wholly generic, like ‘electricity’ or ‘the telephone,’” he said. “It was never trademarked. It’s not based on any proper noun. The best reason for capitalizing it in the past may have been that the word was new. But at one point, I’ve heard, ‘phonograph’ was capitalized.”
But we never called electricity “the Electricity.” And “the telephone” referred to a single thing of which there are billions of individual examples.
What was it about “the Internet” that made us want to capitalize it in the first place? Is usage alone reason enough to stop respecting that?
Some of my tech friends say the “Internet” we’ve had for all these years is just one prototype: the first and best-known of many other possible ones.
All due respect, but: bah.
There is only one Internet just like there is only one Universe. There are other examples of neither.
Formalizing the lower-case “internet,” for whatever reason, dismisses what’s transcendent and singular about the Internet we have: a whole that is more, and other, than a sum of parts.
I know it looks like the Net is devolving into many separate systems, isolated and silo’d to some degree. We see that with messaging, for example. Hundreds of different ones, most of them incompatible, on purpose. We have specialized mobile systems that provide variously open vs. sphinctered access (such as T-Mobile’s “binge” allowance for some content sources but not others), zero-rated not-quite-internets (such as Facebook’s Free Basics) and countries such as China, where many domains and uses are locked out.
Would we enjoy a common network by any name today if the Internet had been lower-case from the start?
Would makers or operators of any of the parts that comprise the Internet’s whole feel any fealty to what at least ought to be the common properties of that whole? Or would they have made sure that their parts only got along, at most, with partners’ parts? Would the first considerations by those operators not have been billing and tariffs agreed to by national regulators?
Would the world experience the absence of distance and cost across the Giant Zero in its midst, were it not for the Internet’s founding design, which left out billing and proprietary routing on purpose?
Would we have anything resembling the Internet of today if designing and building it had been left up to phone and cable companies? Or to governments (even respecting the roles government activities did play in creating the Net we do have)?
I think the answer to all of those would be no.
In The Compuserve of Things, Phil Windley begins, “On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?”
Would he, or anybody, ask such questions, or aspire to such purposes, were it not for the respect many of us pay to the upper-cased-ness of “the Internet?”
How does demoting Internet from proper to common noun not risk (or perhaps even assure) its continued devolution to a collection of closed and isolated parts that lack properties (e.g. openness and commonality) possessed only by the whole?
I don’t know. But I think these kinds of questions are important to ask, now that the keepers of usage standards have demoted what the Net’s creators made — and ignore why they made it.
If you care at all about this, please dig Archive.org‘s Locking the Web open: a Call for a Distributed Web, Brewster Kahle’s post by the same title, covering more ground, and the Decentralized Web Summit, taking place on June 8-9. (I’ll be there in spirit. Alas, I have other commitments on the East Coast.)
Flickr is far from perfect, but it is also by far the best online service for serious photographers. At a time when the center of photographic gravity is drifting from arts & archives to selfies & social, Flickr remains both retro and contemporary in the best possible ways: a museum-grade treasure it would hurt terribly to lose.
Flickr was created and lovingly nurtured by Stewart Butterfield and Caterina Fake, from its creation in 2004 through its acquisition by Yahoo in 2005 and until their departure in 2008. Since then it’s had ups and downs. The latest down was the departure of Bernardo Hernandez in 2015.
I don’t even know who, if anybody, runs it now. It’s sinking in the ratings. According to Petapixel, it’s probably up for sale. Writes Michael Zhang, “In the hands of a good owner, Flickr could thrive and live on as a dominant photo sharing option. In the hands of a bad one, it could go the way of MySpace and other once-powerful Internet services that have withered away from neglect and lack of innovation.”
Naturally, the natives are restless. (Me too. I currently have 62,527 photos parked and curated there. They’ve had over ten million views and run about 5,000 views per day. I suppose it’s possible that nobody is more exposed in this thing than I am.)
So I’m hoping a big and successful photography-loving company will pick it up. I volunteer Adobe. It has the photo editing tools most used by Flickr contributors, and I expect it would do a better job of taking care of both the service and its customers than would Apple, Facebook, Google, Microsoft or other possible candidates.
Less likely, but more desirable, is some kind of community ownership. Anybody up for a kickstarter?
[Later…] I’m trying out 500px. Seems better than Flickr in some respects so far. Hmm… Is it possible to suck every one of my photos, including metadata, out of Flickr by its API and bring it over to 500px?
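For what it’s worth, the export half of that question looks answerable: Flickr’s public REST API has a flickr.people.getPhotos method that pages through an account’s photos and can return metadata along with a URL for each original file. Here’s a rough Python sketch of that side, assuming you have a Flickr API key (getting everything *into* 500px would be a separate problem):

```python
import json
import urllib.parse
import urllib.request

REST_URL = "https://api.flickr.com/services/rest/"

def page_url(api_key, user_id, page):
    """URL for one page of flickr.people.getPhotos, asking for metadata extras."""
    params = {
        "method": "flickr.people.getPhotos",
        "api_key": api_key,
        "user_id": user_id,
        "extras": "description,date_taken,tags,geo,url_o",  # metadata + original-size URL
        "per_page": 500,  # Flickr's documented maximum
        "page": page,
        "format": "json",
        "nojsoncallback": 1,
    }
    return REST_URL + "?" + urllib.parse.urlencode(params)

def export_photos(api_key, user_id):
    """Yield one metadata dict per photo, walking every page of the account."""
    page, pages = 1, 1
    while page <= pages:
        with urllib.request.urlopen(page_url(api_key, user_id, page)) as resp:
            data = json.load(resp)
        pages = data["photos"]["pages"]
        yield from data["photos"]["photo"]  # each record carries url_o for the original file
        page += 1
```

At 500 photos per page, 62,527 photos is about 126 requests for the listings alone, before downloading a single original.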
I’ve always loved AM radio. But it’s not a requited love. AM radios these days are harder to get, and tend to suck. The band is thick with electronic noise from things that compute (a sum of devices that rounds to everything). AM stations are falling like old trees all over the band, and all over the world, and most of those that remain spout one-sided talk or speak in foreign languages. Even sports programming, once a mainstay on AM, is migrating to FM.
To put it kindly, AM radio is the opposite of new. It’s the steam locomotive of broadcasting.
Case in point: you won’t find an AM radio in a Tesla Model X. You also won’t find it in other electric cars, such as the BMW i3. One reason is that AM reception is trashed by electric noise, and these are electric cars. Another is that the best AM reception requires a whip antenna outside the car: the longer the better. But these days car makers hide antennas in windows and little shark fins on the roof. Another is that car makers have been cheaping out on the chips used in their AM radios for years, and the ones in home and portable radios are even worse.
Demand for AM has been waning for decades anyway. AM doesn’t sound as good as FM or digital streams on laptops and mobile things. (Well, it can sound good with HD Radio, but that’s been a non-starter on both the transmitting and receiving sides for many years.) About the only formats left on AM that get ratings in the U.S. are sports and news. But, like I just said, sports is moving to FM too—even though signal coverage on FM in some markets, relatively speaking, sucks. (Compare WFAN/660am and 101.9fm, which simulcast.)
On the whole, AM stations barely show in the ratings. In Raleigh-Durham, WPTF/680 ruled “the book” for decades, and is now the top of the bottom-feeders, with just a 1.0% share. KGO/810, which was #1 for a lifetime in the Bay Area, is now #19 with a 2.0% share. Much of KGO’s talent has been fired, and there’s a Facebook page for disgruntled fans. Not that it matters.
In Europe, AM is being clear-cut like a diseased forest. Norway ended AM broadcasting a while back, and will soon kill FM too. Germany killed all AM broadcasting at the end of last year, just a few days ago. The American AFN (Armed Forces Network), which I used to love listening to over its 150,000-watt signal on 873KHz from Frankfurt, is also completely gone on AM in Germany. All transmitters are down. The legendary Marnach transmitter of Radio Luxembourg, “planet Earth’s biggest commercial radio station,” also shut down when 2016 arrived, and its towers will soon be down too.
Europe’s other AM band, LW or longwave, is also being abandoned. The advantage of longwave is coverage. Signals on longwave spread over enormous territories, and transmitters can run two million watts strong. But listening has gone steadily down, and longwave is even more vulnerable to electrical noise than AM/MW. And running megawatt transmitters is expensive. So now Germany’s monster signal at 153KHz is gone, and France’s at 162KHz (one of the 2-million-watt ones) is due to go down later this year. And this report says all that’s keeping BBC’s landmark Radio 4 signal going on 198KHz is a collection of giant vacuum tubes that are no longer made. Brazil is moving from AM to FM as well. For an almost daily report on the demise of AM broadcasting around the world, read MediumWave News.
FM isn’t safe either. The UK is slowly phasing out both AM and FM, while phasing in Digital Audio Broadcasting. Norway is the DAB pioneer and will soon follow suit, and kill off FM. No other countries have announced the same plans, but the demographics of radio listening are shifting from FM to online anyway, just as they shifted from AM to FM in past decades. Not surprisingly, streaming stats are going up and up. So is podcasting. (Here are Pew’s stats from a year ago.)
Sure, there’s still plenty of over-the-air listening. But ask any college kid if he or she listens to over-the-air radio. Most (in my experience anyway) say no, or very little. They might listen in a car, but their primary device for listening — and watching video, which is radio with pictures — is their phone or tablet. So the Internet today is doing to FM what FM has been doing to AM for decades. Only faster.
Oh, and then there’s the real estate issue. AM/MW and LW transmission requires a lot of land. As stations lose value, the land under many transmitters is worth more. (We saw this last year with WMAL/630 in Washington, which I covered here.) FM and TV transmission requires height, which is why their transmitters crowd the tops of buildings and mountains. The FCC is also now auctioning off TV frequencies, since nearly everybody is now watching TV on cable, satellite or computing devices. At some point it simply becomes cheaper and easier for radio stations, groups and networks to operate servers than to pay electricity and rent for transmitters.
This doesn’t mean radio goes away. It just goes online, where it will stay. It’ll suck that you can’t get stations where there isn’t cellular or wi-fi coverage, but that matters less than this: there are many fewer limits to broadcasting and listening online, obsolescing the “station” metaphor, along with its need for channels and frequencies. Those are just URLs now.
On the Internet band, anybody can stream or podcast to the whole world. The only content limitations are those set by (or for) rights-holders to music and video content. If you’ve ever wondered why there’s very little music on podcasts (they’re almost all talk), it’s because “clearing rights” for popular — or any — recorded music for podcasting ranges from awful to impossible. Streaming is easier, but no bargain. To get a sense of how complex streaming is, copyright-wise, dig David Oxenford’s Broadcast Law Blog. If all you want to do is talk, however, feel free, because you are. (A rough rule: talk is cheap, music is expensive.)
The key thing is that radio will remain what it has been from the start: the most intimate broadcast medium ever created. And it might become even more intimate than ever, once it’s clear and easy to everyone that anyone can do it. So rock on.
(Cross posted from this at Facebook)
In Snow on the Water I wrote about the “low threshold of death” for what media folks call “content” — which always seemed to me like another word for packing material. But it’s common parlance now.
For example, a couple days ago I heard a guy on WEEI, my fave sports station in Boston, yell “Coming up! Twenty-five straight minutes of content!”
Still, it’s all gone like snow on the water, melting at the speed of short term memory decay. Unless it’s in a podcast. And then, even if it’s saved, it’ll still get flushed or 404’d in the fullness of time.
So I think about content death a lot.
Back around the turn of the millennium, John Perry Barlow said “I didn’t start hearing the word ‘content’ until the container business felt threatened.” Same here. But the container business now looks more like plumbing than freight forwarding. Everything flows. But to where?
My Facebook timeline, standing in the vertical, looks like a core sample of glacier ice, drilled back to 1947, the year I showed up. Memory, while it lasts, is of old stuff which in the physical world would rot, dry, disintegrate, vanish or lithify from the bottom up.
But here we are on the Web, which was designed as a way to share documents, not to save them. It presumed a directory structure, inherited from Unix (e.g. domain.something/folder/folder/file.html). Amazingly, it’s still there. Whatever longevity “content” enjoys on the Web is largely owed to that structure, I believe.
But in practice most of what we pile onto the top of the Web is packed into silos such as Facebook. What happens to everything we put there if Facebook goes away? Bear in mind that Facebook isn’t even yet a decade old. It may be huge, but it’s no more permanent than a sand dune. Nothing on the Web is.
Everything on the Web, silo’d or not, flows outward from its sources like icebergs from glaciers, melting at rates of their own.
Anyway, just wanted to share some thoughts on digital mortality this morning.
As you were. Or weren’t. Or will be. Or not.
Bonus link: Locking the Web open.