infrastructure


This is the Ostrom Memorial Lecture I gave on 9 October of last year for the Ostrom Workshop at Indiana University. Here is the video. (The intro starts at 8 minutes in, and my part starts just after 11 minutes in.) I usually speak off the cuff, but this time I wrote it out, originally in outline form*, which is germane to my current collaborations with Dave Winer, father of outlining software (and, in related ways, of blogging and podcasting). So here ya go.

Intro

The movie Blade Runner was released in 1982 and was set in a future Los Angeles. Anyone here know when in the future Blade Runner is set? I mean, exactly?

The year was 2019. More precisely, next month: November.

In Blade Runner’s 2019, Los Angeles is a dark and rainy hellscape with buildings the size of mountains, flying cars, and human replicants working on off-world colonies. It also has pay phones and low-def computer screens that are vacuum tubes.

Missing is a communication system that can put everyone in the world at zero distance from everyone else, in disembodied form, at almost no cost—a system that lives on little slabs in people’s pockets and purses, and on laptop computers far more powerful than any computer, of any size, from 1982.

In other words, this communication system—the Internet—was less thinkable in 1982 than flying cars, replicants and off-world colonies. Rewind the world to 1982, and the future Internet would appear a miracle dwarfing the likes of loaves and fish.

In economic terms, the Internet is a common pool resource; but non-rivalrous and non-excludable to such an extreme that to call it a pool or a resource is to insult what makes it common: that it is the simplest possible way for anyone and anything in the world to be present with anyone and anything else in the world, at costs that can round to zero.

As a commons, the Internet encircles every person, every institution, every business, every university, every government, every thing you can name. It is no more exhaustible than presence itself. By nature and design, it can’t be tragic, any more than the Universe can be tragic.

There is also only one of it. As with the universe, it has no other examples.

As a source of abundance, the closest thing to an example the Internet might have is the periodic table. And the Internet might be even more elemental than that: so elemental that it is easy to overlook the simple fact that it is the largest goose ever to lay golden eggs.

It can, however, be misunderstood, and that’s why it’s in trouble.

The trouble it’s in is with human nature: the one that sees more value in the goose’s eggs than in the goose itself.

See, the Internet is designed to support every possible use, every possible institution, and—alas—every possible restriction, which is why enclosure is possible. People, institutions and possibilities of all kinds can be trapped inside enclosures on the Internet. I’ll describe nine of them.

Enclosures

The first enclosure is service provisioning, for example with asymmetric connection speeds. On cable connections you may have up to 400 megabits per second downstream, but still only 10 megabits per second—one fortieth of that—upstream. (By the way this is exactly what Spectrum, formerly Time Warner Cable, provides with its most expensive home service to customers in New York City.)

They do that to maximize consumption while minimizing production by those customers. You can consume all the video you want, and think you’re getting great service. But meanwhile this asymmetrical provisioning prevents production at your end. Want to put out a broadcast or a podcast from your house, to run your own email server, or to store your own video or other personal data in your own personal “cloud”? Forget it.
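To put rough numbers on that asymmetry, here is a small back-of-the-envelope sketch using the Spectrum speeds mentioned above. The 1 GB file size is just an example, and the arithmetic ignores protocol overhead.

```python
# Back-of-the-envelope arithmetic on asymmetric provisioning.
# Speeds are the Spectrum figures cited above; 1 GB is an arbitrary example file.

down_mbps = 400                # downstream, megabits per second
up_mbps = 10                   # upstream, megabits per second
file_megabits = 1.0 * 8_000    # 1 gigabyte = 8,000 megabits (decimal units)

download_seconds = file_megabits / down_mbps   # about 20 seconds
upload_seconds = file_megabits / up_mbps       # about 800 seconds, roughly 13 minutes

print(f"Download 1 GB: about {download_seconds:.0f} seconds")
print(f"Upload 1 GB:   about {upload_seconds / 60:.0f} minutes")
```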

The Internet was designed to support infinite production by anybody of anything. But cable TV companies don’t want you to have that power. So you don’t. The home Internet you get from your cable company is nice to have, but it’s not the whole Internet. It’s an enclosed subset of capabilities biased by and for the cable company and large upstream producers of “content.”

So, it’s golden eggs for them, but none for you. Also missing are all the golden eggs you might make possible for those companies as an active producer rather than as a passive consumer.

The second enclosure is through 5G wireless service, currently promoted by phone companies as a new generation of Internet service. The companies deploying 5G promise greater speeds and lower lag times over wireless connections; but it is also clear that they want to build in as many choke points as they like, all so you can be billed for as many uses as possible.

You want gaming? Here’s our gaming package. You want cloud storage? Here’s our cloud storage package. Each of these uses will carry terms and conditions that allow some uses and prevent others. Again, this is a phone company enclosure. No cable companies are deploying 5G. They’re fine with their own enclosure.

The third enclosure is government censorship. The most familiar example is China’s. In China’s closed Internet you will find no Google, Facebook, Twitter, Instagram or Reddit. No Pandora, Spotify, Slack or Dropbox. What you will find is pervasive surveillance of everyone and everything—and ranking of people in its Social Credit System.

By March of this year, China had already punished 23 million people with low social credit scores by banning them from traveling. Control of speech has also spread to U.S. companies such as the NBA and ESPN, which are now censoring themselves as well, bowing to the wishes of the Chinese government and their own captive business partners.

The fourth enclosure is the advertising-supported commercial Internet. This is led by Google and Facebook, but also includes all the websites and services that depend on tracking-based advertising. This form of advertising, known as adtech, has in the last decade become pretty much the only kind of advertising online.

Today there are very few major websites left that don’t participate in what Shoshana Zuboff calls surveillance capitalism, and in what Brett Frischmann and Evan Selinger, in their book by that title, call re-engineering humanity. Surveillance of individuals online is now so deep and widespread that nearly every news organization is either unaware of it or afraid to talk about it—in part because the advertising they run is aimed by it.

That’s why you’ll read endless stories about how bad Facebook and Google are, and how awful it is that we’re all being tracked everywhere like marked animals; but almost nothing about how the sites publishing stories about tracking also participate in exactly the same business—and far more surreptitiously. Reporting on their own involvement in the surveillance business is a third rail they won’t grab.

I know of only one magazine that took and shook that third rail, especially in the last year and a half.  That magazine was Linux Journal, where I worked for 24 years and was serving as editor-in-chief when it was killed by its owner in August. At least indirectly that was because we didn’t participate in the surveillance economy.

The fifth enclosure is protectionism. In Europe, for example, your privacy is protected by laws meant to restrict personal data use by companies online. As a result in Europe, you won’t see the Los Angeles Times or the Washington Post in your browsers, because those publishers don’t want to cope with what’s required by the EU’s laws.

While they are partly to blame—because they wish to remain in the reader-tracking business—the laws are themselves terribly flawed—for example by urging every website to put up a “cookie notice” on pages greeting readers. In most cases clicking “accept” to the site’s cookies only gives the site permission to continue doing exactly the kind of tracking the laws are meant to prevent.
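To illustrate that point, here is a deliberately simplified, hypothetical sketch of what many consent flows amount to in practice: the banner decides whether a consent record gets stored, not whether the site is in the tracking business. The script names are invented; this is not any particular site’s code.

```python
# Hypothetical sketch of a typical cookie-notice flow. Script names are invented;
# the point is that "accept" merely records permission for the same tracking the
# site was built around in the first place.

TRACKING_SCRIPTS = ["analytics-tracker.js", "adtech-pixel.js"]  # invented names

def scripts_for_page(consent_choice=None):
    """Return the scripts a page would load for a given consent choice."""
    scripts = ["site.js"]
    if consent_choice is None:
        scripts.append("cookie-banner.js")   # show the notice the law urges
    if consent_choice == "accept":
        scripts.extend(TRACKING_SCRIPTS)     # the same tracking, now "permitted"
    return scripts

# Example: scripts_for_page("accept") -> ["site.js", "analytics-tracker.js", "adtech-pixel.js"]
```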

So, while the purpose of these laws is to make the Internet safer, in effect they also make its useful space smaller.

The sixth enclosure is what The Guardian calls “digital colonialism.” The biggest example of that is Facebook’s Free Basics, originally called Internet.org.

This is a China-like subset of the Internet, offered for free by Facebook in less developed parts of the world. It consists of a fully enclosed Web, only a few dozen sites wide, each hand-picked by Facebook. The rest of the Internet isn’t there.

The seventh enclosure is the forgotten past. Today the World Wide Web, which began as a kind of growing archive—a public set of published goods we could browse as if it were a library—is being lost. Forgotten. That’s because search engines are increasingly biased toward indexing and finding pages from the present and recent past, and toward following the tracks of monitored browsers. They’re forgetting what’s old. Archival goods are starting to disappear, like snow on the water.

Why? Ask the algorithm.

Of course, you can’t. That brings us to our eighth enclosure: algorithmic opacity.

Consider for a moment how important power plants are, and how carefully governed they are as well. Every solar, wind, nuclear, hydro and fossil fuel power production system in the world is subject to inspection by whole classes of degreed and trained professionals.

There is nothing of the sort for the giant search engines and social networks of the world. Google and Facebook both operate dozens of data centers, each the size of many Walmart stores. Yet the inner workings of those data centers are subject to almost no government oversight.

This owes partly to the speed of change in what these centers do, but more to the simple fact that what they do is unknowable, by design. You can’t look at rows of computers with blinking lights in many acres of racks and have the first idea of what’s going on in there.

I would love to see research, for example, on that last enclosure I listed: on how well search engines continue to index old websites, or on much of anything else they do. The whole business is as opaque as a bowling ball with no holes.

I’m not even sure you can find anyone at Google who can explain exactly why its index does one thing or another, for any one person or another. In fact, I doubt Facebook is capable of explaining why any given individual sees any given ad. They aren’t designed for that. And the algorithm itself isn’t designed to explain itself, perhaps even to the employees responsible for it.

Or so I suppose.

In the interest of moving forward with research on these topics, I invite anyone at Google, Facebook, Bing or Amazon to help researchers at institutions such as the Ostrom Workshop, and to explain exactly what’s going on inside their systems, and to provide testable and verifiable ways to research those goings-on.

The ninth and worst enclosure is the one inside our heads. Because, if we think the Internet is something we use by grace of Apple, Amazon, Facebook, Google and “providers” such as phone and cable companies, we’re only helping all those companies contain the Internet’s usefulness inside their walled gardens.

Not understanding the Internet can result in problems similar to ones we suffer by not understanding common pool resources such as the atmosphere, the oceans, and the Earth itself.

But there is a difference between common pool resources in the natural world, and the uncommon commons we have with the Internet.

See, while we all know that common-pool resources are in fact not limitless—even when they seem that way—we don’t have the same knowledge of the Internet, because its nature as a limitless non-thing is non-obvious.

For example, we know common pool resources in the natural world risk tragic outcomes if our use of them is ungoverned, either by good sense or governance systems with global reach. But we don’t know that the Internet is limitless by design, or that the only thing potentially tragic about it is how we restrict access to it and use of it, by enclosures such as the nine I just listed.

So my thesis here is this: if we can deeply and fully understand what the Internet is, why it matters so much, and why it is in danger of enclosure, we can also understand why, ten years after Lin Ostrom won a Nobel prize for her work on the commons, that work may be exactly what we need to save the Internet as a boundless commons that can support countless others.

The Internet

We’ll begin with what makes the Internet possible: a protocol.

A protocol is a code of etiquette for diplomatic exchanges between computers. A form of handshake.

What the Internet’s protocol does is give all the world’s digital devices and networks a handshake agreement about how to share data between any point A and any point B in the world, across any intermediary networks.

When you send an email, or look at a website, anywhere in the world, the route the shared data takes can run through any number of networks between the two. You might connect from Bloomington to Denver through Chicago, Tokyo and Mexico City. Then, two minutes later, through Toronto and Miami. Some packets within your data flows may also be dropped along the way, but the whole session will flow just fine because the errors get noticed and the data re-sent and re-assembled on the fly.
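As a rough illustration of how little the endpoints need to know about any of that, here is close to the smallest possible TCP exchange, written in Python. The host and port are placeholders. Everything just described, including the routing, the dropped packets and the reassembly, happens beneath this code, in the protocol stack and the networks in between.

```python
# A minimal TCP exchange: point A opens a connection to point B, sends bytes,
# and reads the reply. Routing, loss detection and retransmission are handled
# by TCP/IP underneath; the endpoints only ever see an ordered stream of bytes.
import socket

def send_message(host, port, message):
    with socket.create_connection((host, port)) as conn:  # the "handshake"
        conn.sendall(message)             # delivered complete and in order, or not at all
        conn.shutdown(socket.SHUT_WR)     # tell the far end we're done sending
        return conn.recv(4096)            # read whatever comes back

# Example (placeholder host and port):
# reply = send_message("example.com", 7, b"hello from Bloomington")
```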

Oddly, none of this is especially complicated at the technical level, because what I just described is pretty much all the Internet does. It doesn’t concern itself with what’s inside the data traffic it routes, who is at the ends of the connections, or what their purposes are—any more than gravity cares about what it attracts.

Beyond the sunk costs of its physical infrastructure, and the operational costs of keeping the networks themselves standing up, the Internet has no first costs at its protocol level, and it adds no costs along the way. It also has no billing system.

In all these ways the Internet is, literally, neutral. It also doesn’t need regulators or lawmakers to make it neutral. That’s just its nature.

The Internet’s protocol is called TCP/IP, and by using it, all the networks of the world subordinate their own selfish purposes.

This is what makes the Internet’s protocol generous and supportive to an absolute degree toward every purpose to which it is put. It is a rising tide that lifts all boats.

In retrospect we might say the big networks within the Internet—those run by phone and cable companies, governments and universities—agreed to participate in the Internet because it was so obviously useful that there was no reason not to.

But the rising-tide nature of the Internet was not obvious to all of them at first. In retrospect, they didn’t realize that the Internet was a Trojan Horse, wheeled through their gates by geeks who looked harmless but in fact were bringing the world a technical miracle.

I can support that claim by noting that even though phone and cable companies of the world now make trillions of dollars because of it, they never would have invented it.

Two reasons for that. One is because it was too damn simple. The other is because they would have started with billing. And not just billing you and me. They would have wanted to bill each other, and not use something invented by another company.

A measure of the Internet’s miraculous nature is that actually billing each other would have been so costly and complicated that what they do with each other, to facilitate the movement of data to, from, and across their networks, is called peering. In other words, they charge each other nothing.
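A quick way to see why pairwise billing would have been so costly is to count the relationships it would require. The network count below is a rough assumption for illustration (the Internet today interconnects tens of thousands of independently operated networks); the point is how fast the number of pairs grows.

```python
# Rough combinatorics of settlement between networks: every pair that exchanges
# traffic would need its own billing relationship. Peering makes the default zero.

def settlement_pairs(n_networks):
    return n_networks * (n_networks - 1) // 2

for n in (10, 1_000, 60_000):   # 60,000 is a rough stand-in for today's network count
    print(f"{n:>6} networks -> {settlement_pairs(n):>13,} possible billing relationships")

# 60,000 networks implies roughly 1.8 billion possible pairwise relationships,
# which helps explain why settlement-free peering won out.
```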

Even today it is hard for the world’s phone and cable companies—and even its governments, which have always been partners of a sort—to realize that the Internet became the world-wide way to communicate because it didn’t start with billing.

Again, all TCP/IP says is that this is a way for computers, networks, and everything connected to them, to get along. And it succeeded, producing instant worldwide peace among otherwise competing providers of networks and services. It made every network operator involved win a vast positive-sum game almost none of them knew they were playing. And most of them still don’t.

You know that old joke in which the big fish says to the little fish, “Hi guys, how’s the water?” and one of the little fish says to the other “What’s water?” In 2005, David Foster Wallace gave a legendary commencement address at Kenyon College that I highly recommend, titled “This is water.”

I suspect that, if Wallace were around today, he’d address his point to our digital world.

Human experience

Those of you who already know me are aware that my wife Joyce is as much a companion and collaborator of mine as Vincent Ostrom was of Lin. I bring this up because much of this talk is hers, including this pair of insights about the Internet: that it has no distance, and also no gravity.

Think about it: when you are on the Internet with another person—for example if you are in a chat or an online conference—there is no functional distance between you and the other person. One of you may be in Chicago and the other in Bangalore. But if the Internet is working, distance is gone. Gravity is also gone. Your face may be right-side-up on the other person’s screen, but it is absent of gravity. The space you both occupy is the other person’s two-dimensional rectangle. Even if we come up with holographic representations of ourselves, we are still incorporeal “on” the Internet. (I say “on” because we need prepositions to make sense of how things are positioned in the world. Yet our limited set of physical-world prepositions—over, under, around, through, beside, within and the rest—misdirects our attention away from our disembodied state in the digital one.)

Familiar as that disembodied state may be to all of us by now, it is still new to human experience and inadequately informed by our experience as embodied creatures. It is also hard for us to see both what our limitations are, and how limitless we are at the same time.

Joyce points out that we are also highly adaptive creatures, meaning that eventually we’ll figure out what it means to live where there is no distance or gravity, much as astronauts learn to live as weightless beings in space.

But in the meantime, we’re having a hard time seeing the nature and limits of what’s good and what’s bad in this new environment. And that has to do, at least in part, with forms of enclosure in that world—and how we are exploited within private spaces where we hardly know we are trapped.

In The Medium is the Massage, Marshall McLuhan says every new medium, every new technology, “works us over completely.” Those are his words: works us over completely. Such as now, with digital technology, and the Internet.

I was talking recently with a friend about where our current digital transition ranks among all the other transitions in history that each have a formal cause. Was becoming digital the biggest thing since the industrial revolution? Since movable type? Writing? Speech?

No, he said. “It’s the biggest thing since oxygenation.”

In case you weren’t there, or weren’t paying attention in geology class, oxygenation happened about 2.5 billion years ago. Which brings us to our next topic:

Institutions

Journalism is just one example of a trusted institution that is highly troubled in the digital world.

It worked fine in a physical world where truth-tellers who dug into topics and reported on them with minimized prejudice were relatively scarce yet easy to find, and to trust. But in a world flooded with information and opinion—a world where everyone can be a reporter, a publisher, a producer, a broadcaster, where the “news cycle” has the lifespan of a joke, and where news and gossip have become almost indistinguishable while being routed algorithmically to amplify prejudice and homophily, journalism has become an anachronism: still important, but all but drowning in a flood of biased “content” paid for by surveillance-led adtech.

People are still hungry for good information, of course, but our appetites are too easily fed by browsing through the surfeit of “content” on the Internet, which we can easily share by text, email or social media. Even if we do the best we can to share trustworthy facts and other substances that sound like truth, we remain suspended in a techno-social environment we mostly generate and re-generate ourselves. Kind of like our ancestral life forms made sense of the seas they oxygenated, long ago.

The academy is another institution that’s troubled in our digital time. After all, education on the Internet is easy to find. Good educational materials are easy to produce and share. For example, take Khan Academy, which started with one guy tutoring his cousin through online videos.

Authority must still be earned, but there are now countless non-institutional ways to earn it. Credentials still matter, but less than they used to, and not in the same ways. Ad hoc education works in ways that can be cheap or free, while institutions of higher education remain very expensive. What happens when the market for knowledge and know-how starts moving past requirements for advanced degrees that might take students decades of their lives to pay off?

For one example of that risk already at work, take computer programming.

Which do you think matters more to a potential employer of programmers—a degree in computer science or a short but productive track record? For example, by contributing code to the Linux operating system?

To put this in perspective, Linux and operating systems like it are inside nearly every smart thing that connects to the Internet, including TVs, door locks, the world’s search engines, social networks, laptops and mobile phones. Nothing could be more essential to computing life.

At the heart of Linux is what’s called the kernel. For code to get into the kernel, it has to pass muster with other programmers who have already proven their worth, and then through testing and debugging. If you’re looking for a terrific programmer, everyone contributing to the Linux kernel is well-proven. And there are thousands of them.

Now here’s the thing. It doesn’t matter whether those people have degrees in computer science, or even whether they’ve had any formal training. What matters, for our purposes here, is that, to a remarkable degree, many of them (perhaps most) have neither.

I know a little about this because, in the course of my work at Linux Journal, I would sometimes ask groups of alpha Linux programmers where they learned to code. Almost none told me “school.” Most were self-taught or learned from each other.

My point here is that the degree to which the world’s most essential and consequential operating system depends on the formal education of its makers is roughly zero.

See, the problem for educational institutions in the digital world is that most were built to leverage scarcity: scarce authority, scarce materials, scarce workspace, scarce time, scarce credentials, scarce reputation, scarce anchors of trust. To a highly functional degree we still need and depend on what only educational institutions can provide, but that degree is a lot lower than it used to be, a lot more varied among disciplines, and it risks continuing to decline as time goes on.

It might help at this point to see gravity in some ways as a problem the Internet solves. Because gravity is top-down. It fosters hierarchy and bell curves, sometimes where we need neither.

Absence of gravity instead fosters heterarchy and polycentrism. And, as we know at the Ostrom Workshop perhaps better than anywhere, commons are good examples of heterarchy and polycentrism at work.

Knowledge Commons

In the first decade of our new millennium, Elinor Ostrom and Charlotte Hess—already operating in our new digital age—extended the commons category to include knowledge, calling it a complex ecosystem that operates as a commons: a shared resource subject to social dilemmas.

They looked at ease of access to digital forms of knowledge and at easy new ways to store, access and share knowledge as a commons. They also looked at the nature of knowledge and its qualities of non-rivalry and non-excludability, both unlike what characterizes a natural commons, with its scarcities of rivalrous and excludable goods.

A knowledge commons, they said, is characterized by abundance. This is one reason why what Yochai Benkler calls Commons Based Peer Production is both easy and rampant on the Internet, giving us, among many other things, both the free software and open source movements in code development and sharing, plus the Internet and the Web.

Commons Based Peer Production also demonstrates how collaboration and non-material incentives can produce better quality products, and less social friction in the course of production.

I’ve given Linux as one example of Commons Based Peer Production. Others are Wikipedia and the Internet Archive. We’re also seeing it within the academy, for example with Indiana University’s own open archives, making research more accessible and scholarship more rich and productive.

Every one of those examples comports with Lin Ostrom’s design principles:

  1. clearly defined group boundaries;
  2. rules governing use of common goods within local needs and conditions;
  3. participation in modifying rules by those affected by the rules;
  4. accessible and low cost ways to resolve disputes;
  5. developing a system, carried out by community members, for monitoring members’ behavior;
  6. graduated sanctions for rule violators;
  7. and governing responsibility in nested tiers from the lowest level up to the entire interconnected system.

But there is also a crisis with Commons Based Peer Production on the Internet today.

Programmers who ten or fifteen years ago would not participate in enclosing their own environments are doing exactly that, for example with 5G, which is designed to put the phone companies in charge of what we can do on the Internet.

The 5G-enclosed Internet might be faster and handier in many ways, but the range of freedoms for each of us there will be bounded by the commercial interests of the phone companies and their partners, and subject to none of Lin’s rules for governing a commons.

Consider this: every one of the nine enclosures I listed at the beginning of this talk is enabled by programmers who either forgot or never learned about the freedom and openness that made the free and open Internet possible. They are employed in the golden egg gathering business—not in one that appreciates the goose that lays those eggs, and which their predecessors gave to us all.

But this isn’t the end of the world. We’re still at the beginning. And a good model for how to begin is—

The physical world

It is significant that all the commons the Ostroms and their colleagues researched in depth were local. Their work established beyond any doubt the importance of local knowledge and local control.

I believe demonstrating this in the digital world is our best chance of saving our digital world from the nine forms of enclosure I listed at the top of this talk.

It’s our best chance because there is no substitute for reality. We may be digital beings now, as well as physical ones. But there are great advantages, even in the digital world, to operating in the here-and-now physical world, where all our prepositions still work, and our metaphors still apply.

Back to Joyce again.

In the mid ‘90s, when the Internet was freshly manifest on our home computers, I was mansplaining to Joyce how this Internet thing was finally the global village long promised by tech.

Her response was, “The sweet spot of the Internet is local.” She said that’s because local is where the physical and the virtual intersect. It’s where you can’t fake reality, because you can see and feel and shake hands with it.

She also said the first thing the Internet would obsolesce would be classified ads in newspapers. That’s because the Internet would be a better place than classifieds for parents to find a crib some neighbor down the street might have for sale. Then Craigslist came along and did exactly that.

We had an instructive experience with how the real world and the Internet work together helpfully at the local level about a year and a half ago. That’s when a giant rainstorm fell on the mountains behind Santa Barbara, where we live, and the town next door, called Montecito. This was also right after the Thomas Fire—largest at the time in recorded California history—had burned all the vegetation away, and there was a maximum risk of what geologists call a “debris flow.”

The result was the biggest debris flow in the history of the region: a flash flood of rock and mud that flowed across Montecito like lava from a volcano. Nearly two hundred homes were destroyed, and twenty-three people were killed. Two of them were never found, because it’s hard to find victims buried under what turned out to be at least twenty thousand truckloads of boulders and mud.

Right afterwards, all of Montecito was evacuated, and very little news got out while emergency and rescue workers did their jobs. Our local news media did an excellent job of covering this event as a story. But I also noticed that not much was being said about the geology involved.

So, since I was familiar with debris flows out of the mountains above Los Angeles, where they have infrastructure that’s ready to handle this kind of thing, I put up a post on my blog titled “Making sense of what happened to Montecito.” In that post I shared facts about the geology involved, and also published the only list on the Web of all the addresses of homes that had been destroyed. Visits to my blog jumped from dozens a day to dozens of thousands. Lots of readers also helped improve what I wrote and re-wrote.

All of this happened over the Internet, but it pertained to a real-world local crisis.

Now here’s the thing. What I did there wasn’t writing a story. I didn’t do it for the money, and my blog is a noncommercial one anyway. I did it to help my neighbors. I did it by not being a bystander.

I also did it in the context of a knowledge commons.

Specifically, I was respectful of boundaries of responsibility; notably those of local authorities—rescue workers, law enforcement, reporters from local media, city and county workers preparing reports, and so on. I gave much credit where it was due and didn’t step on the toes of others helping out as well.

An interesting fact about journalism there at the time was the absence of fake news. Sure, there was plenty of finger-pointing in blog comments and in social media. But it was marginalized away from the fact-reporting that mattered most. There was a very productive ecosystem of information, made possible by the Internet in everyone’s midst. And by everyone, I mean lots of very different people.

Humanity

We are learning creatures by nature. We can’t help it. And we don’t learn by freight forwarding.

By that, I mean what I am doing here, and what we do with each other when we talk or teach, is not delivering a commodity called information, as if we were forwarding freight. Something much more transformational is taking place, and this is profoundly relevant to the knowledge commons we share.

Consider the word information. It’s a noun derived from the verb to inform, which in turn is derived from the verb to form. When you tell me something I don’t know, you don’t just deliver a sum of information to me. You form me. As a walking sum of all I know, I am changed by that.

This means we are all authors of each other.

In that sense, the word authority belongs to the right we give others to author us: to form us.

Now look at how much more of that can happen on our planet, thanks to the Internet, with its absence of distance and gravity.

And think about how that changes every commons we participate in, as both physical and digital beings. And how much we need guidance to keep from screwing up the commons we have, or failing to form the ones we don’t have yet, but might have in the future—if we don’t screw things up.

A rule in technology is that what can be done will be done—until we find out what shouldn’t be done. Humans have done this with every new technology and practice from speech to stone tools to nuclear power.

We are there now with the Internet. In fact, many of those enclosures I listed are well-intended efforts to limit dangerous uses of the Internet.

And now we are at a point where some of those too are a danger.

What might be the best way to look at the Internet and its uses most sensibly?

I think the answer is governance predicated on the realization that the Internet is perhaps the ultimate commons, and subject to both research and guidance informed by Lin Ostrom’s rules.

And I hope that guides our study.

There is so much to work on: expansion of agency, sensibility around license and copyright, freedom to benefit individuals and society alike, protections that don’t foreclose opportunity, saving journalism, modernizing the academy, creating and sharing wealth without victims, de-financializing our economies… the list is very long. And I look forward to working with many of us here on answers to these and many other questions.

Thank you. 

Sources

Ostrom, Elinor. Governing the Commons. Cambridge University Press, 1990

Ostrom, Elinor and Hess, Charlotte, editors. Understanding Knowledge as a Commons: From Theory to Practice. MIT Press, 2011.
https://mitpress.mit.edu/books/understanding-knowledge-commons
Full text online: https://wtf.tw/ref/hess_ostrom_2007.pdf

Paul D. Aligica and Vlad Tarko, “Polycentricity: From Polanyi to Ostrom, and Beyond” https://asp.mercatus.org/system/files/Polycentricity.pdf

Elinor Ostrom, “Coping With Tragedies of the Commons,” 1998 https://pdfs.semanticscholar.org/7c6e/92906bcf0e590e6541eaa41ad0cd92e13671.pdf

Lee Anne Fennell, “Ostrom’s Law: Property rights in the commons,” March 3, 2011
https://www.thecommonsjournal.org/articles/10.18352/ijc.252/

Christopher W. Savage, “Managing the Ambient Trust Commons: The Economics of Online Consumer Information Privacy.” Stanford Law School, 2019. https://law.stanford.edu/wp-content/uploads/2019/01/Savage_20190129-1.pdf

 

________________

*I wrote it using—or struggling in—the godawful Outline view in Word. Since I succeeded (most don’t, because they can’t or won’t, with good reason), I’ll brag on succeeding at the subhead level:

As I’m writing this, in February 2020, Dave Winer is working on what he calls writing on rails. That’s what he gave the pre-Internet world with MORE several decades ago, and I’m helping him now with the Internet-native kind, as a user. He explains that here. (MORE was, for me, like writing on rails. It’ll be great to go back—or forward—to that again.)

A Route of Evanescence,
With a revolving Wheel –
A Resonance of Emerald
A Rush of Cochineal –
And every Blossom on the Bush
Adjusts it’s tumbled Head –
The Mail from Tunis – probably,
An easy Morning’s Ride –

—Emily Dickinson
(via The Poetry Foundation)

While that poem is apparently about a hummingbird, it’s the one that comes first to my mind when I contemplate the form of evanescence that’s rooted in the nature of the Internet, where all of us are here right now, as I’m writing and you’re reading this.

Because, let’s face it: the Internet is no more about anything “on” it than air is about noise, speech or anything at all. Like air, sunlight, gravity and other useful graces of nature, the Internet is good for whatever can be done with it.

Same with the Web. While the Web was born as a way to share documents at a distance (via the Internet), it was never a library, even though we borrowed the language of real estate and publishing (domains and sites with pages one could author, edit, publish, syndicate, visit and browse) to describe it. While the metaphorical framing in all those words suggests durability and permanence, it belies the inherently evanescent nature of all we call content.

Think about the words memory, storage, upload, and download. All suggest that content in digital form has substance at least resembling the physical kind. But it doesn’t. It’s a representation, in a pattern of ones and zeros, recorded on a medium for as long as the responsible party wishes to keep it there, or the medium survives. All those states are volatile, and none guarantee that those ones and zeroes will last.

I’ve been producing digital content for the Web since the early 90s, and for much of that time I was lulled into thinking of the digital tech as something at least possibly permanent. But then my son Allen pointed out a distinction between the static Web of purposefully durable content and what he called the live Web. That was in 2003, when blogs were just beginning to become a thing. Since then the live Web has become the main Web, and people have come to see content as writing or projections on a World Wide Whiteboard. Tweets, shares, shots and posts are mostly of momentary value. Snapchat succeeded as a whiteboard where people could share “moments” that erased themselves after one view. (It does much more now, but evanescence remains its root.)

But, being both (relatively) old and (seriously) old-school about saving stuff that matters, I’ve been especially concerned with how we can archive, curate and preserve as much as possible of what’s produced for the digital world.

Last week, for example, I was involved in the effort to return Linux Journal to the Web’s shelves. (The magazine and site, which lived from April 1994 to August 2019, was briefly down, and with it all my own writing there, going back to 1996. That corpus is about a third of my writing in the published world.) Earlier, when it looked like Flickr might go down, I worried aloud about what would become of my many-dozen-thousand photos there. SmugMug saved it (Yay!); but there is no guarantee that any Website will persist forever, in any form. In fact, the way to bet is on the mortality of everything there. (Perspective: earlier today, over at doc.blog, I posted a brief think piece about the mortality of our planet, and the youth of the Universe.)

But the evanescent nature of digital memory shouldn’t stop us from thinking about how to take better care of what of the Net and the Web we wish to see remembered for the world. This is why it’s good to be in conversation on the topic with Brewster Kahle (of archive.org), Dave Winer and other like-minded folk. I welcome your thoughts as well.

In a press release, Amazon explained why it backed out of its plan to open a new headquarters in New York City:

For Amazon, the commitment to build a new headquarters requires positive, collaborative relationships with state and local elected officials who will be supportive over the long-term. While polls show that 70% of New Yorkers support our plans and investment, a number of state and local politicians have made it clear that they oppose our presence and will not work with us to build the type of relationships that are required to go forward with the project we and many others envisioned in Long Island City.

So, even if the economics were good, the politics were bad.

The hmm for me is why not New Jersey? Given the enormous economic and political overhead of operating in New York, I’m wondering why Amazon didn’t consider New Jersey first. Or if it’s thinking about it now.

New Jersey is cheaper and (so I gather) friendlier, at least tax-wise. It also has one of the country’s largest ports (one that used to be in New York, bristling Manhattan’s shoreline with piers and wharves, making it look like a giant paramecium) and is a massive warehousing and freight forwarding hub. In fact Amazon already has a bunch of facilities there (perhaps including its own little port on Arthur Kill). I believe there are also many more places to build on the New Jersey side. (The photo above, shot on approach to Newark Airport, looks at New York across some of those build-able areas.)

And maybe that’s the plan anyway, without the fanfare.

As it happens, I’m in the midst of reading Robert Caro’s The Power Broker: Robert Moses and the Fall of New York. (Which is massive. There’s a nice summary in The Guardian here.) This helps me appreciate the power of urban planning, and how thoughtful and steel-boned opposition to some of it can be fully useful. One example of that is Jane Jacobs’ thwarting of Moses’ plan to run a freeway through Greenwich Village. He had earlier done the same through The Bronx, with the Cross Bronx Expressway. While that road today is an essential stretch of the northeast transport corridor, at the time it was fully destructive to urban life in that part of the city—and in many ways still is.

So I try to see both sides of an issue such as this. What’s constructive and what’s destructive in urban planning are always hard to pull apart.

For an example close to home, I often wonder if it’s good that Fort Lee is now almost nothing but high-rises. This is the town my grandfather helped build (he was the head carpenter for D.W. Griffith when Fort Lee was the first Hollywood), where my father grew up climbing the Palisades for fun, and where he later put his skills to work as a cable rigger, helping build the George Washington Bridge. The Victorian house Grandpa built for his family on Hoyt Avenue, and where my family lived when I was born, stood about as close to a giant new glass box called The Modern as I am from the kitchen of the apartment where I’m writing this, a few blocks away from The Bridge on the other side of the Hudson. It’s paved now, by a road called Bruce Reynolds Boulevard. Remember Bridgegate? That happened right where our family home stood, in a pleasant neighborhood of which nothing remains.

Was the disappearance of that ’hood a bad thing? Not by now, long after the neighborhood was erased and nearly everyone who lived there has died or has long since moved on. Thousands more live there now than ever did when it was a grid of nice homes on quiet, tree-lined streets.

All urban developments are omelettes made of broken eggs. If you’re an egg, you’ve got reason to complain. If you’re a cook, you’d better make a damn fine omelette.

I came up with that law in the last millennium and it applied until Chevy discontinued the Cavalier in 2005. Now it should say, “You’re going to get whatever they’ve got.”

The difference is that every car rental agency in days of yore tended to get their cars from a single car maker, and now they don’t. Back then, if an agency’s relationship was with General Motors, which most of them seemed to be, the lot would have more of GM’s worst car than of any other kind of car. Now the car you rent truly is whatever. In the last year we’ve rented at least one Kia, Hyundai, Chevy, Nissan, Volkswagen, Ford and Toyota, and that’s just off the top of my head. (By far the best was a Chevy Impala. I actually loved it. So, naturally, it’s being discontinued.)

All of that, of course, applies only in the U.S. I know less about car rental verities in Europe, since I haven’t rented a car there since (let’s see…) 2011.

Anyway, when I looked up doc searls chevy cavalier to find whatever I’d written about my felicitous Fourth Law, the results included this, from my blog in 2004…

Five years later, the train pulls into Madison Avenue

ADJUSTING TO THE REALITY OF A CONSUMER-CONTROLLED MARKET, by Scott Donathon in Advertising Age. An excerpt:

Larry Light, global chief marketing officer at McDonald’s, once again publicly declared the death of the broadcast-centric ad model: “Mass marketing today is a mass mistake.” McDonald’s used to spend two-thirds of its ad budget on network prime time; that figure is now down to less than one-third.

General Motors’ Roger Adams, noting the automaker’s experimentation with less-intrusive forms of marketing, said, “The consumer wants to be in control, and we want to put them in control.” Echoed Saatchi & Saatchi chief Kevin Roberts, “The consumer now has absolute power.”

“It is not your goddamn brand,” he told marketers.

This consumer empowerment is at the heart of everything. End users are now in control of how, whether and where they consume information and entertainment. Whatever they don’t want to interact with is gone. That upends the intrusive model the advertising business has been sustained by for decades.

This is still fucked, of course. Advertising is one thing. Customer relationships are another.

“Consumer empowerment” is an oxymoron. Try telling McDonalds you want a hamburger that doesn’t taste like a horse hoof. Or try telling General Motors that nobody other than rental car agencies wants to buy a Chevy Cavalier or a Chevy Classic; or that it’s time, after 60 years of making crap fixtures and upholstery, to put an extra ten bucks (or whatever it costs) into trunk rugs that don’t seem like the company works to make them look and feel like shit. Feel that “absolute power?” Or like you’re yelling at the pyramids?

Real demand-side empowerment will come when it’s possible for any customer to have a meaningful — and truly valued — conversation with people in actual power on the supply side. And those conversations turn into relationships. And those relationships guide the company.

I’ll believe it when I see it.

Meanwhile the decline of old-fashioned brand advertising on network TV (which now amounts to a smaller percentage of all TV in any case) sounds more to me like budget rationalization than meaningful change where it counts.

Thanks to Terry for the pointer.

Three things about that.

First, my original blog (which ran from 1999 to 2007) is still up, thanks to Jake Savin and Dave Winer, at http://weblog.searls.com. (Adjust your pointers. It’ll help Google and Bing forget the old address.)

Second, I’ve been told by rental car people that the big American car makers actually got tired of hurting their brands by making shitty cars and scraping them off on rental agencies. So now the agencies mostly populate their lots with surplus cars that don’t make it to dealers for various reasons. They also let their cars pile up 50k miles or more before selling them off. Also, the quality of cars in general is much higher than it used to be, and the experience of operating them is much more uniform—meaning blah in nearly identical ways.

Third, I’ve changed my mind on brand advertising since I wrote that. Two reasons. One is that brand advertising sponsors the media it runs on, which is a valuable thing. The other is that brand advertising really does make a brand familiar, which is transcendently valuable to the brand itself. There is no way personalized and/or behavioral advertising can do the same. Perhaps as much as $2 trillion has been spent on tracking-based digital advertising, and not one brand known to the world has been made by it.

And one more thing: since we don’t commute, and we don’t need a car most of the time, we now favor renting cars over owning them. Much simpler and much cheaper. And the cars we rent tend to be nicer than the used cars we’ve owned and mostly driven into the ground. You never know what you’re going to get, but generally they’re not bad, and not our problem if something goes wrong with one, which almost never happens.

 

I want to point to three great posts.

First is Larry Lessig‘s Podcasting and the Slow Democracy Movement. A pull quote:

The architecture of the podcast is the precise antidote for the flaws of the present. It is deep where now is shallow. It is insulated from ads where now is completely vulnerable. It is a chance for thinking and reflection; it has an attention span an order of magnitude greater than the Tweet. It is an opportunity for serious (and playful) engagement. It is healthy eating for a brain-scape that now gorges on fast food.

If 2016 was the Twitter election — fast food, empty calorie content driving blood pressure but little thinking — then 2020 must be the podcast election — nutrient-rich, from every political perspective. Not sound bites driven by algorithms, but reflective and engaged humans doing what humans still do best: thinking with empathy about ideals that could make us better — as humans, not ad-generating machines.

There is hope here. We need to feed it.

I found that through a Radio Open Source email pointing to the show’s latest podcast, The New Normal. I haven’t heard that one yet; but I am eager to, because I suspect the “new normal” may be neither. And, as I might not with Twitter, I am foregoing judgement until I do hear it. The host is also Chris Lydon, a friend whose podcast pioneering owes to collaboration with Dave Winer, who invented the form of RSS used by nearly all the world’s podcasters, and who wrote my third recommended post, Working Together, in 2019. That one is addressed to Chris and everyone else bringing tools and material to the barns we’re raising together. The title says it all, but read it anyway.

Work is how we feed the hope Larry talks about.

 

The original pioneer in space-based telephony isn’t @ElonMusk (though he deserves enormous credit for his work in the field, the latest example of which is SpaceX‘s 7,518-satellite Starlink network, and which has been making news lately). It’s the people behind the Iridium satellite constellation, the most driven and notorious of which was Ed Staiano.

Much has been written about Iridium’s history, and Ed’s role in driving its satellites into space, most of it negative toward Ed. But I’ve always thought that was at least partly unfair. Watching the flow of news about Iridium at the time it was moving from ground to sky, it was clear to me that Iridium would have remained on the ground if Ed wasn’t a tough bastard about making it fly.

My ad agency, Hodskins Simone & Searls, worked for Ed when he was at Motorola in pre-Iridium days. He was indeed a tough taskmaster: almost legendarily demanding and impatient with fools. But I never had a problem with him. And I believe it’s a testimony to Ed’s vision and persistence that Iridium is still up there, doing the job he wanted it to do, and paving the way for Elon and others (including @IridiumComm itself).

So hats off, Ed. Hope you’re doing well.


I had a bunch of errands to run today, but also a lot of calls. And, when I finally got up from my desk around 4pm with plans to head out in the car, I found five inches of snow already on the apartment deck. Another five would come after that. So driving was clearly a bad idea.

When I stepped out on the street, I saw it was impossible. Cars were stuck, even on our side street.

So I decided to walk down to the nearest dollar store, a few blocks north on Broadway, which is also downhill in this part of town, to check out the ’hood and pick up some deck lights to replace the ones that had burned out a while back.

What I found on Broadway was total gridlock, because too many cars and trucks couldn’t move. Tires all over spun in place, saying “zzzZZZZzzzZZZ.” After I picked up a couple 5-foot lengths of holiday lights for $1 each at the dollar store, I walked back up past the same stuck length of cars and trucks I saw on the way down. A cop car and an ambulance would occasionally fire up their sirens, but it made no difference. Everything was halted.

When I got back, I put the lights on the deck and later shot the scene above. It’s 10pm now, and rains have turned the scene to slush.

I do hope kids got to sled in the snow anyway. Bonus links: Snow difference and Wintry mixing.


We live in two worlds now: the natural one where we have bodies that obey the laws of gravity and space/time, and the virtual one where there is no gravity or distance (though there is time).

In other words, we are now digital as well as physical beings, and this is new to a human experience where, so far, we are examined and manipulated like laboratory animals by giant entities that are out of everybody’s control—including theirs.

The collateral effects are countless and boundless.

Take journalism, for example. That’s what I did in a TEDx talk I gave last month in Santa Barbara:

I next visited several adjacent territories with a collection of brilliant folk at the Ostrom Workshop on Smart Cities. (Which was live-streamed, but I’m not sure is archived yet. Need to check.)

Among those folk was Brett Frischmann, whose canonical work on infrastructure I covered here, and who in Re-Engineering Humanity (with Evan Selinger) explains exactly how giants in the digital infrastructure business are hacking the shit out of us—a topic I also visit in Engineers vs. Re-Engineering (my August editorial in Linux Journal).

Now also comes Bruce Schneier, with his perfectly titled book Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World, which Farhad Manjoo in The New York Times sources in A Future Where Everything Becomes a Computer Is as Creepy as You Feared. Pull-quote: “In our government-can’t-do-anything-ever society, I don’t see any reining in of the corporate trends.”

In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, a monumental work due out in January (and for which I’ve seen some advance galleys), Shoshana Zuboff makes both cases (and several more) at impressive length and depth.

Privacy plays in all of these, because we don’t have it yet in the digital world. Or not much of it, anyway.

In reverse chronological order, here’s just some of what I’ve said on the topic:

So here we are: naked in the virtual world, just like we were in the natural one before we invented clothing and shelter.

And that’s the challenge: to equip ourselves to live private and safe lives, and not just public and endangered ones, in our new virtual world.

Some of us have taken up that challenge too: with ProjectVRM, with Customer Commons, and with allied efforts listed here.

And I’m optimistic about our prospects.

I’ll also be detailing that optimism in the midst of a speech titled “Why adtech sucks and needs to be killed” next Wednesday (October 17th) at An Evening with Advertising Heretics in NYC. Being at the Anne L. Bernstein Theater on West 50th, it’s my off-Broadway debut. The price is a whopping $10.

 

 

Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Irony Alert: the same is true for the Times, along with every other publication that lives off adtech: tracking-based advertising. These pubs don’t just open the kimonos of their readers. They bring readers’ bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of aiming “interest-based” advertising at those same readers, wherever those readers’ eyeballs may appear—or reappear in the case of “retargeted” advertising.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach, standard, experience or audit trail), and no blood valving by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data?

Answer: nobody knows, because the whole adtech “ecosystem” is a four-dimensional shell game with hundreds of players, or, in the case of “martech,” thousands.

For one among many views of what’s going on, here’s a compressed screen shot of what Privacy Badger showed going on in my browser behind Zeynep’s op-ed in the Times:

[Added later…] @ehsanakhgari tweets pointage to WhoTracksMe’s page on the NYTimes, which shows this:

And here’s more irony: a screen shot of the home page of RedMorph, another privacy protection extension:

That quote is from Free Tools to Keep Those Creepy Online Ads From Watching You, by Brian X. Chen and Natasha Singer, and published on 17 February 2016 in the Times.

The same irony applies to countless other correct and important reportage on the Facebook/Cambridge Analytica mess by other writers and pubs. Take, for example, Cambridge Analytica, Facebook, and the Revelations of Open Secrets, by Sue Halpern in yesterday’s New Yorker. Here’s what RedMorph shows going on behind that piece:

Note that I have the data leak toward Facebook.net blocked by default.

Here’s a view through RedMorph’s controller pop-down:

And here’s what happens when I turn off “Block Trackers and Content”:

By the way, I want to make clear that Zeynep, Brian, Natasha and Sue are all innocents here, thanks both to the “Chinese wall” between the editorial and publishing functions of the Times, and the simple fact that the route any ad takes between advertiser and reader through any number of adtech intermediaries is akin to a ball falling through a pinball machine. Refresh your page while reading any of those pieces and you’ll see a different set of ads, no doubt aimed by automata guessing that you, personally, should be “impressed” by those ads. (They’ll count as “impressions” whether you are or not.)

Now…

What will happen when the Times, the New Yorker and other pubs own up to the simple fact that they are just as guilty as Facebook of leaking data about their readers to other parties, for—in many if not most cases—God knows what purposes besides “interest-based” advertising? And what happens when the EU comes down on them too? It’s game-on after 25 May, when the EU can start fining violators of the General Data Protection Regulation (GDPR). Key fact: the GDPR protects the data blood of what it calls “EU data subjects” wherever those subjects’ necks are exposed in the borderless digital world.

To explain more about how this works, here is the (lightly edited) text of a tweet thread posted this morning by @JohnnyRyan of PageFair (after the thread I’ve added a sketch of what one of those bid requests carries):

Facebook left its API wide open, and had no control over personal data once those data left Facebook.

But there is a wider story coming: (thread…)

Every single big website in the world is leaking data in a similar way, through “RTB bid requests” for online behavioural advertising #adtech.

Every time an ad loads on a website, the site sends the visitor’s IP address (indicating physical location), the URL they are looking at, and details about their device, to hundreds (often thousands) of companies. Here is a graphic that shows the process.

The website does this to let these companies “bid” to show their ad to this visitor. Here is a video of how the system works. In Europe this accounts for about a quarter of publishers’ gross revenue.

Once these personal data leave the publisher, via “bid request”, the publisher has no control over what happens next. I repeat that: personal data are routinely sent, every time a page loads, to hundreds/thousands of companies, with no control over what happens to them.

This means that every person, and what they look at online, is routinely profiled by companies that receive these data from the websites they visit. Where possible, these data are combined with offline data. These profiles are built up in “DMPs”.

Many of these DMPs (data management platforms) are owned by data brokers. (Side note: the FTC’s 2014 report on data brokers is shocking. See https://www.ftc.gov/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014.) There is no functional difference between an #adtech DMP and Cambridge Analytica.

—Terrell McSweeny, Julie Brill and EDPS

None of this will be legal under the #GDPR. (See one reason why at https://t.co/HXOQ5gb4dL.) Publishers and brands need to take care to stop using personal data in the RTB system. Data connections to sites (and apps) have to be carefully controlled by publishers.

So far, #adtech’s trade body has been content to cover over this wholesale personal data leakage with meaningless gestures that purport to address the #GDPR (see my note on @IABEurope current actions here: https://t.co/FDKBjVxqBs). It is time for a more practical position.

And advertisers, who pay for all of this, must start to demand that safe, non-personal data take over in online RTB targeting. RTB works without personal data. Brands need to demand this to protect themselves – and all Internet users too. @dwheld @stephan_lo @BobLiodice

Websites need to control
1. which data they release into the RTB system
2. whether ads render directly in visitors’ browsers (where DSPs’ JavaScript can drop trackers)
3. what 3rd parties get to be on their page
@jason_kint @epc_angela @vincentpeyregne @earljwilkinson 11/12

Let’s work together to fix this. 12/12
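
To make that concrete, here’s a rough sketch of my own of the kind of payload one of those bid requests carries. It is much simplified (the real OpenRTB spec has far more fields), and every name and value in it is invented for illustration:

    # A toy, much-simplified picture of what a real-time-bidding (RTB) "bid request"
    # can carry. Field names are only loosely in the spirit of OpenRTB; the real spec
    # has many more fields, and every value below is invented.
    bid_request = {
        "id": "auction-7c1e2f90",                # one auction per ad slot per page load
        "site": {
            "domain": "newspaper.example",        # the publication you are reading
            "page": "https://newspaper.example/opinion/some-article",  # the exact URL
        },
        "device": {
            "ip": "203.0.113.42",                 # visitor's IP address, implying location
            "ua": "Mozilla/5.0 ...",              # browser and operating system details
        },
        "user": {
            "id": "cookie-or-device-id",          # identifier used to tie this visit to a profile
        },
    }
    # Something like this goes out to every bidder in the exchange -- hundreds or
    # thousands of companies -- whether or not they win the right to show the ad,
    # and each recipient is free to keep and combine what it receives.
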

Those last three recommendations are all good, but they also assume that websites, advertisers and their third party agents are the ones with the power to do something. Not readers.

But there’s a lot readers will be able to do. More about that shortly. Meanwhile, publishers can get right with readers by dropping #adtech and going back to publishing the kind of high-value brand advertising they’ve run since forever in the physical world.

That advertising, as Bob Hoffman (@adcontrarian) and Don Marti (@dmarti) have been making clear for years, is actually worth a helluva lot more than adtech, because it delivers clear creative and economic signals and comes with no cognitive overhead (for example, wondering where the hell an ad comes from and what it’s doing right now).

As I explain here, “Real advertising wants to be in a publication because it values the publication’s journalism and readership” while “adtech wants to push ads at readers anywhere it can find them.”

Doing real advertising is the easiest fix in the world, but so far it’s nearly unthinkable for a tech industry that has defaulted for more than twenty years to an asymmetric power relationship between readers and publishers called client-server. I’ve been told that client-server was chosen as the name for this relationship because “slave-master” didn’t sound so good; but I think the best way to visualize it is calf-cow:

As I put it at that link (way back in 2012), Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing about the Net that prevents each of us from having plenty of power on our own.

On the Net, we don’t need to be slaves, cattle or throbbing veins. We can be fully human. In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.

Customer Commons is working on exactly those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now editor-in-chief. The first of those terms, #P2B1(beta), says “Just show me ads not based on tracking me,” and is hashtagged #NoStalking.
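
Purely for illustration, here is a guess at how an agreement to that term might be recorded. This is my own hypothetical shape, not Customer Commons’ actual machine-readable format:

    # Purely hypothetical sketch of the record a reader's agent (or the site) might
    # keep when a publication accepts the reader's #NoStalking term. Customer Commons'
    # real machine-readable format may look nothing like this.
    agreement = {
        "term": "#P2B1(beta)",
        "plain_language": "Just show me ads not based on tracking me",
        "hashtag": "#NoStalking",
        "proffered_by": "the reader (first party)",
        "accepted_by": "the publication (second party)",
        "accepted_at": "2018-03-23T14:00:00Z",       # invented timestamp
        "copies_kept_by": ["reader", "publication"],  # both sides hold the record, like any contract
    }
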

In Help Us Cure Online Publishing of Its Addiction to Personal Data, I explain how this models the way advertising ought to be done: by the grace of readers, with no spying.

Obeying readers’ terms also carries no risk of violating privacy laws, because every pub will have contracts with its readers to do the right thing. This is totally do-able. Read that last link to see how.

As I say there, we need help. Linux Journal still has a small staff, and Customer Commons (a California-based 501(c)(3) nonprofit) so far consists of five board members. What it aims to be is a worldwide organization of customers, as well as the place where terms we proffer can live, much as Creative Commons is where personal copyright licenses live. (Customer Commons is modeled on Creative Commons. Hats off to the Berkman Klein Center for helping bring both into the world.)

I’m also hoping other publishers, once they realize that they are no less a part of the surveillance economy than Facebook and Cambridge Analytica, will help out too.

[Later…] Not long after this post went up I talked about these topics on the Gillmor Gang. Here’s the video, plus related links.

I think the best push-back I got there came from Esteban Kolsky (@ekolsky), who (as I recall anyway) saw less than full moral equivalence between what Facebook and Cambridge Analytica did to screw with democracy and what the New York Times and other ad-supported pubs do by baring the necks of their readers to dozens of data vampires.

He’s right that they’re not equivalent, any more than apples and oranges are equivalent. The sins are different; but they are still sins, just as apples and oranges are still both fruit. Exposing readers to data vampires is simply wrong on its face, and we need to fix it. That it’s normative in the extreme is no excuse. Nor is the fact that it makes money. There are morally uncompromised ways to make money with advertising, and those are still available.

Another push-back is the claim by many adtech third parties that the personal data blood they suck is anonymized. While that may be so, correlation is still possible. See Study: Your anonymous web browsing isn’t as anonymous as you think, by Barry Levine (@xBarryLevine) in Martech Today, which cites De-anonymizing Web Browsing Data with Social Networks, a study by Jessica Su (@jessicatsu), Ansh Shukla (@__anshukla__) and Sharad Goel (@5harad) of Stanford, and Arvind Narayanan (@random_walker) of Princeton.
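
The gist of that study is easy to see in toy form: an “anonymous” browsing history full of distinctive links can be matched against the links that public accounts post. Here’s a toy illustration of my own, which is not the researchers’ actual method:

    # Toy illustration: match an "anonymous" browsing history to candidate public
    # profiles by link overlap. The real de-anonymization work is far more
    # sophisticated; this only shows why "anonymized" clickstreams plus public
    # feeds are a dangerous combination. All names and links here are invented.
    def overlap_score(history: set[str], feed_links: set[str]) -> float:
        """Fraction of the profile's posted links that also appear in the history."""
        if not feed_links:
            return 0.0
        return len(history & feed_links) / len(feed_links)

    browsing_history = {"example.com/a", "example.org/b", "example.net/c"}  # from a bid stream or log

    public_profiles = {   # links each candidate account has posted publicly
        "@alice": {"example.com/a", "example.org/b"},
        "@bob":   {"example.edu/x"},
    }

    best_match = max(public_profiles, key=lambda who: overlap_score(browsing_history, public_profiles[who]))
    print(best_match)  # -> @alice, because her posted links sit inside the "anonymous" history
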

(Note: Facebook and Google follow logged-in users by name. They also account for most of the adtech business.)

One commenter below noted that this blog also carries six trackers (most of which I block). Here is how those look in Ghostery:

So let’s fix this thing.

[Later still…] Lots of comments in Hacker News as well.

[Later again (8 April 2018)…] About the comments below (60+ so far): the version of commenting used by this blog doesn’t support threading. If it did, my responses to comments would appear below each one. As it is, some appear out of sequence and others don’t appear at all. I don’t know why, but I’m trying to find out. Meanwhile, apologies.

Just before it started, the geology meeting at the Santa Barbara Central Library on Thursday looked like this from the front of the room (where I also tweeted the same pano):

Geologist Ed Keller

Our speakers were geology professor Ed Keller of UCSB and Engineering Geologist Larry Gurrola, who also works and studies with Ed. That’s Ed in the shot below.

As a geology freak, I know how easily terms like “debris flow,” “fanglomerate” and “alluvial fan” can clear a room. But this gig was SRO. That’s because around 3:15 in the morning of January 9th, debris flowed out of canyons and deposited fresh fanglomerate across the alluvial fan that comprises most of Montecito, destroying (by my count on the map below) 178 buildings, damaging more than twice that many, and killing 23 people. Two of those—a 2-year-old girl and a 17-year-old boy—are still buried in the fresh fanglomerate and sought by cadaver dogs. The whole thing is beyond sad and awful.

The town was evacuated after the disaster so rescue and recovery work could proceed without interference, and infrastructure could be found and repaired: a job that required removing twenty thousand truckloads of mud and rocks. That work continues while evacuation orders are gradually lifted, allowing the town to repopulate itself to the very limited degree it can.

I talked today with a friend whose business is cleaning houses. Besides grieving the dead, some of whom were friends or customers, she reports that the cleaning work is some of the worst she has ever seen, even in homes that were spared the mud and rocks. Refrigerators and freezers, sitting closed and without electricity for weeks, reek of death and rot. Other customers won’t be back because their houses are gone.

Highway 101, one of just two freeways connecting Northern and Southern California, runs through town near the coast, more than two miles from the mountain front. Three debris flows converged on the highway and used it as a catch basin, filling its deep parts to the height of at least one bridge before spilling over its far side and continuing to the edge of the sea. It took two weeks of constant excavation and repair work before traffic could move again. Most exits remain closed. Coast Village Road, Montecito’s main street, is open for employees of stores there, but little is open for customers yet, since infrastructural graces such as water are not fully restored. (I saw the Honor Bar operating with its own water tank, and a water truck nearby.) Opening the Upper Village will take longer. Some landmark institutions, such as San Ysidro Ranch and La Casa Santa Maria, will take years to restore. From what I gather, San Ysidro Ranch, arguably the nicest hotel in the world, was nearly destroyed. Its website thanks firefighters for salvation from the Thomas Fire, but nothing, I gather, could have saved it from the huge debris flow that wiped out nearly everything on the flanks of San Ysidro Creek. (All the top red dots along San Ysidro Creek in the map below mark lost buildings at the Ranch.)

Here is a map with final damage assessments. I’ve augmented it with labels for the canyons and creeks (with one exception: a parallel creek west of Toro Canyon Creek):

Click on the map for a closer view, or click here to view the original. On that one you can click on every dot and read details about it.

I should pause to note that Montecito is no ordinary town. Demographically, it’s Beverly Hills draped over a prettier landscape and attractive to people who would rather not live in Beverly Hills. (In fact the number of notable persons Wikipedia lists for Montecito outnumbers those it lists for Beverly Hills by a score of 77 to 71.) Culturally, it’s a village. Last Monday in The New Yorker, one of those notable villagers, T. Coraghessan Boyle, unpacked some other differences:

I moved here twenty-five years ago, attracted by the natural beauty and semirural ambience, the short walk to the beach and the Lower Village, and the enveloping views of the Santa Ynez Mountains, which rise abruptly from the coastal plain to hold the community in a stony embrace. We have no sidewalks here, if you except the business districts of the Upper and Lower Villages—if we want sidewalks, we can take the five-minute drive into Santa Barbara or, more ambitiously, fight traffic all the way down the coast to Los Angeles. But we don’t want sidewalks. We want nature, we want dirt, trees, flowers, the chaparral that did its best to green the slopes and declivities of the mountains until last month, when the biggest wildfire in California history reduced it all to ash.

Fire is a prerequisite for debris flows, our geologists explained. So is unusually heavy rain in a steep mountain watershed. There are five named canyons, each its own watershed, above Montecito, as we see on the map above. There are more to the east, above Summerland and Carpinteria, the next two towns down the coast. Those towns also took some damage, though less than Montecito.

Ed Keller put up this slide to explain conditions that trigger debris flows, and how they work:

Ed and Larry were emphatic about this: debris flows are not landslides, nor do many start that way (though one did in Rattlesnake Canyon 1100 years ago). They are also not mudslides, so we should stop calling them that. (Though we won’t.)

Debris flows require sloped soils left bare and hydrophobic—resistant to water—after a recent wildfire has burned off the chaparral that normally (as geologists say) “hairs over” the landscape. For a good look at what those burned soil surfaces are like, and how they are likely to respond to rain, look at the smooth slopes on the uphill side of 101 east of La Conchita. Notice how the surface is not only a smooth brown or gray, but has a crust on it. In a way, the soil surface has turned to glass. That’s why water runs off of it so rapidly.

Wildfires are common, and chaparral is adapted to them, becoming fuel for the next fire as it regenerates and matures. But rainfalls as intense as this one are not common. In just five minutes, more than half an inch of rain fell in the steep and funnel-like watersheds above Montecito. That happens about once every few hundred years, or about as often as a tsunami.
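
To put that in perspective, half an inch in five minutes works out to a rate of about six inches (roughly 150 mm) per hour. Here’s the arithmetic:

    # Converting the burst that hit Montecito's watersheds into an hourly rate.
    rain_inches = 0.5                                      # rain in the most intense burst
    minutes = 5
    rate_inches_per_hour = rain_inches * (60 / minutes)    # 6.0 inches per hour
    rate_mm_per_hour = rate_inches_per_hour * 25.4         # about 152 mm per hour
    print(rate_inches_per_hour, rate_mm_per_hour)

No slope can absorb water arriving that fast, least of all one that fire has left bare and hydrophobic.
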

It’s hard to generalize about the combination of factors required, but Ed has worked hard to do that, and this slide of his is one way of illustrating how debris flows happen eventually in places like Montecito and Santa Barbara:

From bottom to top, here’s what it says:

  1. Fires happen fairly regularly, spreading most widely where chaparral has matured to become abundant “fuel,” as the firefighters like to call it.
  2. Flood events are more random, given the relative rarity of rain and the even greater rarity of rains of “biblical” volume. But they do happen.
  3. Stream beds in the floors of canyons accumulate rocks and boulders that roll down the gradually eroding slopes over time. The depth of these is expressed as basin instability. Debris flows clear out the rocks and boulders when a big flood event comes right after a fire, and the basin becomes stable (relatively rock-free) again.
  4. The sediment yield in a flood (F) is maximum when a debris flow (DF) occurs.
  5. Debris flows tend to happen once every few hundred years. And you’re not going to get the big ones if you don’t have the canyon stream bed full of rocks and boulders. (A toy sketch of this cycle follows the list.)
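
Here is that cycle as a cartoon of my own. The numbers and thresholds are invented; the point, as in Ed’s slide, is the conjunction of a recent fire, a rare big storm, and a basin loaded with debris:

    import random

    # Cartoon of the slide's logic; all numbers are made up, only the conjunction matters.
    sediment = 0.0              # "basin instability": debris sitting in the canyon beds
    years_since_fire = 99

    for year in range(1, 1001):
        sediment += 0.3                                  # slopes slowly shed rock into the creeks
        if random.random() < 0.10:                       # wildfires are relatively frequent
            years_since_fire = 0
        else:
            years_since_fire += 1
        big_storm = random.random() < 0.02               # intense, "biblical" rain is rare
        if big_storm and years_since_fire <= 2 and sediment >= 50:
            print(f"year {year}: debris flow, sediment yield ~{sediment:.0f}")
            sediment = 0.0                               # the flow clears the canyon floors
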

About this set of debris flows in particular:

  1. Destruction down Oak Creek wasn’t as bad as on Montecito, San Ysidro, Buena Vista and Romero Creeks because the canyon feeding it is smaller.
  2. When debris flows hit an obstruction, such as a bridge, they seek out a new bed to flow on. This is one of the actions that creates an alluvial fan. From the map it appears something like that happened—
    1. Where the flow widened when it hit Olive Mill Road, fanning east of Olive Mill to destroy all three blocks between Olive Mill and Santa Elena Lane before taking the Olive Mill bridge across 101 and down to the Biltmore, while also helping other flows fill 101. (See Mac’s comment below, and his link to a topo map.)
    2. In the area between Buena Vista Creek and its East Fork, which come off different watersheds.
    3. Where a debris flow forked south of Mountain Drive after destroying San Ysidro Ranch, continuing down both Randall and El Bosque Roads.

For those who caught (or are about to catch) Ellen’s FaceTime with Oprah visiting neighbors, that happened among the red dots at the bottom end of the upper destruction area along San Ysidro Creek, just south of East Valley Road. Oprah’s own place is in the green area beside it on the left, looking a bit like Versailles. (Credit where due, though: Oprah’s was a good and compassionate report.)

Big question: did these debris flows clear out the canyon floors? We (meaning our geologists, sedimentologists, hydrologists and other specialists) won’t know until they trek back into the canyons to see how it all looks. Meanwhile, we do have clues. For example, here are after-and-before photos of Montecito, shot from space. And here is my close-up of the latter, shot one day after the event, when everything was still bare streambeds in the mountains and fresh muck in town:

See the white lines fanning back into the mountains through the canyons (Cold Spring, San Ysidro, Romero, Toro) above Montecito? Ed explained that these appear to be the washed out beds of creeks feeding into those canyons. Here is his slide showing Cold Spring Creek before and after the event:

Looking back at Ed’s basin threshold graphic above, one might say that there isn’t much sediment left for stream beds to yield, and that those in the floors of the canyons have returned to stability, meaning there’s little debris left to flow.

But that photo was of just one spot. There are many miles of creek beds to examine back in those canyons.

Still, one might hope that Montecito has now had its required 200-year event, and a couple more centuries will pass before we have another one.

Ed and Larry caution against such conclusions, emphasizing that most of Montecito’s and Santa Barbara’s inhabited parts gain their existence, beauty or both by grace of debris flows. If your property features boulders, Ed said, a debris flow put them there, and did that not long ago in geologic time.

For an example of boulders as landscape features, here are some we quarried out of our yard more than a decade ago, when we were building a house dug into a hillside:

This is deep in the heart of Santa Barbara.

The matrix mud we now call soil here is likely a mix of Juncal and Cozy Dell shale, Ed explained. Both are poorly lithified silt and erode easily. The boulders are a mix of Matilija and Coldwater sandstone, which comprise the hardest and most vertical parts of the Santa Ynez mountains. The two are so similar that only a trained eye can tell them apart.

All four of those geological formations were established long after dinosaurs vanished. All also accumulated originally as sediments, mostly on ocean floors, probably not far from the equator.

To illustrate one chapter in the story of how those rocks and sediments got here, UCSB has a terrific animation of how the transverse (east-west) Santa Ynez Mountains came to be where they are. Here are three frames in that movie:

What it shows is how, when the Pacific Plate was grinding its way northwest about eighteen million years ago, a hunk of that plate about a hundred miles long and the shape of a bread loaf broke off. At the top end was the future Malibu hills and at the bottom end was the future Point Conception, then situated south of what’s now Tijuana. The future Santa Barbara was west of the future Newport Beach. Then, when the Malibu end of this loaf got jammed at the future Los Angeles, the bottom end of the loaf swept out, clockwise and intact. At the start it was pointing at 5 o’clock; now it points at about 9 o’clock, and the sweep isn’t over. This was, and remains, a sideshow off the main event: the continuing crash of the Pacific Plate and the North American one.
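
Taking the clock analogy literally (my arithmetic, not a geological measurement), that sweep comes to about 120 degrees of clockwise rotation:

    # Each hour mark on a clock face is 30 degrees, so a sweep from pointing at
    # 5 o'clock to pointing at 9 o'clock is:
    degrees_per_hour = 360 / 12
    sweep_degrees = (9 - 5) * degrees_per_hour   # 120 degrees, swept clockwise
    print(sweep_degrees)
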

Here is an image that helps, from that same link:

Find more geology, with lots of links, in Making sense of what happened to Montecito. I put that post up on the 15th and have been updating it since then. It’s the most popular post in the history of this blog, which I started in 2007. There are also 58 comments, so far.

I’ll be adding more to this post after I visit as much as I can of Montecito (exclusion zones permitting). Meanwhile, I hope this proves useful. Again, corrections and improvements are invited.

30 January

