This is the Ostrom Memorial Lecture I gave on 9 October of last year for the Ostrom Workshop at Indiana University. Here is the video. (The intro starts at 8 minutes in, and my part starts just after 11 minutes in.) I usually speak off the cuff, but this time I wrote it out, originally in outline form*, which is germane to my current collaborations with Dave Winer, father of outlining software (and, in related ways, of blogging and podcasting). So here ya go.

Intro

The movie Blade Runner was released in 1982 and was set in a future Los Angeles. Anyone here know when in the future Blade Runner is set? I mean, exactly?

The year was 2019. More precisely, next month: November.

In Blade Runner’s 2019, Los Angeles is a dark and rainy hellscape with buildings the size of mountains, flying cars, and human replicants working on off-world colonies. It also has pay phones and low-def computer screens that are vacuum tubes.

Missing is a communication system that can put everyone in the world at zero distance from everyone else, in disembodied form, at almost no cost—a system that lives on little slabs in people’s pockets and purses, and on laptop computers far more powerful than any computer, of any size, from 1982.

In other words, this communication system—the Internet—was less thinkable in 1982 than flying cars, replicants and off-world colonies. Rewind the world to 1982, and the future Internet would appear a miracle dwarfing the likes of loaves and fish.

In economic terms, the Internet is a common pool resource; but non-rivalrous and non-excludable to such an extreme that to call it a pool or a resource is to insult what makes it common: that it is the simplest possible way for anyone and anything in the world to be present with anyone and anything else in the world, at costs that can round to zero.

As a commons, the Internet encircles every person, every institution, every business, every university, every government, every thing you can name. It is no less exhaustible than presence itself. By nature and design, it can’t be tragic, any more than the Universe can be tragic.

There is also only one of it. As with the universe, it has no other examples.

As a source of abundance, the closest thing the Internet might have to an example is the periodic table. And the Internet might be even more elemental than that: so elemental that it is easy to overlook the simple fact that it is the largest goose ever to lay golden eggs.

It can, however, be misunderstood, and that’s why it’s in trouble.

The trouble it’s in is with human nature: the one that sees more value in the goose’s eggs than in the goose itself.

See, the Internet is designed to support every possible use, every possible institution, and—alas—every possible restriction, which is why enclosure is possible. People, institutions and possibilities of all kinds can be trapped inside enclosures on the Internet. I’ll describe nine of them.

Enclosures

The first enclosure is service provisioning, for example with asymmetric connection speeds. On cable connections you may have up to 400 megabits per second downstream, but still only 10 megabits per second—one fortieth of that—upstream. (By the way this is exactly what Spectrum, formerly Time Warner Cable, provides with its most expensive home service to customers in New York City.)

They do that to maximize consumption while minimizing production by those customers. You can consume all the video you want, and think you’re getting great service. But meanwhile this asymmetrical provisioning prevents production at your end. Want to put out a broadcast or a podcast from your house, to run your own email server, or to store your own video or other personal data in your own personal “cloud”? Forget it.
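To put that asymmetry in concrete terms, here is a quick back-of-the-envelope sketch. The 400 and 10 megabit figures are the Spectrum numbers cited above; the 2 gigabyte video size is just an illustrative assumption:

```python
# A rough sketch of what that 40:1 asymmetry means in practice.
# 400 Mbps down / 10 Mbps up are the Spectrum figures cited above;
# the 2 GB video is an assumed example size.

def transfer_seconds(size_gb: float, mbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link of mbps megabits/s."""
    bits = size_gb * 8 * 1000**3      # gigabytes -> bits (decimal units)
    return bits / (mbps * 1000**2)    # megabits/s -> bits/s

video_gb = 2.0
down = transfer_seconds(video_gb, 400)   # consuming: 40 seconds
up = transfer_seconds(video_gb, 10)      # producing: 1600 seconds, ~27 minutes

print(f"download: {down:.0f} s, upload: {up / 60:.0f} min")
```

Forty seconds to consume, nearly half an hour to produce: the provisioning itself tells you which role the carrier has in mind for you.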

The Internet was designed to support infinite production by anybody of anything. But cable TV companies don’t want you to have that power. So you don’t. The home Internet you get from your cable company is nice to have, but it’s not the whole Internet. It’s an enclosed subset of capabilities biased by and for the cable company and large upstream producers of “content.”

So, it’s golden eggs for them, but none for you. Also missing are all the golden eggs you might make possible for those companies as an active producer rather than as a passive consumer.

The second enclosure is through 5G wireless service, currently promoted by phone companies as a new generation of Internet service. The companies deploying 5G promise greater speeds and lower lag times over wireless connections; but it is also clear that they want to build in as many choke points as they like, all so you can be billed for as many uses as possible.

You want gaming? Here’s our gaming package. You want cloud storage? Here’s our cloud storage package. Each of these uses will carry terms and conditions that allow some uses and prevent others. Again, this is a phone company enclosure. No cable companies are deploying 5G. They’re fine with their own enclosure.

The third enclosure is government censorship. The most familiar example is China’s. In China’s closed Internet you will find no Google, Facebook, Twitter, Instagram or Reddit. No Pandora, Spotify, Slack or Dropbox. What you will find is pervasive surveillance of everyone and everything—and ranking of people in its Social Credit System.

By March of this year, China had already punished 23 million people with low social credit scores by banning them from traveling. Control of speech has also spread to U.S. companies such as the NBA and ESPN, which are now censoring themselves as well, bowing to the wishes of the Chinese government and their own captive business partners.

The fourth enclosure is the advertising-supported commercial Internet. This is led by Google and Facebook, but also includes all the websites and services that depend on tracking-based advertising. This form of advertising, known as adtech, has in the last decade become pretty much the only kind of advertising online.

Today there are very few major websites left that don’t participate in what Shoshana Zuboff calls surveillance capitalism, and Brett Frischmann and Evan Selinger call, in their book by that title, Re-engineering Humanity. Surveillance of individuals online is now so deep and widespread that nearly every news organization is either unaware of it or afraid to talk about it—in part because the advertising they run is aimed by it.

That’s why you’ll read endless stories about how bad Facebook and Google are, and how awful it is that we’re all being tracked everywhere like marked animals; but almost nothing about how the sites publishing stories about tracking also participate in exactly the same business—and far more surreptitiously. Reporting on their own involvement in the surveillance business is a third rail they won’t grab.

I know of only one magazine that took and shook that third rail, especially in the last year and a half. That magazine was Linux Journal, where I worked for 24 years and was serving as editor-in-chief when it was killed by its owner in August. At least indirectly, that was because we didn’t participate in the surveillance economy.

The fifth enclosure is protectionism. In Europe, for example, your privacy is protected by laws meant to restrict personal data use by companies online. As a result, in Europe you won’t see the Los Angeles Times or the Washington Post in your browsers, because those publishers don’t want to cope with what’s required by the EU’s laws.

While they are partly to blame—because they wish to remain in the reader-tracking business—the laws are themselves terribly flawed—for example by urging every website to put up a “cookie notice” on pages greeting readers. In most cases clicking “accept” to the site’s cookies only gives the site permission to continue doing exactly the kind of tracking the laws are meant to prevent.

So, while the purpose of these laws is to make the Internet safer, in effect they also make its useful space smaller.

The sixth enclosure is what The Guardian calls “digital colonialism.” The biggest example of that is Facebook.org, originally called “Free Basics” and “Internet.org.”

This is a China-like subset of the Internet, offered for free by Facebook in less developed parts of the world. It consists of a fully enclosed Web, only a few dozen sites wide, each hand-picked by Facebook. The rest of the Internet isn’t there.

The seventh enclosure is the forgotten past. Today the World Wide Web, which began as a kind of growing archive—a public set of published goods we could browse as if it were a library—is being lost. Forgotten. That’s because search engines are increasingly biased to index and find pages from the present and recent past, and by following the tracks of monitored browsers. It’s forgetting what’s old. Archival goods are starting to disappear, like snow on the water.

Why? Ask the algorithm.

Of course, you can’t. That brings us to our eighth enclosure: algorithmic opacity.

Consider for a moment how important power plants are, and how carefully governed they are as well. Every solar, wind, nuclear, hydro and fossil fuel power production system in the world is subject to inspection by whole classes of degreed and trained professionals.

There is nothing of the sort for the giant search engines and social networks of the world. Google and Facebook both operate dozens of data centers, each the size of many Walmart stores. Yet the inner workings of those data centers are nearly absent of government oversight.

This owes partly to the speed of change in what these centers do, but more to the simple fact that what they do is unknowable, by design. You can’t look at rows of computers with blinking lights in many acres of racks and have the first idea of what’s going on in there.

I would love to see research, for example, on that last enclosure I listed: on how well search engines continue to index old websites. Or on anything else they do. The whole business is as opaque as a bowling ball with no holes.

I’m not even sure you can find anyone at Google who can explain exactly why its index does one thing or another, for any one person or another. In fact, I doubt Facebook is capable of explaining why any given individual sees any given ad. They aren’t designed for that. And the algorithm itself isn’t designed to explain itself, perhaps even to the employees responsible for it.

Or so I suppose.

In the interest of moving forward with research on these topics, I invite anyone at Google, Facebook, Bing or Amazon to help researchers at institutions such as the Ostrom Workshop, and to explain exactly what’s going on inside their systems, and to provide testable and verifiable ways to research those goings-on.

The ninth and worst enclosure is the one inside our heads. Because, if we think the Internet is something we use by grace of Apple, Amazon, Facebook, Google and “providers” such as phone and cable companies, we’re only helping all those companies contain the Internet’s usefulness inside their walled gardens.

Not understanding the Internet can result in problems similar to ones we suffer by not understanding common pool resources such as the atmosphere, the oceans, and the Earth itself.

But there is a difference between common pool resources in the natural world, and the uncommon commons we have with the Internet.

See, while we all know that common-pool resources are in fact not limitless—even when they seem that way—we don’t have the same knowledge of the Internet, because its nature as a limitless non-thing is non-obvious.

For example, we know common pool resources in the natural world risk tragic outcomes if our use of them is ungoverned, either by good sense or governance systems with global reach. But we don’t know that the Internet is limitless by design, or that the only thing potentially tragic about it is how we restrict access to it and use of it, by enclosures such as the nine I just listed.

So my thesis here is this: if we can deeply and fully understand what the Internet is, why it is vitally important, and why it is in danger of enclosure, we can also understand why, ten years after Lin Ostrom won a Nobel prize for her work on the commons, that work may be exactly what we need to save the Internet as a boundless commons that can support countless others.

The Internet

We’ll begin with what makes the Internet possible: a protocol.

A protocol is a code of etiquette for diplomatic exchanges between computers. A form of handshake.

What the Internet’s protocol does is give all the world’s digital devices and networks a handshake agreement about how to share data between any point A and any point B in the world, across any intermediary networks.

When you send an email, or look at a website, anywhere in the world, the route the shared data takes can run through any number of networks between the two. You might connect from Bloomington to Denver through Chicago, Tokyo and Mexico City. Then, two minutes later, through Toronto and Miami. Some packets within your data flows may also be dropped along the way, but the whole session will flow just fine because the errors get noticed and the data re-sent and re-assembled on the fly.
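The retry-and-reassemble behavior I just described can be sketched as a toy simulation. This is not real TCP (the actual protocol handles sequence windows, timeouts, congestion and much more), but it shows the core idea: number the packets, tolerate random loss, re-send what didn’t arrive, and put the pieces back in order:

```python
import random

# Toy sketch (not real TCP) of the error-and-retry idea described above:
# packets may be dropped in transit, but the sender keeps re-sending
# whatever wasn't acknowledged, and the message reassembles correctly.

def unreliable_send(packet, loss_rate=0.3):
    """Deliver a packet, or lose it with probability loss_rate."""
    return packet if random.random() > loss_rate else None

def transfer(message, chunk=4):
    # Split the message into numbered packets so order can be restored.
    packets = {i: message[i:i + chunk] for i in range(0, len(message), chunk)}
    received = {}
    while len(received) < len(packets):      # keep going until all arrive
        for seq, data in packets.items():
            if seq in received:
                continue                      # already acknowledged
            delivered = unreliable_send((seq, data))
            if delivered is not None:
                received[delivered[0]] = delivered[1]
    # Reassemble in sequence order, regardless of arrival order.
    return "".join(received[seq] for seq in sorted(received))

print(transfer("Hello from Bloomington to Bangalore"))
```

Run it as many times as you like: packets get lost nearly a third of the time, yet the whole message always comes through. That stubborn indifference to loss is why the session “just works.”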

Oddly, none of this is especially complicated at the technical level, because what I just described is pretty much all the Internet does. It doesn’t concern itself with what’s inside the data traffic it routes, who is at the ends of the connections, or what their purposes are—any more than gravity cares about what it attracts.

Beyond the sunk costs of its physical infrastructure, and the operational costs of keeping the networks themselves standing up, the Internet has no first costs at its protocol level, and it adds no costs along the way. It also has no billing system.

In all these ways the Internet is, literally, neutral. It also doesn’t need regulators or lawmakers to make it neutral. That’s just its nature.

The Internet’s protocol is called TCP/IP, and by using it, all the networks of the world subordinate their own selfish purposes.

This is what makes the Internet’s protocol generous and supportive to an absolute degree toward every purpose to which it is put. It is a rising tide that lifts all boats.

In retrospect we might say the big networks within the Internet—those run by phone and cable companies, governments and universities—agreed to participate in the Internet because it was so obviously useful that there was no reason not to.

But the rising-tide nature of the Internet was not obvious to all of them at first. In retrospect, they didn’t realize that the Internet was a Trojan Horse, wheeled through their gates by geeks who looked harmless but in fact were bringing the world a technical miracle.

I can support that claim by noting that even though phone and cable companies of the world now make trillions of dollars because of it, they never would have invented it.

Two reasons for that. One is because it was too damn simple. The other is because they would have started with billing. And not just billing you and me. They would have wanted to bill each other, and not use something invented by another company.

A measure of the Internet’s miraculous nature is that actually billing each other would have been so costly and complicated that what they do with each other, to facilitate the movement of data to, from, and across their networks, is called peering. In other words, they charge each other nothing.

Even today it is hard for the world’s phone and cable companies—and even its governments, which have always been partners of a sort—to realize that the Internet became the world-wide way to communicate because it didn’t start with billing.

Again, all TCP/IP says is that this is a way for computers, networks, and everything connected to them, to get along. And it succeeded, producing instant worldwide peace among otherwise competing providers of networks and services. It made every network operator involved win a vast positive-sum game almost none of them knew they were playing. And most of them still don’t.

You know that old joke in which the big fish says to the little fish, “Hi guys, how’s the water?” and one of the little fish says to the other “What’s water?” In 2005, David Foster Wallace gave a legendary commencement address at Kenyon College that I highly recommend, titled “This is water.”

I suspect that, if Wallace were around today, he’d address his point to our digital world.

Human experience

Those of you who already know me are aware that my wife Joyce is as much a companion and collaborator of mine as Vincent Ostrom was of Lin. I bring this up because much of this talk is hers, including this pair of insights about the Internet: that it has no distance, and also no gravity.

Think about it: when you are on the Internet with another person—for example if you are in a chat or an online conference—there is no functional distance between you and the other person. One of you may be in Chicago and the other in Bangalore. But if the Internet is working, distance is gone. Gravity is also gone. Your face may be right-side-up on the other person’s screen, but it is absent of gravity. The space you both occupy is the other person’s two-dimensional rectangle. Even if we come up with holographic representations of ourselves, we are still incorporeal “on” the Internet. (I say “on” because we need prepositions to make sense of how things are positioned in the world. Yet our limited set of physical-world prepositions—over, under, around, through, beside, within and the rest—misdirects our attention away from our disembodied state in the digital one.)

Familiar as that disembodied state may be to all of us by now, it is still new to human experience and inadequately informed by our experience as embodied creatures. It is also hard for us to see both what our limitations are, and how limitless we are at the same time.

Joyce points out that we are also highly adaptive creatures, meaning that eventually we’ll figure out what it means to live where there is no distance or gravity, much as astronauts learn to live as weightless beings in space.

But in the meantime, we’re having a hard time seeing the nature and limits of what’s good and what’s bad in this new environment. And that has to do, at least in part, with forms of enclosure in that world—and how we are exploited within private spaces where we hardly know we are trapped.

In The Medium is the Massage, Marshall McLuhan says every new medium, every new technology, “works us over completely.” Those are his words: works us over completely. Such as now, with digital technology, and the Internet.

I was talking recently with a friend about where our current digital transition ranks among all the other transitions in history that each have a formal cause. Was becoming digital the biggest thing since the industrial revolution? Since movable type? Writing? Speech?

No, he said. “It’s the biggest thing since oxygenation.”

In case you weren’t there, or weren’t paying attention in geology class, oxygenation happened about 2.5 billion years ago. Which brings us to our next topic:

Institutions

Journalism is just one example of a trusted institution that is highly troubled in the digital world.

It worked fine in a physical world where truth-tellers who dug into topics and reported on them with minimized prejudice were relatively scarce yet easy to find, and to trust. But in a world flooded with information and opinion—a world where everyone can be a reporter, a publisher, a producer, a broadcaster, where the “news cycle” has the lifespan of a joke, and where news and gossip have become almost indistinguishable while being routed algorithmically to amplify prejudice and homophily—journalism has become an anachronism: still important, but all but drowning in a flood of biased “content” paid for by surveillance-led adtech.

People are still hungry for good information, of course, but our appetites are too easily fed by browsing through the surfeit of “content” on the Internet, which we can easily share by text, email or social media. Even if we do the best we can to share trustworthy facts and other substances that sound like truth, we remain suspended in a techno-social environment we mostly generate and re-generate ourselves. Kind of like our ancestral life forms made sense of the seas they oxygenated, long ago.

The academy is another institution that’s troubled in our digital time. After all, education on the Internet is easy to find. Good educational materials are easy to produce and share. For example, take Khan Academy, which started with one guy tutoring his cousin through online videos.

Authority must still be earned, but there are now countless non-institutional ways to earn it. Credentials still matter, but less than they used to, and not in the same ways. Ad hoc education works in ways that can be cheap or free, while institutions of higher education remain very expensive. What happens when the market for knowledge and know-how starts moving past requirements for advanced degrees that might take students decades of their lives to pay off?

For one example of that risk already at work, take computer programming.

Which do you think matters more to a potential employer of programmers—a degree in computer science or a short but productive track record? For example, by contributing code to the Linux operating system?

To put this in perspective, Linux and operating systems like it are inside nearly every smart thing that connects to the Internet, including TVs, door locks, the world’s search engines, social networks, laptops and mobile phones. Nothing could be more essential to computing life.

At the heart of Linux is what’s called the kernel. For code to get into the kernel, it has to pass muster with other programmers who have already proven their worth, and then through testing and debugging. If you’re looking for a terrific programmer, everyone contributing to the Linux kernel is well-proven. And there are thousands of them.

Now here’s the thing. It doesn’t matter whether those people have degrees in computer science, or whether they’ve had any formal training at all. What matters, for our purposes here, is that, to a remarkable degree, many of them, perhaps most, have neither.

I know a little about this because, in the course of my work at Linux Journal, I would sometimes ask groups of alpha Linux programmers where they learned to code. Almost none told me “school.” Most were self-taught or learned from each other.

My point here is that the degree to which the world’s most essential and consequential operating system depends on the formal education of its makers is roughly zero.

See, the problem for educational institutions in the digital world is that most were built to leverage scarcity: scarce authority, scarce materials, scarce workspace, scarce time, scarce credentials, scarce reputation, scarce anchors of trust. To a highly functional degree we still need and depend on what only educational institutions can provide, but that degree is a lot lower than it used to be, a lot more varied among disciplines, and it risks continuing to decline as time goes on.

It might help at this point to see gravity in some ways as a problem the Internet solves. Because gravity is top-down. It fosters hierarchy and bell curves, sometimes where we need neither.

Absence of gravity instead fosters heterarchy and polycentrism. And, as we know, at the Ostrom Workshop perhaps better than anywhere, commons are good examples of heterarchy and polycentrism at work.

Knowledge Commons

In the first decade of our new millennium, Elinor Ostrom and Charlotte Hess—already operating in our new digital age—extended the commons category to include knowledge, calling it a complex ecosystem that operates as a commons: a shared resource subject to social dilemmas.

They looked at ease of access to digital forms of knowledge, and at easy new ways to store, access and share knowledge as a commons. They also looked at the nature of knowledge and its qualities of non-rivalry and non-excludability, both of which are unlike what characterizes a natural commons, with its scarcities of rivalrous and excludable goods.

A knowledge commons, they said, is characterized by abundance. This is one reason why what Yochai Benkler calls Commons Based Peer Production is so easy and rampant on the Internet, giving us, among many other things, both the free software and open source movements in code development and sharing, plus the Internet and the Web.

Commons Based Peer Production also demonstrates how collaboration and non-material incentives can produce better quality products, and less social friction in the course of production.

I’ve given Linux as one example of Commons Based Peer Production. Others are Wikipedia and the Internet Archive. We’re also seeing it within the academy, for example with Indiana University’s own open archives, making research more accessible and scholarship more rich and productive.

Every one of those examples comports with Lin Ostrom’s design principles:

  1. clearly defined group boundaries;
  2. rules governing use of common goods within local needs and conditions;
  3. participation in modifying rules by those affected by the rules;
  4. accessible and low cost ways to resolve disputes;
  5. developing a system, carried out by community members, for monitoring members’ behavior;
  6. graduated sanctions for rule violators;
  7. and governing responsibility in nested tiers from the lowest level up to the entire interconnected system.

But there is also a crisis with Commons Based Peer Production on the Internet today.

Programmers who ten or fifteen years ago would not participate in enclosing their own environments are doing exactly that, for example with 5G, which is designed to put the phone companies in charge of what we can do on the Internet.

The 5G-enclosed Internet might be faster and handier in many ways, but the range of freedoms for each of us there will be bounded by the commercial interests of the phone companies and their partners, and subject to none of Lin’s rules for governing a commons.

Consider this: every one of the nine enclosures I listed at the beginning of this talk is enabled by programmers who either forgot or never learned about the freedom and openness that made the free and open Internet possible. They are employed in the golden egg gathering business—not in one that appreciates the goose that lays those eggs, and which their predecessors gave to us all.

But this isn’t the end of the world. We’re still at the beginning. And a good model for how to begin is—

The physical world

It is significant that all the commons the Ostroms and their colleagues researched in depth were local. Their work established beyond any doubt the importance of local knowledge and local control.

I believe demonstrating this in the digital world is our best chance of saving our digital world from the nine forms of enclosure I listed at the top of this talk.

It’s our best chance because there is no substitute for reality. We may be digital beings now, as well as physical ones. There are great advantages, even in the digital world, to operating in the here-and-now physical world, where all our prepositions still work, and our metaphors still apply.

Back to Joyce again.

In the mid ‘90s, when the Internet was freshly manifest on our home computers, I was mansplaining to Joyce how this Internet thing was finally the global village long promised by tech.

Her response was, “The sweet spot of the Internet is local.” She said that’s because local is where the physical and the virtual intersect. It’s where you can’t fake reality, because you can see and feel and shake hands with it.

She also said the first thing the Internet would obsolesce would be classified ads in newspapers. That’s because the Internet would be a better place than classifieds for parents to find a crib some neighbor down the street might have for sale. Then Craigslist came along and did exactly that.

We had an instructive experience with how the real world and the Internet work together helpfully at the local level about a year and a half ago. That’s when a giant rainstorm fell on the mountains behind Santa Barbara, where we live, and the town next door, called Montecito. This was also right after the Thomas Fire—largest at the time in recorded California history—had burned all the vegetation away, and there was a maximum risk of what geologists call a “debris flow.”

The result was the biggest debris flow in the history of the region: a flash flood of rock and mud that flowed across Montecito like lava from a volcano. Nearly two hundred homes were destroyed, and twenty-three people were killed. Two of them were never found, because it’s hard to find victims buried under what turned out to be at least twenty thousand truckloads of boulders and mud.

Right afterwards, all of Montecito was evacuated, and very little news got out while emergency and rescue workers did their jobs. Our local news media did an excellent job of covering this event as a story. But I also noticed that not much was being said about the geology involved.

So, since I was familiar with debris flows out of the mountains above Los Angeles, where they have infrastructure that’s ready to handle this kind of thing, I put up a post on my blog titled “Making sense of what happened to Montecito.” In that post I shared facts about the geology involved, and also published the only list on the Web of all the addresses of homes that had been destroyed. Visits to my blog jumped from dozens a day to dozens of thousands. Lots of readers also helped improve what I wrote and re-wrote.

All of this happened over the Internet, but it pertained to a real-world local crisis.

Now here’s the thing. What I did there wasn’t writing a story. I didn’t do it for the money, and my blog is a noncommercial one anyway. I did it to help my neighbors. I did it by not being a bystander.

I also did it in the context of a knowledge commons.

Specifically, I was respectful of boundaries of responsibility; notably those of local authorities—rescue workers, law enforcement, reporters from local media, city and county workers preparing reports, and so on. I gave much credit where it was due and didn’t step on the toes of others helping out as well.

An interesting fact about journalism there at the time was the absence of fake news. Sure, there was plenty of finger-pointing in blog comments and on social media. But it was marginalized away from the fact-reporting that mattered most. There was a very productive ecosystem of information, made possible by the Internet in everyone’s midst. And by everyone, I mean lots of very different people.

Humanity

We are learning creatures by nature. We can’t help it. And we don’t learn by freight forwarding.

By that, I mean what I am doing here, and what we do with each other when we talk or teach, is not delivering a commodity called information, as if we were forwarding freight. Something much more transformational is taking place, and this is profoundly relevant to the knowledge commons we share.

Consider the word information. It’s a noun derived from the verb to inform, which in turn is derived from the verb to form. When you tell me something I don’t know, you don’t just deliver a sum of information to me. You form me. As a walking sum of all I know, I am changed by that.

This means we are all authors of each other.

In that sense, the word authority belongs to the right we give others to author us: to form us.

Now look at how much more of that can happen on our planet, thanks to the Internet, with its absence of distance and gravity.

And think about how that changes every commons we participate in, as both physical and digital beings. And how much we need guidance to keep from screwing up the commons we have, or to form the ones we don’t have yet, or might have in the future—if we don’t screw things up.

A rule in technology is that what can be done will be done—until we find out what shouldn’t be done. Humans have done this with every new technology and practice from speech to stone tools to nuclear power.

We are there now with the Internet. In fact, many of those enclosures I listed are well-intended efforts to limit dangerous uses of the Internet.

And now we are at a point where some of those too are a danger.

What might be the best way to look at the Internet and its uses most sensibly?

I think the answer is governance predicated on the realization that the Internet is perhaps the ultimate commons, and subject to both research and guidance informed by Lin Ostrom’s rules.

And I hope that guides our study.

There is so much to work on: expansion of agency, sensibility around license and copyright, freedom to benefit individuals and society alike, protections that don’t foreclose opportunity, saving journalism, modernizing the academy, creating and sharing wealth without victims, de-financializing our economies… the list is very long. And I look forward to working with many of us here on answers to these and many other questions.

Thank you. 

Sources

Ostrom, Elinor. Governing the Commons. Cambridge University Press, 1990

Ostrom, Elinor and Hess, Charlotte, editors. Understanding Knowledge as a Commons:
From Theory to Practice, MIT Press, 2011
https://mitpress.mit.edu/books/understanding-knowledge-commons
Full text online: https://wtf.tw/ref/hess_ostrom_2007.pdf

Paul D. Aligica and Vlad Tarko, “Polycentricity: From Polanyi to Ostrom, and Beyond” https://asp.mercatus.org/system/files/Polycentricity.pdf

Elinor Ostrom, “Coping With Tragedies of the Commons,” 1998 https://pdfs.semanticscholar.org/7c6e/92906bcf0e590e6541eaa41ad0cd92e13671.pdf

Lee Anne Fennell, “Ostrom’s Law: Property rights in the commons,” March 3, 2011
https://www.thecommonsjournal.org/articles/10.18352/ijc.252/

Christopher W. Savage, “Managing the Ambient Trust Commons: The Economics of Online Consumer Information Privacy.” Stanford Law School, 2019. https://law.stanford.edu/wp-content/uploads/2019/01/Savage_20190129-1.pdf

 

________________

*I wrote it using—or struggling in—the godawful Outline view in Word. Since I succeeded (most don’t, because they can’t or won’t, with good reason), I’ll brag on succeeding at the subhead level:

As I’m writing this, in February 2020, Dave Winer is working on what he calls writing on rails. That’s what he gave the pre-Internet world with MORE several decades ago, and I’m helping him now with the Internet-native kind, as a user. He explains that here. (MORE was, for me, like writing on rails. It’ll be great to go back—or forward—to that again.)

Journalism’s biggest problem (as I’ve said before) is what it’s best at: telling stories. That’s what Thomas B. Edsall (of Columbia and The New York Times) does in Trump’s Digital Advantage Is Freaking Out Democratic Strategists, published in today’s New York Times. He tells a story.

It’s an interesting one, about the fight between Republican and Democratic campaigns, and Republicans’ superior use of modern methods for persuading voters:

Experts in the explosively growing field of political digital technologies have developed an innovative terminology to describe what they do — a lexicon that is virtually incomprehensible to ordinary voters. This language provides an inkling of the extraordinarily arcane universe politics has entered:

geofencing, mass personalization, dark patterns, identity resolution technologies, dynamic prospecting, geotargeting strategies, location analytics, geo-behavioural segment, political data cloud, automatic content recognition, dynamic creative optimization.

Geofencing and other emerging digital technologies derive from microtargeting marketing initiatives that use consumer and other demographic data to identify the interests of specific voters or very small groups of like-minded individuals to influence their thoughts or actions.

In fact the “arcane universe” he’s talking about is just the direct marketing playbook, which was born offline as the junk mail business. In that business, tracking individuals and bothering them personally is a fine and fully rationalized thing. And let’s face it: political campaigning has always wanted to get personal. It’s why we have mass mailings, mass callings, mass textings and the rest of it—all to personal addresses, numbers and faces.


There is nothing new here other than (at the moment) the Trump team doing it better than any Democrat. (Except maybe Bernie.) Obama’s team was better at it in ’08 and ’12. Trump’s was better at it in ’16 and is better again in ’20.*

However, debating which candidates do the best marketing misdirects our attention away from the destruction of personal privacy by constant tracking of our asses online—including tracking of asses by politicians. This, I submit, is a bigger and badder issue than which politicians do the best direct marketing. It may even be bigger than who gets elected to what in November.

As issues go, personal privacy is soul-deep. Who gets elected, and how, are not.

As I put it here,

Surveillance of people is now the norm for nearly every website and app that harvests personal data for use by machines. Privacy, as we’ve understood it in the physical world since the invention of the loincloth and the door latch, doesn’t yet exist. Instead, all we have are the “privacy policies” of corporate entities participating in the data extraction marketplace, plus terms and conditions they compel us to sign, either of which they can change on a whim. Most of the time our only choice is to deny ourselves the convenience of these companies’ services or live our lives offline.

Worse is that these are proffered on the Taylorist model, meaning mass-produced.

There is a natural temptation to want to fix this with policy. This is a mistake for two reasons:

  1. Policy-makers are themselves part of the problem. Hell, most of their election campaigns are built on direct marketing. And law enforcement (which carries out certain forms of policy) has always regarded personal privacy as a problem to overcome rather than a solution to anything. Example.
  2. Policy-makers often screw things up. Exhibit A: the EU’s GDPR, which has done more to clutter the Web with insincere and misleading cookie notices than it has to advance personal privacy tech online. (I’ve written about this a lot. Here’s one sample.)

We need tech of our own. Terms and policies of our own. In the physical world, we have privacy tech in the forms of clothing, shelter, doors, locks and window shades. We have policies in the form of manners, courtesies, and respect for privacy signals we send to each other. We lack all of that online. Until we invent it, the most we’ll do to achieve real privacy online is talk about it, and inveigh for politicians to solve it for us. Which they won’t.

If you’re interested in solving personal privacy at the personal level, take a look at Customer Commons. If you want to join our efforts there, talk to me.

_____________
*The Trump campaign also has the enormous benefit of an already-chosen Republican ticket. The Democrats have a mess of candidates and a split in the party between young and old, socialists and moderates, and no candidate as interesting as is Trump.

At this point, it’s no contest. Trump is the biggest character in the biggest story of our time. (I explain this in Where Journalism Fails.) And he’s on a glide path to winning in November, just as I said he was in 2016.

Here’s the popover that greets visitors on arrival at Rolling Stone‘s website:

Our Privacy Policy has been revised as of January 1, 2020. This policy outlines how we use your information. By using our site and products, you are agreeing to the policy.

That policy is supplied by Rolling Stone’s parent (PMC) and weighs more than 10,000 words. In it the word “advertising” appears 68 times. Adjectives modifying it include “targeted,” “personalized,” “tailored,” “cookie-based,” “behavioral” and “interest-based.” All of that is made possible by, among other things—

Information we collect automatically:

Device information and identifiers such as IP address; browser type and language; operating system; platform type; device type; software and hardware attributes; and unique device, advertising, and app identifiers

Internet network and device activity data such as information about files you download, domain names, landing pages, browsing activity, content or ads viewed and clicked, dates and times of access, pages viewed, forms you complete or partially complete, search terms, uploads or downloads, the URL that referred you to our Services, the web sites you visit after this web site; if you share our content to social media platforms; and other web usage activity and data logged by our web servers, whether you open an email and your interaction with email content, access times, error logs, and other similar information. See “Cookies and Other Tracking Technologies” below for more information about how we collect and use this information.

Geolocation information such as city, state and ZIP code associated with your IP address or derived through Wi-Fi triangulation; and precise geolocation information from GPS-based functionality on your mobile devices, with your permission in accordance with your mobile device settings.

The “How We Use the Information We Collect” section says they will—

Personalize your experience to Provide the Services, for example to:

  • Customize certain features of the Services,
  • Deliver relevant content and to provide you with an enhanced experience based on your activities and interests
  • Send you personalized newsletters, surveys, and information about products, services and promotions offered by us, our partners, and other organizations with which we work
  • Customize the advertising on the Services based on your activities and interests
  • Create and update inferences about you and audience segments that can be used for targeted advertising and marketing on the Services, third party services and platforms, and mobile apps
  • Create profiles about you, including adding and combining information we obtain from third parties, which may be used for analytics, marketing, and advertising
  • Conduct cross-device tracking by using information such as IP addresses and unique mobile device identifiers to identify the same unique users across multiple browsers or devices (such as smartphones or tablets, in order to save your preferences across devices and analyze usage of the Service.
  • using inferences about your preferences and interests for any and all of the above purposes
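Counts like the ones above (“advertising” appearing 68 times in a policy of more than 10,000 words) are easy to check for yourself. Here is a minimal Python sketch; the short sample string is only a stand-in for the real policy text:

```python
import re

# Toy word-count check. The sample string stands in for the real
# ~10,000-word policy text; paste the actual policy in its place.
policy = """We serve targeted advertising and personalized advertising.
Cookie-based advertising and interest-based advertising may also appear."""

words = re.findall(r"[a-z-]+", policy.lower())
count = words.count("advertising")
# Collect the words immediately preceding each "advertising".
preceding = sorted({words[i - 1] for i, w in enumerate(words)
                    if w == "advertising" and i > 0})
print(count)      # 4
print(preceding)  # ['cookie-based', 'interest-based', 'personalized', 'targeted']
```

Run against the full policy, the list of preceding words surfaces exactly the adjectives noted above.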

For a look at what Rolling Stone, PMC and their third parties are up to, Privacy Badger’s browser extension “found 73 potential trackers on www.rollingstone.com”:

tagan.adlightning.com
acdn.adnxs.com
ib.adnxs.com
cdn.adsafeprotected.com
static.adsafeprotected.com
d.agkn.com
js.agkn.com
c.amazon-adsystem.com
z-na.amazon-adsystem.com
display.apester.com
events.apester.com
static.apester.com
as-sec.casalemedia.com
ping.chartbeat.net
static.chartbeat.com
quantcast.mgr.consensu.org
script.crazyegg.com
dc8xl0ndzn2cb.cloudfront.net
cdn.digitru.st
ad.doubleclick.net
securepubads.g.doubleclick.net
hbint.emxdgt.com
connect.facebook.net
adservice.google.com
pagead2.googlesyndication.com
www.googletagmanager.com
www.gstatic.com
static.hotjar.com
imasdk.googleapis.com
js-sec.indexww.com
load.instinctiveads.com
ssl.p.jwpcdn.com
content.jwplatform.com
ping-meta-prd.jwpltx.com
prd.jwpltx.com
assets-jpcust.jwpsrv.com
g.jwpsrv.com
pixel.keywee.co
beacon.krxd.net
cdn.krxd.net
consumer.krxd.net
www.lightboxcdn.com
widgets.outbrain.com
cdn.permutive.com
assets.pinterest.com
openbid.pubmatic.com
secure.quantserve.com
cdn.roiq.ranker.com
eus.rubiconproject.com
fastlane.rubiconproject.com
s3.amazonaws.com
sb.scorecardresearch.com
p.skimresources.com
r.skimresources.com
s.skimresources.com
t.skimresources.com
launcher.spot.im
recirculation.spot.im
js.spotx.tv
search.spotxchange.com
sync.search.spotxchange.com
cc.swiftype.com
s.swiftypecdn.com
jwplayer.eb.tremorhub.com
pbs.twimg.com
cdn.syndication.twimg.com
platform.twitter.com
syndication.twitter.com
mrb.upapi.net
pixel.wp.com
stats.wp.com
www.youtube.com
s.ytimg.com
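A list like that is easier to read when the hostnames are grouped by the parent domain behind them, which shows how few companies run how many trackers. A minimal Python sketch; the sample stands in for the full list of 73, and the last-two-labels heuristic is naive (a careful version would consult the Public Suffix List):

```python
from collections import Counter

# Group tracker hostnames by their registrable ("parent") domain.
# A short sample stands in for the full list of 73 trackers above.
trackers = [
    "acdn.adnxs.com", "ib.adnxs.com",
    "ad.doubleclick.net", "securepubads.g.doubleclick.net",
    "connect.facebook.net",
]

# Naive heuristic: keep the last two labels of each hostname.
parents = Counter(".".join(host.split(".")[-2:]) for host in trackers)

for domain, n in parents.most_common():
    print(domain, n)
```

Applied to the whole list, the grouping makes it plain that a handful of adtech firms account for most of the 73 entries.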

This kind of shit is why we have the EU’s GDPR (General Data Protection Regulation) and California’s CCPA (California Consumer Privacy Act). (No, it’s not just because of Google and Facebook.) If publishers and the adtech industry (those third parties) hadn’t turned the commercial Web into a target-rich environment for suckage by data vampires, we’d never have had either law. (In fact, both laws are still new: the GDPR went into effect in May 2018 and the CCPA a few days ago.)

I’m in California, where the CCPA gives me the right to shake down the vampiretariat for all the information about me they’re harvesting, sharing, selling or giving away to or through those third parties.* But apparently Rolling Stone and PMC don’t care about that.

Others do, and I’ll visit some of those in later posts. Meanwhile I’ll let Rolling Stone and PMC stand as examples of bad acting by publishers that remains rampant, unstopped and almost entirely unpunished, even under these new laws.

I also suggest following and getting involved with the fight against the plague of data vampirism in the publishing world. These will help:

  1. Reading Don Marti’s blog, where he shares expert analysis and advice on the CCPA and related matters. Also People vs. Adtech, a compilation of my own writings on the topic, going back to 2008.
  2. Following what the browser makers are doing with tracking protection (alas, differently†). Shortcuts: Brave, Google’s Chrome, Ghostery’s Cliqz, Microsoft’s Edge, Epic, Mozilla’s Firefox.
  3. Following or joining communities working to introduce safe forms of nourishment for publishers and better habits for advertisers and their agencies. Those include Customer Commons, Me2B Alliance, MyData Global and ProjectVRM.

______________

*The bill (AB 375) begins,

The California Constitution grants a right of privacy. Existing law provides for the confidentiality of personal information in various contexts and requires a business or person that suffers a breach of security of computerized data that includes personal information, as defined, to disclose that breach, as specified.

This bill would enact the California Consumer Privacy Act of 2018. Beginning January 1, 2020, the bill would grant a consumer a right to request a business to disclose the categories and specific pieces of personal information that it collects about the consumer, the categories of sources from which that information is collected, the business purposes for collecting or selling the information, and the categories of 3rd parties with which the information is shared. The bill would require a business to make disclosures about the information and the purposes for which it is used. The bill would grant a consumer the right to request deletion of personal information and would require the business to delete upon receipt of a verified request, as specified. The bill would grant a consumer a right to request that a business that sells the consumer’s personal information, or discloses it for a business purpose, disclose the categories of information that it collects and categories of information and the identity of 3rd parties to which the information was sold or disclosed…

Don Marti has a draft letter one might submit to the brokers and advertisers who use all that personal data. (He also tweets a caution here.)

†This will be the subject of my next post.

Deepfakes are a big thing, and a bad one.

On the big side, a Google search for deepfake brings up more than 23 billion results.

On the bad side, today’s top result in a search on Twitter for the hashtag #deepfake says, “Technology is slowly killing reality. I am worried of tomorrow’s truths that will be made in shops. This #deepfake is bothering my soul deeply.” In another of the top results, a Vice report is headlined Deepfake Porn Is Evolving to Give People Total Control Over Women’s Bodies.

Clearly we need an antidote here.

I suggest deepreal.

If deepfake lies at the bottom of the uncanny valley (as more than 37 thousand sites online suggest), deepreal should sit just as far above that valley. As the graphic above (source) suggests, the deeply real (I added that) is fully human, and can elicit any level of emotional response, as real humans tend to do.

So what do we know that’s already deepreal?

Well, there’s reality itself, meaning the physical kind. A real person talking to you in the real world is undeniably human (at least until robots perfectly emulate human beings walking, talking and working among us, which will be icky and therefore deep in the uncanny valley). But what about the digital world? How can we be sure that a fully human being is also deeply real where the prevalent state is incorporeal—absent of physical existence?

The only way I know, so far, is with self-sovereign identity (SSI) technology, which gives us standardized ways of letting others know required facts about us (e.g. “I’m over 18,” “I’m a citizen of this country,” “I have my own car insurance,” “I live in this state,” “I’m a member of this club.”) Here’s some of what I’ve written and said about SSI:

  1. The Sovereign Identity Revolution (OneWorldIdentity, 21 February, 2017)
  2. New Hope for Digital Identity (Linux Journal, 9 November 2017)
  3. Some Thoughts About Self-Sovereign Identity (doc.blog, 16 March 2019)
  4. Some Perspective on Self-Sovereign Identity (KuppingerCole, 20 April 2019)
  5. Thoughts at #ID2020 (Doc Searls Weblog, 19 September 2019)

As I put it in #4 above, “The time has come to humanize identity in the networked world by making it as personal as it has been all along in the natural one.” I believe it is only by humanizing identity in the networked world that we can start to deal with deepfakes and other ways we are being dehumanized online. (And, if you’re skeptical about SSI, as are some I shouted out to here, what other means do you suggest? It’s still early, and the world of possibility is large.)
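The core SSI move (proving one required fact without disclosing a whole identity) can be sketched as nothing fancier than a signed claim. This toy Python illustration is not any real SSI protocol; production systems use public-key cryptography, decentralized identifiers and standard verifiable-credential formats rather than the shared secret used here:

```python
import hashlib
import hmac

# Toy sketch of an attested claim: an issuer signs a single fact
# ("over_18: true") so a verifier can check it without learning
# anything else about the holder. The secret is invented for the demo.
ISSUER_SECRET = b"demo-secret"

def issue_claim(claim):
    """Issuer signs one fact and hands the holder claim + signature."""
    sig = hmac.new(ISSUER_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def verify_claim(claim, sig):
    """Verifier checks the signature; a tampered claim fails."""
    expected = hmac.new(ISSUER_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

claim, sig = issue_claim("over_18: true")
print(verify_claim(claim, sig))             # True
print(verify_claim("over_18: false", sig))  # False: the claim was altered
```

The point of the sketch is the shape of the exchange: the verifier learns exactly one fact, attested by someone it trusts, and nothing more.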

I also look forward to discussing this with real people here online—and in the physical world. Toward that, here are some identity tech gatherings coming up in 2020:

I also look forward to playing whack-a-mole with robots faking interest in this post, which, because I’ll succeed, you’ll not see in the comment section below. (You probably also won’t see comments by humans, because humans prefer conversational venues not hogged by robots.)

The Los Angeles in your head is a Neutra house. You’ve seen many of them in movies, and some of them in many movies. Some of those are now gone, alas, as is the architect and preservationist who also designed, or helped design, many of the buildings that bear his surname. Dion Neutra died last week, at 93 years of work more than of age. Here is a Google search for his obituary, which brings up a great many entries.

Dion was a good man and a good friend. Here he is in our Santa Barbara back yard a few years ago:

If you read Dion’s obituaries (of which the longest and best is the LA Times’), you’ll learn much about his life, work and legacy. But I know some things that don’t quite make it through those channels, so I’ll fill in a couple of those details.

One is that Dion was a peripatetic correspondent, mostly by email, especially via his White Light newsletter, which he sent out on a schedule that rounded to always. “White Light” meant healing energy, which was directed by Dion and his readers toward friends who might need some. There were many other topics in their midst (he could hold forth at great length on you-name-it), but health was perhaps the biggest one. Over the last few months, Dion’s letter increasingly reported on his own decline (which seemed radically at odds with his high lifelong energy level, which was invested in a great deal of golf, among other physical activities), but always also about what others were up to. The last words of his last letter, on October 24, were “Lots of love to everybody. Bye!”

The other is that Dion was eager to jump on the Internet, starting in the last millennium. I know this because I was the guy he asked for help putting up his first website. Which I did, at Neutra.org: a domain name I also helped him acquire. Here is the first capture of it, by the Internet Archive, 21 years and 1 day ago. I remember arguing with Dion about making the whole site a constant appeal to save one Neutra building or another, but that turned out to be his main work, from that point onward. He failed in some efforts, but succeeded in others. Thanks to that work, Neutra architecture and all it stands for live on.

Lots of love to you and what you’ve done for us all, old friend.

A few days ago a Twitter exchange contained an “OK Boomer” response to one of my tweets. At the time I laughed it off, tweeting back a pointer to Report: Burying, Cremating Baby Boomers To Generate $200 Trillion In GDP, which ran five years ago in The Onion.

But it got me thinking that “OK Boomer” might be more—and worse—than a mere meme. Still, I wasn’t moved to say anything, because I had better stuff to do.

Then today I followed a link to Not So OK, Boomers, on Pulp. Illustrated by Goya’s horrifying Saturn Devouring His Son, it ends with this:

Goya’s Saturn does not swallow his children whole, but has taken chunks out of the body, chewing off the head and the limbs.

The cannibalism Boomers are inflicting on us appears to be closer to Goya’s vision: deranged, irreversible, and violent. Unwilling to accept a world that goes on without them, they are gluttonously consuming resources.

Their own lives have been extended, but without any appreciable gains in quality of life, and so in their rage, their confusion, they poison the air and water, they raise our cortisol levels.

What do we do with the knowledge that our parents are actively trying to harm us but are incapable of accepting the suffering they’re inflicting?

Our response is going to have to be better than depression memes and the odd glib, ‘OK Boomer,’ if we’re going to survive.

That got 2,200 claps. So far.

So this time I responded, with this:

I like Pulp, perhaps because I’ve been young a long time. But this piece is worse than wrong. It’s cruel and inflammatory.

To see how, answer this: Is there moral difference between prejudice against a race, a gender, an ethnicity, a nationality—and a demographic? If there is, it’s one of degree, not one of kind.

As soon as you otherize any human category as a them vs. an us, you’re practicing the same kind of prejudice—and, at its worst, bigotry.

Try substituting the words “women” or “blacks” for the word “Boomers” in this piece, and you get the point.

Ageism may not be worse than sexism or racism, but it’s still an ism, good only for amplifying itself, which seems to be the purpose of this piece.

Read the closing paragraphs again and ask what kind of action the author calls for that would be proportional to the cannibalism he accuses Boomers of inflicting on his generation.

And then hope it doesn’t happen.

That got 10 claps. So far.

But what the hell, I’ll continue.

If young people want to understand old people (which Boomers are, or soon will be), I suggest this: imagine that a fifth, then a quarter, then a third, then half, and then most of the people you grew up with or worked with in your life—friends, cousins, co-workers, classmates—are now dead. And that meanwhile you’re putting your useful experience and wisdom to work as best you can while you’re still able, knowing that, too soon at any age, you’ll be gone too.

There’s no shit you can give a person like that, sitting on the short end of life’s death row, that can measure up to their intimate familiarity with mortality, and with the work they still face, most of which they’ll never finish. So an “OK Boomer” put-down isn’t going to bother most of them.

But it’s still shit. Or worse.

We can all do better than that.

 

 

A Route of Evanescence,
With a revolving Wheel –
A Resonance of Emerald
A Rush of Cochineal –
And every Blossom on the Bush
Adjusts it’s tumbled Head –
The Mail from Tunis – probably,
An easy Morning’s Ride –

—Emily Dickinson
(via The Poetry Foundation)

While that poem is apparently about a hummingbird, it’s the one that comes first to my mind when I contemplate the form of evanescence that’s rooted in the nature of the Internet, where all of us are here right now, as I’m writing and you’re reading this.

Because, let’s face it: the Internet is no more about anything “on” it than air is about noise, speech or anything at all. Like air, sunlight, gravity and other useful graces of nature, the Internet is good for whatever can be done with it.

Same with the Web. While the Web was born as a way to share documents at a distance (via the Internet), it was never a library, even though we borrowed the language of real estate and publishing (domains and sites with pages one could author, edit, publish, syndicate, visit and browse) to describe it. While the metaphorical framing in all those words suggests durability and permanence, they belie the inherently evanescent nature of all we call content.

Think about the words memory, storage, upload, and download. All suggest that content in digital form has substance at least resembling the physical kind. But it doesn’t. It’s a representation, in a pattern of ones and zeros, recorded on a medium for as long as the responsible party wishes to keep it there, or the medium survives. All those states are volatile, and none guarantee that those ones and zeroes will last.

I’ve been producing digital content for the Web since the early 90s, and for much of that time I was lulled into thinking of the digital tech as something at least possibly permanent. But then my son Allen pointed out a distinction between the static Web of purposefully durable content and what he called the live Web. That was in 2003, when blogs were just beginning to become a thing. Since then the live Web has become the main Web, and people have come to see content as writing or projections on a World Wide Whiteboard. Tweets, shares, shots and posts are mostly of momentary value. Snapchat succeeded as a whiteboard where people could share “moments” that erased themselves after one view. (It does much more now, but evanescence remains its root.)

But, being both (relatively) old and (seriously) old-school about saving stuff that matters, I’ve been especially concerned with how we can archive, curate and preserve as much as possible of what’s produced for the digital world.

Last week, for example, I was involved in the effort to return Linux Journal to the Web’s shelves. (The magazine and site, which lived from April 1994 to August 2019, was briefly down, and with it all my own writing there, going back to 1996. That corpus is about a third of my writing in the published world.) Earlier, when it looked like Flickr might go down, I worried aloud about what would become of my many-dozen-thousand photos there. SmugMug saved it (Yay!); but there is no guarantee that any Website will persist forever, in any form. In fact, the way to bet is on the mortality of everything there. (Perspective: earlier today, over at doc.blog, I posted a brief think piece about the mortality of our planet, and the youth of the Universe.)

But the evanescent nature of digital memory shouldn’t stop us from thinking about how to take better care of what of the Net and the Web we wish to see remembered for the world. This is why it’s good to be in conversation on the topic with Brewster Kahle (of archive.org), Dave Winer and other like-minded folk. I welcome your thoughts as well.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers are their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it) and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
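That name-matching step typically runs on numeric embeddings: a vector of explicit facial measurements compared against enrolled vectors, with the nearest one (within a threshold) treated as a match. A minimal sketch, in which the three-number “embeddings,” the names and the threshold are all invented for illustration (real models produce vectors with hundreds of dimensions):

```python
import math

# Toy nearest-neighbor face matching. The vectors and names below are
# made up; a real system would derive embeddings from photos.
enrolled = {
    "Alice": [0.1, 0.9, 0.3],
    "Bob":   [0.8, 0.2, 0.5],
}

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.5):
    # Find the closest enrolled face; refuse to name anyone too far away.
    name, vec = min(enrolled.items(), key=lambda kv: distance(probe, kv[1]))
    return name if distance(probe, vec) < threshold else "unknown"

print(identify([0.15, 0.85, 0.35]))  # near Alice's vector -> "Alice"
print(identify([0.5, 0.5, 0.9]))     # far from everyone -> "unknown"
```

The threshold is what separates “recognized” from “stranger”: loosen it and false matches rise, which is exactly the tradeoff real deployments wrestle with.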

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools and each other.

Louis Brandeis and Samuel Warren visited the same problem more than a century ago, when they became alarmed at the implications of recording and reporting technologies that were far more primitive than the kind we have today. In response to those technologies, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.”10 Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons;11 and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer.12 The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago,13 directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal.”

But it’s hard to argue for those rights in the digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there is no limit to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s DeepFace sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI), involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of “self-exclusion,” whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

And facial recognition systems are getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%.

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and perhaps not just because they’re acquiescing to the inevitable.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials, so they could fix the leak.

As organizations should know (but in many cases have trouble learning), the risks of data exposure and damage rise with—

  • the size of the data sets,
  • the complexity of organizations and relationships, and
  • the variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late-model iPhones, which feature Face ID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But special cases such as that one haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow‘s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional, because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

However, given the unlikelihood that the facial recognition genie will ever go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example to unlock phones or to sort through old photos. But, the data they gather should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes).
  2. Facial recognition systems used to detect changing facial characteristics (such as emotions, age or wellness) should be required to forget what they see, right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted in to the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]

 

I posted this Cluetrain retrospective at doc.blog last year. I’m putting it here now because it’s timely again. Dig:

1) The original site and book are online in full at http://cluetrain.com and http://cluetrain.com/book

2) The 10th anniversary edition has new chapters by the four original authors, plus additional ones by JP Rangaswami, Dan Gillmor and Jake McKee.

3) David Weinberger and I posted an addendum to Cluetrain in 2015 called New Clues: http://cluetrain.com/newclues

4) The word “cluetrain” is more or less constantly mentioned on Twitter: https://twitter.com/search?q=cluetrain

5) A search in Google books https://www.google.com/search?tbm=bks&q=cluetrain brings up more than 13,000 results, almost nineteen years after the original was published.

6) A search in Google Scholar https://scholar.google.com/scholar?en&q=cluetrain brings up more than 4,000 results.

7) A dig through old emails just turned up the earliest evidence (at least to me) of Cluetrain’s inception: a draft of a joint JOHO (David Weinberger’s email list) and EGR (Chris Locke’s list) posting, vetted for input by yours truly. This was when the three of us were first sharing the co-thinkings that became Cluetrain in early 1999. That email is dated 30 October 1998, meaning that more than two decades have passed since this thing started.

A few weeks ago, in Where journalism fails, I wrote about how journalism, for all its high-minded (and essential) purposes, is still interested only in stories. I explained that stories have just three requirements—character, problem, and movement—and that, by focusing on those three requirements alone, journalism excludes a boundless volume of facts, many of which actually matter. I also pointed out that story-telling is vulnerable to manipulation by experts at feeding journalism’s appetites.

In this post my focus is on the near-infinite abundance of stories that have never been told, have been forgotten, or both, but some of which might still matter to somebody, or to the world.

You’ll find pointers to billions of those in cemeteries. Every headstone marks the absence of countless stories as lost and buried as the graves’ occupants. All the long-buried were characters in their own lives’ stories, and within each of those lives were countless other stories. But the characters in those stories are gone, their problems are over, and movement has ceased. All have been, or will soon be, erased by time and growing disinterests of the living—even of surviving friends and heirs.

So I want to surface a few stories of deceased ancestors and relatives of my own, whose bodies are among the 300,000+ occupants of just one cemetery: Woodlawn, in The Bronx, New York. We’ll start with my great-grandfather, Henry Roman Englert. That’s him with his first four daughters, above. Clockwise from top left are Loretto (“Loretta”), Regina (“Gene”), Ethel (my grandma Searls), and Florence. Here’s Henry as a younger man:

Here are the same four girls in the top picture, at the Jersey shore in 1953, ten years after Henry died:

All those ladies lived long full lives. The longest was Grandma (second from right), who made it to a few days short of 108.

Here’s Henry with his granddaughter, Grace (née Searls) Apgar, my father’s sister, ten years before that:

And here is his headstone, placed ten years after the shot above:

Henry R. Englert headstone

Some biography:

Henry was a fastidious dude, meaning highly disciplinary as well as disciplined. Grandma told a story about how her father, on arriving home from work, would summon his four daughters (of which she was one) to appear and stand in a row… He would then run his white glove over some horizontal surface and wipe it on a white shoulder of a daughter’s dress, expecting no dust to leave a mark on either glove or girl.

Henry was the son of German immigrants: Christian Englert and Jacobina Rung, both of Alsace, then part of Bavaria and now part of France. They were brewers, and had a tavern on the east side of Manhattan on 110th Street. (Though an 1870 census page calls Christian a “laborer.”) Jacobina was a Third Order Carmelite nun, and was buried in its brown robes. Both were born in 1825. Christian is said to have died in 1886 while picking hops in Utica. Jacobina died in 1904.

Here’s more:

  1. Henry was sometimes called “HRE.”
  2. He headed (or was said to have headed) the Steel and Copper Plate Engravers Union in New York—and was put out of business by mechanization, like many others in his trade. I don’t know what else he did after that. Perhaps he lived off savings.
  3. He was what his daughter (my grandma) called a “good socialist.”
  4. He had at least seven daughters and one son (Henry Jr., known as Harry, who died at age four).
  5. He was married twice, and outlived both his wives and three of his kids, all by long margins.
  6. His second wife, Teresa, was (again, by lore) both an alcoholic and kinda crazy. Still, she produced several children.
  7. It was said that he died after having his first dentistry—a tooth pulled, at age 87. I don’t know if that was correct, but it’s one story about him.
  8. He rarely visited the families of children by his first wife: the Knoebels (by daughter Regina, known as Gene), the Searls (by daughter Ethel, my grandma) or the Dwyers (by daughter Florence), though there seem to be plenty of pictures of him with those families.
  9. Nobody alive can say why the graves of the wives and kids buried with him are unmarked, or why Henry’s is the only headstone. Here’s some detail on who lies where in his plot:

Henry Roman Englert, wives and kids

My grandmother and her sisters used to take their families on picnic trips to this plot when it was unmarked. Why did they not mark it before Henry died? Nobody who knew is alive to say.

About 80 feet away is an older three-grave plot occupied by Henry’s parents, plus one of his brothers and three cousins and in-laws named Fehn*:

Woodlawn’s own records say this about the distribution of the graves and their occupants:

Left:

  1. Theresa M. Fen, 10 mos 8/2/1887
  2. Agnes Fen, 1 yr

Center:

  1. Annie T. Englert, 29 yrs, 4/12/1881 Bellview Hosp. NYC
  2. Christian Englert, 60 yrs, 10/4/1886, 16 Devereux St. Utica, NY
  3. Jacobina Englert, 78 yrs & 7′ deep, 3/1/04 110 e. 106th St. NYC

Right:

  1. Christian P. Englert, 33 yrs 4/12/1891 Bellview Hosp. NYC
  2. Henry W. Fehn, 85 yrs 10/23/1948 Am Vet

A hmm here: to bury Jacobina 7 feet deep, they surely would have had to dig past her husband (dead 18 years) and daughter (dead 23 years), and to have encountered bones along the way. I can say that because I’ve seen evidence that bones survive well in glacial till (about which more later). So I suspect that this three-person grave is seven feet deep, with the final occupant stacked on top.

Also, since Jacobina was a Carmelite nun, I call her “Nun of the Below.”

Further digging of the research kind, done by my aunt Katherine (née Dwyer) Burns (daughter of Florence Englert), turns up an 1870 census page that says this about the Englert family at that time:

  1. Christian, from Bavaria, a laborer, age 45
  2. Jacobina, from Bavaria, “keeps houses,” age 45
  3. Henry, “(illegible) engraver,” age 15
  4. Christian, age 12
  5. Annie, age 9
  6. Mary, age 7*
  7. Andrew, age 4

*Mary, I gather, married a Fehn. Here’s a clue. [Later…] Ah! I found a better one:

Mary A Fehn (born Englert), 1863 – 1957

(This is from Geni.com, which wants money to reveal details at those links.)

Mostly I’m impressed that, among Christian and Jacobina’s kids, Mary and Henry alone lived long lives.

Here are Christian and Jacobina, in life, perhaps around the time of the 1870 census:

And here are their three sons, with Henry’s first three daughters, the future Grandma Searls on the right:

There are differences between the caption I wrote under that photo eight years ago (based on what I knew, or thought I knew, at the time) and Grace’s comment below it, posted when she was 100 years old. (Grace rocked. Here’s her 100th birthday party, in Maine.) In that comment, Grace says she thinks the one on the left is Andrew, and the one in the middle is Christopher, by which I’m sure she means Christian (the younger). Both died not long after this photo was taken. Not clear whether Christian or Andrew was the one who died of a terminal cold acquired while working in a frozen food warehouse or something.

While Andrew’s not in the Englert plot above, he is in an unmarked one nearby, which Woodlawn identifies thus:

  1. Andrew J. Englert, 35 yrs, 5/29/1901
  2. Annie C. Englert, 67 yrs, 11/17/1935

I suppose, since Andrew’s sister Annie (named Anna) is buried with her parents and brother Christian (among various Fehns), that this Annie C. was Andrew’s wife. Here is a shot of that grave.

And here is a Google Earth GPS trace of a visit to all three gravesites: Henry at B, his parents Christian and Jacobina + sibs Anna (Annie) and Christian at A, and Andrew + (wife?) Annie at C:

At D in that shot is a collection of headstones for New York’s Association for the Relief of Respectable Aged Indigent Females, which occupied a beautiful Victorian gothic building in Harlem that is now home to a youth hostel. The Wikipedia entry at the last link fails to mention the cemetery. (I should correct that.)

Last is the Knoebel plot, nearby. Bigger than any of the Englert plots, it is first in a way, because Regina Knoebel was the eldest of Henry Englert’s many children. It looks like this:

From the caption under that photo:

The six-grave, twelve-body Knoebel plot is described by Woodlawn Cemetery here. Since the descriptions of those graves don’t quite agree with some of the headstones (for example, with spellings), I’ve combined the two in the description below.

First, behind the main monument are three graves. Left to right, they are—

1
Lillian (Lillie) Raichle, 1876-3/3/1958, 81 years
Lillian W. Raichle, 1902-1907, 5 years
Herman Raichle, 1877-1933

2
Sarah Bladen, 1864 to 1926, 61 years

3
Henry Vier, 8 years
Rita P. Knoebel, 81 years, 2/15/92

All three have headstones.

In front of the monument are three more graves. Left to right, they are—

4
John E. Knoebel, 78 years 9/4/50
John E. Knoebel, 84 years, 12/25/2000
Regina Knoebel, 80 years, 1/6/1960, exhumed on 10/7/70 and reburied in Fairview Cemetery in New Jersey

5
John E. Knoebel, 61 years

6
Louis F. Knoebel, 50 years, 11/11/2013
Anastasia Knoebel, 60 years

Note that grave 4 is a bit sunken. This is the one from which Regina (née Englert) Knoebel (Aunt Gene), who was married to one of the John E. (“Johnny”) Knoebels, was exhumed, and in which her son John E. was, apparently, buried in her stead.

A story I recall about Aunt Gene, almost certainly apocryphal but still interesting, is that she once climbed a spire of rock in New Jersey’s Palisades and carved her initials, “RE,” near the top—and that these were later visible from the George Washington Bridge, because it was built right next to the spire. (On the North side.)

Lending credence to this story is an absent fear of heights that runs in my father’s family (his mom was Gene’s younger sister Ethel). Pop also grew up on the Palisades and was a cable rigger working on the Bridge itself. (Here he is.) And I do at least recall Aunt Gene as the most alpha (being the eldest) of the four Englert sisters; so it kinda seemed in character that she might do such a thing. But … I have no idea. I’ve been by there many times since then, and the whole face of the Palisades is so overrun with greenery now that it’s hard to tell if a spire is even there. I do recall that there was one, though.

Yet the sad but true summary of all this is that today none of these people matter much to anybody, even though most or all of them mattered to others a great deal when they were alive. Living relatives, including me, are all way too busy with stories of their own, and long since past caring much, if at all, about any of the departed here.

A measure of caring about the preservation of graves at Woodlawn is whether or not the headstone is “endowed,” meaning maintained in its original upright and above-ground condition. The elder Englerts’ stone, as we see in the shot above, is endowed. Henry’s, I suppose, is not, but appears to be holding up. So far.

Many of those not endowed are sinking into the Earth. See the examples here, here and here. The last of those is this:

The gravestone business calls its products memorials, defined as “something, especially a structure, established to remind people of a person or event.” The headstone above may have reminded some people a century ago of Henry Kremer and his infant namesake, but today I find nothing about either online. And soon this stone, like so many others around it, will be buried no less than the graves they once marked, simply because most of Woodlawn, like much of New York City itself, is barely settled glacial till, and so soft you can dig it with a spoon. (In fact, New York’s glacial history is far more interesting today than the lives of nearly all the inhabitants of its cemeteries. That’s why it makes the great story at that last link.)

Archeology is “the study of human history and prehistory through the excavation of sites and the analysis of artifacts and other physical remains.” These days we do much of that online, in digital space. It’s what I’m doing here, in some faith that at least a few small bits of what I tossed out in the paragraphs above will prove useful to story-tellers among the living.

And I suggest that this, and not just telling the usual stories, needs to be a bigger part of journalism’s calling in our time.
