I really don’t want to bust Zoom. No tech company on Earth is doing more to keep civilization working at a time when it could so easily fall apart.

Zoom does that by providing an exceptionally solid, reliable, friendly, flexible, useful (and even fun!) way for people to be present with each other, regardless of distance.

No wonder Zoom is now to conferencing what Google is to search. Meaning: it’s a verb. Case in point: between the last sentence and this one, a friend here in town sent me an email that began with this:

That’s a screen shot.

What an amazing grace to have a service in our midst that gives us meet space that’s closer to meat space than anything else in the world.

Zoom also has problems, and I’ve spent two posts, so far, busting them for one of those problems: their apparent lack of commitment to personal privacy:

  1. Zoom needs to clean up its privacy act
  2. More on Zoom and privacy

With this third one I’d like to turn that around.

I’ll start with the email I got yesterday from a person at a company engaged by Zoom for (seems to me) reputation management, asking me to update my posts based on the “facts” (his word) in this statement:

Zoom takes its users’ privacy extremely seriously, and does not mine user data or sell user data of any kind to anyone. Like most software companies, we use third-party advertising service providers (like Google) for marketing purposes: to deliver tailored ads to our users about Zoom products the users may find interesting. (For example, if you visit our website, later on, depending on your cookie preferences, you may see an ad from Zoom reminding you of all the amazing features that Zoom has to offer). However, this only pertains to your activity on our Zoom.us website. The Zoom services do not contain advertising cookies. No data regarding user activity on the Zoom platform – including video, audio and chat content – is ever used for advertising purposes. If you do not want to receive targeted ads about Zoom, simply click the “Cookie Preferences” link at the bottom of any page on the zoom.us site and adjust the slider to ‘Required Cookies.’

I don’t think this squares with what Zoom says in the “Does Zoom sell Personal Data?” section of its privacy policy (which I unpacked in my first post, and that Forbes, Consumer Reports and others have also flagged as problematic)—or with the choices provided in Zoom’s cookie settings, which list 70 (by my count) third parties whose involvement in Zoom sessions the user can opt into or out of (by a set of options I unpacked in my second post). The logos in the image above are just 16 of those 70 parties, some of which include more than one domain.

Also, if all the ads shown to users are just “about Zoom,” why are those other companies in the picture at all? Specifically, under “About Cookies on This Site,” the slider is defaulted to allow all “functional cookies” and “advertising cookies,” the latter of which are “used by advertising companies to serve ads that are relevant to your interests.” Wouldn’t Zoom be in a better position to know your interests, respecting Zoom, than all those other companies?

More questions:

  1. Are those third parties “processors” under the GDPR, or “service providers” under the CCPA’s definition? (I’m not an authority on either, so I’m asking.)
  2. How do these third parties know what your interests are? (Presumably by tracking you, or by learning from others who do.)
  3. What data about you do those companies leave with Zoom (or with each other, somehow) after you’ve been exposed to them on the Zoom site?
  4. What targeting intelligence do those companies bring with them to Zoom’s pages because you’re already carrying cookies from those companies, and those cookies can alert those companies (or others, for example through real time bidding auctions) to your presence on the Zoom site?
  5. If all Zoom wants to do is promote Zoom products to Zoom users (as that statement says), why bring in any of those companies?

Here is what I think is going on (and I welcome corrections): Because Zoom wants to comply with GDPR and CCPA, they’ve hired TrustArc to put that opt-out cookie gauntlet in front of users. They could just as easily have used Quantcast’s system, or consentmanager’s, or OneTrust’s, or somebody else’s.

All those services are designed to give clients such as Zoom a way to obey the letter of the GDPR while violating its spirit. That spirit says stop tracking people unless they ask you to, consciously and deliberately: opting in, rather than opting out. And the same goes for the CCPA. Every time you click “Accept” to one of those cookie notices, you know you’ve just lost one more battle in a losing war for your privacy online.

I also assume that Zoom’s deal with TrustArc—and, by implication, all those 70 other parties listed in the cookie gauntlet—also requires that Zoom put a bunch of weasel-y jive in their privacy policy. Which looks suspicious as hell, because it is.

Zoom can fix all of this easily by just stopping it. Other companies—ones that depend on adtech (tracking-based advertising)—don’t have that luxury. But Zoom does.

If we take Zoom at its word (in that paragraph they sent me), they aren’t interested in being part of the adtech fecosystem. They just want help aiming promotional ads for their own services, on their own site.

Three things about that:

  1. Neither the Zoom site, nor the possible uses of it, are so complicated that they need aiming help from those third parties.
  2. Zoom right now enjoys the world’s strongest sellers’ market, meaning it hardly needs to advertise at all.
  3. Being in adtech’s fecosystem raises huge fears about what Zoom and those third parties might be doing where people actually use Zoom most of the time: in its app. Again, Consumer Reports, Forbes and others have assumed, as have I, that the company’s embrace of adtech in its privacy policy means that the same privacy exposures are there in the app (where they would also be easier to hide).

By severing its ties with adtech, Zoom can start restoring people’s faith in its commitment to personal privacy.

There’s a helpful model for this: Apple’s privacy policy. Zoom is in a position to have a policy like that one because, like Apple, Zoom doesn’t need to be in the advertising business. In fact, Zoom could follow Apple’s footprints out of the ad business.

And then Zoom could do Apple one better, by participating in work going on already to put people in charge of their own privacy online, at scale. In my last post, I named two organizations doing that work. Four more are the Me2B Alliance, Kantara, ProjectVRM, and MyData.

Finally, I’d be glad to help with that too. If anyone at Zoom is interested, contact me directly this time. Thanks.


Zoom needs to clean up its privacy act, which I posted yesterday, hit a nerve. While this blog normally gets about 50 reads a day, by the end of yesterday it got more than 16,000. So far this morning (11:15am Pacific), it has close to 8,000 new reads. Most of those owe to this posting on Hacker News, which topped the charts all yesterday and has 483 comments so far. If you care about this topic, I suggest reading them.

Also, while this was going down, as a separate matter (with a separate thread on Hacker News), Zoom got busted for leaking personal data to Facebook, and promptly plugged it. Other privacy issues have also come up for Zoom. For example, this one.

But I want to stick to the topic I raised yesterday, which requires more exploration, for example into how one opts out from Zoom “selling” one’s personal data. This morning I finished a pass at that, and here’s what I found.

First, by turning off Privacy Badger on Chrome (my main browser of the moment) I got to see Zoom’s cookie notice on its index page, https://zoom.us/. (I know, I should have done that yesterday, but I didn’t. Today I did, and we proceed.) It said,

To opt out of Zoom making certain portions of your information relating to cookies available to third parties or Zoom’s use of your information in connection with similar advertising technologies or to opt out of retargeting activities which may be considered a “sale” of personal information under the California Consumer Privacy Act (CCPA) please click the “Opt-Out” button below.

The buttons below said “Accept” (pre-colored a solid blue, to encourage a yes), “Opt-Out” and “More Info.” Clicking “Opt-Out” made the notice disappear, revealing, in the tiny print at the bottom of the page, linked text that says “Do Not Sell My Personal Information.” Clicking on that link took me to the same place I later went by clicking on “More Info”: a pagelet (pop-over) that’s basically an opt-in notice:

By clicking on that orange button, you’ve opted in… I think. Anyway, I didn’t click it, but instead clicked on a smaller and less noticeable “advanced settings” link off to the right. This took me to a pagelet with this:

The “view cookies” links popped down to reveal 16 CCPA Opt-Out “Required Cookies,” 23 “Functional Cookies,” and 47 “Advertising Cookies.” You can’t separately opt out or in of the “required” ones, but you can do that with the other 70 in the sections below. It’s good, I suppose, that these are defaulted to “Out.” (Or seem to be, at least to me.)

So I hit the “Submit Preferences” button and got this:

All the pagelets say “Powered by TrustArc,” by the way. TrustArc is an off-the-shelf system for giving companies a way (IMHO) to obey the letter of the GDPR while violating its spirit. These systems do that by gathering “consents” to various cookie uses. I suppose Zoom is doing all this off a TrustArc API, because one of the cookies it wants to give me (blocked by Privacy Badger before I disabled that) is called “consent.trustarc.com.”

So, what’s going on here?

My guess is that Zoom is doing marketing from the lead-generation playbook, meaning that most of its intentional data collection is actually for its own use in pitching possible customers, or its own advertising on its own site, and not for leaking personal data to other parties.

But that doesn’t mean you’re not exposed, or that Zoom isn’t playing in the tracking-based advertising (aka adtech) fecosystem, and therefore is to some degree in the advertising business.

Seems to me, by the choices laid out above, that any of those third parties (up to 70 of them in my view above) are free to gather and share data about you. Also free to give you “interest based” advertising based on what those companies know about your activities elsewhere.

Alas, there is no way to tell what any of those parties actually do, because nobody has yet designed a way to keep track of, or to audit, any of the countless “consents” you click on or default to as you travel the Web. Also, the only thing keeping those valves closed in your browser are cookies that remember which valves do what (if, in fact, the cookies are set and they actually work).

And that’s only on one browser. If you’re like me, you use a number of browsers, each with its own jar of cookies.

The Zoom app is a different matter, and that’s mostly where you operate on Zoom. I haven’t dug into that one. (Though I did learn, on the ProjectVRM mailing list, that there is an open source Chrome extension, called Zoom Redirector, that will keep your Zoom session in a browser and out of the Zoom app.)

I did, however, dig down into my cookie jar in Chrome to find the ones for zoom.us. It wasn’t easy. If you want to leverage my labors there, here’s my crumb trail:

  1. Settings
  2. Site Settings
  3. Cookies and Site Data
  4. See all Cookies and Site Data
  5. Zoom.us (it’s near the bottom of a very long list)

The URL for that end point is this: chrome://settings/cookies/detail?site=zoom.us. (Though dropping that URL into a new window or tab works only some of the time.)

I found 25 cookies in there. Here they are:

_zm_cdn_blocked
_zm_chtaid
_zm_client_tz
_zm_ctaid
_zm_currency
_zm_date_format
_zm_everlogin_type
_zm_ga_trackid
_zm_gdpr_email
_zm_lang
_zm_launcher
_zm_mtk_guid
_zm_page_auth
_zm_ssid
billingChannel
cmapi_cookie_privacy
cmapi_gtm_bl
cred
notice_behavior
notice_gdpr_prefs
notice_preferences
slirequested
zm_aid
zm_cluster
zm_haid

Some have obvious and presumably innocent meanings. Others … can’t tell. Also, these are just Zoom’s cookies. If I acquired cookies from any of those 70 other entities, they’re in different bags in my Chrome cookie jar.
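Out of curiosity, the names alone hint at who set what. Here is a minimal Python sketch that groups them by prefix; the prefix-to-owner guesses are entirely mine, not anything documented by Zoom or TrustArc:

```python
from collections import Counter

# The 25 cookie names found under zoom.us in Chrome's cookie jar.
cookies = [
    "_zm_cdn_blocked", "_zm_chtaid", "_zm_client_tz", "_zm_ctaid",
    "_zm_currency", "_zm_date_format", "_zm_everlogin_type",
    "_zm_ga_trackid", "_zm_gdpr_email", "_zm_lang", "_zm_launcher",
    "_zm_mtk_guid", "_zm_page_auth", "_zm_ssid", "billingChannel",
    "cmapi_cookie_privacy", "cmapi_gtm_bl", "cred", "notice_behavior",
    "notice_gdpr_prefs", "notice_preferences", "slirequested",
    "zm_aid", "zm_cluster", "zm_haid",
]

def guess_owner(name: str) -> str:
    """Guess who set a cookie from its name prefix (my assumption only)."""
    if name.startswith(("_zm_", "zm_")):
        return "zoom"             # looks like Zoom's own first-party cookies
    if name.startswith(("notice_", "cmapi_")):
        return "consent manager"  # names typical of TrustArc-style consent tools
    return "unknown"

counts = Counter(guess_owner(n) for n in cookies)
print(counts)
```

By that crude grouping, 17 names look like Zoom’s own, five look like the consent machinery’s, and three (billingChannel, cred, slirequested) are anyone’s guess.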

Anyway, my point remains the same: Zoom still doesn’t need any of the advertising stuff—especially since they now (and deservedly) lead their category and are in a sellers’ market for their services. That means now is a good time for them to get serious about privacy.

As for fixing this crazy system of consents and cookies (which was broken when we got it in 1994), the only path forward starts on your side and mine. Not on the sites’ side. What each of us needs is our own global way to signal our privacy demands and preferences: a Do Not Track signal, or a set of standardized and easily-read signals that sites and services will actually obey. That way, instead of you consenting to every site’s terms and policies, they consent to yours. Much simpler for everyone. Also much more like what we enjoy here in the physical world, where the fact that someone is wearing clothes is a clear signal that it would be rude to reach inside those clothes to plant a tracking beacon on them—a practice that’s pro forma online.
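Mechanically, such a signal is simple; the hard part has always been getting sites to honor it. Do Not Track was literally a “DNT: 1” header sent with every request, and the newer Global Privacy Control signal is “Sec-GPC: 1.” A sketch of the client side using Python’s standard library (the URL is just a placeholder, and the request is built but never sent):

```python
import urllib.request

# Privacy-preference signals are just request headers:
# "DNT: 1" is the original Do Not Track signal, and "Sec-GPC: 1"
# is the Global Privacy Control opt-out signal.
privacy_headers = {
    "DNT": "1",
    "Sec-GPC": "1",
}

# Build (but don't send) a request carrying both signals.
req = urllib.request.Request("https://example.com/", headers=privacy_headers)

# urllib stores header names in capitalized form, hence "Dnt"/"Sec-gpc".
print(req.get_header("Dnt"), req.get_header("Sec-gpc"))
```

Nothing in the protocol obliges a site to honor either header; that gap is the whole problem.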

We can come up with that new system, and some of us are working on exactly that. My own work is with Customer Commons. The first Customer Commons term you can proffer, and sites can agree to, is called #P2B1(beta), better known as #NoStalking. It says this:


By agreeing to #NoStalking, publishers still get to make money with ads (of the kind that have worked since forever and don’t involve tracking), and you know you aren’t being tracked, because you have a simple and sensible record of the agreement in a form both sides can keep and enforce if necessary.

Toward making that happen I’m also involved in an IEEE working group called P7012 – Standard for Machine Readable Personal Privacy Terms.

If you want to help bring these and similar solutions into the world, talk to me. (I’m first name @ last name dot com.) And if you want to read some background on the fight to turn the advertising fecosystem back into a healthy ecosystem, read here. Thanks.


[This is the first in a series of posts. If you’re interested in the topic, please read all of them. The one that follows this is More on Zoom and Privacy.]

As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published Zoom Calls Aren’t as Private as You May Think. Here’s What You Should Know: Videos and notes can be used by companies and hosts. Here are some tips to protect yourself. And there was already lots of bad PR. A few samples:

There’s too much to cover here, so I’ll narrow my inquiry down to the “Does Zoom sell Personal Data?” section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I’ll comment on the second one, starting here:

… Zoom does use certain standard advertising tools which require Personal Data…

What they mean by that is adtech. What they’re also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there’s a good chance it would be blocking trackers as well. But in the Zoom app, you can’t tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There’s no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger’s Re-Engineering Humanity and Shoshana Zuboff’s The Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an “advertising experience,” personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the “sale” of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom’s purposes.

By “certain state laws” I assume they mean California’s new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a “Following the instructions of our users” section, addressing the CCPA, that’s as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn’t think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn’t they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a “sale”).

While advertising has been around since forever, tracking people’s eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without “advertising programs” that “historically operated in this manner.”

By the way, “this manner” is only called advertising. In fact it’s actually a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising’s Wheat and Chaff.

If you opt out of “sale” of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can’t find anything about opting out of “‘sale’ of your personal info.” (Later, I did get somewhere, and that’s in the next post, More on Zoom and Privacy.)

Here’s the thing: Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps whose makers have just as absolute an interest in privacy. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

What Zoom’s current privacy policy says is worse than “You don’t have any privacy here.” It says, “We expose your virtual necks to data vampires who can do what they will with it.”

Please fix it, Zoom.

As for Zoom’s competitors, there’s a great weakness to exploit here.

Next post on the topic: More on Zoom and Privacy.


Three weekends ago, we drove from New York to Baltimore to visit with family. We had planned this for a while, but there was added urgency: knowing the world was about to change in a big way. Or in many big ways.

The hints were clear, from China and elsewhere: major steps would need to be taken—by people, businesses and governments—to slow the spread of a new virus against which there was yet no defense other than, mainly, hiding out. Not only were quarantines likely, but it was reasonable to suspect that whole sectors of the economy would be disabled.

Since then, all that has happened. And more.

On the drive down we also tried to guess, just among ourselves, about what would be the second, third and fourth order effects of, for example, shutting down retail, education or other social and economic sectors. None of our guesses came close to what has happened since then, or what the full effects will be.

As of today, sports, live entertainment, conferences, travel, church, education, business, restaurants, and much more are closed, reduced, forbidden or sphinctered to trickles of activity. Levels of economic and social anesthesia, and degrees of personal freedom (and risk) differ widely by state, county and municipality. As for effects, however, it’s hard to see far beyond the obvious: domestic confinements, closed stores, empty streets, trucks still rolling down highways.

Two weeks ago today, a few days after that weekend, my wife and I relocated our butts to our house in Santa Barbara and haven’t left since then except for two quick trips to a market (by my wife) and daily long walks in the woods (by me). We are also working more than ever, it seems, mostly on our computers and phones. This Internet thing timed its existence well.

As for writing, a rule I generally fail to follow is the one Quakers have for silent meetings: “Don’t speak unless you can improve on the silence.” But what we have now, with this coronavirus pandemic, is the opposite of silence. I don’t know how to improve on that, so I’ll default for now to the Quaker option.

Leaders in business and government do need to speak up, of course. I hope you listen to them and make up your own mind about what they say. Meanwhile I’ll stick to sharing what I hope might be useful, inside my own communities. Also trying to get some work done in what I’m sure we can all agree is a very pivotal moment in world history.

Another thing we might be sure about is that there will be no end to books, movies and plays about this moment in time. I just hope it’ll be fun, in at least some ways, to look back on it.

The picture of Freddy Herrick I carry everywhere is in my wallet, on the back of my membership card for a retail store. It got there after I loaned my extra card to Freddy so he could use it every once in a while. As Freddy explained it, one day, while checking out at the store, he was notified at the cash register that the card had expired. So he went to the service counter and presented the card for renewal. When the person behind the counter looked at my picture on the card and said, “This doesn’t look like you,” Freddy replied, “That was before the accident.” The person said “Okay,” and shot Freddy’s picture, which has appeared on the back of that same membership card every year it has been issued since then.

I met Freddy in 2001, when I first arrived in Santa Barbara, and he was installing something at the house we had just bought. When my wife, who had hired him for the work, introduced Freddy to me, he pointed at my face and said, “July, 1947.”

“Right,” I replied.

“Me too.” Then he added, “New York, right?”

“New Jersey, across the river in Fort Lee.”

“Well, close enough. New York for me. Long Island.”

“How do you know this stuff?”

“I don’t know. I’ve never done anything like this before. It’s just weird.”

Everything was weird with Freddy, who became my best friend in Santa Barbara that very day. In the years since then he has also remained one of the most interesting people I’ve ever known.

Freddy was an athlete, an author, a playwright, a screenwriter and an actor, most of whose work is still unpublished, sitting in boxes and on floppies, hard drives and various laptops. These last few months, while avoiding doctors yet sick with what turned out to be liver cancer, he was working on a deal for one of his scripts. I hope it still goes through somehow, for the sake of his family and his art. The dude was exceptionally talented, smart, funny, generous and kind. He could also fix anything, which is why he mostly worked as a handyman the whole nineteen years I’ve known him.

Freddy grew up in wealth, and did his best to avoid that condition for most of his life, or at least for the nineteen years I knew him. This was manifested in a number of odd and charming ways. For example, his car was an early-’60s Volkswagen bug he drove for more than fifty years.

I last saw Freddy in late January, before I headed to New York. And, though I later learned his cancer was terminal, I did expect to find him among the living when I got back to Santa Barbara on Wednesday. Alas, I learned this morning that he died at home in his sleep last Saturday.

Freddy talked about death often, and in an almost casual and friendly way. Both his parents died in middle age, as did Jeff MacNelly, a childhood friend of Freddy’s who also happened to be—in the judgement of us both—the best cartoonist who ever lived. Measured against the short lives of those three, Freddy felt that every year he lived past their spans was a bonus.

And all those years were exactly that, for all who knew him.

Rest in Fun, old friend.

Freeman Dyson

By his own description, Freeman was a frog:

Some mathematicians are birds, others are frogs. Birds fly high in the air and survey broad vistas of mathematics out to the far horizon. They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and see only the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time. I happen to be a frog, but many of my best friends are birds.

What he came to call birds and frogs he first labeled “unifiers and diversifiers.” Or so I gathered at his lecture on Michael Polanyi at UNC, back when I lived in Chapel Hill, long before I got to hang with Freeman at his daughter Esther’s wonderful (and still missed) PC Forum conferences.

When I eventually got to talk Polanyi with Freeman, I also brought up Polanyi’s friend Arthur Koestler, who in one of his own lectures said Polanyi was a brilliant thinker but a terrible writer. Both were birds, Freeman told me. But Freeman’s opinions of the two were divided as well. While he liked Polanyi’s work, especially around the role of tacit knowing and inquiry in science, he also had to agree with Koestler about the opacity of Polanyi’s writing. (Far as I know, Polanyi’s only memorable one-liner was “We know more than we can tell.”) And, while Freeman admired Koestler’s writing, he found some of it, especially the stuff about parapsychology (a field in which I had also labored for a while, and about which Freeman, naturally, knew a great deal), “delightful but wrong.”

One time at LAX, long after Esther’s conference ceased, I ran into Freeman on a shuttle bus. He was connecting from a visit with family, he said, and our brief conversation was entirely about his kids and grandkids. He was delighted with all of them.

Freeman worked tirelessly throughout his life, during which he starred in more than a dozen documentaries, wrote even more books, and made countless contributions to many sciences. Also, as an alpha frog, he raised at least as many questions as he answered.

It was out of character for Freeman to die, which he did last week at age 96. For me his death recalls what someone said of Amos Tversky: “death is unrepresentative of him.” The world is less without Freeman, but his body of work and the questions he left behind have value beyond measure.

Earth is 4.54 billion years old. It was born 9.247 billion years after the Big Bang, which happened 13.787 billion years ago. Meaning that our planet is a third the age of the Universe.

Hydrogen, helium and other light elements formed with the Big Bang, but the heavy elements were cooked up at various times in an 8 billion year span before our solar system was formed, and some, perhaps, are still cooking.

Best we know so far, life appeared on Earth at least 3.5 billion years ago. Oxygenation sufficient to support life as we know it happened at the start of the Proterozoic eon, about 2.5 billion years ago. The Phanerozoic eon began 0.541 billion years ago, and is characterized by an abundance of plants and animals. It will continue until the Sun gets so large and hot that photosynthesis as we know it becomes impossible. A rough consensus within science is that this will likely happen in just 600 million years, meaning we’re about 80% of our way through the time window for life on Earth.

Some additional perspective: the primary rock formation on which most of Manhattan’s ranking skyscrapers repose—Manhattan Schist—is itself about a half billion years old. (My ass is three floors above some right now.)

In another 4.5 billion years, our galaxy, the Milky Way, will become one with Andromeda, which is currently 2.5 million light years distant but headed our way on a collision course. The two will begin merging (not colliding, because nearly all stars are too far apart for that) around 4 billion years from now, and will complete a new galaxy about 7 billion years from now. Here is a simulation of that future. Bear in mind when watching it that it covers the next 8 billion years. Our Sun, by the way, will likely be around for all of that, though by the time it’s over the Sun will have become a red giant with a diameter wider than Earth’s orbit, and perhaps by then will have gone past that, shrinking down into a white dwarf.
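The round numbers in this timeline are easy to sanity-check; by these figures the life-window fraction comes out nearer 85% than 80%, depending on where you start the clock:

```python
# Sanity-check the cosmic timeline arithmetic (all figures are the
# round numbers quoted above, in years).
universe_age   = 13.787e9  # time since the Big Bang
earth_age      = 4.54e9    # age of Earth
life_started   = 3.5e9     # years ago that life appeared on Earth
life_remaining = 0.6e9     # years until photosynthesis likely ends

# Earth formed this long after the Big Bang (~9.247 billion years):
print(universe_age - earth_age)

# Earth's age as a fraction of the Universe's (~1/3):
print(earth_age / universe_age)

# How far through Earth's window for life we are (~0.85):
fraction = life_started / (life_started + life_remaining)
print(fraction)
```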

In TIME WITHOUT END: PHYSICS AND BIOLOGY IN AN OPEN UNIVERSE, Freeman Dyson gives these estimates for the future age of the Universe:

TABLE I. Summary of time scales.

Closed Universe
Total duration 10^11 yr

Open Universe
Low-mass stars cool off 10^14 yr
Planets detached from stars 10^15 yr
Stars detached from galaxies 10^19 yr
Decay of orbits by gravitational radiation 10^20 yr
Decay of black holes by Hawking process 10^64 yr
Matter liquid at zero temperature 10^65 yr
All matter decays to iron 10^1500 yr
Collapse of ordinary matter to black hole
[alternative (ii)] 10^(10^26) yr
Collapse of stars to neutron stars
or black holes [alternative (iv)] 10^(10^76) yr

So, at the short end the Universe is now about 14% of the way through its lifespan, and at the long end the elapsed fraction has many zeros to the right of the decimal point. In biological terms, that means it’s not even a baby, or a fetus: more like a zygote, or a blastula.
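Dyson’s table makes those fractions easy to compute, taking the shortest scale from each scenario:

```python
# Fraction of the Universe's lifespan elapsed so far, per Dyson's table.
now = 13.787e9  # years since the Big Bang

# Closed universe: total duration ~1e11 years.
closed_fraction = now / 1e11
print(closed_fraction)  # about 0.14

# Open universe: even the earliest milestone (low-mass stars cooling
# off at ~1e14 years) makes the present a tiny sliver of the whole.
open_fraction = now / 1e14
print(open_fraction)
```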

So maybe… just maybe… the forms of life we know on Earth are just early prototypes of what’s to come in the fullness of time, space and evolving existence.

Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a safe bet that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see three ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)

 

 


This is the Ostrom Memorial Lecture I gave on 9 October of last year for the Ostrom Workshop at Indiana University. Here is the video. (The intro starts at 8 minutes in, and my part starts just after 11 minutes in.) I usually speak off the cuff, but this time I wrote it out, originally in outline form*, which is germane to my current collaborations with Dave Winer, father of outlining software (and, in related ways, of blogging and podcasting). So here ya go.

Intro

The movie Blade Runner was released in 1982 and set in a future Los Angeles. Anyone here know when in the future Blade Runner is set? I mean, exactly?

The year was 2019. More precisely, next month: November.

In Blade Runner’s 2019, Los Angeles is a dark and rainy hellscape with buildings the size of mountains, flying cars, and human replicants working on off-world colonies. It also has pay phones and low-def computer screens that are vacuum tubes.

Missing is a communication system that can put everyone in the world at zero distance from everyone else, in disembodied form, at almost no cost—a system that lives on little slabs in people’s pockets and purses, and on laptop computers far more powerful than any computer, of any size, from 1982.

In other words, this communication system—the Internet—was less thinkable in 1982 than flying cars, replicants and off-world colonies. Rewind the world to 1982, and the future Internet would appear a miracle dwarfing the likes of loaves and fish.

In economic terms, the Internet is a common pool resource; but non-rivalrous and non-excludable to such an extreme that to call it a pool or a resource is to insult what makes it common: that it is the simplest possible way for anyone and anything in the world to be present with anyone and anything else in the world, at costs that can round to zero.

As a commons, the Internet encircles every person, every institution, every business, every university, every government, every thing you can name. It is no more exhaustible than presence itself. By nature and design, it can’t be tragic, any more than the Universe can be tragic.

There is also only one of it. As with the universe, it has no other examples.

As a source of abundance, the closest thing to an example the Internet might have is the periodic table. And the Internet might be even more elemental than that: so elemental that it is easy to overlook the simple fact that it is the largest goose ever to lay golden eggs.

It can, however, be misunderstood, and that’s why it’s in trouble.

The trouble it’s in is with human nature: the one that sees more value in the goose’s eggs than in the goose itself.

See, the Internet is designed to support every possible use, every possible institution, and—alas—every possible restriction, which is why enclosure is possible. People, institutions and possibilities of all kinds can be trapped inside enclosures on the Internet. I’ll describe nine of them.

Enclosures

The first enclosure is service provisioning, for example with asymmetric connection speeds. On cable connections you may have up to 400 megabits per second downstream, but still only 10 megabits per second—one fortieth of that—upstream. (By the way this is exactly what Spectrum, formerly Time Warner Cable, provides with its most expensive home service to customers in New York City.)

They do that to maximize consumption while minimizing production by those customers. You can consume all the video you want, and think you’re getting great service. But meanwhile this asymmetrical provisioning prevents production at your end. Want to put out a broadcast or a podcast from your house, to run your own email server, or to store your own video or other personal data in your own personal “cloud”? Forget it.

The Internet was designed to support infinite production by anybody of anything. But cable TV companies don’t want you to have that power. So you don’t. The home Internet you get from your cable company is nice to have, but it’s not the whole Internet. It’s an enclosed subset of capabilities biased by and for the cable company and large upstream producers of “content.”

So, it’s golden eggs for them, but none for you. Also missing are all the golden eggs you might make possible for those companies as an active producer rather than as a passive consumer.

The second enclosure is through 5G wireless service, currently promoted by phone companies as a new generation of Internet service. The companies deploying 5G promise greater speeds and lower lag times over wireless connections; but it is also clear that they want to build in as many choke points as they can, all so you can be billed for as many uses as possible.

You want gaming? Here’s our gaming package. You want cloud storage? Here’s our cloud storage package. Each of these uses will carry terms and conditions that allow some uses and prevent others. Again, this is a phone company enclosure. No cable companies are deploying 5G. They’re fine with their own enclosure.

The third enclosure is government censorship. The most familiar example is China’s. In China’s closed Internet you will find no Google, Facebook, Twitter, Instagram or Reddit. No Pandora, Spotify, Slack or Dropbox. What you will find is pervasive surveillance of everyone and everything—and ranking of people in its Social Credit System.

By March of this year, China had already punished 23 million people with low social credit scores by banning them from traveling. Control of speech has also spread to U.S. companies such as the NBA and ESPN, which are now censoring themselves as well, bowing to the wishes of the Chinese government and their own captive business partners.

The fourth enclosure is the advertising-supported commercial Internet. This is led by Google and Facebook, but also includes all the websites and services that depend on tracking-based advertising. This form of advertising, known as adtech, has in the last decade become pretty much the only kind of advertising online.

Today there are very few major websites left that don’t participate in what Shoshana Zuboff calls surveillance capitalism, and what Brett Frischmann and Evan Selinger, in their book by that title, call Re-engineering Humanity. Surveillance of individuals online is now so deep and widespread that nearly every news organization is either unaware of it or afraid to talk about it—in part because the advertising they run is aimed by it.

That’s why you’ll read endless stories about how bad Facebook and Google are, and how awful it is that we’re all being tracked everywhere like marked animals; but almost nothing about how the sites publishing stories about tracking also participate in exactly the same business—and far more surreptitiously. Reporting on their own involvement in the surveillance business is a third rail they won’t grab.

I know of only one magazine that took and shook that third rail, especially in the last year and a half.  That magazine was Linux Journal, where I worked for 24 years and was serving as editor-in-chief when it was killed by its owner in August. At least indirectly that was because we didn’t participate in the surveillance economy.

The fifth enclosure is protectionism. In Europe, for example, your privacy is protected by laws meant to restrict personal data use by companies online. As a result in Europe, you won’t see the Los Angeles Times or the Washington Post in your browsers, because those publishers don’t want to cope with what’s required by the EU’s laws.

While they are partly to blame, because they wish to remain in the reader-tracking business, the laws themselves are terribly flawed: for example, they urge every website to put up a “cookie notice” on pages greeting readers. In most cases clicking “accept” to the site’s cookies only gives the site permission to continue doing exactly the kind of tracking the laws are meant to prevent.

So, while the purpose of these laws is to make the Internet safer, in effect they also make its useful space smaller.

The sixth enclosure is what The Guardian calls “digital colonialism.” The biggest example of that is Facebook’s “Free Basics,” originally called “Internet.org.”

This is a China-like subset of the Internet, offered for free by Facebook in less developed parts of the world. It consists of a fully enclosed Web, only a few dozen sites wide, each hand-picked by Facebook. The rest of the Internet isn’t there.

The seventh enclosure is the forgotten past. Today the World Wide Web, which began as a kind of growing archive—a public set of published goods we could browse as if it were a library—is being lost. Forgotten. That’s because search engines are increasingly biased toward indexing and finding pages from the present and recent past, and toward following the tracks of monitored browsers. The Web is forgetting what’s old. Archival goods are starting to disappear, like snow on the water.

Why? Ask the algorithm.

Of course, you can’t. That brings us to our eighth enclosure: algorithmic opacity.

Consider for a moment how important power plants are, and how carefully governed they are as well. Every solar, wind, nuclear, hydro and fossil fuel power production system in the world is subject to inspection by whole classes of degreed and trained professionals.

There is nothing of the sort for the giant search engines and social networks of the world. Google and Facebook both operate dozens of data centers, each the size of many Walmart stores. Yet the inner workings of those data centers are almost entirely without government oversight.

This owes partly to the speed of change in what these centers do, but more to the simple fact that what they do is unknowable, by design. You can’t look at rows of computers with blinking lights in many acres of racks and have the first idea of what’s going on in there.

I would love to see research, for example, on the seventh enclosure I listed: on how well search engines continue to index old websites. Or research on anything else these systems do. The whole business is as opaque as a bowling ball with no holes.

I’m not even sure you can find anyone at Google who can explain exactly why its index does one thing or another, for any one person or another. In fact, I doubt Facebook is capable of explaining why any given individual sees any given ad. They aren’t designed for that. And the algorithm itself isn’t designed to explain itself, perhaps even to the employees responsible for it.

Or so I suppose.

In the interest of moving forward with research on these topics, I invite anyone at Google, Facebook, Bing or Amazon to help researchers at institutions such as the Ostrom Workshop: to explain exactly what’s going on inside their systems, and to provide testable and verifiable ways to research those goings-on.

The ninth and worst enclosure is the one inside our heads. Because, if we think the Internet is something we use by grace of Apple, Amazon, Facebook, Google and “providers” such as phone and cable companies, we’re only helping all those companies contain the Internet’s usefulness inside their walled gardens.

Not understanding the Internet can result in problems similar to ones we suffer by not understanding common pool resources such as the atmosphere, the oceans, and the Earth itself.

But there is a difference between common pool resources in the natural world, and the uncommon commons we have with the Internet.

See, while we all know that common-pool resources are in fact not limitless—even when they seem that way—we don’t have the same knowledge of the Internet, because its nature as a limitless non-thing is non-obvious.

For example, we know common pool resources in the natural world risk tragic outcomes if our use of them is ungoverned, either by good sense or governance systems with global reach. But we don’t know that the Internet is limitless by design, or that the only thing potentially tragic about it is how we restrict access to it and use of it, by enclosures such as the nine I just listed.

So my thesis here is this: if we can deeply and fully understand what the Internet is, why it is so important, and why it is in danger of enclosure, we can also understand why, ten years after Lin Ostrom won a Nobel prize for her work on the commons, that work may be exactly what we need to save the Internet as a boundless commons that can support countless others.

The Internet

We’ll begin with what makes the Internet possible: a protocol.

A protocol is a code of etiquette for diplomatic exchanges between computers. A form of handshake.

What the Internet’s protocol does is give all the world’s digital devices and networks a handshake agreement about how to share data between any point A and any point B in the world, across any intermediary networks.

When you send an email, or look at a website, anywhere in the world, the route the shared data takes can run through any number of networks between the two. You might connect from Bloomington to Denver through Chicago, Tokyo and Mexico City. Then, two minutes later, through Toronto and Miami. Some packets within your data flows may also be dropped along the way, but the whole session will flow just fine because the errors get noticed and the data re-sent and re-assembled on the fly.
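The invisibility of all that routing and retransmission is easy to demonstrate. Here is a minimal sketch in Python, using only the standard socket library, that sends bytes over a real TCP connection (to localhost, for simplicity). Notice that nothing in the application code touches routes, dropped packets, or reassembly; TCP/IP handles all of that underneath:

```python
import socket
import threading

# A tiny echo server: whatever bytes arrive come straight back.
def server(listener):
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # TCP guarantees ordered, complete delivery

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# The client side: connect, send, receive. No routing logic anywhere.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply.decode())  # hello
```

Whether the two endpoints are on one machine or on opposite sides of the planet, the code is the same; the path the packets take, and any re-sending along the way, never appears at this level.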

Oddly, none of this is especially complicated at the technical level, because what I just described is pretty much all the Internet does. It doesn’t concern itself with what’s inside the data traffic it routes, who is at the ends of the connections, or what their purposes are—any more than gravity cares about what it attracts.

Beyond the sunk costs of its physical infrastructure, and the operational costs of keeping the networks themselves standing up, the Internet has no first costs at its protocol level, and it adds no costs along the way. It also has no billing system.

In all these ways the Internet is, literally, neutral. It also doesn’t need regulators or lawmakers to make it neutral. That’s just its nature.

The Internet’s protocol is called TCP/IP, and by using it, all the networks of the world subordinate their own selfish purposes to it.

This is what makes the Internet’s protocol generous and supportive to an absolute degree toward every purpose to which it is put. It is a rising tide that lifts all boats.

In retrospect we might say the big networks within the Internet—those run by phone and cable companies, governments and universities—agreed to participate in the Internet because it was so obviously useful that there was no reason not to.

But the rising-tide nature of the Internet was not obvious to all of them at first. In retrospect, they didn’t realize that the Internet was a Trojan Horse, wheeled through their gates by geeks who looked harmless but in fact were bringing the world a technical miracle.

I can support that claim by noting that even though phone and cable companies of the world now make trillions of dollars because of it, they never would have invented it.

Two reasons for that. One is because it was too damn simple. The other is because they would have started with billing. And not just billing you and me. They would have wanted to bill each other, and not use something invented by another company.

A measure of the Internet’s miraculous nature is that actually billing each other would have been so costly and complicated that what they do with each other, to facilitate the movement of data to, from, and across their networks, is called peering. In other words, they charge each other nothing.

Even today it is hard for the world’s phone and cable companies—and even its governments, which have always been partners of a sort—to realize that the Internet became the world-wide way to communicate because it didn’t start with billing.

Again, all TCP/IP says is that this is a way for computers, networks, and everything connected to them, to get along. And it succeeded, producing instant worldwide peace among otherwise competing providers of networks and services. It made every network operator involved win a vast positive-sum game almost none of them knew they were playing. And most of them still don’t.

You know that old joke in which the big fish says to the little fish, “Hi guys, how’s the water?” and one of the little fish says to the other “What’s water?” In 2005, David Foster Wallace gave a legendary commencement address at Kenyon College that I highly recommend, titled “This is water.”

I suspect that, if Wallace were around today, he’d address his point to our digital world.

Human experience

Those of you who already know me are aware that my wife Joyce is as much a companion and collaborator of mine as Vincent Ostrom was of Lin. I bring this up because much of this talk is hers, including this pair of insights about the Internet: that it has no distance, and also no gravity.

Think about it: when you are on the Internet with another person—for example if you are in a chat or an online conference—there is no functional distance between you and the other person. One of you may be in Chicago and the other in Bangalore. But if the Internet is working, distance is gone. Gravity is also gone. Your face may be right-side-up on the other person’s screen, but it is absent of gravity. The space you both occupy is the other person’s two-dimensional rectangle. Even if we come up with holographic representations of ourselves, we are still incorporeal “on” the Internet. (I say “on” because we need prepositions to make sense of how things are positioned in the world. Yet our limited set of physical-world prepositions—over, under, around, through, beside, within and the rest—misdirect our attention away from our disembodied state in the digital one.)

Familiar as that disembodied state may be to all of us by now, it is still new to human experience and inadequately informed by our experience as embodied creatures. It is also hard for us to see both what our limitations are, and how limitless we are at the same time.

Joyce points out that we are also highly adaptive creatures, meaning that eventually we’ll figure out what it means to live where there is no distance or gravity, much as astronauts learn to live as weightless beings in space.

But in the meantime, we’re having a hard time seeing the nature and limits of what’s good and what’s bad in this new environment. And that has to do, at least in part, with forms of enclosure in that world—and how we are exploited within private spaces where we hardly know we are trapped.

In The Medium is the Massage, Marshall McLuhan says every new medium, every new technology, “works us over completely.” Those are his words: works us over completely. Such as now, with digital technology, and the Internet.

I was talking recently with a friend about where our current digital transition ranks among all the other transitions in history that each have a formal cause. Was becoming digital the biggest thing since the industrial revolution? Since movable type? Writing? Speech?

No, he said. “It’s the biggest thing since oxygenation.”

In case you weren’t there, or weren’t paying attention in geology class, oxygenation happened about 2.5 billion years ago. Which brings us to our next topic:

Institutions

Journalism is just one example of a trusted institution that is highly troubled in the digital world.

It worked fine in a physical world where truth-tellers who dug into topics and reported on them with minimized prejudice were relatively scarce yet easy to find, and to trust. But in a world flooded with information and opinion—a world where everyone can be a reporter, a publisher, a producer, a broadcaster; where the “news cycle” has the lifespan of a joke; and where news and gossip have become almost indistinguishable while being routed algorithmically to amplify prejudice and homophily—journalism has become an anachronism: still important, but all but drowning in a flood of biased “content” paid for by surveillance-led adtech.

People are still hungry for good information, of course, but our appetites are too easily fed by browsing through the surfeit of “content” on the Internet, which we can easily share by text, email or social media. Even if we do the best we can to share trustworthy facts and other substances that sound like truth, we remain suspended in a techno-social environment we mostly generate and re-generate ourselves. Kind of like our ancestral life forms made sense of the seas they oxygenated, long ago.

The academy is another institution that’s troubled in our digital time. After all, education on the Internet is easy to find. Good educational materials are easy to produce and share. For example, take Khan Academy, which started with one guy tutoring his cousin through online videos.

Authority must still be earned, but there are now countless non-institutional ways to earn it. Credentials still matter, but less than they used to, and not in the same ways. Ad hoc education works in ways that can be cheap or free, while institutions of higher education remain very expensive. What happens when the market for knowledge and know-how starts moving past requirements for advanced degrees that might take students decades of their lives to pay off?

For one example of that risk already at work, take computer programming.

Which do you think matters more to a potential employer of programmers—a degree in computer science or a short but productive track record? For example, by contributing code to the Linux operating system?

To put this in perspective, Linux and operating systems like it are inside nearly every smart thing that connects to the Internet, including TVs, door locks, the world’s search engines, social networks, laptops and mobile phones. Nothing could be more essential to computing life.

At the heart of Linux is what’s called the kernel. For code to get into the kernel, it has to pass muster with other programmers who have already proven their worth, and then through testing and debugging. If you’re looking for a terrific programmer, everyone contributing to the Linux kernel is well-proven. And there are thousands of them.

Now here’s the thing. It doesn’t matter whether those people have degrees in computer science, or even whether they’ve had any formal training. What matters, for our purposes here, is that a remarkable number of them, perhaps most, have neither.

I know a little about this because, in the course of my work at Linux Journal, I would sometimes ask groups of alpha Linux programmers where they learned to code. Almost none told me “school.” Most were self-taught or learned from each other.

My point here is that the degree to which the world’s most essential and consequential operating system depends on the formal education of its makers is roughly zero.

See, the problem for educational institutions in the digital world is that most were built to leverage scarcity: scarce authority, scarce materials, scarce workspace, scarce time, scarce credentials, scarce reputation, scarce anchors of trust. To a highly functional degree we still need and depend on what only educational institutions can provide, but that degree is a lot lower than it used to be, a lot more varied among disciplines, and it risks continuing to decline as time goes on.

It might help at this point to see gravity in some ways as a problem the Internet solves. Because gravity is top-down. It fosters hierarchy and bell curves, sometimes where we need neither.

Absence of gravity instead fosters heterarchy and polycentrism. And, as we know, at the Ostrom Workshop perhaps better than anywhere, commons are good examples of heterarchy and polycentrism at work.

Knowledge Commons

In the first decade of our new millennium, Elinor Ostrom and Charlotte Hess—already operating in our new digital age—extended the commons category to include knowledge, calling it a complex ecosystem that operates as a commons: a shared resource subject to social dilemmas.

They looked at the ease of access to digital forms of knowledge, and at easy new ways to store, access and share knowledge as a commons. They also looked at the nature of knowledge and its qualities of non-rivalry and non-excludability, both unlike what characterizes a natural commons, with its scarcities of rivalrous and excludable goods.

A knowledge commons, they said, is characterized by abundance. This is one reason why what Yochai Benkler calls Commons Based Peer Production is both easy and rampant on the Internet, giving us, among many other things, both the free software and open source movements in code development and sharing, plus the Internet and the Web.

Commons Based Peer Production also demonstrates how collaboration and non-material incentives can produce better quality products, and less social friction in the course of production.

I’ve given Linux as one example of Commons Based Peer Production. Others are Wikipedia and the Internet Archive. We’re also seeing it within the academy, for example with Indiana University’s own open archives, making research more accessible and scholarship richer and more productive.

Every one of those examples comports with Lin Ostrom’s design principles:

  1. clearly defined group boundaries;
  2. rules governing use of common goods within local needs and conditions;
  3. participation in modifying rules by those affected by the rules;
  4. accessible and low cost ways to resolve disputes;
  5. developing a system, carried out by community members, for monitoring members’ behavior;
  6. graduated sanctions for rule violators;
  7. and governing responsibility in nested tiers from the lowest level up to the entire interconnected system.

But there is also a crisis with Commons Based Peer Production on the Internet today.

Programmers who ten or fifteen years ago would not have participated in enclosing their own environments are doing exactly that, for example with 5G, which is designed to put the phone companies in charge of what we can do on the Internet.

The 5G-enclosed Internet might be faster and handier in many ways, but the range of freedoms for each of us there will be bounded by the commercial interests of the phone companies and their partners, and subject to none of Lin’s rules for governing a commons.

Consider this: every one of the nine enclosures I listed at the beginning of this talk is enabled by programmers who either forgot or never learned about the freedom and openness that made the free and open Internet possible. They are employed in the golden egg gathering business—not in one that appreciates the goose that lays those eggs, and which their predecessors gave to us all.

But this isn’t the end of the world. We’re still at the beginning. And a good model for how to begin is—

The physical world

It is significant that all the commons the Ostroms and their colleagues researched in depth were local. Their work established beyond any doubt the importance of local knowledge and local control.

I believe demonstrating this in the digital world is our best chance of saving our digital world from the nine forms of enclosure I listed at the top of this talk.

It’s our best chance because there is no substitute for reality. We may be digital beings now, as well as physical ones. There are great advantages, even in the digital world, to operating in the here-and-now physical world, where all our prepositions still work, and our metaphors still apply.

Back to Joyce again.

In the mid ‘90s, when the Internet was freshly manifest on our home computers, I was mansplaining to Joyce how this Internet thing was finally the global village long promised by tech.

Her response was, “The sweet spot of the Internet is local.” She said that’s because local is where the physical and the virtual intersect. It’s where you can’t fake reality, because you can see and feel and shake hands with it.

She also said the first thing the Internet would obsolesce would be classified ads in newspapers. That’s because the Internet would be a better place than classifieds for parents to find a crib some neighbor down the street might have for sale. Then Craigslist came along and did exactly that.

About a year and a half ago, we had an instructive experience with how the real world and the Internet can work together helpfully at the local level. That’s when a giant rainstorm fell on the mountains behind Santa Barbara, where we live, and the town next door, called Montecito. This was also right after the Thomas Fire—the largest at the time in recorded California history—had burned all the vegetation away, and there was a maximum risk of what geologists call a “debris flow.”

The result was the biggest debris flow in the history of the region: a flash flood of rock and mud that flowed across Montecito like lava from a volcano. Nearly two hundred homes were destroyed, and twenty-three people were killed. Two of them were never found, because it’s hard to find victims buried under what turned out to be at least twenty thousand truckloads of boulders and mud.

Right afterwards, all of Montecito was evacuated, and very little news got out while emergency and rescue workers did their jobs. Our local news media did an excellent job of covering this event as a story. But I also noticed that not much was being said about the geology involved.

So, since I was familiar with debris flows out of the mountains above Los Angeles, where they have infrastructure that’s ready to handle this kind of thing, I put up a post on my blog titled “Making sense of what happened to Montecito.” In that post I shared facts about the geology involved, and also published the only list on the Web of all the addresses of homes that had been destroyed. Visits to my blog jumped from dozens a day to dozens of thousands. Lots of readers also helped improve what I wrote and re-wrote.

All of this happened over the Internet, but it pertained to a real-world local crisis.

Now here’s the thing. What I did there wasn’t writing a story. I didn’t do it for the money, and my blog is a noncommercial one anyway. I did it to help my neighbors. I did it by not being a bystander.

I also did it in the context of a knowledge commons.

Specifically, I was respectful of boundaries of responsibility; notably those of local authorities—rescue workers, law enforcement, reporters from local media, city and county workers preparing reports, and so on. I gave much credit where it was due and didn’t step on the toes of others helping out as well.

An interesting fact about journalism there at the time was the absence of fake news. Sure, there was plenty of finger-pointing in blog comments and in social media. But it was marginalized away from the fact-reporting that mattered most. There was a very productive ecosystem of information, made possible by the Internet in everyone’s midst. And by everyone, I mean lots of very different people.

Humanity

We are learning creatures by nature. We can’t help it. And we don’t learn by freight forwarding.

By that, I mean what I am doing here, and what we do with each other when we talk or teach, is not delivering a commodity called information, as if we were forwarding freight. Something much more transformational is taking place, and this is profoundly relevant to the knowledge commons we share.

Consider the word information. It’s a noun derived from the verb to inform, which in turn is derived from the verb to form. When you tell me something I don’t know, you don’t just deliver a sum of information to me. You form me. As a walking sum of all I know, I am changed by that.

This means we are all authors of each other.

In that sense, the word authority belongs to the right we give others to author us: to form us.

Now look at how much more of that can happen on our planet, thanks to the Internet, with its absence of distance and gravity.

And think about how that changes every commons we participate in, as both physical and digital beings. And how much we need guidance to keep from screwing up the commons we have, and to form the ones we don’t have yet but might in the future—if we don’t screw things up.

A rule in technology is that what can be done will be done—until we find out what shouldn’t be done. Humans have done this with every new technology and practice from speech to stone tools to nuclear power.

We are there now with the Internet. In fact, many of those enclosures I listed are well-intended efforts to limit dangerous uses of the Internet.

And now we are at a point where some of those too are a danger.

What might be the most sensible way to look at the Internet and its uses?

I think the answer is governance predicated on the realization that the Internet is perhaps the ultimate commons, and subject to both research and guidance informed by Lin Ostrom’s rules.

And I hope that guides our study.

There is so much to work on: expansion of agency, sensibility around license and copyright, freedom to benefit individuals and society alike, protections that don’t foreclose opportunity, saving journalism, modernizing the academy, creating and sharing wealth without victims, de-financializing our economies… the list is very long. And I look forward to working with many of us here on answers to these and many other questions.

Thank you. 

Sources

Ostrom, Elinor. Governing the Commons. Cambridge University Press, 1990.

Ostrom, Elinor, and Hess, Charlotte, editors. Understanding Knowledge as a Commons: From Theory to Practice. MIT Press, 2011. https://mitpress.mit.edu/books/understanding-knowledge-commons
Full text online: https://wtf.tw/ref/hess_ostrom_2007.pdf

Aligica, Paul D., and Tarko, Vlad. “Polycentricity: From Polanyi to Ostrom, and Beyond.” https://asp.mercatus.org/system/files/Polycentricity.pdf

Ostrom, Elinor. “Coping With Tragedies of the Commons.” 1998. https://pdfs.semanticscholar.org/7c6e/92906bcf0e590e6541eaa41ad0cd92e13671.pdf

Fennell, Lee Anne. “Ostrom’s Law: Property Rights in the Commons.” March 3, 2011. https://www.thecommonsjournal.org/articles/10.18352/ijc.252/

Savage, Christopher W. “Managing the Ambient Trust Commons: The Economics of Online Consumer Information Privacy.” Stanford Law School, 2019. https://law.stanford.edu/wp-content/uploads/2019/01/Savage_20190129-1.pdf

 

________________

*I wrote it using—or struggling in—the godawful Outline view in Word. Since I succeeded (most don’t, because they can’t or won’t, with good reason), I’ll brag on succeeding at the subhead level:

As I’m writing this, in February 2020, Dave Winer is working on what he calls writing on rails. That’s what he gave the pre-Internet world with MORE several decades ago, and I’m now helping him with the Internet-native kind, as a user. He explains that here. (MORE was, for me, like writing on rails. It’ll be great to go back—or forward—to that again.)

Journalism’s biggest problem (as I’ve said before) is what it’s best at: telling stories. That’s what Thomas B. Edsall (of Columbia and The New York Times) does in Trump’s Digital Advantage Is Freaking Out Democratic Strategists, published in today’s New York Times. He tells a story. Or, in the favored parlance of our time, a narrative, about what he sees as Republicans’ superior use of modern methods for persuading voters:

Experts in the explosively growing field of political digital technologies have developed an innovative terminology to describe what they do — a lexicon that is virtually incomprehensible to ordinary voters. This language provides an inkling of the extraordinarily arcane universe politics has entered:

geofencing, mass personalization, dark patterns, identity resolution technologies, dynamic prospecting, geotargeting strategies, location analytics, geo-behavioural segment, political data cloud, automatic content recognition, dynamic creative optimization.

Geofencing and other emerging digital technologies derive from microtargeting marketing initiatives that use consumer and other demographic data to identify the interests of specific voters or very small groups of like-minded individuals to influence their thoughts or actions.

In fact the “arcane universe” he’s talking about is the direct marketing playbook, which was born offline as the junk mail business. In that business, tracking individuals and bothering them personally is a fine and fully rationalized practice. And let’s face it: political campaigning has always wanted to get personal. It’s why we have mass mailings, mass callings, mass textings and the rest of it—all to personal addresses, numbers and faces.

Coincidence: I just got this:

There is nothing new here other than (at the moment) the Trump team doing it better than any Democrat. (Except maybe Bernie.) Obama’s team was better at it in ’08 and ’12. Trump’s was better at it in ’16 and is better again in ’20.*

However, debating which candidates do the best marketing misdirects our attention away from the destruction of personal privacy by constant tracking of our asses online—including tracking of asses by politicians. This, I submit, is a bigger and badder issue than which politicians do the best direct marketing. It may even be bigger than who gets elected to what in November.

As issues go, personal privacy is soul-deep. Who gets elected, and how, are not.

As I put it here,

Surveillance of people is now the norm for nearly every website and app that harvests personal data for use by machines. Privacy, as we’ve understood it in the physical world since the invention of the loincloth and the door latch, doesn’t yet exist. Instead, all we have are the “privacy policies” of corporate entities participating in the data extraction marketplace, plus terms and conditions they compel us to sign, either of which they can change on a whim. Most of the time our only choice is to deny ourselves the convenience of these companies’ services or live our lives offline.

Worse is that these are proffered on the Taylorist model, meaning mass-produced.

There is a natural temptation to want to fix this with policy. This is a mistake for two reasons:

  1. Policy-makers are themselves part of the problem. Hell, most of their election campaigns are built on direct marketing. And law enforcement (which carries out certain forms of policy) has always regarded personal privacy as a problem to overcome rather than a solution to anything. Example.
  2. Policy-makers often screw things up. Exhibit A: the EU’s GDPR, which has done more to clutter the Web with insincere and misleading cookie notices than it has to advance personal privacy tech online. (I’ve written about this a lot. Here’s one sample.)

We need tech of our own. Terms and policies of our own. In the physical world, we have privacy tech in the forms of clothing, shelter, doors, locks and window shades. We have policies in the form of manners, courtesies, and respect for privacy signals we send to each other. We lack all of that online. Until we invent it, the most we’ll do to achieve real privacy online is talk about it, and inveigh for politicians to solve it for us. Which they won’t.

If you’re interested in solving personal privacy at the personal level, take a look at Customer Commons. If you want to join our efforts there, talk to me.

_____________
*The Trump campaign also has the enormous benefit of an already-chosen Republican ticket. The Democrats have a mess of candidates and a split in the party between young and old, socialists and moderates, and no candidate as interesting as Trump. (Also, I’m not Joyce.)

At this point, it’s no contest. Trump is the biggest character in the biggest story of our time. (I explain this in Where Journalism Fails.) And he’s on a glide path to winning in November, just as I said he was in 2016.
