problems


If the GDPR did what it promised to do, we’d be celebrating Privmas today. Two years after the GDPR became enforceable, privacy would be the norm rather than the exception in the online world.

That hasn’t happened, but it’s not just because the GDPR is poorly enforced. It’s because it’s too easy to claim compliance with the letter of the GDPR while violating its spirit.

Want to see how easy? Try searching for GDPR+compliance+consent:

https://www.google.com/search?q=gdpr+compliance+consent

Nearly all of the ~21,000,000 results you’ll get are from sources pitching ways to continue tracking people online, mostly by obtaining “consent” to privacy violations that almost nobody would welcome in the offline world—exactly the kind of icky practice that the GDPR was meant to stop.

Imagine if every shop you passed on the street sent someone outside to painlessly jab a needle into your neck and then inject a load of tracking beacons into your bloodstream. Would you be okay with that?

Well, that’s what you’re saying when you click “Accept” or “Got it” when a typical GDPR-complying website presents a cookie notice that says something like this:

That notice is from Vice, by the way. Here’s how the top story on Vice’s front page looks in Belgium (through a VPN), with Privacy Badger looking for trackers:

What’s typical here is that a publication, with no sense of irony, runs a story about privacy-violating harvesting of personal data while doing the same.

Yes, Google says you’re anonymized somehow in both doubleclick and google-analytics, but it’s you they are stalking. (Look up stalk as a verb. Top result: “to pursue or approach prey, quarry, etc., stealthily.” That’s what’s going on.)

Get this: There is also no way for you to know exactly how you are being tracked or what is done with that information, because the instrument for that—a tool on your side—isn’t available. It probably hasn’t even been invented. You also have no record of agreeing to anything, other than a cookie that’s hard to find, examine or explain, deep in your browser’s bowels. Consenting to a cookie notice leaves nothing resembling an audit trail.
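What might such a record even look like? Here is a minimal sketch of a hypothetical “consent receipt” a browser could keep each time you click “Accept.” Nothing like it is standard today; every field name below is invented:

```python
# Hypothetical "consent receipt" a browser could keep each time you
# click "Accept". Nothing like this is standard in today's cookie-notice
# systems; every field name here is invented.
import json
from datetime import datetime, timezone

receipt = {
    "site": "https://www.vice.com/",
    "when": datetime.now(timezone.utc).isoformat(),
    "consented_to": ["functional cookies", "advertising cookies"],
    "third_parties_allowed": ["doubleclick.net", "google-analytics.com"],
    "policy_version": "unknown",  # sites rarely version what you agreed to
    "how_to_revoke": None,        # and rarely say how to take consent back
}

print(json.dumps(receipt, indent=2))
```

Until something like that exists on your side, “consent” is a one-way claim the site holds and you can’t check.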

So let’s go back to a simple privacy principle here: It is just as wrong to track a person like a marked animal in the online world as it is in the offline one.

The GDPR was made to thwart online tracking. On the whole it has not. Instead, it has made the experience of being tracked online a worse one.

Yes, that was not the intent. And yes, the GDPR has done some good.

But if you are any less followed online today than you were when the GDPR became enforceable two years ago, it’s because you and the browser makers have worked to thwart at least some of it.

And tracking is still worse than rampant: it remains a defaulted practice for both advertising and site analytics.

So, nothing to celebrate. Not this Privmas.


In the library of Earth’s history, there are missing books, and within books there are missing chapters, written in rock that is now gone. The greatest example of “gone” rock is what John Wesley Powell observed in 1869, on his expedition by boat through the Grand Canyon. Floating down the Colorado River, he saw the canyon’s mile-thick layers of reddish sedimentary rock resting on a basement of gray non-sedimentary rock. Observing this, he correctly assumed that the upper layers did not continue from the bottom one, because time had clearly passed between the basement rock and the floors of rock above it. He didn’t know how much time, and could hardly guess. The answer turned out to be more than a billion years. The walls of the Grand Canyon say nothing about what happened during that time. Geology calls that nothing an unconformity.

In the decades since Powell made his notes, the same gap has been found all over the world, and is now called the Great Unconformity. Because of that unconformity, geology knows close to nothing about what happened in the world through stretches of time up to 1.6 billion years long.

All of those stretches end abruptly with the Cambrian Explosion, which began about 541 million years ago, when the Cambrian period arrived, and with it an amplitude of history, written in stone.

Many theories attempt to explain what erased such a large span of Earth’s history, but the prevailing paradigm is perhaps best expressed in “Neoproterozoic glacial origin of the Great Unconformity”, published on the last day of 2018 by nine geologists writing for the National Academy of Sciences. Put simply, they blame snow. Lots of it: enough to turn the planet into one giant snowball, informally called Snowball Earth. A more accurate name for this time would be Glacierball Earth, because glaciers, all formed from accumulated snow, apparently covered most or all of Earth’s land during the Great Unconformity—and most or all of the seas as well.

The relevant fact about glaciers is that they don’t sit still. They push immensities of accumulated ice down on landscapes and then spread sideways, pulverizing and scraping against adjacent landscapes, abrading their ways through mountains and across hills and plains like a trowel through wet cement. In this manner, glaciers scraped a vastness of geological history off the Earth’s continents and sideways into ocean basins, so plate tectonics could hide the evidence. (A fact little known outside geology is that nearly all the world’s ocean floors are young: born in spreading centers and killed by subduction under continents or piled as debris on continental edges here and there.) As a result, the stories of Earth’s missing history are partly told by younger rock that remembers only that a layer of moving ice had erased pretty much everything other than a signature on its work.

I bring all this up because I see something analogous to Glacierball Earth happening right now, right here, across our new worldwide digital sphere. A snowstorm of bits is falling on the virtual surface of our virtual sphere, which itself is made of bits even more provisional and temporary than the glaciers that once covered the physical Earth. Nearly all of this digital storm, vivid and present at every moment, is doomed to vanish, because it lacks even a glacier’s talent for accumulation.

There is nothing about a bit that lends itself to persistence, other than the media it is written on, if it is written at all. Form follows function, and right now, most digital functions, even those we call “storage”, are temporary. The largest commercial facilities for storing digital goods are what we fittingly call “clouds”. By design, these are built to remember no more of what they once contained than does an empty closet. Stop paying for cloud storage, and away goes your stuff, leaving no fossil imprints. Old hard drives, CDs and DVDs might persist in landfills, but people in the far future may look at a CD or a DVD the way a geologist today looks at Cambrian zircons: as hints that digital activities may have happened during an interval about which otherwise nothing is known. If those fossils speak of what’s happening now at all, it will be of a self-erasing Digital Earth that began in the late 20th century.

This isn’t my theory. It comes from my wife, who has long claimed that future historians will look on our digital age as an invisible one, because it sucks so royally at archiving itself.

Credit where due: the Internet Archive is doing its best to make sure that some stuff will survive. But what will keep that archive alive, when all the media we have for recalling bits—from spinning platters to solid state memory—are volatile by nature?

My own future unconformity is announced by the stack of books on my desk, propping up the laptop on which I am writing. Two of those books are self-published compilations of essays I wrote about technology in the mid-1980s, mostly for publications that are long gone. The originals are on floppy disks that can be read only by PCs and apps of that time, some of which are buried in lower strata of boxes in my garage. I just found a floppy with some of those essays. (It’s the one with a blue edge in the wood case near the right end of the photo above.) If those still retain readable files, I am sure there are ways to recover at least the raw ASCII text. But I’m still betting the paper copies of the books under this laptop will live a lot longer than will the floppies or my mothballed PCs, all of which are likely bricked by decades of un-use.
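If I do get that far, pulling readable text off a raw disk image is the easy part. Here’s a minimal sketch, working much like the Unix strings utility, assuming the floppy has already been imaged to a file (with dd or similar); “floppy.img” is a hypothetical name:

```python
# Scan a raw floppy image for runs of printable ASCII, much as the
# Unix `strings` utility does. Assumes the disk was already imaged to
# floppy.img (e.g. with dd); the filename is hypothetical.
import re

with open("floppy.img", "rb") as f:
    raw = f.read()

# Keep runs of 8 or more printable ASCII bytes; shorter runs are noise.
for run in re.findall(rb"[\x20-\x7e]{8,}", raw):
    print(run.decode("ascii"))
```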

As for other media, the prospect isn’t any better.

At the base of my video collection is a stratum of VHS videotapes, atop which are strata of Video8 and Hi8 tapes, and then one of digital stuff burned onto CDs and stored in hard drives, most of which have been disconnected for years. Some of those drives have interfaces and connections no longer supported by any computers being made today. Although I’ve saved machines to play all of them, none I’ve checked still work. One choked to death on a CD I stuck in it. That was a failure that stopped me from making Christmas presents of family memories recorded on old tapes and DVDs. I meant to renew the project sometime before the following Christmas, but that didn’t happen. Next Christmas? Maybe.

Then there are my parents’ 8mm and 16mm movies filmed between the 1930s and the 1960s. In 1989, my sister and I had all of those copied over to VHS tape. We then recorded my mother annotating the tapes onto companion cassette tapes while we all watched the show. I still have the original film in a box somewhere, but I haven’t found any of the tapes. Mom died in 2003 at age 90, so her whole generation is now gone.

The base stratum of my audio past is a few dozen open reel tapes recorded in the 1950s and 1960s. Above those are cassette and micro-cassette tapes, plus many Sony MiniDiscs recorded in ATRAC, a proprietary compression algorithm now used by nobody, including Sony. Although I do have ways to play some (but not all) of those, I’m cautious about converting any of them to digital formats (Ogg, MPEG or whatever), because all digital storage media are likely to become obsolete, dead, or both—as will formats, algorithms and codecs. Already I have dozens of dead external hard drives in boxes and drawers. And, since no commercial cloud service is committed to digital preservation in perpetuity in the absence of payment, my files saved in clouds are sure to be flushed once my heirs and I stop paying for their preservation.

Same goes for my photographs. My old photographs are stored in boxes and albums of photos, negatives and Kodak slide carousels. My digital photographs are spread across a mess of duplicated back-up drives totaling many terabytes, plus a handful of CDs. About 60,000 photos are exposed to the world on Flickr’s cloud, where I maintain two Pro accounts (here and here) for $50/year apiece. More are in the Berkman Klein Center’s pro account (here) and Linux Journal‘s (here). It is currently unclear whether any of that will survive once any of those entities stops paying the yearly fee. SmugMug, which now owns Flickr, has said some encouraging things about photos such as mine, all of which are Creative Commons-licensed to encourage re-use. But, as Geoffrey West tells us, companies are mortal. All of them die.

As for my digital works as a whole (or anybody’s), there is great promise in what the Internet Archive and Wikimedia Commons do, but there is no guarantee that either will last for decades more, much less for centuries or millennia. And neither is able to archive everything that matters (much as they might like to).

It should also be sobering to recognize that nobody owns a domain on the internet. All those “sites” with “domains” at “locations” and “addresses” are rented. We pay a sum to a registrar for the right to use a domain name for a finite period of time. There are no permanent domain names or IP addresses. In the digital world, finitude rules.

So the historic progression I see, and try to illustrate in the photo at the beginning of this post, is from hard physical records through digital ones we hold for ourselves, and then up into clouds that go away. Everything digital is snow falling and disappearing on the waters of time.

Will there ever be a way to save what we ironically call our digital “assets” for more than a few dozen years? Or is all of it doomed by its own nature to disappear, leaving little more evidence of its passage than a Digital Unconformity, when everything was forgotten?

I can’t think of any technical questions more serious than those two.


The original version of this post appeared in the March 2019 issue of Linux Journal.

A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

Yesterday (March 29), Zoom updated its privacy policy with a major rewrite. The new language is far clearer than what it replaced, which had caused the concerns I detailed in my previous three posts:

  1. Zoom needs to clean up its privacy act,
  2. More on Zoom and privacy, and
  3. Helping Zoom

Those concerns were shared by Consumer Reports, Forbes and others as well. (Here’s Consumer Reports‘ latest on the topic.)

Mainly the changes clarify the difference between Zoom’s services (what you use to conference with other people) and its websites, zoom.us and zoom.com (which are just one site: the latter redirects to the former). As I read the policy, nothing in the services is used for marketing. Put another way, your Zoom sessions are firewalled from adtech, and you shouldn’t worry about personal information leaking to adtech (tracking-based advertising) systems.

The websites are another matter. Zoom calls those websites—its home pages—”marketing websites.” This, I suppose, is so they can isolate their involvement with adtech to their marketing work.

The problem with this is an optical one: encountering a typically creepy cookie notice and consent gauntlet (which still defaults hurried users to “consenting” to being tracked through “functional” and “advertising” cookies) on Zoom’s home page still conveys the impression that these consents, and these third parties, work across everything Zoom does, and not just its home pages.

And why call one’s home on the Web a “marketing website”—even if that’s mostly what it is? Zoom is classier than that.

My advice to Zoom is to just drop the jive. There will be no need for Zoom to disambiguate services and websites if neither is involved with adtech at all. And Zoom will be in a much better position to trumpet its commitment to privacy.

That said, this privacy policy rewrite is a big help. So thank you, Zoom, for listening.

 

[This is the third of four posts. The last of those, Zoom’s new privacy policy, visits the company’s positive response to input such as mine here. So you might want to start with that post (because it’s the latest) and look at the other three, including this one, after that.]

I really don’t want to bust Zoom. No tech company on Earth is doing more to keep civilization working at a time when it could so easily fall apart. Zoom does that by providing an exceptionally solid, reliable, friendly, flexible, useful (and even fun!) way for people to be present with each other, regardless of distance. No wonder Zoom is now to conferencing what Google is to search. Meaning: it’s a verb. Case in point: between the last sentence and this one, a friend here in town sent me an email that began with this:

That’s a screen shot.

But Zoom also has problems, and I’ve spent two posts, so far, busting them for one of those problems: their apparent lack of commitment to personal privacy:

  1. Zoom needs to clean up its privacy act
  2. More on Zoom and privacy

With this third post, I’d like to turn that around.

I’ll start with the email I got yesterday from a person at a company engaged by Zoom for (seems to me) reputation management, asking me to update my posts based on the “facts” (his word) in this statement:

Zoom takes its users’ privacy extremely seriously, and does not mine user data or sell user data of any kind to anyone. Like most software companies, we use third-party advertising service providers (like Google) for marketing purposes: to deliver tailored ads to our users about Zoom products the users may find interesting. (For example, if you visit our website, later on, depending on your cookie preferences, you may see an ad from Zoom reminding you of all the amazing features that Zoom has to offer). However, this only pertains to your activity on our Zoom.us website. The Zoom services do not contain advertising cookies. No data regarding user activity on the Zoom platform – including video, audio and chat content – is ever used for advertising purposes. If you do not want to receive targeted ads about Zoom, simply click the “Cookie Preferences” link at the bottom of any page on the zoom.us site and adjust the slider to ‘Required Cookies.’

I don’t think this squares with what Zoom says in the “Does Zoom sell Personal Data?” section of its privacy policy (which I unpacked in my first post, and that Forbes, Consumer Reports and others have also flagged as problematic)—or with the choices provided in Zoom’s cookie settings, which list 70 (by my count) third parties whose involvement you can opt into or out of (by a set of options I unpacked in my second post). The logos in the image above are just 16 of those 70 parties, some of which include more than one domain.

Also, if all the ads shown to users are just “about Zoom,” why are those other companies in the picture at all? Specifically, under “About Cookies on This Site,” the slider is defaulted to allow all “functional cookies” and “advertising cookies,” the latter of which are “used by advertising companies to serve ads that are relevant to your interests.” Wouldn’t Zoom be in a better position to know your relevant (to Zoom) interests than all those other companies?

More questions:

  1. Are those third parties “processors” under the GDPR, or “service providers” under the CCPA’s definition? (I’m not an authority on either, so I’m asking.)
  2. How do these third parties know what your interests are? (Presumably by tracking you, or by learning from others who do. But it would help to know more.)
  3. What data about you do those companies give to Zoom (or to each other, somehow) after you’ve been exposed to them on the Zoom site?
  4. What targeting intelligence do those companies bring with them to Zoom’s pages because you’re already carrying cookies from those companies, and those cookies can alert those companies (or others, for example through real time bidding auctions) to your presence on the Zoom site?
  5. If all Zoom wants to do is promote Zoom products to Zoom users (as that statement says), why bring in any of those companies?

Here is what I think is going on (and I welcome corrections): Because Zoom wants to comply with GDPR and CCPA, they’ve hired TrustArc to put that opt-out cookie gauntlet in front of users. They could just as easily have used Quantcast‘s system, or consentmanager‘s, or OneTrust‘s, or somebody else’s.

All those services are designed to give companies a way to obey the letter of privacy laws while violating their spirit. That spirit says stop tracking people unless they ask you to, consciously and deliberately. In other words, opting in, rather than opting out. Every time you click “Accept” to one of those cookie notices, you’ve just lost one more battle in a losing war for your privacy online.

I also assume that Zoom’s deal with TrustArc—and, by implication, with all those 70 other parties listed in the cookie gauntlet—requires that Zoom put a bunch of weasel-y jive in their privacy policy. Which looks suspicious as hell, because it is.

Zoom can fix all of this easily by just stopping it. Other companies—ones that depend on adtech (tracking-based advertising)—don’t have that luxury. But Zoom does.

If we take Zoom at its word (in that paragraph they sent me), they aren’t interested in being part of the adtech fecosystem. They just want help in aiming promotional ads for their own services, on their own site.

Three things about that:

  1. Neither the Zoom site, nor the possible uses of it, are so complicated that they need aiming help from those third parties.
  2. Zoom is in the world’s leading sellers’ market right now, meaning they hardly need to advertise at all.
  3. Being in adtech’s fecosystem raises huge fears about what Zoom and those third parties might be doing where people actually use Zoom most of the time: in its app. Again, Consumer Reports, Forbes and others have assumed, as have I, that the company’s embrace of adtech in its privacy policy means that the same privacy exposures exist in the app (where they are also easier to hide).

By severing its ties with adtech, Zoom can start restoring people’s faith in its commitment to personal privacy.

There’s a helpful model for this: Apple’s privacy policy. Zoom is in a position to have a policy like that one because, like Apple, Zoom doesn’t need to be in the advertising business. In fact, Zoom could follow Apple’s footprints out of the ad business.

And then Zoom could do Apple one better, by participating in work going on already to put people in charge of their own privacy online, at scale. In my last post, I named two organizations doing that work. Four more are the Me2B Alliance, Kantara, ProjectVRM, and MyData.

I’d be glad to help with that too. If anyone at Zoom is interested, contact me directly this time. Thanks.

 

 

 

[This is the second of four posts. The last of those, Zoom’s new privacy policy, visits the company’s positive response to input such as mine here. So you might want to start with that post (because it’s current) and look at the other three, including this one, after that.]

Zoom needs to clean up its privacy act, which I posted yesterday, hit a nerve. While this blog normally gets about 50 reads a day, by the end of yesterday it got more than 16,000. So far this morning (11:15am Pacific), it has close to 8,000 new reads. Most of those owe to this posting on Hacker News, which topped the charts all yesterday and has 483 comments so far. If you care about this topic, I suggest reading them.

Also, while this was going down, as a separate matter (with a separate thread on Hacker News), Zoom got busted for leaking personal data to Facebook, and promptly plugged the leak. Other privacy issues have also come up for Zoom. For example, this one.

But I want to stick to the topic I raised yesterday, which requires more exploration, for example into how one opts out from Zoom “selling” one’s personal data. This morning I finished a pass at that, and here’s what I found.

First, by turning off Privacy Badger on Chrome (my main browser of the moment) I got to see Zoom’s cookie notice on its index page, https://zoom.us/. (I know, I should have done that yesterday, but I didn’t. Today I did, and we proceed.) It said,

To opt out of Zoom making certain portions of your information relating to cookies available to third parties or Zoom’s use of your information in connection with similar advertising technologies or to opt out of retargeting activities which may be considered a “sale” of personal information under the California Consumer Privacy Act (CCPA) please click the “Opt-Out” button below.

The buttons below said “Accept” (pre-colored a solid blue, to encourage a yes), “Opt-Out” and “More Info.” Clicking “Opt-Out” made the notice disappear, revealing, in the tiny print at the bottom of the page, linked text that says “Do Not Sell My Personal Information.” Clicking on that link took me to the same place I later went by clicking on “More Info”: a pagelet (pop-over) that’s basically an opt-in notice:

By clicking on that orange button, you’ve opted in… I think. Anyway, I didn’t click it, but instead clicked on a smaller and less noticeable “advanced settings” link off to the right. This took me to a pagelet with this:

The “view cookies” links popped down to reveal 16 CCPA Opt-Out “Required Cookies,” 23 “Functional Cookies,” and 47 “Advertising Cookies.” You can’t separately opt out or in of the “required” ones, but you can do that with the other 70 in the sections below. It’s good, I suppose, that these are defaulted to “Out.” (Or seem to be, at least to me.)

So I hit the “Submit Preferences” button and got this:

All the pagelets say “Powered by TrustArc,” by the way. TrustArc is an off-the-shelf system for giving companies a way (IMHO) to obey the letter of the GDPR while violating its spirit. These systems do that by gathering “consents” to various cookie uses. I suppose Zoom is doing all this off a TrustArc API, because one of the cookies it wants to give me (blocked by Privacy Badger before I disabled that) is called “consent.trustarc.com”.

So, what’s going on here?

My guess is that Zoom is doing marketing from the lead-generation playbook, meaning that most of its intentional data collection is actually for its own use in pitching possible customers, or its own advertising on its own site, and not for leaking personal data to other parties.

But that doesn’t mean you’re not exposed, or that Zoom isn’t playing in the tracking-based advertising (aka adtech) fecosystem, and therefore is to some degree in the advertising business.

Seems to me, by the choices laid out above, that any of those third parties (up to 70 of them in my view above) are free to gather and share data about you. Also free to give you “interest based” advertising based on what those companies know about your activities elsewhere.

Alas, there is no way to tell what any of those parties actually do, because nobody has yet designed a way to keep track of, or to audit, any of the countless “consents” you click on or default to as you travel the Web. Also, the only thing keeping those valves closed in your browser are cookies that remember which valves do what (if, in fact, the cookies are set and they actually work).

And that’s only on one browser. If you’re like me, you use a number of browsers, each with its own jar of cookies.

The Zoom app is a different matter, and that’s mostly where you operate on Zoom. I haven’t dug into that one. (Though I did learn, on the ProjectVRM mailing list, that there is an open source Chrome extension, called Zoom Redirector, that will keep your Zoom session in a browser and out of the Zoom app.)

I did, however, dig down into my cookie jar in Chrome to find the ones for zoom.us. It wasn’t easy. If you want to leverage my labors there, here’s my crumb trail:

  1. Settings
  2. Site Settings
  3. Cookies and Site Data
  4. See all Cookies and Site Data
  5. Zoom.us (it’s near the bottom of a very long list)

The URL for that end point is this: chrome://settings/cookies/detail?site=zoom.us. (Though dropping that URL into a new window or tab works only some of the time.)

Here are the cookies I found in there:

_zm_cdn_blocked
_zm_chtaid
_zm_client_tz
_zm_ctaid
_zm_currency
_zm_date_format
_zm_everlogin_type
_zm_ga_trackid
_zm_gdpr_email
_zm_lang
_zm_launcher
_zm_mtk_guid
_zm_page_auth
_zm_ssid
billingChannel
cmapi_cookie_privacy
cmapi_gtm_bl
cred
notice_behavior
notice_gdpr_prefs
notice_preferences
slirequested
zm_aid
zm_cluster
zm_haid

Some have obvious and presumably innocent meanings. Others … can’t tell. Also, these are just Zoom’s cookies. If I acquired cookies from any of those 70 other entities, they’re in different bags in my Chrome cookie jar.
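For the curious, there’s a way to pull the same list without spelunking through Settings, because Chrome’s cookie store is an SQLite database. Here’s a minimal sketch, assuming the usual macOS path (it differs by OS and profile) and that Chrome isn’t running, since a live browser locks the file:

```python
# List zoom.us cookie names straight from Chrome's cookie store, which
# is an SQLite database. The path below is the usual macOS default and
# differs by OS and profile; quit Chrome first, since it locks the file.
import sqlite3
from pathlib import Path

db = Path.home() / "Library/Application Support/Google/Chrome/Default/Cookies"

conn = sqlite3.connect(str(db))
try:
    rows = conn.execute(
        "SELECT DISTINCT name FROM cookies WHERE host_key LIKE ? ORDER BY name",
        ("%zoom.us",),
    )
    for (name,) in rows:
        print(name)
finally:
    conn.close()
```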

Anyway, my point remains the same: Zoom still doesn’t need any of the advertising stuff—especially since they now (and deservedly) lead their category and are in a sellers’ market for their services. That means now is a good time for them to get serious about privacy.

As for fixing this crazy system of consents and cookies (which was broken when we got it in 1994), the only path forward starts on your side and mine. Not on the sites’ side. What each of us needs is our own global way to signal our privacy demands and preferences: a Do Not Track signal, or a set of standardized and easily-read signals that sites and services will actually obey. That way, instead of you consenting to every site’s terms and policies, they consent to yours. Much simpler for everyone. Also much more like what we enjoy here in the physical world, where the fact that someone is wearing clothes is a clear signal that it would be rude to reach inside those clothes to plant a tracking beacon on them—a practice that’s pro forma online.
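For perspective on how simple such a signal can be, the original Do Not Track signal was just one HTTP request header. A minimal sketch with Python’s requests library (the URL is a placeholder); sending the signal was never the hard part, getting sites to honor it was:

```python
# Do Not Track is one HTTP request header. Sending it is trivial;
# honoring it is the part sites never did. The URL is a placeholder.
import requests

response = requests.get("https://example.com/", headers={"DNT": "1"})
print(response.status_code)
```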

We can come up with that new system, and some of us are working on exactly that. My own work is with Customer Commons. The first Customer Commons term you can proffer, and sites can agree to, is called #P2B1(beta), better known as #NoStalking.


By agreeing to #NoStalking, publishers still get to make money with ads (of the kind that have worked since forever and don’t involve tracking), and you know you aren’t being tracked, because you have a simple and sensible record of the agreement in a form both sides can keep and enforce if necessary.

Toward making that happen I’m also involved in an IEEE working group called P7012 – Standard for Machine Readable Personal Privacy Terms.
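To make “machine-readable terms” concrete, here’s a sketch of the general shape of the idea. The field names and structure are invented for illustration; this is not the actual Customer Commons or P7012 format:

```python
# A hypothetical machine-readable proffer of the #NoStalking term, and
# the agreement record both sides could keep. Field names are invented;
# this is not the actual Customer Commons or IEEE P7012 schema.
proffer = {
    "term": "#P2B1(beta)",  # better known as #NoStalking
    "offered_by": "the user's agent",
    "meaning": "Show me ads, but not ads based on tracking me.",
    "requires": {"tracking": False, "contextual_ads": True},
}

agreement = {
    "term": proffer["term"],
    "agreed_by": "publisher.example",
    "agreed_at": "2020-03-31T00:00:00Z",
    "copies_held_by": ["user", "publisher"],  # the auditable record
}

print(agreement)
```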

If you want to help bring these and similar solutions into the world, talk to me. (I’m first name @ last name dot com.) And if you want to read some background on the fight to turn the advertising fecosystem back into a healthy ecosystem, read here. Thanks.


[21 April 2020—Hundreds of people are arriving here from this tweet, which calls me a “Harvard researcher” and suggests that this post and the three that follow are about “the full list of the issues, exploits, oversights, and dubious choices Zoom has made.” So, two things. First, while I run a project at Harvard’s Berkman Klein Center, and run a blog that’s hosted by Harvard, I am not a Harvard employee, and would not call myself a “Harvard researcher.” Second, this post and the ones that follow—More on Zoom and Privacy, Helping Zoom, and Zoom’s new privacy policy—are focused almost entirely on Zoom’s privacy policy and how its need to explain the (frankly, typical) tracking-based marketing tech on its home page gives misleading suggestions about the privacy of Zoom’s whole service. If you’re interested in that, read on. (I suggest starting at the end of the series, written after Zoom changed its privacy policy, and working back.) If you want research on other privacy issues around Zoom, look elsewhere. Thanks.]


As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published Zoom Calls Aren’t as Private as You May Think. Here’s What You Should Know: Videos and notes can be used by companies and hosts. Here are some tips to protect yourself. And there was already lots of bad PR. A few samples:

There’s too much to cover here, so I’ll narrow my inquiry down to the “Does Zoom sell Personal Data?” section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I’ll comment on the second one, starting here:

… Zoom does use certain standard advertising tools which require Personal Data…

What they mean by that is adtech. What they’re also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there’s a good chance it would be blocking trackers as well. But in the Zoom app, you can’t tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There’s no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger’s Re-Engineering Humanity and Shoshana Zuboff’s The Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an “advertising experience,” personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the “sale” of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom’s purposes.

By “certain state laws” I assume they mean California’s new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a “Following the instructions of our users” section, addressing the CCPA, that’s as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn’t think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn’t they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a “sale”).

While advertising has been around since forever, tracking people’s eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without “advertising programs” that “historically operated in this manner.”

By the way, “this manner” is only called advertising. In fact it’s actually a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising’s Wheat and Chaff.

If you opt out of “sale” of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can’t find anything about opting out of “‘sale’ of your personal info.” (Later, I did get somewhere, and that’s in the next post, More on Zoom and Privacy.)

Here’s the thing: Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps with just as absolute an interest in privacy. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

What Zoom’s current privacy policy says is worse than “You don’t have any privacy here.” It says, “We expose your virtual necks to data vampires who can do what they will with it.”

Please fix it, Zoom.

As for Zoom’s competitors, there’s a great weakness to exploit here.

Next post on the topic: More on Zoom and Privacy.

 

 

 

A Route of Evanescence,
With a revolving Wheel –
A Resonance of Emerald
A Rush of Cochineal –
And every Blossom on the Bush
Adjusts it’s tumbled Head –
The Mail from Tunis – probably,
An easy Morning’s Ride –

—Emily Dickinson
(via The Poetry Foundation)

While that poem is apparently about a hummingbird, it’s the one that comes first to my mind when I contemplate the form of evanescence that’s rooted in the nature of the Internet, where all of us are here right now, as I’m writing and you’re reading this.

Because, let’s face it: the Internet is no more about anything “on” it than air is about noise, speech or anything at all. Like air, sunlight, gravity and other useful graces of nature, the Internet is good for whatever can be done with it.

Same with the Web. While the Web was born as a way to share documents at a distance (via the Internet), it was never a library, even though we borrowed the language of real estate and publishing (domains and sites with pages one could author, edit, publish, syndicate, visit and browse) to describe it. While the metaphorical framing in all those words suggests durability and permanence, they belie the inherently evanescent nature of all we call content.

Think about the words memory, storage, upload, and download. All suggest that content in digital form has substance at least resembling the physical kind. But it doesn’t. It’s a representation, in a pattern of ones and zeros, recorded on a medium for as long as the responsible party wishes to keep it there, or the medium survives. All those states are volatile, and none guarantee that those ones and zeroes will last.

I’ve been producing digital content for the Web since the early 90s, and for much of that time I was lulled into thinking of the digital tech as something at least possibly permanent. But then my son Allen pointed out a distinction between the static Web of purposefully durable content and what he called the live Web. That was in 2003, when blogs were just beginning to become a thing. Since then the live Web has become the main Web, and people have come to see content as writing or projections on a World Wide Whiteboard. Tweets, shares, shots and posts are mostly of momentary value. Snapchat succeeded as a whiteboard where people could share “moments” that erased themselves after one view. (It does much more now, but evanescence remains its root.)

But, being both (relatively) old and (seriously) old-school about saving stuff that matters, I’ve been especially concerned with how we can archive, curate and preserve as much as possible of what’s produced for the digital world.

Last week, for example, I was involved in the effort to return Linux Journal to the Web’s shelves. (The magazine and site, which lived from April 1994 to August 2019, was briefly down, and with it all my own writing there, going back to 1996. That corpus is about a third of my writing in the published world.) Earlier, when it looked like Flickr might go down, I worried aloud about what would become of my many-dozen-thousand photos there. SmugMug saved it (Yay!); but there is no guarantee that any Website will persist forever, in any form. In fact, the way to bet is on the mortality of everything there. (Perspective: earlier today, over at doc.blog, I posted a brief think piece about the mortality of our planet, and the youth of the Universe.)

But the evanescent nature of digital memory shouldn’t stop us from thinking about how to take better care of what of the Net and the Web we wish to see remembered for the world. This is why it’s good to be in conversation on the topic with Brewster Kahle (of archive.org), Dave Winer and other like-minded folk. I welcome your thoughts as well.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we  don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers are their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it) and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
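To see how low the bar is, here’s a minimal sketch using the open-source face_recognition Python library; the image file names are hypothetical, and one labeled photo stands in for the “base of data about individual faces”:

```python
# Match faces in a crowd photo against one labeled reference photo,
# using the open-source face_recognition library. File names are
# hypothetical; a real system would compare against millions of faces.
import face_recognition

known_image = face_recognition.load_image_file("whomever.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

crowd = face_recognition.load_image_file("crowd.jpg")
for encoding in face_recognition.face_encodings(crowd):
    match, = face_recognition.compare_faces([known_encoding], encoding)
    if match:
        print("This is (whomever).")
```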

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools and each other.

Louis Brandeis and Samuel Warren visited the same problem more than a century ago, when they became alarmed at the implications of recording and reporting technologies that were far more primitive than the kind we have today. In response to those technologies, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.”10 Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons;11 and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer.12 The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago,13 directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal.”

But it’s hard to argue for those rights in the digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there is no limit to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s DeepFace sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of “self-exclusion,” whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

And facial recognition systems are getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%.
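To put that last figure in perspective, here’s some back-of-the-envelope arithmetic (mine, not NIST’s):

```python
# What an error rate below 0.2% means in practice: roughly one missed
# match per 500 searches, even against a 12-million-person gallery.
# The 0.2% bound is NIST's; the arithmetic below is mine.
error_rate = 0.002
searches = 1_000_000

print(f"~{error_rate * searches:,.0f} misses per {searches:,} searches")
print(f"i.e., about 1 miss in every {1 / error_rate:,.0f} searches")
```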

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and perhaps not just because they’re acquiescing to the inevitable.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of  what might get done with facial data if the bank gets hacked, or changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials, so they could fix the leak.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the largest variety of existing and imaginable ways that security can be breached

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late model iPhones, which feature FaceID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But special cases such as that one haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow‘s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject into readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional, because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

However, given the unlikelihood that the facial recognition genie will ever go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example to unlock phones or to sort through old photos. But the data they gather should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes).
  2. Systems that use facial recognition to detect changing facial characteristics (such as emotions, age or wellness) should be required to forget what they see right after the job is done, and not use the gathered data for any purpose other than diagnostics or performance improvement. (A sketch of what that might look like in code follows this list.)
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. Enterprises with systems that know individuals’ (customers’ or consumers’) faces should not use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted in to the practice.
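As a thought experiment for rule 2, here is a minimal sketch, again in Swift, of what “forget what you see” might look like in code. Everything here is hypothetical: the Frame and Emotion types and the EphemeralEmotionDetector are invented, and no real vision library is assumed. The point is only the data-handling discipline: the recognizer returns a derived label and keeps no copy of the face it analyzed.

    import Foundation

    // Hypothetical types, for illustration only.
    struct Frame {            // one captured camera frame
        let pixels: Data
    }

    enum Emotion {
        case neutral, happy, sad, surprised
    }

    // A recognizer that honors rule 2: it produces an ephemeral result
    // and retains nothing about the face it analyzed.
    struct EphemeralEmotionDetector {

        // Placeholder inference; a real system would run a model here.
        private func infer(_ frame: Frame) -> Emotion {
            return .neutral
        }

        // Analyze one frame and hand back only the derived label. Nothing
        // is written to disk, logged, or attached to an identity; the
        // frame is forgotten as soon as the job is done.
        func detect(in frame: Frame) -> Emotion {
            return infer(frame)
        }
    }

The same discipline would extend to rule 3: any data that does leave such a system should do so only with the person’s opt-in, anonymized and auditable.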

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]


Whither Linux Journal?

[16 August 2019…] Had a reassuring call yesterday with Ted Kim, CEO of London Trust Media. He told me the company plans to keep the site up as an archive at the LinuxJournal.com domain, and that if any problems develop around that, he’ll let us know. I told him we appreciate it very much—and that’s where it stands. I’m leaving up the post below for historical purposes.

On August 5th, Linux Journal‘s staff and contractors got word from the magazine’s parent company, London Trust Media, that everyone was laid off and the business was closing. Here’s our official notice to the world on that.

I’ve been involved with Linux Journal since before it started publishing in 1994, and have been on its masthead since 1996. I’ve also been its editor-in-chief since January of last year, when it was rescued by London Trust Media after nearly going out of business the month before. I say this to make clear how much I care about Linux Journal‘s significance in the world, and how grateful I am to London Trust Media for saving the magazine from oblivion.

London Trust Media can do that one more time, by helping preserve the Linux Journal website, with its 25 years of archives, so all its links remain intact, and nothing gets 404’d. Many friends, subscribers and long-time readers of Linux Journal have stepped up with offers to help with that. The decision to make that possible, however, is not in my hands, or in the hands of anyone who worked at the magazine. It’s up to London Trust Media. The LinuxJournal.com domain is theirs.

I have had no contact with London Trust Media in recent months. But I do know at least this much:

  1. London Trust Media has never interfered with Linux Journal‘s editorial freedom. On the contrary, it quietly encouraged our pioneering work on behalf of personal privacy online. Among other things, LTM published the first draft of a Privacy Manifesto now iterating at ProjectVRM, and recently published on Medium.
  2. London Trust Media has always been on the side of freedom and openness, which is a big reason why they rescued Linux Journal in the first place.
  3. Since Linux Journal is no longer a functioning business, its entire value is in its archives and their accessibility to the world. To be clear, these archives are not mere “content.” They are a vast store of damned good writing, true influence, and important history that search engines should be able to find where it has always been.
  4. While Linux Journal is no longer listed as one of London Trust Media’s brands, the website is still up, and its archives are still intact.

While I have no hope that Linux Journal can be rescued again as a subscriber-based digital magazine, I do have hope that the LinuxJournal.com domain, its (Drupal-based) website and its archives will survive. I base that hope on believing that London Trust Media’s heart has always been in the right place, and that the company is biased toward doing the right thing.

But the decision is theirs. It’s their choice whether or not to support the countless subscribers and friends who have stepped forward with offers to help keep the website and its archives intact and persistent on the Web. It won’t be hard to do that. And it’s the right thing to do.
