problems


The goal here is to obsolesce this brilliant poster by Despair.com:

I got launched on that path a couple of months ago, when I got this email from The_New_Yorker at e-mail.condenast.com:

Why did they “need” a “confirmation” of a subscription that, as best I could recall, was last renewed early this year?

So I looked at the links.

The “Renew,” “Confirmation Needed” and “Discounted Subscription” links all went to a page with a URL that began with https://subscriptions.newyorker.com/pubs…, followed by a lot of tracking cruft. Here’s a screen shot of that one, cut short of where one filled in a credit card number. Note the price:

I was sure I had been paying $80-something per year, for years. As I also recalled, this was a price one could only obtain by calling the 800 number at NewYorker.com.

Or somewhere. After digging around, I found it at https://w1.buysub.com/pubs/N3/NYR/accoun…, which is where the link to Customer Care under My Account on the NewYorker website goes. It also required yet another login.

So, when I told the representative at the call center that I’d rather not “confirm” a year for a “discount” that probably wasn’t, she said I could renew for the $89.99 I had paid in the past, and that the deal would be good through February of 2022. I said fine, let’s do that. So I gave her my credit card, said this was way too complicated, and added that a single simple subscription price would be better. She replied, “Never gonna happen.” Let’s repeat that:

Never gonna happen.

Then I got this by email:

This appeared to confirm the subscription I already had. To see if that was the case, I went back to the buysub.com website and looked under the Account Summary tab, where it said this:

I think this means that I last renewed on February 3 of this year, and that what I did on the phone in August was commit to paying $89.99/year until February 10 of 2022.

If that’s what happened, all my call did was extend my existing subscription. Which was fine, but why require a phone call for that?

And WTF was that “Account Confirmation Required” email about? I assume it was bait to switch existing subscribers into paying $50 more per year.

Then there was this, at the bottom of the Account summary page:

This might explain why I stopped getting Vanity Fair, which I suppose I should still be getting.

So I clicked on “Reactivate” and got a login page where the login I had used to get this far didn’t work.

After other failed efforts that I neglected to write down, I decided to go back to the New Yorker site and work my way back through two logins to the same page, and then click Reactivate one more time. Voila!

So now I’ve got one page that tells me I’m good to March 2021 next to a link that takes me to another page that says I ordered 12 issues last December and I can “start” a new subscription for $15 that would begin nine months ago. This is how one “reactivates” a subscription? OMFG.

I’m also not going into the hell of moving the print subscription back and forth between the two places where I live. Nor will I bother now, in October, to ask why I haven’t seen another copy of Vanity Fair. (Maybe they’re going to the other place. Maybe not. I don’t know, and I’m too weary to try finding out.)

I want to be clear here that I am not sharing this to complain. In fact, I don’t want The New Yorker, Vanity Fair, Wired, Condé Nast (their parent company) or buysub.com to do a damn thing. They’re all FUBAR. By design. (Bonus link.)

Nor do I want any action out of Spectrum, SiriusXM, Dish Network or the other subscription-based whatevers whose customer disservice systems have recently soaked up many hours of my life.

See, with too many subscription systems (especially ones for periodicals), FUBAR is the norm. A matter of course. Pro forma. Entrenched. A box outside of which nobody making, managing or working in those systems can think.

This is why, when an alien idea appears, for example from a loyal customer just wanting a single and simple damn price, the response is “Never gonna happen.”

This is also why the subscription fecosystem can only be turned into an ecosystem from the outside. Our side. The subscribers’ side.

I’ll explain how at Customer Commons, which we created for exactly that purpose. Stay tuned for that.

[Later, in November 2022…] Since I wrote this, Rocket Money (formerly Truebill) and Trim have emerged as the leading competing services in this space. A review of the former here lays out the kinds of services both offer. My two problems with both are 1) they seem to work only on phones, and 2) they deal with the status quo (of companies out to screw us) rather than with the need for whole new customer-based solutions to the systemic subscription problem.


Two exceptions are Consumer Reports and The Sun.

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the late ’00s.

So why, right now, start regulating what can be done with those companies from now on?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment?” Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then, every “is” a “was.” And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is composed of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for itself in an advertising business that depends on all of that—while also trying to correct, while they are doing what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.

door knocker

Remember the dot com boom?

Doesn’t matter if you don’t. What does matter is that it ended. All business manias do.

That’s why we can expect the “platform economy” and “surveillance capitalism” to end. Sure, it’s hard to imagine that when we’re in the midst of the mania, but the end will come.

When it does, we can have a “privacy debate.” Meanwhile, there isn’t one. In fact there can’t be one, because we don’t have privacy in the online world.

We do have privacy in the offline world, and we’ve had it ever since we invented clothing, doors, locks and norms for signaling what’s okay and what’s not okay in respect to our personal spaces, possessions and information.

That we hardly have the equivalent in the networked world doesn’t mean we won’t. Or that we can’t. The Internet in its current form was only born in the mid-’90s. In the history of business and culture, that’s a blip.

Really, it’s still early.

So, the fact that websites, network services, phone companies, platforms, publishers, advertisers and governments violate our privacy with wanton disregard for it doesn’t mean we can’t ever stop them. It means we haven’t done it yet, because we don’t have the tech for it. (Sure, some wizards do, but muggles don’t. And most of us are muggles.)

And, since we don’t have privacy tech yet, we lack the simple norms that grow around technologies that give us ways to signal our privacy preferences. We’ll get those when we have the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.

This is what many of us have been working on at ProjectVRM, Customer Commons, the Me2B Alliance, MyData and other organizations whose mission is getting each of us the tech we need to operate at full agency when dealing with the companies and governments of the world.

I bring all this up as a “Yes, and” to a piece in Salon by Michael Corn (@MichaelAlanCorn), CISO of UCSD, titled We’re losing the war against surveillance capitalism because we let Big Tech frame the debate. Subtitle: “It’s too late to conserve our privacy — but to preserve what’s left, we must stop defining people as commodities.”

Indeed. And we do need the “optimism and activism” he calls for. In the activism category is code. Specifically, code that gives us the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.

Some of those are in the works. Others are not—yet. But they will be. Inevitably. Especially now that it’s becoming clearer every day that we’ll never get them from any system with a financial interest in violating our privacy*. Or from laws that fail at protecting it.

If you want to help, join one or more of the efforts in the links four paragraphs up. And, if you’re a developer already on the case, let us know how we can help get your solutions into each and all of our digital hands.

For guidance, this privacy manifesto should help. Thanks.


*Especially publishers such as Salon, which Privacy Badger tells me tries to pump 20 potential trackers into my browser while I read the essay cited above. In fact, according to WhoTracksMe.com, Salon tends to run 204 tracking requests per page load, and the vast majority of those are for tracking-based advertising purposes. And Salon is hardly unique. Despite the best intentions of the GDPR and the CCPA, surveillance capitalism remains fully defaulted on the commercial Web—and will remain entrenched until we have the privacy tech we’ve needed from the start.

For more on all this, see People vs. Adtech.

If the GDPR did what it promised to do, we’d be celebrating Privmas today. Because, two years after the GDPR became enforceable, privacy would now be the norm rather than the exception in the online world.

That hasn’t happened, but it’s not just because the GDPR is poorly enforced. It’s because it’s too easy for every damn site on the Web—and every damn business with an Internet connection—to claim compliance with the letter of the GDPR while violating its spirit.

Want to see how easy? Try searching for GDPR+compliance+consent:

https://www.google.com/search?q=gdpr+compliance+consent

Nearly all of the ~21,000,000 results you’ll get are from sources pitching ways to continue tracking people online, mostly by obtaining “consent” to privacy violations that almost nobody would welcome in the offline world—exactly the kind of icky practice that the GDPR was meant to stop.

Imagine if there was a way for every establishment you entered to painlessly inject a load of tracking beacons into your bloodstream without you knowing it. And that these beacons followed you everywhere and reported your activities back to parties unknown. Would you be okay with that? And how would you like it if you couldn’t even enter without recording your agreement to accept being tracked—on a ledger kept only by the establishment, so you have no way to audit their compliance to the agreement, whatever it might be?

Well, that’s what you’re saying when you click “Accept” or “Got it” when a typical GDPR-complying website presents a cookie notice that says something like this:

That notice is from Vice, by the way. Here’s how the top story on Vice’s front page looks in Belgium (through a VPN), with Privacy Badger looking for trackers:

What’s typical here is that a publication, with no sense of irony, runs a story about privacy-violating harvesting of personal data… while doing the same. (By the way, those red sliders say I’m blocking those trackers. Were it not for Privacy Badger, I’d be allowing them.)

Yes, Google says you’re anonymized somehow in both DoubleClick and Google Analytics, but it’s you they are stalking. (Look up stalk as a verb. Top result: “to pursue or approach prey, quarry, etc., stealthily.” That’s what’s going on.)

The main problem with the GDPR is that it effectively requires that every visitor to every website opt out of being tracked, and to do so (thank you, insincere “compliance” systems) by going downstairs into the basements of website popovers to throw tracking-choice toggles to “off” positions that are typically defaulted on when you get there.

Again, let’s be clear about this: There is no way for you to know exactly how you are being tracked or what is done with information gathered about you. That’s because the instrument for that—a tool on your side—isn’t available. It probably hasn’t even been invented. You also have no record of agreeing to anything. It’s not even clear that the site or its third parties have a record of that. All you’ve got is a cookie planted deep in your browser’s bowels, designed to announce itself to other parties everywhere you go on the Web. In sum, consenting to a cookie notice leaves nothing resembling an audit trail.
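To make that missing audit trail concrete, here’s a minimal sketch of what a consent receipt held on your side might look like, in the spirit of Kantara’s consent-receipt work. Every name in it is hypothetical; nothing like this ships in any browser today:

```
// A hypothetical, user-held "consent receipt" -- the audit trail that
// today's cookie notices never leave behind. All names are invented
// for illustration; this is not a standard.
interface ConsentReceipt {
  site: string;               // who asked
  timestamp: string;          // when you answered (ISO 8601)
  purposes: string[];         // what you supposedly agreed to
  thirdParties: string[];     // who else gets the data
  proof?: string;             // e.g., a hash both sides could later verify
}

const myReceipts: ConsentReceipt[] = [];

// Record a consent event on the user's side, so *you* hold a copy too.
function recordConsent(receipt: ConsentReceipt): void {
  myReceipts.push(receipt);
  console.log(`Recorded consent for ${receipt.site}`);
}

recordConsent({
  site: "example.com",
  timestamp: new Date().toISOString(),
  purposes: ["functional", "advertising"],
  thirdParties: ["tracker-a.example", "tracker-b.example"],
});
```

The point of the sketch: until something like that record exists on our side, “consent” is a claim only the other party can verify.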

Oh, and the California Consumer Privacy Act (CCPA) makes matters worse by embedding opt-out into law there, while also requiring shit like this in the opt-out basement of every website facing a visitor suspected of coming from that state:

CCPA notice

So let’s go back to a simple privacy principle here: It is just as wrong to track a person like a marked animal in the online world as it is in the offline one.

The GDPR and the CCPA were made to thwart that kind of thing. But they have failed. Instead, they have made the experience of being tracked online a worse one.

Yes, that was not their intent. And yes, both have done some good. But if you are any less followed online today than you were when the GDPR became enforceable two years ago, it’s because you and the browser makers have worked to thwart at least some tracking. (Though in very different ways, so your experience of not being followed is not a consistent one. Or even perceptible in many cases.)

So tracking remains worse than rampant: it’s defaulted practice for both advertising and site analytics. And will remain so until we have code, laws and enforcement to stop it.

So, nothing to celebrate. Not this Privmas.


In the library of Earth’s history, there are missing books, and within books there are missing chapters, written in rock that is now gone. John Wesley Powell recorded the greatest example of gone rock in 1869, on his expedition by boat through the Grand Canyon. Floating down the Colorado River, he saw the canyon’s mile-thick layers of reddish sedimentary rock resting on a basement of gray non-sedimentary rock, the layers of which were cocked at an angle from the flatness of every layer above. Observing this, he correctly assumed that the upper layers did not continue from the bottom one, because time had clearly passed between when the basement rock was beveled flat, against its own grain, and when the floors of rock above it were successively laid down. He didn’t know how much time had passed between basement and flooring, and could hardly guess.

The answer turned out to be more than a billion years. The walls of the Grand Canyon say nothing about what happened during that time. Geology calls that nothing an unconformity.

In the decades since Powell made his notes, the same gap has been found all over the world and is now called the Great Unconformity. Because of that unconformity, geology knows close to nothing about what happened in the world through stretches of time up to 1.6 billion years long.

All of those absent records end abruptly with the Cambrian Explosion, which began about 541 million years ago. That’s when the Cambrian period arrived and with it an amplitude of history, written in stone.

Many theories attempt to explain what erased such a large span of Earth’s history, but the prevailing guess is perhaps best expressed in “Neoproterozoic glacial origin of the Great Unconformity”, published on the last day of 2018 by nine geologists writing for the National Academy of Sciences. Put simply, they blame snow. Lots of it: enough to turn the planet into one giant snowball, informally called Snowball Earth. A more accurate name for this time would be Glacierball Earth, because glaciers, all formed from accumulated snow, apparently covered most or all of Earth’s land during the Great Unconformity—and most or all of the seas as well. Every continent was a Greenland or an Antarctica.

The relevant fact about glaciers is that they don’t sit still. They push immensities of accumulated ice down on landscapes and then spread sideways, pulverizing and scraping against adjacent landscapes, bulldozing their ways seaward through mountains and across hills and plains. In this manner, glaciers scraped a vastness of geological history off the Earth’s continents and sideways into ocean basins, where plate tectonics could hide the evidence. (A fact little known outside of geology is that nearly all the world’s ocean floors are young: born in spreading centers and killed by subduction under continents or piled up as debris on continental edges here and there. Example: the Bay Area of California is an ocean floor that wasn’t subducted into a trench.) As a result, the stories of Earth’s missing history are partly told by younger rock that remembers only that a layer of moving ice had erased pretty much everything other than a signature on its work.

I bring all this up because I see something analogous to Glacierball Earth happening right now, right here, across our new worldwide digital sphere. A snowstorm of bits is falling on the virtual surface of our virtual sphere, which itself is made of bits even more provisional and temporary than the glaciers that once covered the physical Earth. Nearly all of this digital storm, vivid and present at every moment, is doomed to vanish because it lacks even a glacier’s talent for accumulation.

The World Wide Web is also the World Wide Whiteboard.

Think about it: there is nothing about a bit that lends itself to persistence, other than the media it is written on. Form follows function; and most digital functions, even those we call “storage”, are temporary. The largest commercial facilities for storing digital goods are what we fittingly call “clouds”. By design, these are built to remember no more of what they once contained than does an empty closet. Stop paying for cloud storage, and away goes your stuff, leaving no fossil imprints. Old hard drives, CDs, and DVDs might persist in landfills, but people in the far future may look at a CD or a DVD the way a geologist today looks at Cambrian zircons: as hints of digital activities that may have happened during an interval about which nothing can ever be known. If those fossils speak of what’s happening now at all, it will be of a self-erasing Digital Earth that was born in the late 20th century.

This theory actually comes from my wife, who has long claimed that future historians will look at our digital age as an invisible one because it sucks so royally at archiving itself.

Credit where due: the Internet Archive is doing its best to make sure that some stuff will survive. But what will keep that archive alive, when all the media we have for recalling bits—from spinning platters to solid-state memory—are volatile by nature?

My own future unconformity is announced by the stack of books on my desk, propping up the laptop on which I am writing. Two of those books are self-published compilations of essays I wrote about technology in the mid-1980s, mostly for publications that are long gone. The originals are on floppy disks that can only be read by PCs and apps of that time, some of which are buried in lower strata of boxes in my garage. I just found a floppy with some of those essays. (It’s the one with a blue edge in the wood case near the right end of the photo above.) If it still retains readable files, I am sure there are ways to recover at least the raw ASCII text. But I’m still betting the paper copies of the books under this laptop will live a lot longer than will these floppies or my mothballed PCs, all of which are likely bricked by decades of un-use.
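For what it’s worth, the recovery part is the easy bit. Once a disk is imaged to a file (the hard part, given the hardware), pulling readable text out of it is what the Unix strings utility has always done. Here’s a minimal sketch of that scan in TypeScript; the filename is a placeholder:

```
// Pull runs of printable ASCII out of a raw disk image -- the same idea
// as the Unix `strings` utility. Assumes the floppy has already been
// imaged to a file; "floppy.img" is a placeholder name.
import { readFileSync } from "node:fs";

const MIN_RUN = 8; // ignore runs shorter than this; they're mostly noise

function extractAscii(path: string): string[] {
  const bytes = readFileSync(path);
  const runs: string[] = [];
  let current = "";
  for (const b of bytes) {
    if (b >= 0x20 && b <= 0x7e) {
      current += String.fromCharCode(b); // printable ASCII: keep going
    } else {
      if (current.length >= MIN_RUN) runs.push(current);
      current = "";
    }
  }
  if (current.length >= MIN_RUN) runs.push(current);
  return runs;
}

for (const line of extractAscii("floppy.img")) console.log(line);
```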

As for other media, the prospect isn’t any better.

At the base of my video collection is a stratum of VHS videotapes, atop which are strata of MiniDV and Hi8 tapes, and then one of digital stuff burned onto CDs and stored in hard drives, most of which have been disconnected for years. Some of those drives have interfaces and connections (e.g. FireWire) no longer supported by any computers being made today. Although I’ve saved machines to play all of them, none I’ve checked still work. One choked to death on a CD I stuck in it. That was a failure that stopped me from making Christmas presents of family memories recorded on old tapes and DVDs. I meant to renew the project sometime before the following Christmas, but that didn’t happen. Next Christmas? The one after that? I still hope, but the odds are against it.

Then there are my parents’ 8mm and 16mm movies filmed between the 1930s and the 1960s. In 1989, my sister and I had all of those copied over to VHS tape. We then recorded our mother annotating the tapes onto companion cassette tapes while we all watched the show. I still have the original film in a box somewhere, but I haven’t found any of the tapes. Mom died in 2003 at age 90, and her whole generation is now gone.

The base stratum of my audio past is a few dozen open reel tapes recorded in the 1950s and 1960s. Above those are cassette and micro-cassette tapes, plus many Sony MiniDiscs recorded in ATRAC, a proprietary compression algorithm now used by nobody, including Sony. Although I do have ways to play some (but not all) of those, I’m cautious about converting any of them to digital formats (Ogg, MPEG, or whatever), because all digital storage media are likely to become obsolete, dead, or both—as will formats, algorithms, and codecs. Already I have dozens of dead external hard drives in boxes and drawers. And, since no commercial cloud service is committed to digital preservation in the absence of payment, my files saved in clouds are sure to be flushed after my heirs and I stop paying for their preservation. I assume my old open reel and cassette tapes are okay, but I can’t tell right now because both my Sony TCWE-475 cassette deck (high end in its day) and my Akai 202D-SS open-reel deck (a quadrophonic model from the early ’70s) are in need of work, since some of their rubber parts have rotted.

The same goes for my photographs. My printed photos—countless thousands of them dating from the late 1800s to 2004—are stored in boxes and albums, along with negatives and Kodak slide carousels. My digital photos are spread across a mess of duplicated backup drives totaling many terabytes, plus a handful of CDs. About 60,000 photos are exposed to the world on Flickr’s cloud, where I maintain two Pro accounts (here and here) for $50/year apiece. More are in the Berkman Klein Center’s pro account (here) and Linux Journal’s (here). I doubt any of those will survive after those entities stop getting paid their yearly fees. SmugMug, which now owns Flickr, has said some encouraging things about photos such as mine, all of which are Creative Commons-licensed to encourage re-use. But, as Geoffrey West tells us, companies are mortal. All of them die.

As for my digital works as a whole (or anybody’s), there is great promise in what the Internet Archive and Wikimedia Commons do, but there is no guarantee that either will last for decades more, much less for centuries or millennia. And neither is able to archive everything that matters (much as they might like to).

It should also be sobering to recognize that nobody truly “owns” a domain on the internet. All those “sites” with “domains” at “locations” and “addresses” are rented. We pay a sum to a registrar for the right to use a domain name for a finite period of time. There are no permanent domain names or IP addresses. In the digital world, finitude rules.

So the historic progression I see, and try to illustrate in the photo at the top of this post, is from hard physical records through digital ones we hold for ourselves, and then up into clouds… that go away. Everything digital is snow falling and disappearing on the waters of time.

Will there ever be a way to save for the very long term what we ironically call our digital “assets”? Or is all of it doomed by its own nature to disappear, leaving little more evidence of its passage than a Great Digital Unconformity, when everything was forgotten?

I can’t think of any technical questions more serious than those two.


The original version of this post appeared in the March 2019 issue of Linux Journal.

A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

Yesterday (March 29), Zoom updated its privacy policy with a major rewrite. The new language is far clearer than what it replaced, which had caused the concerns I detailed in my previous three posts:

  1. Zoom needs to clean up its privacy act,
  2. More on Zoom and privacy, and
  3. Helping Zoom

Those concerns were shared by Consumer Reports, Forbes and others as well. (Here’s Consumer Reports’ latest on the topic.)

Mainly the changes clarify the difference between Zoom’s services (what you use to conference with other people) and its websites, zoom.us and zoom.com (which are just one site: the latter redirects to the former). As I read the policy, nothing in the services is used for marketing. Put another way, your Zoom sessions are firewalled from adtech, and you shouldn’t worry about personal information leaking to adtech (tracking-based advertising) systems.

The websites are another matter. Zoom calls those websites—its home pages—”marketing websites.” This, I suppose, is so they can isolate their involvement with adtech to their marketing work.

The problem with this is an optical one: encountering a typically creepy cookie notice and opt-out gauntlet (which still defaults hurried users to “consenting” to being tracked through “functional” and “advertising” cookies) on Zoom’s home page still conveys the impression that these consents, and these third parties, work across everything Zoom does, and not just its home pages.

And why call one’s home on the Web a “marketing website”—even if that’s mostly what it is? Zoom is classier than that.

My advice to Zoom is to just drop the jive. There will be no need for Zoom to disambiguate services and websites if neither is involved with adtech at all. And Zoom will be in a much better position to trumpet its commitment to privacy.

That said, this privacy policy rewrite is a big help. So thank you, Zoom, for listening.


[This is the third of four posts. The last of those, Zoom’s new privacy policy, visits the company’s positive response to input such as mine here. So you might want to start with that post (because it’s the latest) and look at the other three, including this one, after that.]

I really don’t want to bust Zoom. No tech company on Earth is doing more to keep civilization working at a time when it could so easily fall apart. Zoom does that by providing an exceptionally solid, reliable, friendly, flexible, useful (and even fun!) way for people to be present with each other, regardless of distance. No wonder Zoom is now to conferencing what Google is to search. Meaning: it’s a verb. Case in point: between the last sentence and this one, a friend here in town sent me an email that began with this:

That’s a screen shot.

But Zoom also has problems, and I’ve spent two posts, so far, busting them for one of those problems: their apparent lack of commitment to personal privacy:

  1. Zoom needs to clean up its privacy act
  2. More on Zoom and privacy

With this third post, I’d like to turn that around.

I’ll start with the email I got yesterday from a person at a company engaged by Zoom for (seems to me) reputation management, asking me to update my posts based on the “facts” (his word) in this statement:

Zoom takes its users’ privacy extremely seriously, and does not mine user data or sell user data of any kind to anyone. Like most software companies, we use third-party advertising service providers (like Google) for marketing purposes: to deliver tailored ads to our users about Zoom products the users may find interesting. (For example, if you visit our website, later on, depending on your cookie preferences, you may see an ad from Zoom reminding you of all the amazing features that Zoom has to offer). However, this only pertains to your activity on our Zoom.us website. The Zoom services do not contain advertising cookies. No data regarding user activity on the Zoom platform – including video, audio and chat content – is ever used for advertising purposes. If you do not want to receive targeted ads about Zoom, simply click the “Cookie Preferences” link at the bottom of any page on the zoom.us site and adjust the slider to ‘Required Cookies.’

I don’t think this squares with what Zoom says in the “Does Zoom sell Personal Data?” section of its privacy policy (which I unpacked in my first post, and that Forbes, Consumer Reports and others have also flagged as problematic)—or with the choices provided in Zoom’s cookie settings, which list 70 (by my count) third parties whose involvement you can opt into or out of (by a set of options I unpacked in my second post). The logos in the image above are just 16 of those 70 parties, some of which include more than one domain.

Also, if all the ads shown to users are just “about Zoom,” why are those other companies in the picture at all? Specifically, under “About Cookies on This Site,” the slider is defaulted to allow all “functional cookies” and “advertising cookies,” the latter of which are “used by advertising companies to serve ads that are relevant to your interests.” Wouldn’t Zoom be in a better position to know your relevant (to Zoom) interests, than all those other companies?

More questions:

  1. Are those third parties “processors” under the GDPR, or “service providers” by the CCPA’s definition? (I’m not an authority on either, so I’m asking.)
  2. How do these third parties know what your interests are? (Presumably by tracking you, or by learning from others who do. But it would help to know more.)
  3. What data about you do those companies give to Zoom (or to each other, somehow) after you’ve been exposed to them on the Zoom site?
  4. What targeting intelligence do those companies bring with them to Zoom’s pages because you’re already carrying cookies from those companies, and those cookies can alert those companies (or others, for example through real time bidding auctions) to your presence on the Zoom site?
  5. If all Zoom wants to do is promote Zoom products to Zoom users (as that statement says), why bring in any of those companies?

Here is what I think is going on (and I welcome corrections): Because Zoom wants to comply with GDPR and CCPA, they’ve hired TrustArc to put that opt-out cookie gauntlet in front of users. They could just as easily have used Quantcast‘s system, or consentmanager‘s, or OneTrust‘s, or somebody else’s.

All those services are designed to give companies a way to obey the letter of privacy laws while violating their spirit. That spirit says stop tracking people unless they ask you to, consciously and deliberately. In other words, opting in, rather than opting out. Every time you click “Accept” to one of those cookie notices, you’ve just lost one more battle in a losing war for your privacy online.

I also assume that Zoom’s deal with TrustArc—and, by implication, all those 70 other parties listed in the cookie gauntlet—also requires that Zoom put a bunch of weasel-y jive in their privacy policy. Which looks suspicious as hell, because it is.

Zoom can fix all of this easily by just stopping it. Other companies—ones that depend on adtech (tracking-based advertising)—don’t have that luxury. But Zoom does.

If we take Zoom at its word (in that paragraph they sent me), they aren’t interested in being part of the adtech fecosystem. They just want help in aiming promotional ads for their own services, on their own site.

Three things about that:

  1. Neither the Zoom site, nor the possible uses of it, are so complicated that they need aiming help from those third parties.
  2. Zoom is in the world’s leading sellers’ market right now, meaning they hardly need to advertise at all.
  3. Being in adtech’s fecosystem raises huge fears about what Zoom and those third parties might be doing where people actually use Zoom most of the time: in its app. Again, Consumer Reports, Forbes and others have assumed, as have I, that the company’s embrace of adtech in its privacy policy means that the same privacy exposures exist in the app (where they are also easier to hide).

By severing its ties with adtech, Zoom can start restoring people’s faith in its commitment to personal privacy.

There’s a helpful model for this: Apple’s privacy policy. Zoom is in a position to have a policy like that one because, like Apple, Zoom doesn’t need to be in the advertising business. In fact, Zoom could follow Apple’s footprints out of the ad business.

And then Zoom could do Apple one better, by participating in work going on already to put people in charge of their own privacy online, at scale. In my last post, I named two organizations doing that work. Four more are the Me2B Alliance, Kantara, ProjectVRM, and MyData.

I’d be glad to help with that too. If anyone at Zoom is interested, contact me directly this time. Thanks.


[This is the second of four posts. The last of those, Zoom’s new privacy policy, visits the company’s positive response to input such as mine here. So you might want to start with that post (because it’s current) and look at the other three, including this one, after that.]

Zoom needs to clean up its privacy act, which I posted yesterday, hit a nerve. While this blog normally gets about 50 reads a day, by the end of yesterday it got more than 16,000. So far this morning (11:15am Pacific), it has close to 8,000 new reads. Most of those owe to this posting on Hacker News, which topped the charts all yesterday and has 483 comments so far. If you care about this topic, I suggest reading them.

Also, while this was going down, as a separate matter (with a separate thread on Hacker News), Zoom got busted for leaking personal data to Facebook, and promptly plugged it. Other privacy issues have also come up for Zoom. For example, this one.

But I want to stick to the topic I raised yesterday, which requires more exploration, for example into how one opts out from Zoom “selling” one’s personal data. This morning I finished a pass at that, and here’s what I found.

First, by turning off Privacy Badger on Chrome (my main browser of the moment) I got to see Zoom’s cookie notice on its index page, https://zoom.us/. (I know, I should have done that yesterday, but I didn’t. Today I did, and we proceed.) It said,

To opt out of Zoom making certain portions of your information relating to cookies available to third parties or Zoom’s use of your information in connection with similar advertising technologies or to opt out of retargeting activities which may be considered a “sale” of personal information under the California Consumer Privacy Act (CCPA) please click the “Opt-Out” button below.

The buttons below said “Accept” (pre-colored a solid blue, to encourage a yes), “Opt-Out” and “More Info.” Clicking “Opt-Out” made the notice disappear, revealing, in the tiny print at the bottom of the page, linked text that says “Do Not Sell My Personal Information.” Clicking on that link took me to the same place I later went by clicking on “More Info”: a pagelet (pop-over) that’s basically an opt-in notice:

By clicking on that orange button, you’ve opted in… I think. Anyway, I didn’t click it, but instead clicked on a smaller and less noticeable “advanced settings” link off to the right. This took me to a pagelet with this:

The “view cookies” links popped down to reveal 16 CCPA Opt-Out “Required Cookies,” 23 “Functional Cookies,” and 47 “Advertising Cookies.” You can’t separately opt out or in of the “required” ones, but you can do that with the other 70 in the sections below. It’s good, I suppose, that these are defaulted to “Out.” (Or seem to be, at least to me.)

So I hit the “Submit Preferences” button and got this:

All the pagelets say “Powered by TrustArc,” by the way. TrustArc is an off-the-shelf system for giving companies a way (IMHO) to obey the letter of the GDPR while violating its spirit. These systems do that by gathering “consents” to various cookie uses. I suppose Zoom is doing all this off a TrustArc API, because one of the cookies it wants to give me (blocked by Privacy Badger before I disabled that) is called “consent.trustarc.com.”

So, what’s going on here?

My guess is that Zoom is doing marketing from the lead-generation playbook, meaning that most of its intentional data collection is actually for its own use in pitching possible customers, or its own advertising on its own site, and not for leaking personal data to other parties.

But that doesn’t mean you’re not exposed, or that Zoom isn’t playing in the tracking-based advertising (aka adtech) fecosystem, and therefore is to some degree in the advertising business.

Seems to me, by the choices laid out above, that any of those third parties (up to 70 of them in my view above) are free to gather and share data about you. Also free to give you “interest based” advertising based on what those companies know about your activities elsewhere.

Alas, there is no way to tell what any of those parties actually do, because nobody has yet designed a way to keep track of, or to audit, any of the countless “consents” you click on or default to as you travel the Web. Also, the only thing keeping those valves closed in your browser are cookies that remember which valves do what (if, in fact, the cookies are set and they actually work).

And that’s only on one browser. If you’re like me, you use a number of browsers, each with its own jar of cookies.

The Zoom app is a different matter, and that’s mostly where you operate on Zoom. I haven’t dug into that one. (Though I did learn, on the ProjectVRM mailing list, that there is an open source Chrome extension, called Zoom Redirector, that will keep your Zoom session in a browser and out of the Zoom app.)

I did, however, dig down into my cookie jar in Chrome to find the ones for zoom.us. It wasn’t easy. If you want to leverage my labors there, here’s my crumb trail:

  1. Settings
  2. Site Settings
  3. Cookies and Site Data
  4. See all Cookies and Site Data
  5. Zoom.us (it’s near the bottom of a very long list)

The URL for that end point is this: chrome://settings/cookies/detail?site=zoom.us. (Though dropping that URL into a new window or tab works only some of the time.)

I found 25 cookies in there. Here they are:

_zm_cdn_blocked
_zm_chtaid
_zm_client_tz
_zm_ctaid
_zm_currency
_zm_date_format
_zm_everlogin_type
_zm_ga_trackid
_zm_gdpr_email
_zm_lang
_zm_launcher
_zm_mtk_guid
_zm_page_auth
_zm_ssid
billingChannel
cmapi_cookie_privacy
cmapi_gtm_bl
cred
notice_behavior
notice_gdpr_prefs
notice_preferences
slirequested
zm_aid
zm_cluster
zm_haid

Some have obvious and presumably innocent meanings. Others … can’t tell. Also, these are just Zoom’s cookies. If I acquired cookies from any of those 70 other entities, they’re in different bags in my Chrome cookie jar.
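If you want a quick look inside your own jar without the menu spelunking, you can paste this into the DevTools console while on zoom.us (or any site). One caveat: HttpOnly cookies are invisible to page scripts, so this undercounts what the Settings pages show:

```
// List the names of cookies that page scripts can see for the current site.
// HttpOnly cookies won't appear here, so treat this as a floor, not a census.
const names = document.cookie
  .split(";")
  .map((c) => c.trim().split("=")[0])
  .filter((n) => n.length > 0);
console.log(names.sort());
```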

Anyway, my point remains the same: Zoom still doesn’t need any of the advertising stuff—especially since they now (and deservedly) lead their category and are in a sellers’ market for their services. That means now is a good time for them to get serious about privacy.

As for fixing this crazy system of consents and cookies (which was broken when we got it in 1994), the only path forward starts on your side and mine. Not on the sites’ side. What each of us needs is our own global way to signal our privacy demands and preferences: a Do Not Track signal, or a set of standardized and easily-read signals that sites and services will actually obey. That way, instead of you consenting to every site’s terms and policies, they consent to yours. Much simpler for everyone. Also much more like what we enjoy here in the physical world, where the fact that someone is wearing clothes is a clear signal that it would be rude to reach inside those clothes to plant a tracking beacon on them—a practice that’s pro forma online.

We can come up with that new system, and some of us are working on exactly that. My own work is with Customer Commons. The first Customer Commons term you can proffer, and sites can agree to, is called #P2B1(beta), better known as #NoStalking. It says this:

nostalking

By agreeing to #NoStalking, publishers still get to make money with ads (of the kind that have worked since forever and don’t involve tracking), and you know you aren’t being tracked, because you have a simple and sensible record of the agreement in a form both sides can keep and enforce if necessary.

Toward making that happen I’m also involved in an IEEE working group called P7012 – Standard for Machine Readable Personal Privacy Terms.
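For a taste of how simple the machinery can be: two such signals already exist in the wild—Do Not Track (the DNT header) and the newer Global Privacy Control (the Sec-GPC header)—and a site that wanted to honor them could do so in a few lines. This is a sketch of the idea, not the P7012 standard itself:

```
// A server that treats DNT and GPC headers as what they are: the
// visitor's proffered terms. Node lowercases incoming header names.
import { createServer } from "node:http";

createServer((req, res) => {
  const noTracking =
    req.headers["dnt"] === "1" || req.headers["sec-gpc"] === "1";
  res.setHeader("Content-Type", "text/plain");
  if (noTracking) {
    // Honor the signal: no tracking cookies, no data "sale," no adtech tags.
    res.end("Signal received. Ads, maybe; stalking, no.");
  } else {
    res.end("Hello."); // default path; still no excuse to track
  }
}).listen(8080);
```

The code was never the hard part. Getting sites to treat the signal as an agreement they’re bound to keep is.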

If you want to help bring these and similar solutions into the world, talk to me. (I’m first name @ last name dot com.) And if you want to read some background on the fight to turn the advertising fecosystem back into a healthy ecosystem, read here. Thanks.

zoom with eyes

[21 April 2020—Hundreds of people are arriving here from this tweet, which calls me a “Harvard researcher” and suggests that this post and the three that follow are about “the full list of the issues, exploits, oversights, and dubious choices Zoom has made.” So, two things. First, while I run a project at Harvard’s Berkman Klein Center, and run a blog that’s hosted by Harvard, I am not a Harvard employee, and would not call myself a “Harvard researcher.” Second, this post and the ones that follow—More on Zoom and Privacy, Helping Zoom, and Zoom’s new privacy policy—are focused almost entirely on Zoom’s privacy policy and how its need to explain the (frankly, typical) tracking-based marketing tech on its home page gives misleading suggestions about the privacy of Zoom’s whole service. If you’re interested in that, read on. (I suggest starting at the end of the series, written after Zoom changed its privacy policy, and working back.) If you want research on other privacy issues around Zoom, look elsewhere. Thanks.]


As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published Zoom Calls Aren’t as Private as You May Think. Here’s What You Should Know: Videos and notes can be used by companies and hosts. Here are some tips to protect yourself. And there was already lots of bad PR. A few samples:

There’s too much to cover here, so I’ll narrow my inquiry down to the “Does Zoom sell Personal Data?” section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I’ll comment on the second one, starting here:

… Zoom does use certain standard advertising tools which require Personal Data…

What they mean by that is adtech. What they’re also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there’s a good chance it would be blocking trackers as well. But in the Zoom app, you can’t tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There’s no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger’s Re-Engineering Humanity and Shoshana Zuboff’s The Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an “advertising experience,” personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the “sale” of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom’s purposes.

By “certain state laws” I assume they mean California’s new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a “Following the instructions of our users” section, addressing the CCPA, that’s as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn’t think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn’t they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a “sale”).

While advertising has been around since forever, tracking people’s eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without “advertising programs” that “historically operated in this manner.”

By the way, “this manner” is only called advertising. In fact it’s actually a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising’s Wheat and Chaff.

If you opt out of “sale” of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can’t find anything about opting out of “‘sale’ of your personal info.” (Later, I did get somewhere, and that’s in the next post, More on Zoom and Privacy.)

Here’s the thing: Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps whose makers have just as absolute an interest in privacy. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

What Zoom’s current privacy policy says is worse than “You don’t have any privacy here.” It says, “We expose your virtual necks to data vampires who can do what they will with them.”

Please fix it, Zoom.

As for Zoom’s competitors, there’s a great weakness to exploit here.

Next post on the topic: More on Zoom and Privacy.

