privacy



In July 2008, when I posted the photo above on this blog, some readers thought Santa Barbara Mission was on fire. It didn’t matter that I explained in that post how I got the shot, or that news reports made clear that the Gap Fire was miles away. The photo was a good one, but it also collapsed three dimensions into just two. Hence the confusion. If you didn’t know better, it looked like the building was on fire. The photo removed distance.

So does the Internet, at least when we are there. Let’s look at what there means.

Pre-digital media were limited by distance, and to a high degree defined by it. Radio and television signals degrade across distances from transmitters, and are limited as well by buildings, terrain, weather, and (on some frequency bands) ionospheric conditions. Even a good radio won’t get an absent signal. Nor will a good TV. Worse, if you live more than a few dozen miles from a TV station’s transmitter, you need a good antenna mounted on your roof, a chimney, or a tall pole. For signals coming from different locations, you need a rotator as well. Even on cable, there is still a distinction between local channels and cable-only ones. You pay more to get “bundles” of the latter, so there is a distance in cost between local and distant channel sources. If you get your TV by satellite, your there needs to be in the satellite’s coverage footprint.

But with the Internet, here and there are the same. Distance is gone, on purpose. Its design presumes that all physical and wireless connections are one, no matter who owns them or how they get paid to move chunks of Internet data. It is a world of ends meant to show nothing of its middles, which are countless paths the ends ignore. (Let’s also ignore, for the moment, that some countries and providers censor or filter the Internet, in some cases blocking access from the physical locations their systems detect. Those all essentially violate the Internet’s simple assumption of openness and connectivity for everybody and everything at every end.)

For people on the Internet, distance is collapsed to the height and width of a window. There is also no gravity because space implies three dimensions and your screen has only two, and the picture is always upright. When persons in Scotland and Australia talk, neither is upside down to the other. But they are present with each other and that’s what matters. (This may change in the metaverse, whatever that becomes, but will likely require headgear not everyone will wear. And it will still happen over the Internet.)

Digital life, almost all of which now happens on the Internet, is new to human experience, and our means of coping are limited. For example, by language. And I don’t mean different ones. I mean all of them, because they are made for making sense of a three-dimensional physical world, which the Internet is not.

Take prepositions. English, like most languages, has a few dozen prepositions, most of which describe placement in three-dimensional space. Over, around, under, through, beside, within, off, on, aboard… all presume three dimensions. That’s also where our bodies are, and it is through our bodies that we make sense of the world. We say good is light and bad is dark because we are diurnal hunters and gatherers, with eyes optimized for daylight. We say good is up and bad is down because we walk and run upright. We “grasp” or “hold on” to an idea because we have opposable thumbs on hands built to grab. We say birth is “arrival,” death is “departure” and careers are “paths,” because we experience life as travel.

But there are no prepositions yet that do justice to the Internet’s absence of distance. Of course, we say we are “on” the Internet like we say we are “on” the phone. And it works well enough, as does saying we are “on” fire or drugs. We just frame our understanding of the Internet in metaphorical terms that require a preposition, and “on” makes the most sense. But can we do better than that? Not sure.

Have you noticed that how we present ourselves in the digital world also differs from how we do the same in the physical one? On social media, for example, we perform roles, as if on a stage. We talk to an audience straight through a kind of fourth wall, like an actor, a lecturer, a comedian, musician, or politician. My point here is that the arts and methods of performing in the physical world are old, familiar, and reliant on physical surroundings. How we behave with others in our offices, our bedrooms, our kitchens, our clubs, and our cars is well-practiced and understood. In social media, the sense of setting is much different and more limited.

In the physical world, much of our knowledge (as the scientist and philosopher Michael Polanyi first taught) is tacit rather than explicit, yet digital technology is entirely explicit: ones and zeroes, packets and pixels. We do have tacit knowledge of the digital world, but the on/off present/absent two-dimensionality of that world is still new to us and lacks much of what makes life in the three-dimensional natural world so rich and subtle.

Marshall McLuhan says all media, all technologies, extend us. When we drive a car we wear it like a carapace or a suit of armor. We also speak of it in the first person possessive: my engine, my fenders, my wheels, much as we would say my fingers and my hair. There is distance here too, and it involves performance. A person who would never yell at another person standing in line at a theater might do exactly that at another car. This kind of distance is gone, or very different, in the digital world.

In a way we are naked in the digital world, and vulnerable. By that I mean we lack the rudimentary privacy technologies we call clothing and shelter, which protect our private parts and spaces from observation and intrusion while also signaling the forms of observation and contact that we permit or welcome. The absence of this kind of privacy tech is why it is so easy for websites and apps to fill our browsers with cookies and other ways to track us and report our activities back to dozens, hundreds or thousands of unknown parties. In this early stage of life on the Internet, what’s called privacy is just the “choices” sites and services give us, none of which are recorded where we can easily find, audit, or dispute them.

Can we get new forms of personal tech that truly extend and project our agency in the digital world? I think so, but it’s a question good enough that we don’t yet have an answer.

 


That’s the flyer for the first salon in our Beyond the Web Series at the Ostrom Workshop, here at Indiana University. You can attend in person or on Zoom. Register here for that. It’s at 2 PM Eastern on Monday, September 19.

And yes, all those links are on the Web. What’s not on the Web—yet—are all the things listed here. These are things the Internet can support, because, as a World of Ends (defined and maintained by TCP/IP), it is far deeper and broader than the Web alone, no matter what version number we append to the Web.

The salon will open with an interview of yours truly by Dr. Angie Raymond, Program Director of Data Management and Information Governance at the Ostrom Workshop, and Associate Professor of Business Law and Ethics in the Kelley School of Business (among too much else to list here), and quickly move forward into a discussion. Our purpose is to introduce and talk about these ideas:

  1. That free customers are more valuable—to themselves, to businesses, and to the marketplace—than captive ones.
  2. That the Internet’s original promises of personal empowerment, peer-to-peer communication, free and open markets, and other utopian ideals, can actually happen without surveillance, algorithmic nudging, and capture by giants, all of which have become norms in these early years of our digital world.
  3. That, since the admittedly utopian ambitions behind 1 and 2 require boiling oceans, it’s a good idea to try first proving them locally, in one community, guided by Ostrom’s principles for governing a commons. Which we are doing with a new project called the Byway.

This is our second Beyond the Web Salon series. The first featured David P. Reed, Ethan Zuckerman, Robin Chase, and Shoshana Zuboff. Upcoming in this series are:

Mark your calendars for those.

And, if you’d like homework to do before Monday, here you go:

See you there!

Twelve years ago, I posted The Data Bubble. It began,

The tide turned today. Mark it: 31 July 2010.

That’s when The Wall Street Journal published The Web’s Gold Mine: Your Secrets, subtitled A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series. It has ten links to other sections of today’s report. It’s pretty freaking amazing — and amazingly freaky when you dig down to the business assumptions behind it. Here is the rest of the list (sans one that goes to a link-proof Flash thing):

Here’s the gist:

The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.

It gets worse:

In between the Internet user and the advertiser, the Journal identified more than 100 middlemen—tracking companies, data brokers and advertising networks—competing to meet the growing demand for data on individual behavior and interests. The data on Ms. Hayes-Beaty’s film-watching habits, for instance, is being offered to advertisers on BlueKai Inc., one of the new data exchanges. “It is a sea change in the way the industry works,” says Omar Tawakol, CEO of BlueKai. “Advertisers want to buy access to people, not Web pages.”

The Journal examined the 50 most popular U.S. websites, which account for about 40% of the Web pages viewed by Americans. (The Journal also tested its own site, WSJ.com.) It then analyzed the tracking files and programs these sites downloaded onto a test computer. As a group, the top 50 sites placed 3,180 tracking files in total on the Journal’s test computer. Nearly a third of these were innocuous, deployed to remember the password to a favorite site or tally most-popular articles. But over two-thirds—2,224—were installed by 131 companies, many of which are in the business of tracking Web users to create rich databases of consumer profiles that can be sold.

Here’s what’s delusional about all this: There is no demand for tracking by individual customers. All the demand comes from advertisers — or from companies selling to advertisers. For now.

Here is the difference between an advertiser and an ordinary company just trying to sell stuff to customers: nothing. If a better way to sell stuff comes along — especially if customers like it better than this crap the Journal is reporting on — advertising is in trouble.

In fact, I had been calling the tracking-based advertising business (now branded adtech or ad-tech) a bubble for some time. For example, in Why online advertising sucks, and is a bubble (31 October 2008) and After the advertising bubble bursts (23 March 2009). But I didn’t expect my own small voice to have much effect. This was different. What They Know was written by a crack team of writers, researchers, and data visualizers, led by Julia Angwin, and was truly Pulitzer-grade stuff. It was so well done, so deep, and so sharp that I posted a follow-up report three months later, called The Data Bubble II. In that one, I wrote,

That same series is now nine stories long, not counting the introduction and a long list of related pieces. Here’s the current list:

  1. The Web’s Gold Mine: What They Know About You
  2. Microsoft Quashed Bid to Boost Web Privacy
  3. On the Web’s Cutting Edge: Anonymity in Name Only
  4. Stalking by Cell Phone
  5. Google Agonizes Over Privacy
  6. Kids Face Intensive Tracking on Web
  7. ‘Scrapers’ Dig Deep for Data on the Web
  8. Facebook in Privacy Breach
  9. A Web Pioneer Profiles Users By Name

Related pieces—

Two things I especially like about all this. First, Julia Angwin and her team are doing a terrific job of old-fashioned investigative journalism here. Kudos for that. Second, the whole series stands on the side of readers. The second-person voice (you, your) is directed to individual persons—the same persons who do not sit at the tables of decision-makers in this crazy new hyper-personalized advertising business.

To measure the delta of change in that business, start with John Battelle‘s Conversational Marketing series (post 1, post 2, post 3) from early 2007, and then his post Identity and the Independent Web, from last week. In the former he writes about how the need for companies to converse directly with customers and prospects is both inevitable and transformative. He even kindly links to The Cluetrain Manifesto (behind the phrase “brands are conversations”).

It was obvious to me that this fine work would blow the adtech bubble to a fine mist. It was just a matter of when.

Over the years since, I’ve retained hope, if not faith. Examples: The Data Bubble Redux (9 April 2016), and Is the advertising bubble finally starting to pop? (9 May 2016, and in Medium).

Alas, the answer to that last one was no. By 2016, Julia and her team had long since disbanded, and the original links to the What They Know series began to fail. I don’t have exact dates for which failed when, but I do know that the trusty master link, wsj.com/wtk, began to 404 at some point. Fortunately, Julia has kept much of it alive at https://juliaangwin.com/category/portfolio/wall-street-journal/what-they-know/. Still, by the late Teens it was clear that even the best journalism wasn’t going to be enough—especially since the major publications had become adtech junkies. Worse, covering their own publications’ involvement in surveillance capitalism had become an untouchable topic for journalists. (One notable exception is Farhad Manjoo of The New York Times, whose coverage of the paper’s own tracking was followed by a cutback in the practice.)

While I believe that most new laws for tech mostly protect yesterday from last Thursday, I share with many a hope for regulatory relief. I was especially jazzed about Europe’s GDPR, as you can read in GDPR will pop the adtech bubble (12 May 2018) and Our time has come (16 May 2018 in ProjectVRM).

But I was wrong then too. Because adtech isn’t a bubble. It’s a death star in service of an evil empire that destroys privacy through every function it funds in the digital world.

That’s why I expect the American Data Privacy and Protection Act (H.R. 8152), even if it passes through both houses of Congress at full strength, to do jack shit. Or worse, to make our experience of life in the digital world even more complicated, by requiring us to opt out, rather than opt in (yep, it’s in the law—as a right, no less), to tracking-based advertising everywhere. And we know how well that’s been going. (Read this whole post by Tom Fishburne, the Marketoonist, for a picture of how less than zero progress has been made, and how venial and absurd “consent” gauntlets on websites have become.) Run the search https://www.google.com/search?q=gdpr+compliance to see how large the GDPR “compliance” business has become. Nearly all of your 200+ million results will be for services selling obedience to the letter of the GDPR while death-star laser beams blow its spirit into spinning shards. Then expect that business to grow once the ADPPA is in place.

There is only one thing that will save us from adtech’s death star.

That’s tech of our own. Our tech. Personal tech.

We did it in the physical world with the personal privacy tech we call clothing, shelter, locks, doors, shades, and shutters. We’ve barely started to make the equivalents for the digital world. But the digital world is only a few decades old. It will be around for dozens, hundreds, or thousands of decades to come. And adtech is still just a teenager. We can, must, and will do better.

All we need is the tech. Big Tech won’t do it for us. Nor will Big Gov.

The economics will actually help, because there are many business problems in the digital world that can only be solved from the customers’ side, with better signaling from demand to supply than adtech-based guesswork can ever provide. Customer Commons lists fourteen of those solutions, here. Privacy is just one of them.

Use the Force, folks.

That Force is us.

Passwords are hell.

Worse, to make your hundreds of passwords as safe as possible, they should be nearly impossible for others to discover—and for you to remember.
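Here’s what “nearly impossible to remember” looks like in practice—a quick sketch (mine, not any password manager’s actual innards) using Python’s standard secrets module to mint one strong random password per site:

```python
# A sketch of strong password generation with the standard library.
# Each password is cryptographically random and unique per site—exactly
# the kind of thing no human should be expected to memorize.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "-_.!@#"

def new_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for site in ("bank.example", "mail.example", "forum.example"):
    print(site, new_password())
# e.g. bank.example  Qf3!x_9Lr@Zm0t.Ucs2D
```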

Unless you’re a wizard, this all but requires using a password manager.†

Think about how hard that job is. First, it’s impossible for developers of password managers to do everything right:

  • Most of their customers and users need to have logins and passwords for hundreds of sites and services on the Web and elsewhere in the networked world
  • Every one of those sites and services has its own gauntlet of methods for registering logins and passwords, and for remembering and changing them
  • Every one of those sites and services has its own unique user interfaces, each with its own peculiarities
  • All of those UIs change, sometimes often.

Keeping up with that mess while also keeping personal data safe from both user error and determined bad actors is about as tall as an order can get. And then you have to do all that work for each of the millions of customers you’ll need if you’re going to make the kind of money required to keep abreast of those problems and provide the needed solutions.

So here’s the thing: the best we can do with passwords is the best that password managers can do. That’s your horizon right there.

Unless we can get past logins and passwords somehow.

And I don’t think we can. Not in the client-server ecosystem that the Web has become, and that industry never stopped being, since long before the Internet came along. That’s the real hell. Passwords are just a symptom.

We need to work around it. That’s my work now. Stay tuned here, here, and here for more on that.


† We need to fix that Wikipedia page.

Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won’t find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has “SmartEngagement Technology” that “combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support.” Since Public Citizen is a nonprofit, I suppose it’s getting some “smart engagement” of some kind with these links. PrivacyBadger tells me Salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps the link is being used to tell Public Citizen that we’ve clicked on it… and likely also to help target us with messages of some sort. But we really don’t know.
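What we can do is watch where such a link goes. Here’s a rough sketch in Python that follows the first wrapped link and prints each hop in its redirect chain—assuming the wrapper works by ordinary HTTP redirects, which we can’t know in advance. What the wrapper records about the click stays invisible to us either way:

```python
# A sketch for the curious: print each HTTP redirect hop on the way to
# wherever a wrapped link finally lands. Assumes plain HTTP redirects;
# a JavaScript-based wrapper would defeat this.
import urllib.request

class RedirectLogger(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        print(f"{code} -> {newurl}")  # each hop in the chain
        return super().redirect_request(req, fp, code, msg, headers, newurl)

wrapped = ("https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac"
           "/25218e76-a235-4500-bc2b-d0f337c722d4")
opener = urllib.request.build_opener(RedirectLogger)
response = opener.open(wrapped)        # makes a live network request
print("landed at:", response.geturl())
```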

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or prohibited the company from leaking out personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least with the way the latter was used by Grindr (since they didn’t fine Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To truly face this problem, we need to start with a simple fact: If your privacy is in the hands of others alone, you don’t have any. Getting promises from others not to stare at your naked self isn’t the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.

Making this kind of agreement work is also being addressed by IEEE P7012, a working group on machine-readable personal privacy terms.
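To make that concrete, here’s a toy sketch in Python. To be clear, this is not the actual Customer Commons format, nor anything P7012 has specified; it only illustrates the shape of the exchange—a term proffered, an acceptance, and a record each side can keep and audit later:

```python
# Hypothetical shapes only: neither Customer Commons nor IEEE P7012
# prescribes this format. The point is the choreography: my agent proffers
# a term, the site's agent accepts it, and both keep a matching record.
import json
from datetime import datetime, timezone

proffer = {
    "term": "P2B1 (#NoStalking)",
    "summary": "Just give me ads not based on tracking me.",
    "proffered_by": "my-browser",                  # the person's agent
    "time": datetime.now(timezone.utc).isoformat(),
}

acceptance = {
    "term": proffer["term"],
    "accepted_by": "site.example",                 # hypothetical site
    "time": datetime.now(timezone.utc).isoformat(),
}

# Each party stores its own copy—finally, something like an audit trail.
print(json.dumps({"proffer": proffer, "acceptance": acceptance}, indent=2))
```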

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 

When some big outfit with a vested interest in violating your privacy says they are only trying to save small business, grab your wallet. Because the game they’re playing is misdirection away from what they really want.

The most recent case in point is Facebook, which ironically holds the world’s largest database on individual human interests while also failing to understand jack shit about personal boundaries.

This became clear when Facebook placed the ad above and others like it in major publications recently, and mostly made bad news for itself. We saw the same kind of thing in early 2014, when the IAB ran a similar campaign against Mozilla, using ads like this:

That one was to oppose Mozilla’s decision to turn on Do Not Track by default in its Firefox browser. Never mind that Do Not Track was never more than a polite request that websites not infect one’s browser with a beacon, like those worn by marked animals, so one can be tracked away from the website. Had the advertising industry and its dependents in publishing simply listened to that signal, and respected it, we might never have had the GDPR or the CCPA, both of which are still failing at the same mission. (But, credit where due: the GDPR and the CCPA have at least forced every website whose lawyers are scared of violating the letter—but never the spirit—of those and other privacy laws to put up insincere and misleading opt-out popovers.)

The IAB succeeded in its campaign against Mozilla and Do Not Track; but the victory was Pyrrhic, because users decided to install ad blockers instead, which by 2015 had become the largest boycott in human history. Plus a raft of privacy laws, with more in the pipeline.

We also got Apple on our side. That’s good, but not good enough.

What we need are working tools of our own. Examples: Global Privacy Control (and all the browsers and add-ons mentioned there), Customer Commons’ #NoStalking term, the IEEE’s P7012 – Standard for Machine Readable Personal Privacy Terms, and other approaches to solving business problems from our side—rather than always from the corporate one.
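Global Privacy Control shows how simple such a signal can be: a participating browser or add-on just sends a Sec-GPC: 1 header with every request. Here’s a minimal sketch, using Python’s standard wsgiref, of what a site that respects the signal might do with it. The responses are hypothetical; the header is the one the GPC spec defines:

```python
# A sketch, not a product: a tiny WSGI app that checks for the GPC signal.
# Browsers and add-ons that support Global Privacy Control send "Sec-GPC: 1";
# WSGI exposes that request header as environ["HTTP_SEC_GPC"].
from wsgiref.simple_server import make_server

def app(environ, start_response):
    gpc = environ.get("HTTP_SEC_GPC") == "1"
    start_response("200 OK", [("Content-Type", "text/plain")])
    if gpc:
        # Hypothetical respectful behavior: skip the trackers entirely.
        return [b"GPC signal received: no tracking on this response.\n"]
    return [b"No GPC signal: whatever the site does by default.\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()  # http://localhost:8000/
```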

With those moves, we’ll win.

Because if only Apple wins, we still lose.

Dammit, it’s still about what The Cluetrain Manifesto said in the first place, in this “one clue” published almost 21 years ago:

we are not seats or eyeballs or end users or consumers.
we are human beings — and our reach exceeds your grasp.
deal with it.

We have to make them deal. All of them. Not just Apple. We need code, protocols and standards, and not just regulations.

All the projects linked to above can use some help, plus others I’ll list here too if you write to me with them. (Comments here only work for Harvard email addresses, alas. I’m doc at searls dot com.)

If the GDPR did what it promised to do, we’d be celebrating Privmas today. Because, two years after the GDPR became enforceable, privacy would now be the norm rather than the exception in the online world.

That hasn’t happened, but it’s not just because the GDPR is poorly enforced. It’s because it’s too easy for every damn site on the Web—and every damn business with an Internet connection—to claim compliance with the letter of the GDPR while violating its spirit.

Want to see how easy? Try searching for GDPR+compliance+consent:

https://www.google.com/search?q=gdpr+compliance+consent

Nearly all of the ~21,000,000 results you’ll get are from sources pitching ways to continue tracking people online, mostly by obtaining “consent” to privacy violations that almost nobody would welcome in the offline world—exactly the kind of icky practice that the GDPR was meant to stop.

Imagine if there were a way for every establishment you entered to painlessly inject a load of tracking beacons into your bloodstream without you knowing it. And that these beacons followed you everywhere and reported your activities back to parties unknown. Would you be okay with that? And how would you like it if you couldn’t even enter without recording your agreement to accept being tracked—on a ledger kept only by the establishment, so you have no way to audit their compliance with the agreement, whatever it might be?

Well, that’s what you’re saying when you click “Accept” or “Got it” when a typical GDPR-complying website presents a cookie notice that says something like this:

That notice is from Vice, by the way. Here’s how the top story on Vice’s front page looks in Belgium (through a VPN), with Privacy Badger looking for trackers:

What’s typical here is that a publication, with no sense of irony, runs a story about privacy-violating harvesting of personal data… while doing the same. (By the way, those red sliders say I’m blocking those trackers. Were it not for Privacy Badger, I’d be allowing them.)

Yes, Google says you’re anonymized somehow in both DoubleClick and Google Analytics, but it’s you they are stalking. (Look up stalk as a verb. Top result: “to pursue or approach prey, quarry, etc., stealthily.” That’s what’s going on.)

The main problem with the GDPR is that it effectively requires every visitor to every website to opt out of being tracked, and to do so (thank you, insincere “compliance” systems) by going downstairs into the basements of website popovers to throw tracking-choice toggles to “off” positions that are typically defaulted to “on” when you get there.

Again, let’s be clear about this: There is no way for you to know exactly how you are being tracked or what is done with information gathered about you. That’s because the instrument for that—a tool on your side—isn’t available. It probably hasn’t even been invented. You also have no record of agreeing to anything. It’s not even clear that the site or its third parties have a record of that. All you’ve got is a cookie planted deep in your browser’s bowels, designed to announce itself to other parties everywhere you go on the Web. In sum, consenting to a cookie notice leaves nothing resembling an audit trail.
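For a concrete sense of what that cookie is, here’s a sketch using Python’s standard http.cookies to pull apart a hypothetical—but typical—third-party Set-Cookie header: a unique ID, an ad server’s domain, and a two-year lifetime.

```python
# The Set-Cookie header below is invented, but typical of the genre:
# a long-lived unique ID scoped to an ad server's domain, replayed to
# that server from every page that embeds its content.
from http.cookies import SimpleCookie

header = "uid=a1b2c3d4e5f6; Domain=.ads.example.net; Path=/; Max-Age=63072000"
jar = SimpleCookie()
jar.load(header)

uid = jar["uid"]
print(uid.value)                                   # the ID that follows you
print(uid["domain"])                               # .ads.example.net
print(int(uid["max-age"]) / 86400 / 365, "years")  # 2.0 years of lifetime
```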

Oh, and the California Consumer Privacy Act (CCPA) makes matters worse by embedding opt-out into law there, while also requiring shit like this in the opt-out basement of every website facing a visitor suspected of coming from that state:

CCPA notice

So let’s go back to a simple privacy principle here: It is just as wrong to track a person like a marked animal in the online world as it is in the offline one.

The GDPR and the CCPA were made to thwart that kind of thing. But they have failed. Instead, they have made the experience of being tracked online a worse one.

Yes, that was not their intent. And yes, both have done some good. But if you are any less followed online today than you were when the GDPR became enforceable two years ago, it’s because you and the browser makers have worked to thwart at least some tracking. (Though in very different ways, so your experience of not being followed is not a consistent one. Or even perceptible in many cases.)

So tracking remains worse than rampant: it’s the default practice for both advertising and site analytics. And will remain so until we have code, laws and enforcement to stop it.

So, nothing to celebrate. Not this Privmas.


A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

covid sheep

Just learned of The Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates. This is in addition to the UK’s Coronavirus Bill 2020, which is (as I understand it) running the show there right now.

This new bill’s lead author is Prof Lilian Edwards, University of Newcastle. Other contributors: Dr Michael Veale, University College London; Dr Orla Lynskey, London School of Economics; Carly Kind, Ada Lovelace Institute; and Rachel Coldicutt, Careful Industries.

Here’s the abstract:

This short Bill attempts to provide safeguards in relation to the symptom tracking and contact tracing apps that are currently being rolled out in the UK; and anticipates minimum safeguards that will be needed if we move on to a roll out of “immunity certificates” in the near future.

Although no one wants to delay or deter the massive effort to fight coronavirus we are all involved in, there are three clear reasons to put a law like this in place sooner rather than later:

(a) Uptake of apps, crucial to their success, will be improved if people feel confident their data will not be misused, repurposed or shared to eg the private sector (think insurers, marketers or employers) without their knowledge or consent, and that data held will be accurate.

(b) Connectedly, data quality will be much higher if people use these apps with confidence and do not provide false information to them, or withhold information, for fear of misuse or discrimination eg impact on immigration status.

(c) The portion of the population which is already digitally excluded needs reassurance that apps will not further entrench their exclusion.

While data protection law provides useful safeguards here, it is not sufficient. Data protection law allows gathering and sharing of data on the basis not just of consent but a number of grounds including the very vague “legitimate interests”. Even health data, though it is deemed highly sensitive, can be gathered and shared on the basis of public health and “substantial public interest”. This is clearly met in the current emergency, but we need safeguards that ensure that sharing and especially repurposing of data is necessary, in pursuit of public legitimate interests, transparent and reviewable.

Similarly, while privacy-preserving technical architectures which have been proposed are also useful, they are not a practically and holistically sufficient or rhetorically powerful enough solution to reassure and empower the public. We need laws as well.

Download it here.

More context, from some tabs I have open:

All of this is, as David Weinberger puts it in the title of his second-to-latest book, Too Big to Know. So, in faith that the book’s subtitle, Rethinking Knowledge Now that the Facts aren’t the Facts, Experts are Everywhere, and the Smartest Person in the Room is the Room, is correct, I’m sharing this with the room.

I welcome your thoughts.

zoom with eyes

[21 April 2020—Hundreds of people are arriving here from this tweet, which calls me a “Harvard researcher” and suggests that this post and the three that follow are about “the full list of the issues, exploits, oversights, and dubious choices Zoom has made.” So, two things. First, while I run a project at Harvard’s Berkman Klein Center, and run a blog that’s hosted by Harvard, I am not a Harvard employee, and would not call myself a “Harvard researcher.” Second, this post and the ones that follow—More on Zoom and Privacy, Helping Zoom, and Zoom’s new privacy policy—are focused almost entirely on Zoom’s privacy policy and how its need to explain the (frankly, typical) tracking-based marketing tech on its home page gives misleading suggestions about the privacy of Zoom’s whole service. If you’re interested in that, read on. (I suggest by starting at the end of the series, written after Zoom changed its privacy policy, and working back.) If you want research on other privacy issues around Zoom, look elsewhere. Thanks.]


As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published Zoom Calls Aren’t as Private as You May Think. Here’s What You Should Know: Videos and notes can be used by companies and hosts. Here are some tips to protect yourself. And there was already lots of bad PR. A few samples:

There’s too much to cover here, so I’ll narrow my inquiry down to the “Does Zoom sell Personal Data?” section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I’ll comment on the second one, starting here:

… Zoom does use certain standard advertising tools which require Personal Data…

What they mean by that is adtech. What they’re also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there’s a good chance it would be blocking trackers as well. But in the Zoom app, you can’t tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There’s no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger’s Re-Engineering Humanity and Shoshana Zuboff’s The Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an “advertising experience,” personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the “sale” of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom’s purposes.

By “certain state laws” I assume they mean California’s new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a “Following the instructions of our users” section, addressing the CCPA, that’s as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn’t think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn’t they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a “sale”).

While advertising has been around since forever, tracking people’s eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without “advertising programs” that “historically operated in this manner.”

By the way, “this manner” is only called advertising. In fact it’s actually a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising’s Wheat and Chaff.

If you opt out of “sale” of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can’t find anything about opting out of “‘sale’ of your personal info.” (Later, I did get somewhere, and that’s in the next post, More on Zoom and Privacy.)

Here’s the thing: Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps whose makers have just as absolute an interest in privacy. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

What Zoom’s current privacy policy says is worse than “You don’t have any privacy here.” It says, “We expose your virtual necks to data vampires who can do what they will with it.”

Please fix it, Zoom.

As for Zoom’s competitors, there’s a great weakness to exploit here.

Next post on the topic: More on Zoom and Privacy.

 

 

 
