privacy


Passwords are hell.

Worse, to make your hundreds of passwords as safe as possible, they should be nearly impossible for others to discover—and for you to remember.

Unless you’re a wizard, this all but requires using a password manager.†

Think about how hard that job is. First, it’s impossible for developers of password managers to do everything right:

  • Most of their customers and users need to have logins and passwords for hundreds of sites and services on the Web and elsewhere in the networked world
  • Every one of those sites and services has its own gauntlet of methods for registering logins and passwords, and for remembering and changing them
  • Every one of those sites and services has its own unique user interfaces, each with its own peculiarities
  • All of those UIs change, sometimes often

Keeping up with that mess while also keeping personal data safe from both user error and determined bad actors is about as tall as an order can get. And then you have to do all that work for each of the millions of customers you’ll need if you’re going to make the kind of money required to keep abreast of those problems and provide the solutions required.

So here’s the thing: the best we can do with passwords is the best that password managers can do. That’s your horizon right there.
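To make the premise concrete—that a safe password should be nearly impossible to guess or remember—here is a minimal sketch (not from the post; the function name and length are my own choices) of what the generation side of a password manager does, using Python’s standard `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawn from a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(len(pw))  # 24
```

Generating such a string is the easy part; as the list above suggests, the hard part is filling it into hundreds of ever-changing login UIs.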

Unless we can get past logins and passwords somehow.

And I don’t think we can. Not in the client-server ecosystem that the Web has become, and that industry never stopped being, since long before the Internet came along. That’s the real hell. Passwords are just a symptom.

We need to work around it. That’s my work now. Stay tuned here, here, and here for more on that.


† We need to fix that Wikipedia page.

Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won’t find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has “SmartEngagement Technology” that “combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support.” Since Public Citizen is a nonprofit, I suppose it’s getting some “smart engagement” of some kind with these links. Privacy Badger tells me salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps that the link is being used to tell Public Citizen that we’ve clicked on the link… and likely also to help target us with messages of some sort. But we really don’t know.
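The structure of the two links themselves hints at how this works. Comparing their path segments (a quick sketch using the two URLs quoted above; my reading of the segments—one per destination, one shared per recipient or campaign—is an assumption, not anything Salsa Labs documents here):

```python
from urllib.parse import urlparse

link1 = "https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4"
link2 = "https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4"

def path_parts(url: str) -> list[str]:
    # Split the URL path into its segments, e.g. ["T72ca...", "25218..."]
    return urlparse(url).path.strip("/").split("/")

p1, p2 = path_parts(link1), path_parts(link2)
print(p1[0] == p2[0])  # False: the first segment differs per destination
print(p1[1] == p2[1])  # True: the second segment is shared across both links
```

Two different destinations, one shared token: exactly the shape you’d expect from per-recipient click tracking. But again, that’s inference from the outside, which is the point.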

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or forbade the company from leaking out personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least in the way the latter was used by Grindr (since the agency didn’t fine Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To start truly facing this problem, we need to start with a simple fact: if your privacy is in the hands of others alone, you don’t have any. Getting promises from others not to stare at your naked self isn’t the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.

Making this kind of agreement work is also being addressed by IEEE7012, a working group on machine-readable personal privacy terms.

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 

When some big outfit with a vested interest in violating your privacy says they are only trying to save small business, grab your wallet. Because the game they’re playing is misdirection away from what they really want.

The most recent case in point is Facebook, which ironically holds the world’s largest database on individual human interests while also failing to understand jack shit about personal boundaries.

This became clear when Facebook placed the ad above and others like it in major publications recently, and mostly made bad news for itself. We saw the same kind of thing in early 2014, when the IAB ran a similar campaign against Mozilla, using ads like this:

That one was to oppose Mozilla’s decision to turn on Do Not Track by default in its Firefox browser. Never mind that Do Not Track was never more than a polite request for websites to not be infected with a beacon, like those worn by marked animals, so one can be tracked away from the website. Had the advertising industry and its dependents in publishing simply listened to that signal, and respected it, we might never have had the GDPR or the CCPA, both of which are still failing at the same mission. (But, credit where due: the GDPR and the CCPA have at least forced websites to put up insincere and misleading opt-out popovers in front of every website whose lawyers are scared of violating the letter—but never the spirit—of those and other privacy laws.)

The IAB succeeded in its campaign against Mozilla and Do Not Track; but the victory was Pyrrhic, because users decided to install ad blockers instead, a practice that by 2015 had become the largest boycott in human history. Plus a raft of privacy laws, with more in the pipeline.

We also got Apple on our side. That’s good, but not good enough.

What we need are working tools of our own. Examples: Global Privacy Control (and all the browsers and add-ons mentioned there), Customer Commons’ #NoStalking term, the IEEE’s P7012 – Standard for Machine Readable Personal Privacy Terms, and other approaches to solving business problems from our side—rather than always from the corporate one.
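Of those tools, Global Privacy Control is the simplest to illustrate: a browser that supports it attaches a `Sec-GPC: 1` header to every request, signaling that the visitor does not consent to having their data sold or shared. Here is a minimal sketch (my own; the GPC proposal specifies the header, but this helper and how a site acts on it are illustrative) of how a server might read that signal:

```python
def honors_gpc(headers: dict[str, str]) -> bool:
    """Return True if the request carries the Global Privacy Control
    signal (the Sec-GPC header defined by the GPC proposal)."""
    return headers.get("Sec-GPC", "").strip() == "1"

print(honors_gpc({"Sec-GPC": "1"}))     # True
print(honors_gpc({"User-Agent": "x"}))  # False
```

The point of tools like this is exactly the one made above: the signal originates on our side, and the site’s only job is to respect it.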

In those moves, we’ll win.

Because if only Apple wins, we still lose.

Dammit, it’s still about what The Cluetrain Manifesto said in the first place, in this “one clue” published almost 21 years ago:

we are not seats or eyeballs or end users or consumers.
we are human beings — and our reach exceeds your grasp.
deal with it.

We have to make them deal. All of them. Not just Apple. We need code, protocols and standards, and not just regulations.

All the projects linked to above can use some help, plus others I’ll list here too if you write to me with them. (Comments here only work for Harvard email addresses, alas. I’m doc at searls dot com.)

If the GDPR did what it promised to do, we’d be celebrating Privmas today. Because, two years after the GDPR became enforceable, privacy would now be the norm rather than the exception in the online world.

That hasn’t happened, but it’s not just because the GDPR is poorly enforced. It’s because it’s too easy for every damn site on the Web—and every damn business with an Internet connection—to claim compliance with the letter of the GDPR while violating its spirit.

Want to see how easy? Try searching for GDPR+compliance+consent:

https://www.google.com/search?q=gdpr+compliance+consent

Nearly all of the ~21,000,000 results you’ll get are from sources pitching ways to continue tracking people online, mostly by obtaining “consent” to privacy violations that almost nobody would welcome in the offline world—exactly the kind of icky practice that the GDPR was meant to stop.

Imagine if there was a way for every establishment you entered to painlessly inject a load of tracking beacons into your bloodstream without you knowing it. And that these beacons followed you everywhere and reported your activities back to parties unknown. Would you be okay with that? And how would you like it if you couldn’t even enter without recording your agreement to accept being tracked—on a ledger kept only by the establishment, so you have no way to audit their compliance to the agreement, whatever it might be?

Well, that’s what you’re saying when you click “Accept” or “Got it” when a typical GDPR-complying website presents a cookie notice that says something like this:

That notice is from Vice, by the way. Here’s how the top story on Vice’s front page looks in Belgium (through a VPN), with Privacy Badger looking for trackers:

What’s typical here is that a publication, with no sense of irony, runs a story about privacy-violating harvesting of personal data… while doing the same. (By the way, those red sliders say I’m blocking those trackers. Were it not for Privacy Badger, I’d be allowing them.)

Yes, Google says you’re anonymized somehow in both DoubleClick and Google Analytics, but it’s you they are stalking. (Look up stalk as a verb. Top result: “to pursue or approach prey, quarry, etc., stealthily.” That’s what’s going on.)

The main problem with the GDPR is that it effectively requires that every visitor to every website opt out of being tracked, and to do so (thank you, insincere “compliance” systems) by going down stairs into the basements of website popovers to throw tracking choice toggles to “off” positions which are typically defaulted on when you get there.

Again, let’s be clear about this: There is no way for you to know exactly how you are being tracked or what is done with information gathered about you. That’s because the instrument for that—a tool on your side—isn’t available. It probably hasn’t even been invented. You also have no record of agreeing to anything. It’s not even clear that the site or its third parties have a record of that. All you’ve got is a cookie planted deep in your browser’s bowels, designed to announce itself to other parties everywhere you go on the Web. In sum, consenting to a cookie notice leaves nothing resembling an audit trail.

Oh, and the California Consumer Privacy Act (CCPA) makes matters worse by embedding opt-out into law there, while also requiring shit like this in the opt-out basement of every website facing a visitor suspected of coming from that state:

CCPA notice

So let’s go back to a simple privacy principle here: It is just as wrong to track a person like a marked animal in the online world as it is in the offline one.

The GDPR and the CCPA were made to thwart that kind of thing. But they have failed. Instead, they have made the experience of being tracked online a worse one.

Yes, that was not their intent. And yes, both have done some good. But if you are any less followed online today than you were when the GDPR became enforceable two years ago, it’s because you and the browser makers have worked to thwart at least some tracking. (Though in very different ways, so your experience of not being followed is not a consistent one. Or even perceptible in many cases.)

So tracking remains worse than rampant: it’s defaulted practice for both advertising and site analytics. And will remain so until we have code, laws and enforcement to stop it.

So, nothing to celebrate. Not this Privmas.


A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

covid sheep

Just learned of The Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates. This is in addition to the UK’s Coronavirus Bill 2020, which is (as I understand it) running the show there right now.

This new bill’s lead author is Prof Lilian Edwards, University of Newcastle. Other contributors: Dr Michael Veale, University College London; Dr Orla Lynskey, London School of Economics; Carly Kind, Ada Lovelace Institute; and Rachel Coldicutt, Careful Industries.

Here’s the abstract:

This short Bill attempts to provide safeguards in relation to the symptom tracking and contact tracing apps that are currently being rolled out in the UK; and anticipates minimum safeguards that will be needed if we move on to a roll out of “immunity certificates” in the near future.

Although no one wants to delay or deter the massive effort to fight coronavirus we are all involved in, there are three clear reasons to put a law like this in place sooner rather than later:

(a) Uptake of apps, crucial to their success, will be improved if people feel confident their data will not be misused, repurposed or shared to eg the private sector (think insurers, marketers or employers) without their knowledge or consent, and that data held will be accurate.

(b) Connectedly, data quality will be much higher if people use these apps with confidence and do not provide false information to them, or withhold information, for fear of misuse or discrimination eg impact on immigration status.

(c) The portion of the population which is already digitally excluded needs reassurance that apps will not further entrench their exclusion.

While data protection law provides useful safeguards here, it is not sufficient. Data protection law allows gathering and sharing of data on the basis not just of consent but a number of grounds including the very vague “legitimate interests”. Even health data, though it is deemed highly sensitive, can be gathered and shared on the basis of public health and “substantial public interest”. This is clearly met in the current emergency, but we need safeguards that ensure that sharing and especially repurposing of data is necessary, in pursuit of public legitimate interests, transparent and reviewable.

Similarly, while privacy-preserving technical architectures which have been proposed are also useful, they are not a practically and holistically sufficient or rhetorically powerful enough solution to reassure and empower the public. We need laws as well.

Download it here.

More context, from some tabs I have open:

All of this is, as David Weinberger puts it in the title of his second-to-latest book, Too Big to Know. So, in faith that the book’s subtitle, Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room, is correct, I’m sharing this with the room.

I welcome your thoughts.

zoom with eyes

[21 April 2020—Hundreds of people are arriving here from this tweet, which calls me a “Harvard researcher” and suggests that this post and the three that follow are about “the full list of the issues, exploits, oversights, and dubious choices Zoom has made.” So, two things. First, while I run a project at Harvard’s Berkman Klein Center, and run a blog that’s hosted by Harvard, I am not a Harvard employee, and would not call myself a “Harvard researcher.” Second, this post and the ones that follow—More on Zoom and Privacy, Helping Zoom, and Zoom’s new privacy policy—are focused almost entirely on Zoom’s privacy policy and how its need to explain the (frankly, typical) tracking-based marketing tech on its home page gives misleading suggestions about the privacy of Zoom’s whole service. If you’re interested in that, read on. (I suggest by starting at the end of the series, written after Zoom changed its privacy policy, and working back.) If you want research on other privacy issues around Zoom, look elsewhere. Thanks.]


As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published Zoom Calls Aren’t as Private as You May Think. Here’s What You Should Know: Videos and notes can be used by companies and hosts. Here are some tips to protect yourself. And there was already lots of bad PR. A few samples:

There’s too much to cover here, so I’ll narrow my inquiry down to the “Does Zoom sell Personal Data?” section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I’ll comment on the second one, starting here:

… Zoom does use certain standard advertising tools which require Personal Data…

What they mean by that is adtech. What they’re also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there’s a good chance it would be blocking trackers as well. But in the Zoom app, you can’t tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There’s no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger’s Re-Engineering Humanity and Shoshana Zuboff’s In the Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an “advertising experience,” personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the “sale” of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom’s purposes.

By “certain state laws” I assume they mean California’s new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a “Following the instructions of our users” section, addressing the CCPA, that’s as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn’t think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn’t they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a “sale”).

While advertising has been around since forever, tracking people’s eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without “advertising programs” that “historically operated in this manner.”

By the way, “this manner” is only called advertising. In fact it’s actually a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising’s Wheat and Chaff.

If you opt out of “sale” of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can’t find anything about opting out of “‘sale’ of your personal info.” (Later, I did get somewhere, and that’s in the next post, More on Zoom and Privacy.)

Here’s the thing: Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps with just as absolute an interest in privacy. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

What Zoom’s current privacy policy says is worse than “You don’t have any privacy here.” It says, “We expose your virtual necks to data vampires who can do what they will with it.”

Please fix it, Zoom.

As for Zoom’s competitors, there’s a great weakness to exploit here.

Next post on the topic: More on Zoom and Privacy.

 

 

 

Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a sure bet to assume that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see three ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)


Journalism’s biggest problem (as I’ve said before) is what it’s best at: telling stories. That’s what Thomas B. Edsall (of Columbia and The New York Times) does in Trump’s Digital Advantage Is Freaking Out Democratic Strategists, published in today’s New York Times. He tells a story. Or, in the favored parlance of our time, a narrative, about what he sees Republicans’ superior use of modern methods for persuading voters:

Experts in the explosively growing field of political digital technologies have developed an innovative terminology to describe what they do — a lexicon that is virtually incomprehensible to ordinary voters. This language provides an inkling of the extraordinarily arcane universe politics has entered:

geofencing, mass personalization, dark patterns, identity resolution technologies, dynamic prospecting, geotargeting strategies, location analytics, geo-behavioural segment, political data cloud, automatic content recognition, dynamic creative optimization.

Geofencing and other emerging digital technologies derive from microtargeting marketing initiatives that use consumer and other demographic data to identify the interests of specific voters or very small groups of like-minded individuals to influence their thoughts or actions.

In fact the “arcane universe” he’s talking about is the direct marketing playbook, which was born offline as the junk mail business. In that business, tracking individuals and bothering them personally is a fine and fully rationalized practice. And let’s face it: political campaigning has always wanted to get personal. It’s why we have mass mailings, mass callings, mass textings and the rest of it—all to personal addresses, numbers and faces.

Coincidence: I just got this:

There is nothing new here other than (at the moment) the Trump team doing it better than any Democrat. (Except maybe Bernie.) Obama’s team was better at it in ’08 and ’12. Trump’s was better at it in ’16 and is better again in ’20.*

However, debating which candidates do the best marketing misdirects our attention away from the destruction of personal privacy by constant tracking of our asses online—including tracking of asses by politicians. This, I submit, is a bigger and badder issue than which politicians do the best direct marketing. It may even be bigger than who gets elected to what in November.

As issues go, personal privacy is soul-deep. Who gets elected, and how, are not.

As I put it here,

Surveillance of people is now the norm for nearly every website and app that harvests personal data for use by machines. Privacy, as we’ve understood it in the physical world since the invention of the loincloth and the door latch, doesn’t yet exist. Instead, all we have are the “privacy policies” of corporate entities participating in the data extraction marketplace, plus terms and conditions they compel us to sign, either of which they can change on a whim. Most of the time our only choice is to deny ourselves the convenience of these companies’ services or live our lives offline.

Worse is that these are proffered on the Taylorist model, meaning mass-produced.

There is a natural temptation to want to fix this with policy. This is a mistake for two reasons:

  1. Policy-makers are themselves part of the problem. Hell, most of their election campaigns are built on direct marketing. And law enforcement (which carries out certain forms of policy) has always regarded personal privacy as a problem to overcome rather than a solution to anything. Example.
  2. Policy-makers often screw things up. Exhibit A: the EU’s GDPR, which has done more to clutter the Web with insincere and misleading cookie notices than it has to advance personal privacy tech online. (I’ve written about this a lot. Here’s one sample.)

We need tech of our own. Terms and policies of our own. In the physical world, we have privacy tech in the forms of clothing, shelter, doors, locks and window shades. We have policies in the form of manners, courtesies, and respect for privacy signals we send to each other. We lack all of that online. Until we invent it, the most we’ll do to achieve real privacy online is talk about it, and inveigh for politicians to solve it for us. Which they won’t.

If you’re interested in solving personal privacy at the personal level, take a look at Customer Commons. If you want to join our efforts there, talk to me.

_____________
*The Trump campaign also has the enormous benefit of an already-chosen Republican ticket. The Democrats have a mess of candidates and a split in the party between young and old, socialists and moderates, and no candidate as interesting as is Trump. (Also, I’m not Joyce.)

At this point, it’s no contest. Trump is the biggest character in the biggest story of our time. (I explain this in Where Journalism Fails.) And he’s on a glide path to winning in November, just as I said he was in 2016.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers are their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it) and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
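The mechanics behind that “This is (whomever)” step can be sketched simply: a system reduces each face to a numeric embedding (a vector of explicit facial measurements) and matches a new face against a gallery of named embeddings by similarity. Here is a minimal, hypothetical illustration—the toy four-dimensional vectors and the `identify` helper are my own inventions for clarity; real systems derive embeddings of hundreds of dimensions from neural networks:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two embedding vectors, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.8):
    """One-to-many identification: compare a probe embedding against a
    gallery of named embeddings; return the best-matching name whose
    similarity exceeds the threshold, or None for an unknown face."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical tiny embeddings (real ones have hundreds of dimensions)
gallery = {
    "alice": np.array([0.9, 0.1, 0.3, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.2, 0.7]),
}
probe = np.array([0.88, 0.12, 0.28, 0.22])  # a face very close to "alice"
print(identify(probe, gallery))
```

The threshold is the whole ballgame in practice: set it low and strangers get misidentified; set it high and known faces go unmatched. That trade-off is what the NIST accuracy figures quoted below are measuring.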

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools and each other.

Louis Brandeis and Samuel Warren visited the same problem more than 130 years ago, when they became alarmed at the privacy risks suggested by photography, audio recording and reporting on both via technologies that were far more primitive than those we have today. As a warning to the future, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.”10 Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons;11 and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer.12 The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago,13 directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal—whether anonymously or not.”

But it’s hard to argue for those rights in our digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there are no limits to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s DeepFace sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI), involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of “self-exclusion,” whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

Facial recognition systems are also getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%.

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and not just because they’re acquiescing to the inevitable: they’re relying on it because it makes interaction with machines easier—and they trust it.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or if it changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if (though more like when) government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported that BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users, had suffered a massive data leak. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials so they could fix the leak.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the largest variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late model iPhones, which feature FaceID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But assurances such as Apple’s haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow‘s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional, because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

But, since the facial recognition genie will never go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example to unlock phones, to sort through old photos, or to show to others the way they would a driving license or a passport (to say, in effect, “See? This is me.”) But, the data they gather for themselves should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes). This, as I understand it, is roughly what Apple does with iPhones.
  2. Facial recognition used to detect changing facial characteristics (such as emotions, age or wellness) should be required to forget what they see, right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted in to the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]
