privacy


If the GDPR did what it promised to do, we’d be celebrating Privmas today. Because, two years after the GDPR became enforceable, privacy would now be the norm rather than the exception in the online world.

That hasn’t happened, but it’s not just because the GDPR is poorly enforced. It’s because it’s too easy for every damn site on the Web—and every damn business with an Internet connection—to claim compliance with the letter of the GDPR while violating its spirit.

Want to see how easy? Try searching for GDPR+compliance+consent:

https://www.google.com/search?q=gdpr+compliance+consent

Nearly all of the ~21,000,000 results you’ll get are from sources pitching ways to continue tracking people online, mostly by obtaining “consent” to privacy violations that almost nobody would welcome in the offline world—exactly the kind of icky practice that the GDPR was meant to stop.

Imagine if there were a way for every establishment you entered to painlessly inject a load of tracking beacons into your bloodstream without you knowing it. And that these beacons followed you everywhere and reported your activities back to parties unknown. Would you be okay with that? And how would you like it if you couldn’t even enter without recording your agreement to accept being tracked—on a ledger kept only by the establishment, so you have no way to audit their compliance with the agreement, whatever it might be?

Well, that’s what you’re saying when you click “Accept” or “Got it” when a typical GDPR-complying website presents a cookie notice that says something like this:

That notice is from Vice, by the way. Here’s how the top story on Vice’s front page looks in Belgium (through a VPN), with Privacy Badger looking for trackers:

What’s typical here is that a publication, with no sense of irony, runs a story about privacy-violating harvesting of personal data… while doing the same. (By the way, those red sliders say I’m blocking those trackers. Were it not for Privacy Badger, I’d be allowing them.)

Yes, Google says you’re anonymized somehow in both DoubleClick and Google Analytics, but it’s you they are stalking. (Look up stalk as a verb. Top result: “to pursue or approach prey, quarry, etc., stealthily.” That’s what’s going on.)

The main problem with the GDPR is that it effectively requires every visitor to every website to opt out of being tracked, and to do so (thank you, insincere “compliance” systems) by going downstairs into the basements of website popovers to throw tracking-choice toggles to “off” positions that are typically defaulted to “on” when you get there.

Again, let’s be clear about this: There is no way for you to know exactly how you are being tracked or what is done with information gathered about you. That’s because the instrument for that—a tool on your side—isn’t available. It probably hasn’t even been invented. You also have no record of agreeing to anything. It’s not even clear that the site or its third parties have a record of that. All you’ve got is a cookie planted deep in your browser’s bowels, designed to announce itself to other parties everywhere you go on the Web. In sum, consenting to a cookie notice leaves nothing resembling an audit trail.
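To make the mechanics concrete, here is a minimal sketch, in Python with Flask, of what a third-party tracker amounts to. Everything in it is hypothetical—the endpoint, the cookie name, the logging—and it is not any real vendor’s code; the point is only that the record of who was seen where lives entirely on the tracker’s side, never on yours.

```python
# Minimal sketch of a third-party tracker (hypothetical; illustrative only).
# Publishers embed an invisible pixel that points at this endpoint.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # The embedding page reports its own URL, so every visit to every
    # participating site comes back to the tracker.
    page = request.args.get("page", "unknown")
    visitor_id = request.cookies.get("tid")
    if visitor_id is None:
        visitor_id = uuid.uuid4().hex  # first sighting: mint an ID for this browser
    print(f"visitor {visitor_id} seen on {page}")  # stands in for the tracker's own log

    resp = make_response(b"GIF89a", 200)  # placeholder for a 1x1 transparent GIF
    resp.headers["Content-Type"] = "image/gif"
    # A long-lived cookie scoped to the tracker's domain; the browser will
    # send it back from any site that embeds this pixel.
    resp.set_cookie("tid", visitor_id, max_age=60 * 60 * 24 * 365,
                    samesite="None", secure=True)
    return resp
```

Note where the only durable record ends up: in the tracker’s own log. That is the whole “audit trail,” and you will never see it.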

Oh, and the California Consumer Privacy Act (CCPA) makes matters worse by embedding opt-out into law there, while also requiring shit like this in the opt-out basement of every website facing a visitor suspected of coming from that state:

CCPA notice

So let’s go back to a simple privacy principle here: It is just as wrong to track a person like a marked animal in the online world as it is in the offline one.

The GDPR and the CCPA were made to thwart that kind of thing. But they have failed. Instead, they have made the experience of being tracked online a worse one.

Yes, that was not their intent. And yes, both have done some good. But if you are any less followed online today than you were when the GDPR became enforceable two years ago, it’s because you and the browser makers have worked to thwart at least some tracking. (Though in very different ways, so your experience of not being followed is not a consistent one. Or even perceptible in many cases.)

So tracking remains worse than rampant: it’s the default practice for both advertising and site analytics. And it will remain so until we have code, laws and enforcement to stop it.

So, nothing to celebrate. Not this Privmas.


A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

covid sheep

Just learned of The Coronavirus (Safeguards) Bill 2020: Proposed protections for digital interventions and in relation to immunity certificates. This is in addition to the UK’s Coronavirus Bill 2020, which is (as I understand it) running the show there right now.

This new bill’s lead author is Prof Lilian Edwards, University of Newcastle. Other contributors: Dr Michael Veale, University College London; Dr Orla Lynskey, London School of Economics; Carly Kind, Ada Lovelace Institute; and Rachel Coldicutt, Careful Industries.

Here’s the abstract:

This short Bill attempts to provide safeguards in relation to the symptom tracking and contact tracing apps that are currently being rolled out in the UK; and anticipates minimum safeguards that will be needed if we move on to a roll out of “immunity certificates” in the near future.

Although no one wants to delay or deter the massive effort to fight coronavirus we are all involved in, there are three clear reasons to put a law like this in place sooner rather than later:

(a) Uptake of apps, crucial to their success, will be improved if people feel confident their data will not be misused, repurposed or shared to eg the private sector (think insurers, marketers or employers) without their knowledge or consent, and that data held will be accurate.

(b) Connectedly, data quality will be much higher if people use these apps with confidence and do not provide false information to them, or withhold information, for fear of misuse or discrimination eg impact on immigration status.

(c) The portion of the population which is already digitally excluded needs reassurance that apps will not further entrench their exclusion.

While data protection law provides useful safeguards here, it is not sufficient. Data protection law allows gathering and sharing of data on the basis not just of consent but a number of grounds including the very vague “legitimate interests”. Even health data, though it is deemed highly sensitive, can be gathered and shared on the basis of public health and “substantial public interest”. This is clearly met in the current emergency, but we need safeguards that ensure that sharing and especially repurposing of data is necessary, in pursuit of public legitimate interests, transparent and reviewable.

Similarly, while privacy-preserving technical architectures which have been proposed are also useful, they are not a practically and holistically sufficient or rhetorically powerful enough solution to reassure and empower the public. We need laws as well.

Download it here.

More context, from some tabs I have open:

All of this is, as David Weinberger puts it in the title of his second-to-latest book, Too Big to Know. So, in faith that the book’s subtitle, Rethinking Knowledge Now that the Facts aren’t the Facts, Experts are Everywhere, and the Smartest Person in the Room is the Room, is correct, I’m sharing this with the room.

I welcome your thoughts.

zoom with eyes

[21 April 2020—Hundreds of people are arriving here from this tweet, which calls me a “Harvard researcher” and suggests that this post and the three that follow are about “the full list of the issues, exploits, oversights, and dubious choices Zoom has made.” So, two things. First, while I run a project at Harvard’s Berkman Klein Center, and run a blog that’s hosted by Harvard, I am not a Harvard employee, and would not call myself a “Harvard researcher.” Second, this post and the ones that follow—More on Zoom and Privacy, Helping Zoom, and Zoom’s new privacy policy—are focused almost entirely on Zoom’s privacy policy and how its need to explain the (frankly, typical) tracking-based marketing tech on its home page gives misleading suggestions about the privacy of Zoom’s whole service. If you’re interested in that, read on. (I suggest starting at the end of the series, written after Zoom changed its privacy policy, and working back.) If you want research on other privacy issues around Zoom, look elsewhere. Thanks.]


As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published Zoom Calls Aren’t as Private as You May Think. Here’s What You Should Know: Videos and notes can be used by companies and hosts. Here are some tips to protect yourself. And there was already lots of bad PR. A few samples:

There’s too much to cover here, so I’ll narrow my inquiry down to the “Does Zoom sell Personal Data?” section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I’ll comment on the second one, starting here:

… Zoom does use certain standard advertising tools which require Personal Data…

What they mean by that is adtech. What they’re also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn’t know that’s happening because Zoom doesn’t tell them. There’s no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there’s a good chance it would be blocking trackers as well. But in the Zoom app, you can’t tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There’s no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger’s Re-Engineering Humanity and Shoshana Zuboff’s The Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an “advertising experience,” personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the “sale” of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom’s purposes.

By “certain state laws” I assume they mean California’s new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a “Following the instructions of our users” section, addressing the CCPA, that’s as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn’t think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn’t they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a “sale”).

While advertising has been around since forever, tracking people’s eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without “advertising programs” that “historically operated in this manner.”

By the way, “this manner” is only called advertising. In fact it’s a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising’s Wheat and Chaff.

If you opt out of “sale” of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can’t find anything about opting out of “‘sale’ of your personal info.” (Later, I did get somewhere, and that’s in the next post, More on Zoom and Privacy.)

Here’s the thing: Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps whose interest in privacy should be just as absolute. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

What Zoom’s current privacy policy says is worse than “You don’t have any privacy here.” It says, “We expose your virtual necks to data vampires who can do what they will with it.”

Please fix it, Zoom.

As for Zoom’s competitors, there’s a great weakness to exploit here.

Next post on the topic: More on Zoom and Privacy.

 

 

 

Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a safe bet that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see five ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)

 

 


Journalism’s biggest problem (as I’ve said before) is what it’s best at: telling stories. That’s what Thomas B. Edsall (of Columbia and The New York Times) does in Trump’s Digital Advantage Is Freaking Out Democratic Strategists, published in today’s New York Times. He tells a story. Or, in the favored parlance of our time, a narrative, about what he sees as Republicans’ superior use of modern methods for persuading voters:

Experts in the explosively growing field of political digital technologies have developed an innovative terminology to describe what they do — a lexicon that is virtually incomprehensible to ordinary voters. This language provides an inkling of the extraordinarily arcane universe politics has entered:

geofencing, mass personalization, dark patterns, identity resolution technologies, dynamic prospecting, geotargeting strategies, location analytics, geo-behavioural segment, political data cloud, automatic content recognition, dynamic creative optimization.

Geofencing and other emerging digital technologies derive from microtargeting marketing initiatives that use consumer and other demographic data to identify the interests of specific voters or very small groups of like-minded individuals to influence their thoughts or actions.

In fact the “arcane universe” he’s talking about is the direct marketing playbook, which was born offline as the junk mail business. In that business, tracking individuals and bothering them personally is a fine and fully rationalized practice. And let’s face it: political campaigning has always wanted to get personal. It’s why we have mass mailings, mass callings, mass textings and the rest of it—all to personal addresses, numbers and faces.

Coincidence: I just got this:

There is nothing new here other than (at the moment) the Trump team doing it better than any Democrat. (Except maybe Bernie.) Obama’s team was better at it in ’08 and ’12. Trump’s was better at it in ’16 and is better again in ’20.*

However, debating which candidates do the best marketing misdirects our attention away from the destruction of personal privacy by constant tracking of our asses online—including tracking of asses by politicians. This, I submit, is a bigger and badder issue than which politicians do the best direct marketing. It may even be bigger than who gets elected to what in November.

As issues go, personal privacy is soul-deep. Who gets elected, and how, are not.

As I put it here,

Surveillance of people is now the norm for nearly every website and app that harvests personal data for use by machines. Privacy, as we’ve understood it in the physical world since the invention of the loincloth and the door latch, doesn’t yet exist. Instead, all we have are the “privacy policies” of corporate entities participating in the data extraction marketplace, plus terms and conditions they compel us to sign, either of which they can change on a whim. Most of the time our only choice is to deny ourselves the convenience of these companies’ services or live our lives offline.

Worse is that these are proffered on the Taylorist model, meaning mass-produced.

There is a natural temptation to want to fix this with policy. This is a mistake for two reasons:

  1. Policy-makers are themselves part of the problem. Hell, most of their election campaigns are built on direct marketing. And law enforcement (which carries out certain forms of policy) has always regarded personal privacy as a problem to overcome rather than a solution to anything. Example.
  2. Policy-makers often screw things up. Exhibit A: the EU’s GDPR, which has done more to clutter the Web with insincere and misleading cookie notices than it has to advance personal privacy tech online. (I’ve written about this a lot. Here’s one sample.)

We need tech of our own. Terms and policies of our own. In the physical world, we have privacy tech in the forms of clothing, shelter, doors, locks and window shades. We have policies in the form of manners, courtesies, and respect for privacy signals we send to each other. We lack all of that online. Until we invent it, the most we’ll do to achieve real privacy online is talk about it, and inveigh for politicians to solve it for us. Which they won’t.

If you’re interested in solving personal privacy at the personal level, take a look at Customer Commons. If you want to join our efforts there, talk to me.

_____________
*The Trump campaign also has the enormous benefit of an already-chosen Republican ticket. The Democrats have a mess of candidates and a split in the party between young and old, socialists and moderates, and no candidate as interesting as is Trump. (Also, I’m not Joyce.)

At this point, it’s no contest. Trump is the biggest character in the biggest story of our time. (I explain this in Where Journalism Fails.) And he’s on a glide path to winning in November, just as I said he was in 2016.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we  don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short-term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers is their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it) and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
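For a toy picture of how that name-to-face connection gets made, here is a minimal sketch using the open-source face_recognition Python library. The photos and people are made up, and this is not what Clearview, the FBI or any vendor mentioned in this post actually runs; it only illustrates the one-to-many matching step.

```python
# One-to-many facial identification, sketched with the open-source
# face_recognition library. File names are hypothetical; each gallery
# photo is assumed to contain exactly one face.
import face_recognition

# The "gallery": known faces with names attached.
known_names = ["Alice Example", "Bob Example"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file("alice.jpg"))[0],
    face_recognition.face_encodings(face_recognition.load_image_file("bob.jpg"))[0],
]

# The "probe": faces found in some new photo.
probe_image = face_recognition.load_image_file("street_photo.jpg")
for probe in face_recognition.face_encodings(probe_image):
    distances = face_recognition.face_distance(known_encodings, probe)
    best = distances.argmin()
    if distances[best] < 0.6:  # the library's customary matching threshold
        print(f"This is {known_names[best]} (distance {distances[best]:.2f})")
    else:
        print("No match in the gallery")
```

Scaling that loop from two known faces to a gallery of millions is an indexing and accuracy problem, not a conceptual one—which is what the NIST results quoted further down are measuring.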

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools and each other.

Louis Brandeis and Samuel Warren visited the same problem more than 130 years ago, when they became alarmed at the privacy risks suggested by photography, audio recording and reporting on both via technologies that were far more primitive than those we have today. As a warning to the future, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.” Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons; and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer. The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal—whether anonymously or not.”

But it’s hard to argue for those rights in our digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there are no limits to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s Deep Face sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of “self-exclusion,” whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

Facial recognition systems are also getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and not just because they’re acquiescing to the inevitable: they’re relying on it because it makes interaction with machines easier—and they trust it.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or if it changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if (though more like when) government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials, so they could fix the leak.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the largest variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late model iPhones, which feature FaceID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But assurances such as Apple’s haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow’s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional, because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

But, since the facial recognition genie will never go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example to unlock phones, to sort through old photos, or to show to others the way they would a driving license or a passport (to say, in effect, “See? This is me.”) But, the data they gather for themselves should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes). This, as I understand it, is roughly what Apple does with iPhones.
  2. Facial recognition systems used to detect changing facial characteristics (such as emotions, age or wellness) should be required to forget what they see, right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted in to the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]

 

black hole

Last night I watched The Great Hack a second time. It’s a fine documentary, maybe even a classic. (A classic in literature, I learned on this Radio Open Source podcast, is a work that “can only be re-read.” If that’s so, then perhaps a classic movie is one that can only be re-watched.*)

The movie’s message could hardly be more loud and clear: vast amounts of private information about each of us are gathered constantly in the digital world, and are being weaponized so our minds and lives can be hacked by others for commercial or political gain. Or both. The movie’s star, Professor David Carroll of the New School (@profcarroll), has been delivering that message for many years, as have many others, including myself.

But to what effect?

Sure, we have policy moves such as the GDPR, the main achievement of which (so far) has been to cause every website to put confusing and (in most cases) insincere cookie notices on their index pages, meant (again, in most cases) to coerce “consent” (which really isn’t) to exactly the unwanted tracking the regulation was meant to stop.

Those don’t count.

Ennui does. Apathy does.

On seeing The Great Hack that second time, I had exactly the same feeling my wife had on seeing it for her first: that the very act of explaining the problem also trivialized it. In other words, the movie worsened the very problem it set out to expose. And it isn’t alone at this, because so has everything everybody has said, written or reported about it. Or so it sometimes seems. At least to me.

Okay, so: if I’m right about that, why might it be?

One reason is that there’s no story. See, every story requires three elements: character (or characters), problem (or problems), and movement toward resolution. (Find a more complete explanation here.) In this case, the third element—movement toward resolution—is absent. Worse, there’s almost no hope. “The Great Hack” concludes with a depressing summary that tends to leave one feeling deeply screwed, especially since the only victories in the movie are over the late Cambridge Analytica; and those victories were mostly within policy circles we know will either do nothing or give us new laws that protect yesterday from last Thursday… and then last another hundred years.

The bigger reason is that we are now in a media environment summarized by Marshall McLuhan in his book The Medium is the Massage: “every new medium works us over completely.” Our new medium is the Internet, which is a non-place absent of distance and gravity. The only institutions holding up there are ones clearly anchored in the physical world. Health care and law enforcement, for example. Others dealing in non-material goods, such as information and ideas, aren’t doing as well.

Journalism, for example. Worse, on the Internet it’s easy for everyone to traffic in thoughts and opinions, as well as in solid information. So now the world of thoughts and ideas, which preponderates on social media such as Twitter, Facebook and Instagram, is a vast flood of everything from everybody. In the midst of all that, the news cycle, which used to be daily, now lasts about as long as a fart. Calling it all too much is a near-absolute understatement.

But David Carroll is right. Darkness is falling. I just wish all the light we keep trying to shed would do a better job of helping us all see that.

_________

*For those who buy that notion, I commend The Rewatchables, a great podcast from The Ringer.

Whither Linux Journal?

[16 August 2019…] Had a reassuring call yesterday with Ted Kim, CEO of London Trust Media. He told me the company plans to keep the site up as an archive at the LinuxJournal.com domain, and that if any problems develop around that, he’ll let us know. I told him we appreciate it very much—and that’s where it stands. I’m leaving up the post below for historical purposes.

On August 5th, Linux Journal‘s staff and contractors got word from the magazine’s parent company, London Trust Media, that everyone was laid off and the business was closing. Here’s our official notice to the world on that.

I’ve been involved with Linux Journal since before it started publishing in 1994, and have been on its masthead since 1996. I’ve also been its editor-in-chief since January of last year, when it was rescued by London Trust Media after nearly going out of business the month before. I say this to make clear how much I care about Linux Journal‘s significance in the world, and how grateful I am to London Trust Media for saving the magazine from oblivion.

London Trust Media can do that one more time, by helping preserve the Linux Journal website, with its 25 years of archives, so all its links remain intact, and nothing gets 404’d. Many friends, subscribers and long-time readers of Linux Journal have stepped up with offers to help with that. The decision to make that possible, however, is not in my hands, or in the hands of anyone who worked at the magazine. It’s up to London Trust Media. The LinuxJournal.com domain is theirs.

I have had no contact with London Trust Media in recent months. But I do know at least this much:

  1. London Trust Media has never interfered with Linux Journal‘s editorial freedom. On the contrary, it quietly encouraged our pioneering work on behalf of personal privacy online. Among other things, LTM published the first draft of a Privacy Manifesto now iterating at ProjectVRM, and recently published on Medium.
  2. London Trust Media has always been on the side of freedom and openness, which is a big reason why they rescued Linux Journal in the first place.
  3. Since Linux Journal is no longer a functioning business, its entire value is in its archives and their accessibility to the world. To be clear, these archives are not mere “content.” They are a vast store of damned good writing, true influence, and important history that search engines should be able to find where it has always been.
  4. While Linux Journal is no longer listed as one of London Trust Media’s brands, the website is still up, and its archives are still intact.

While I have no hope that Linux Journal can be rescued again as a subscriber-based digital magazine, I do have hope that the LinuxJournal.com domain, its (Drupal-based) website and its archives will survive. I base that hope on believing that London Trust Media’s heart has always been in the right place, and that the company is biased toward doing the right thing.

But the thing is up to them. It’s their choice whether or not to support the countless subscribers and friends who have stepped forward with offers to help keep the website and its archives intact and persistent on the Web. It won’t be hard to do that. And it’s the right thing to do.

The Spinner* (with the asterisk) is “a service that enables you to subconsciously influence a specific person, by controlling the content on the websites he or she usually visits.” Meaning you can hire The Spinner* to hack another person.

It works like this:

  1. You pay The Spinner* $29. For example, to urge a friend to stop smoking. (That’s the most positive and innocent example the company gives.)
  2. The Spinner* provides you with an ordinary link you then text to your friend. When that friend clicks on the link, they get a tracking cookie that works as a bulls-eye for The Spinner* to hit with 10 different articles written specifically to influence that friend. He or she “will be strategically bombarded with articles and media tailored to him or her.” Specifically, 180 of these things. Some go in social networks (notably Facebook) while most go into “content discovery platforms” such as Outbrain and Revcontent (best known for those clickbait collections you see appended to publishers’ websites). (A minimal sketch of this link-and-cookie mechanism follows below.)
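To make that mechanism concrete, here is a minimal sketch, in Python with Flask, of what such a link-plus-cookie service amounts to. The route, the cookie name and the campaign ID are all made up, and this is not The Spinner’s actual code; it only shows how clicking an “ordinary link” can tag a browser for later targeting.

```python
# Sketch of a link that tags the clicker with a cookie, then redirects
# them to an innocuous page (hypothetical; illustrative only).
from flask import Flask, redirect

app = Flask(__name__)

ARTICLE = "https://example.com/some-innocuous-article"  # where the clicker ends up

@app.route("/l/<campaign_id>")
def tag_and_redirect(campaign_id):
    # The "ordinary link" texted to the target points here.
    resp = redirect(ARTICLE)
    # The cookie is the bulls-eye: ad platforms that can read it will keep
    # serving this one browser the campaign's planted articles.
    resp.set_cookie("campaign", campaign_id,
                    max_age=60 * 60 * 24 * 90, samesite="None", secure=True)
    return resp
```

The target sees only an article; the tag rides along silently, which is exactly the misdirection the rest of this post is about.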

The Spinner* is also a hack on journalism, designed like a magic trick to misdirect moral outrage toward The Spinner’s obviously shitty business, and away from the shitty business called adtech, which not only makes The Spinner possible, but pays for most of online journalism as well.

The magician behind The Spinner* is “Elliot Shefler.” Look that name up and you’ll find hundreds of stories. Here are a top few, to which I’ve added some excerpts and notes:

  • For $29, This Man Will Help Manipulate Your Loved Ones With Targeted Facebook And Browser Links, by Parmy Olson @parmy in Forbes. Excerpt: He does say that much of his career has been in online ads and online gambling. At its essence, The Spinner’s software lets people conduct a targeted phishing attack, a common approach by spammers who want to secretly grab your financial details or passwords. Only in this case, the “attacker” is someone you know. Shefler says his algorithms were developed by an agency with links to the Israeli military.
  • For $29, This Company Swears It Will ‘Brainwash’ Someone on Facebook, by Kevin Poulson (@kpoulson) in The Daily Beast. A subhead adds, A shadowy startup claims it can target an individual Facebook user to bend him or her to a client’s will. Experts are… not entirely convinced.
  • Facebook is helping husbands ‘brainwash’ their wives with targeted ads, by Simon Chandler (@_simonchandler_) in The Daily Dot. Excerpt: Most critics assume that Facebook’s misadventures relate only to its posting of ads paid for by corporations and agencies, organizations that aim to puppeteer the “average” individual. It turns out, however, that the social network also now lets this same average individual place ads that aim to manipulate other such individuals, all thanks to the mediation of a relatively new and little-known company…
  • Brainwashing your wife to want sex? Here is adtech at its worst., by Samuel Scott (@samueljscott) in The Drum. Alas, the piece is behind a registration wall that I can’t climb without fucking myself (or so I fear, since the terms and privacy policy total 32 pages and 10,688 words I’m not going to read), so I can’t quote from it.
  • Creepy company hopes ‘Inception’ method will get your wife in the mood, by Saqib Shah (@eightiethmnt) in The Sun, via The New York Post. Excerpt: “It’s unethical in many ways,” admitted Shefler, adding “But it’s the business model of all media. If you’re against it, you’re against all media.” He picked out Nike as an example, explaining that if you visit the brand’s website it serves you a cookie, which then tailors the browsing experience to you every time you come back. A shopping website would also use cookies to remember the items you’re storing in a virtual basket before checkout. And a social network might use cookies to track the links you click and then use that information to show you more relevant or interesting links in the future…The Spinner started life in January of this year. Shefler claims the company is owned by a larger, London-based “agency” that provides it with “big data” and “AI” tools.
  • Adtech-for-sex biz tells blockchain consent app firm, ‘hold my beer’, by Rebecca Hill (@beckyhill) in The Register. The subhead says, Hey love, just click on this link… what do you mean, you’re seeing loads of creepy articles?
  • New Service Promises to Manipulate Your Wife Into Having Sex With You, by Fiona Tapp (@fionatappdotcom) in Rolling Stone. Excerpt: The Spinner team suggests that there isn’t any difference, in terms of morality, from a big company using these means to influence a consumer to book a flight or buy a pair of shoes and a husband doing the same to his wife. Exactly.
  • The Spinner And The Faustian Bargain Of Anonymized Data, by Lauren Arevalo-Downes (whose Twitter link by the piece goes to a 404) in A List Daily. On that site, the consent wall that creeps up from the bottom almost completely blanks out the actual piece, and I’m not going to “consent,” so no excerpting here either.
  • Can you brainwash one specific person with targeted Facebook ads? in TripleJ Hack, by ABC.net.au. Excerpt: Whether or not the Spinner has very many users, whether or not someone is going to stop drinking or propose marriage simply because they saw a sponsored post in their feed, it seems feasible that someone can try to target and brainwash a single person through Facebook.
  • More sex, no smoking – even a pet dog – service promises to make you a master of manipulation, by Chris Keall (@ChrisKeall) in The New Zealand Herald. Excerpt: On one level, The Spinner is a jape, rolled out as a colour story by various publications. But on another level it’s a lot more sinister: apparently yet another example of Facebook’s platform being abused to invade privacy and manipulate thought.
  • The Cambridge Analytica of Sex: Online service to manipulate your wife to have sex with you, by Ishani Ghose in meaww. Excerpt: The articles are all real but the headlines and the descriptions have been changed by the Spinner team. The team manipulating the headlines of these articles include a group of psychologists from an unnamed university. As the prepaid ads run, the partner will see headlines such as “3 Reasons Why YOU Should Initiate Sex With Your Husband” or “10 Marriage Tips Every Woman Needs to Hear”.

Is Spinner for real?

“Elliot Shefler” is human for sure. But his footprint online is all PR. He’s not on Facebook, Twitter or Instagram. The word “Press” (as in coverage) at the top of the Spinner website is just a link to a Google search for Elliot Shefler, not to a curated list such as a real PR person or agency might compile.

Fortunately, a real PR person, Rich Leigh (@RichLeighPR) did some serious digging (you know, like a real reporter) and presented his findings in his blog, PR Examples, in a post titled Frustrated husbands can ‘use micro-targeted native ads to influence their wives to initiate sex’ – surely a PR stunt? Please, a PR stunt? It ran last July 10th, the day after Rich saw this tweet by Maya Kosoff (@mekosoff):

—and this one:

The links to (and in) those tweets no longer work, but the YouTube video behind one of the links is still up. The Spinner itself produced the video, which is tricked to look like a real news story. (Rich does some nice detective work, figuring that out.) The image above is a montage I put together from screenshots of the video.

Here’s some more of what Rich found out:

  • Elliot – not his real name, incidentally, his real name is Halib, a Turkish name (he told me) – lives, or told me he lives, in Germany

  • When I asked him directly, he assured me that it was ‘real’, and when I asked him why it didn’t work when I tried to pay them money, he told me that it would be a technical issue that would take around half an hour to fix, likely as a result of ‘high traffic’. I said I’d try again later. I did – keep reading

  • It is emphatically ‘not’ PR or marketing for anything

  • He told me that he has 5-6,000 paying users – that’s $145,000 – $174,000, if he’s telling the truth

  • Halib said that Google Ads were so cheap as nobody was bidding on them for the terms he was going for, and they were picking up traffic for ‘one or two cents’

  • He banked on people hate-tweeting it. “I don’t mind what they feel, as long as they think something”, Halib said – which is scarily like something I’ve said in talks I’ve given about coming up with PR ideas that bang

  • The service ‘works’ by dropping a cookie, which enables it to track the person you’re trying to influence in order to serve specific content. I know we had that from the site, but it’s worth reiterating

Long post short, Rich says Halib and/or Elliot is real, and so is The Spinner.

But what matters isn’t whether or not The Spinner is real. It’s that The Spinner misdirects reporters’ attention away from what adtech is and does, which is spy on people for the purpose of aiming stuff at them. And that adtech isn’t just what funds all of Facebook and much of Google (both giant and obvious targets of journalistic scrutiny), but what funds nearly all of publishing online, including most reporters’ salaries.

So let’s look deeper, starting here: There is no moral difference between planting an unseen tracking beacon on a person’s digital self and doing the same on a person’s physical self.

The operational difference is that in the online world it’s a helluva lot easier to misdirect people into thinking they’re not being spied on. Also a helluva lot easier for spies and intermediaries (such as publishers) to plausibly deny that spying is what they’re doing. And to excuse it, saying for example “It’s what pays for the Free Internet!” Which is bullshit, because the Internet, including the commercial Web, got along fine for many years before adtech turned the whole thing into Mos Eisley. And it will get along fine without adtech after we kill it, or it dies of its own corruption.

Meanwhile the misdirection continues, and it’s away from a third rail that honest and brave journalists† need to grab: that adtech is also what feeds most of them.

______________

† I’m being honest here, but not brave. Because I’m safe. I don’t work for a publication that’s paid by adtech. At Linux Journal, we’re doing the opposite, by being the first publication ready to accept terms that our readers proffer, starting with Customer Commons’ P2B1 (beta), which says “Just show me ads not based on tracking me.”
