Identity


When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of Kim’s laws at KuppingerCole’s EIC conference in Berlin. We’ll only have a few minutes each, however, so I’d like to visit the subject in a bit more depth here.

To understand why these laws are so important and effective, it helps to know where Kim was coming from in the first place. It wasn’t just his work as the top architect for identity at Microsoft (where he landed when his company was acquired). Specifically, Kim was coming from two places. One was the physical world where we live and breathe, and identity is inherently personal. The other was the digital world, where what we call identity is how we are known to databases. Kim believed the former should guide the latter, that nothing like that had happened yet, and that we could and should work for it.

Kim’s The Laws of Identity paper alone is close to seven thousand words, and his IdentityBlog adds many thousands more. But his laws by themselves are short and sweet. Here they are, with additional commentary by me, in italics.

1. User Control and Consent

Technical identity systems must only reveal information identifying a user with the user’s consent.

Note that consent goes in the opposite direction from all the consent “agreements” websites and services want us to click on. This matches the way identity works in the natural world, where each of us not only chooses how we wish to be known, but usually with an understanding about how that information might be used.

2. Minimum Disclosure for a Constrained Use

The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.

There is a reason we don’t walk down the street wearing name badges: the world doesn’t need to know any more about us than we wish to disclose. Even when we pay with a credit card, the other party really doesn’t need (or want) to know the name on the card.
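
To make the principle concrete, here is a minimal sketch (hypothetical names, not any real wallet API or standard) of what minimum disclosure looks like in code: the verifier asks a question, gets a single derived fact, and the data behind it never leaves the holder.

```python
# A toy illustration (not a real wallet or standard) of minimum
# disclosure: the holder answers only the question asked, and the
# underlying attributes are never transmitted.
from datetime import date

CREDENTIAL = {  # what the holder privately knows; never sent in full
    "name": "Jane Q. Public",
    "birth_date": date(1990, 5, 17),
    "card_number": "4111-1111-1111-1111",
}

def prove_over_18(credential: dict, today: date) -> bool:
    """Disclose one yes/no fact, not the birth date behind it."""
    born = credential["birth_date"]
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= 18

# The verifier learns a single bit ("over 18") and nothing else.
print(prove_over_18(CREDENTIAL, date.today()))  # True
```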

3. Justifiable Parties

Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.

Had this law been applied back when Kim wrote it, we wouldn’t have the massive privacy losses that have since become the norm, with unwanted tracking pretty much everywhere online—and increasingly offline as well.

4. Directed Identity

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

All brands, meaning all names of public entities, are “omni-directional.” They are what Kim calls “beacons”: far from having something to hide, they want everyone to know who they are. Individuals, however, are private first, and public only to the degrees they wish to be in different circumstances. The identifiers individuals use should be “unidirectional”: pairwise, known only to the other party in a given relationship, consistent with the first three laws.
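
For a rough picture of the unidirectional case, here is a sketch (my own toy example, not any particular standard) of how an individual might derive a different identifier for each relationship from one private secret, so that no two parties hold a common correlation handle:

```python
# A toy sketch of pairwise ("unidirectional") identifiers: one private
# secret, a distinct identifier per relying party, and no shared handle
# that two parties could use to correlate the same person.
import hashlib
import hmac

MASTER_SECRET = b"held-only-by-the-individual"  # hypothetical secret

def pairwise_id(relying_party: str) -> str:
    """Derive a stable identifier unique to one relationship."""
    mac = hmac.new(MASTER_SECRET, relying_party.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

print(pairwise_id("bank.example"))   # one identifier here...
print(pairwise_id("store.example"))  # ...a different one here
```

The same person shows up under a different handle everywhere, which is exactly the “preventing unnecessary release of correlation handles” the law calls for.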

5. Pluralism of Operators and Technologies

A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.

This law expresses lessons learned from Microsoft’s failed experiment with Passport and a project called “Hailstorm.” The idea behind both was for Microsoft to become the primary or sole online identity provider for everyone. Kim’s work at Microsoft was all about making the company one among many working in the same broad industry.

6. Human Integration

The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.

As Kim put it in his 2019 (and final) talk at EIC, we need to turn the Web “right side up,” meaning putting the individual at the top rather than the bottom, with each of us in charge of our lives online, in distributed homes of our own. That’s what will integrate all the systems we deal with. (Joe Andrieu first explained this in 2007, here.)

7. Consistent Experience Across Contexts

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

So identity isn’t just about corporate systems getting along with each other. It’s about giving each of us scale across all the entities we deal with. Because it’s our experience that will make identity work right, finally, online. 

I expect to add more as the conference goes on; but I want to get this much out there to start with.

By the way, the photo above is from the first and only meeting of the Identity Gang, at Esther Dyson’s PC Forum in 2005. The next meeting of the Gang was the first Internet Identity Workshop, aka IIW, later that year. We’ve had 34 more since then, all with hundreds of participants, all with great influence on the development of code, standards, and businesses in digital identity and adjacent fields. And all guided by Kim’s Laws.

 

This is about credit where due, and unwanted by the credited. I speak here of Kim Cameron, a man whose modesty was immense because it had to be, given the size of his importance to us all.

See, to the degree that identity matters, and disparate systems getting along with each other matters—in both cases for the sakes of each and all—Kim’s original wisdom and guidance matters. And that mattering is only beginning to play out.

But Kim isn’t here to shake his head at what I just said, because (as I reported in my prior post) he passed last week.

While I expect Kim’s thoughts and works to prove out over time, the point I want to make here is that it is possible for an open and generous person in a giant company to use its power for good, and not play the heavy doing it. That’s the example Kim set in the two decades he was the top architect of Microsoft’s approach to digital identity and metasystems (that is, systems that make disparate systems work as if they were just one).

I first saw him practice these powers at the inaugural meeting of a group that called itself the Identity Gang. That name was given to the group by Steve Gillmor, who hosted a Gillmor Gang podcast (here’s the audio) on the topic of digital identity, on December 31, 2004: New Year’s Eve. To follow that up, seven of the nine people in that podcast, plus about as many more, gathered during a break at Esther Dyson’s PC Forum conference in Scottsdale, Arizona, on March 20, 2005. Here is an album of photos I shot of the Gang, sitting around an outside table. (The shot above is one of them.) There was a purpose to the meeting: deciding what we should do next, for all of the very different identity-related projects we were working on—and for all the other possible developments that also needed support.

Kim was the most powerful participant, owing both to his position at Microsoft and to his having issued, one by one, the Seven Laws of Identity over the preceding months. Like the Ten Commandments, Kim’s laws are rules which, even if followed poorly, civilize the world.

Kim always insisted that his Laws were not carved on stone tablets and that he was no burning bush, but those laws were, and remain, enormously important. And I doubt that would be so without Kim’s 200-proof Canadian modesty.

The next time the Identity Gang met was in October of that year, in Berkeley. By then the gang had grown to about a hundred people. Organized by Kaliya (IdentityWoman) Young, Phil Windley, and myself (but mostly the other two), the next meeting was branded Internet Identity Workshop (IIW), and it has been held every Fall and Spring since then at the Computer History Museum (and, on three pandemic occasions, online), with hundreds, from all over the world, participating every time.

IIW is an open space workshop, meaning that it consists entirely of breakouts on topics chosen and led by the participants. There are no keynotes, no panels, no vendor booths. Sponsor involvement is limited to food, coffee, free wi-fi, projectors, and other graces that carry no promotional value. (Thanks to Kim, it has long been a tradition for Microsoft to sponsor an evening at a local restaurant and bar.) Most importantly, the people attending from big companies and startups alike are those with the ability to engineer or guide technical developments that work for everyone and not just for those companies.

I’m biased, but I believe IIW is the most essential and productive conference of any kind, in the world. Conversations and developments of many kinds are moved forward at every one of them. Examples of developments that might not be the same today but for IIW include OAuth, OpenID, personal clouds, picos, SSI, VRM, KERI, and distributed ledgers.

I am also sure that progress made around digital identity would not be the same (or as advanced) without Kim Cameron’s strong and gentle guidance. Hats off to his spirit, his laws, and his example.

 

 

Got word yesterday that Kim Cameron had passed.

Hit me hard. Kim was a loving and loved friend. He was also a brilliant and influential thinker and technologist.

That’s Kim, above, speaking at the 2018 EIC conference in Germany. His topics were The Laws of Identity on the Blockchain and Informational Self-Determination in a Post Facebook/Cambridge Analytica Era (in the Ownership of Data track).

The laws were seven:

  1. User control and consent
  2. Minimum disclosure for a constrained use
  3. Justifiable parties
  4. Directed identity (meaning pairwise, known only to the person and the other party)
  5. Pluralism of operators
  6. Human integration
  7. Consistent experience across contexts

He wrote these in 2004, when he was still early in his tenure as Microsoft’s chief architect for identity (one of several similar titles he held at the company). Perhaps more than anyone at Microsoft—or at any big company—Kim pushed constantly toward openness, inclusivity, compatibility, cooperation, and the need for individual agency and scale. His laws, and other contributions to tech, are still only beginning to have full influence. Kim was way ahead of his time, and it’s a terrible shame that his own is up. He died of cancer on November 30.

But Kim was so much more—and other—than his work. He was a great musician, teacher (in French and English), thinker, epicure, traveler, father, husband, and friend. As a companion, he was always fun, as well as curious, passionate, caring, gracious. Pick a flattering adjective and it likely applies.

I am reminded of what a friend said of Amos Tversky, another genius of seemingly boundless vitality who died too soon: “Death is unrepresentative of him.”

That’s one reason it’s hard to think of Kim in the past tense, and why I resisted the urge to update Kim’s Wikipedia page earlier today. (Somebody has done that now, I see.)

We all get our closing parentheses. I’ve gone longer without closing mine than Kim did before closing his. That also makes me sad, not that I’m in a hurry. Being old means knowing you’re in the exit line, but okay with others cutting in. I just wish this time it wasn’t Kim.

Britt Blaser says life is like a loaf of bread. It’s one loaf no matter how many slices are in it. Some people get a few slices, others many. For the sake of us all, I wish Kim had more.

Here is an album of photos of Kim, going back to 2005 at Esther Dyson’s PC Forum, where we had the first gathering of what would become the Internet Identity Workshop, the 34th of which is coming up next Spring. As with many other things in the world, it wouldn’t be the same—or here at all—without Kim.


You don’t walk around wearing a name badge. Except maybe at a conference, or some other enclosed space where people need to share their names and affiliations with each other. But otherwise, no.

Why is that?

Because you don’t need a name badge for people who know you—or for people who don’t.

Here in civilization we typically reveal information about ourselves to others on a need-to-know basis: “I’m over 18.” “I’m a citizen of Canada.” “Here’s my Costco card.” “Hi, I’m Jane.” We may or may not present credentials in these encounters. And in most we don’t say our names. “Michael” being a common name, a guy called “Mike” may tell a barista his name is “Clive” if the guy in front of him just said his name is “Mike.” (My given name is David, a name so common that another David re-branded me Doc. Later I learned that his middle name was David and his first name was Paul. True story.)

This is how civilization works in the offline world.

Kim Cameron wrote up how this ought to work in Laws of Identity, first published in 2004. The Laws include personal control and consent, minimum disclosure for a constrained use, justifiable parties, and pluralism of operators. Again, we have those here in the offline world, where your body is reading this on a screen.

In the online world behind that screen, however, you have a monstrous mess. I won’t go into why. The results are what matter, and you already know those anyway.

Instead, I’d like to share what (at least for now) I think is the best approach to the challenge of presenting verifiable credentials in the digital world. It’s called KERI, and you can read about it here: https://keri.one/. If you’d like to contribute to the code work, that’s here: https://github.com/decentralized-identity/keri/.

I’m still just getting acquainted with it, in sessions at IIW. The main thing is that I’m sure it matters. So I’m sharing that sentiment, along with those links.
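
Since I can’t yet explain KERI better than its own docs do, here is just a toy sketch of the core idea as I understand it: identifiers that are self-certifying (derived from a key pair, so no registry has to vouch for them), with each key event committing in advance to a hash of the next key (“pre-rotation”). Everything below is illustrative, not the actual KERI code or API; see the links above for the real thing.

```python
# A toy sketch of KERI's core concepts: a self-certifying identifier
# plus pre-rotation. Illustrative only; not the keripy API.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

current_key = b"public-key-1"  # stand-ins for real key material
next_key = b"public-key-2"

inception_event = {
    "identifier": digest(current_key),        # bound to the key itself
    "signing_key": current_key.hex(),
    "next_key_commitment": digest(next_key),  # pre-rotation commitment
}

# A later rotation event proves continuity by revealing the key whose
# hash was committed to in the prior event.
rotation_valid = digest(next_key) == inception_event["next_key_commitment"]
print(rotation_valid)  # True
```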

 

Deepfakes are a big thing, and a bad one.

On the big side, a Google search for deepfake brings up more than 23 billion results.

On the bad side, today’s top result in a search on Twitter for the hashtag #deepfake says, “Technology is slowly killing reality. I am worried of tomorrow’s truths that will be made in shops. This #deepfake is bothering my soul deeply.” In another of the top results, a Vice report is headlined Deepfake Porn Is Evolving to Give People Total Control Over Women’s Bodies.

Clearly we need an antidote here.

I suggest deepreal.

If deepfake lies at the bottom of the uncanny valley (as more than 37 thousand sites online suggest), deepreal should sit just as high above that valley. As the graphic above (source) suggests, the deeply real (I added that) is fully human, and can elicit any level of emotional response, as real humans tend to do.

So what do we know that’s already deepreal?

Well, there’s reality itself, meaning the physical kind. A real person talking to you in the real world is undeniably human (at least until robots perfectly emulate human beings walking, talking and working among us, which will be icky and therefore deep in the uncanny valley). But what about the digital world? How can we be sure that a fully human being is also deeply real where the prevalent state is incorporeal—absent of physical existence?

The only way I know, so far, is with self-sovereign identity (SSI) technology, which gives us standardized ways of letting others know required facts about us (e.g. “I’m over 18,” “I’m a citizen of this country,” “I have my own car insurance,” “I live in this state,” “I’m a member of this club.”) Here’s some of what I’ve written and said about SSI:

  1. The Sovereign Identity Revolution (OneWorldIdentity, 21 February, 2017)
  2. New Hope for Digital Identity (Linux Journal, 9 November 2017)
  3. Some Thoughts About Self-Sovereign Identity (doc.blog, 16 March 2019)
  4. Some Perspective on Self-Sovereign Identity (KuppingerCole, 20 April 2019)
  5. Thoughts at #ID2020 (Doc Searls Weblog, 19 September 2019)

As I put it in #4 above, “The time has come to humanize identity in the networked world by making it as personal as it has been all along in the natural one.” I believe it is only by humanizing identity in the networked world that we can start to deal with deepfakes and other ways we are being dehumanized online. (And, if you’re skeptical about SSI, as are some I shouted out to here, what other means do you suggest? It’s still early, and the world of possibility is large.)

I also look forward to discussing this with real people here online—and in the physical world. Toward that, here are some identity tech gatherings coming up in 2020:

I also look forward to playing whack-a-mole with robots faking interest in this post; and which, because I’ll succeed, you’ll not see in the comment section below. (You probably also won’t see comments by humans, because humans prefer conversational venues not hogged by robots.)

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers is their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it), and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
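
As a concrete picture of that explicitness, here is a minimal sketch (hypothetical numbers, no real face library) of how one-to-many identification typically works: faces are reduced to numeric embeddings, and “recognition” is nearest-neighbor search among them.

```python
# A minimal sketch of one-to-many facial recognition: faces become
# explicit numeric embeddings, and matching is a similarity search.
# The vectors here are made-up stand-ins for real model outputs.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

gallery = {  # embeddings a deployed system computes from known photos
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.28]  # embedding of a new, unlabeled face

best = max(gallery, key=lambda name: cosine_similarity(gallery[name], probe))
print(best)  # "alice" -- the face is now a name
```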

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short-term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools, and each other.

Louis Brandeis and Samuel Warren visited the same problem more than 130 years ago, when they became alarmed at the privacy risks suggested by photography, audio recording, and reporting on both via technologies that were far more primitive than those we have today. As a warning to the future, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.” Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons; and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer. The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal—whether anonymously or not.”

But it’s hard to argue for those rights in our digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there are no limits to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s DeepFace sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of “self-exclusion,” whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

Facial recognition systems are also getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and not just because they’re acquiescing to the inevitable: they’re relying on it because it makes interaction with machines easier—and they trust it.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or if it changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if (though more like when) government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials so the leak could be fixed.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the systems with the largest variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late-model iPhones, which feature Face ID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But assurances such as Apple’s haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time, appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow’s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

But, since the facial recognition genie will never go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example, to unlock phones, sort through old photos, or to show to others the way they would a driving license or a passport (to say, in effect, “See? This is me.”) But, the data they gather for themselves should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes). This, as I understand it, is roughly what Apple does with iPhones.
  2. Facial recognition systems used to detect changing facial characteristics (such as emotions, age, or wellness) should be required to forget what they see right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted into the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.


 

Thoughts at #ID2020

I’m at the ID2020 (@ID2020) Summit in New York. The theme is “Rising to the Good ID Challenge.” My notes here are accumulating at the bottom, not the top. Okay, here goes…

At that last link it says, “The ID2020 Alliance is setting the course of digital ID through a multi-stakeholder partnership, ensuring digital ID is responsibly implemented and widely accessible.”

I find myself wondering if individuals are among the stakeholders.

There is also a manifesto. It says, among other things, “The ability to prove one’s identity is a fundamental and universal human right.” and “We live in a digital era. Individuals need a trusted, verifiable way to prove who they are, both in the physical world and online.”

That’s good. I’d also want more than one way, which may be the implication here.

The first speaker is from Caribou Digital. What follows is from her talk.

“1. It’s about the user, not just the use case.”

Hmm… I believe identity needs to be about independent human beings, not just “users” of systems.

“2. Intermediaries are still critical.”

The focus here is on family and institutional intermediaries, especially in the less developed world. Which is fine; but people should not need intermediaries in all cases. If you tell someone your name, or give them a business card, no intermediary is involved. That same convention should be available online.

“3. It’s not just about an ‘ID.’ It’s not even about an identity system. It’s about an identification ecosystem.”

This is fine, but identification is about what systems do, not about what individuals do or have; and by itself tends to exclude self-sovereign identity. Self-sovereign is how identity works in the physical world. Here we are nameless (literally, anonymous) to most others, and reveal information about who we are (business cards, student ID, drivers license) on an as-needed basis that obeys Kim Cameron’s Laws of Identity, notably “minimum disclosure for a constrained use,” “justifiable parties” and “personal control and consent.”

“4. A human-centered, inclusive, respectful vision for the next stage of identification in a digital age.”

We need human-driven. Human-centered is something organizations do. I visited the difference long ago here and here.

That’s over and the first panel is on now. Most of it is inaudible where I sit. The topic now is self-sovereign and decentralized. The audience seems to be pushing that. @MatthewDavie just said something sensible, I think, but don’t have a quote.


Here’s the ID2020 search on Twitter.

Background, at least on where I’m coming from: https://www.google.com/search?q=”doc+searls”+identity.

For the interested, @identitywoman, @windley and I (@dsearls) put on the Internet Identity Workshop, October 1-3 at the Computer History Museum in Silicon Valley. This one will be our 29th. (The first was in 2005 and there are two per year.) It’s an unconference: no keynotes or panels, just breakouts on topics attendees choose and lead. It’s the most consequential conference I know.

@MatthewDavie: “If we do this, and it doesn’t work with the current power players, we’re going to end up with a second-class system.” I suspect this makes sense, but I’m not sure what “this” is.

“Sovereign ownership of data” just came up from the audience. I think it’s possible for individuals to act in a self-sovereign way in sharing identity data, but not that this data is exclusively own-able. Some thoughts on that from Elizabeth Renieris (@HackyLawyER). Mine agree.

The second panel is on now. It’s mostly inaudible.

Now Dakota Gruener (@DakotaGruener), Executive Director of ID2020 is speaking. She’s telling a moving story about a homeless neighbor, Colin, who is denied services for lack of official ID(s).

New panel: Decentralization in National ID Programs.

Kim Cameron is on the panel now: “I spent thirty years building the world’s identity systems.” There were gasps. I yelled “It’s true.” He continued: “I’m now trying to rile up the world’s populations…”

John Jordan just made a point about how logins are a screwed up way to do things online and don’t map to what we know well and do in the everyday world. (I think that’s close. The sound system is dim at this end of the room.)

Kim just sourced my wife (who is here and now deeper than I am in this identity stuff), adding that “people know something is wrong” when they mention shoes somewhere and then see ads for shoes online. “We have technology. We have consciousness. We have will. So let’s do something.”

John: “What we want is to be in control of our relationships. Those are ours. Those are decentralized… People are decentralized.”

Kim: “What it means is recognizing that identity is who we are. It begins with us… only we know the aggregate of these attributes. In daily life we reveal some of those attributes, but never the aggregate. We need a system that begins with the individual and recognizes that they are in control, and choose what they reveal separately. We don’t want aggregates of ourselves to be everywhere. We need systems that recognize that, and are based on control by the individual, consent of the individual.”

“We do need assertions from people other than ourselves. The government can provide useful claims about a person. So can a university, or a bank. I can say somebody is a great guy. The identity fabric is all these claims.” Not quite verbatim, but close.

John: “Personal data should never be presented in a non-cryptographic way.” Something like that.

Kim on the GDPR: “We have it because the population demanded it… what will happen is this vision of people in control of their identity, and the Internet becoming reliable and trustworthy rather than probabilistic (meaning you’re being guessed at) and less than fully useful. Let’s give people their own wallets, let them run their own lives, make up their own minds… the world of legislation will grow, and it will do that around the will of people. … they need an identity system based on individuals rather than institutions overstepping their bounds… and we will see conflicts around this, with both good and bad government interventions.”

John: “I’d like to see legislation that forbids companies from holding personal information they don’t have to.” (Not verbatim, but maybe close. Again, hard to hear.)

Kim: “The current identity systems of the world are collapsing… you will have major institutions switching over to these decentralized identity systems, not from altruism, but from liability.”

Elizabeth heard and tweeted about one of the things that were inaudible to me at this end of the room: “Thank you @LudaBujoreanu for addressing the deep disconnect between the reality on the ground of those without ID and the privileged POV from which many of these #digitalid systems are built @ID2020’s #id2020summit cc @WomeninID”

Next panel: “Cities Driving Innovation in Good ID.”

Scott David from the audience just talked about “turning troubles into problems,” and the challenge of doing that for individuals in an identity context.

This reminds me of what Gideon Lichfield said about the difference between debates and conflicts, which I expanded on a bit here. Our point was that there are some issues that become locked in conflict with no real debate between sides. Scott’s distinction is toward a way out. Interesting. I’d like to know more.

Ken Banks tweets, “It’s an increasingly crowded space… #digitalidentity #ID2020”:

He adds, “Already lots of talk of putting people first. Hopefully the #digitalidentity community will deliver, and not fall into the trap of saying one thing and doing another, a common issue in the tech-for-development/#ICT4D sector. #ID2020 #GoodID”

Two tweets…

@Gavi: “Government representatives, tech experts & civil society will gather at #UNGA74 today to discuss the potential of #DigitalID. Biometric ID data could help us better monitor which children need to be vaccinated and when. #ID2020


 

Now I can’t find the other one. It argued that there is a 2-3% error rate for biometrics.

For lunch David Carroll (@ProfCarroll) of The New School (@thenewschool) is talking. Title: A data quest: holding tech to account. He starred in The Great Hack, on Netflix.

He’s sourcing Democracy Disrupted, by the UK ICO: “the sortable, addressable… algorithmic democracy.” “Counterveillance: advertisers get all the privacy. We get none.”

“Parable of the great hack: data rights must extend to digital creditorship. Identity depends on it.”

200 million Americans have no access to data held about them by, for instance, Acxiom.

“A simple bill of data rights. Creditorship, objection, control, knowledge.” (Here’s something that’s not it, but interesting enough for me to flag for later reading.)

Now a panel moderated by Raffi Krikorian. Also Cameron Birge of Microsoft and the Emerson Collective, Karen Ottoni, Demora Compari, Matthew Yarger and Christine Leong. (Again the sound is weak at this end of the room. Not picking up much here.)

Okay, that’s it. I’ll say more after I pull some pix together and complete these public notes…

Well, I have the pix, but the upload thing here in WordPress gives me an “http error” when I try to upload them. And now I’ve gotta drive to Boston, so that’ll have to wait.

Power of the People is a great grabber of a headline, at least for me. But it’s a pitch for a report that requires filling out a form.

You see a lot of these: invitations to put one’s digital ass on a mailing list, just to get a report that should have been public in the first place, but isn’t, so personal data can be harvested and sold or given away to God knows who.

And you do more than just “agree to join” a mailing list. You are now what marketers call a “qualified lead” for countless other parties you’re sure to be hearing from.

And how can you be sure? Read the privacy policy. This one (for Viantinc.com) begins,

If you choose to submit content to any public area of our websites or services, your content will be considered “public” and will be accessible by anyone, including us, and will not be subject to the privacy protections set forth in this Privacy Policy unless otherwise required by law. We encourage you to exercise caution when making decisions about what information you disclose in such public areas.

Is the form above one of those “public areas”? Of course. What wouldn’t be? And are they not discouraging caution by requiring you to fill out all the personal data fields marked with a *? You betcha. See here:

III. How we use and share your information

A. To deliver services

In order to facilitate our delivery of advertising, analytics and other services, we may use and/or share the information we collect, including interest-based segments and user interest profiles containing demographic information, location information, gender, age, interest information and information about your computer, device, or group of devices, including your IP address, with our affiliates and third parties, such as our service providers, data processors, business partners and other third parties.

B. With third party clients and partners

Our online advertising services are used by advertisers, websites, applications and other companies providing online or internet connected advertising services. We may share information, including the information described in section III.A. above, with our clients and partners to enable them to deliver or facilitate the delivery of online advertising. We strive to ensure that these parties act in accordance with applicable law and industry standards, but we do not have control over these third parties. When you opt-out of our services, we stop sharing your interest-based data with these third parties. Click here for more information on opting out.

No need to bother opting out, by the way, because there’s this loophole too:

D. To complete a merger or sale of assets

If we sell all or part of our business or make a sale or transfer of our assets or are otherwise involved in a merger or transfer of all or a material part of our business, or participate in any other similar business combination (including, without limitation, in connection with any bankruptcy or similar proceeding), we may transfer all or part of our data to the party or parties involved in the transaction as part of that transaction. You acknowledge that such transfers may occur, and that we and any purchaser of our business or assets may continue to collect, use and disclose your information in compliance with this Privacy Policy.

Okay, let’s be fair: this is boilerplate. Every marketing company—hell, every company period—puts jive like this in their privacy policies.

And Viant isn’t one of marketing’s bad guys. Or at least that’s not how they see themselves. They do mean well, kinda, if you forget they see no alternative to tracking people.

If you want to see what’s in that report without leaking your ID info to the world, the short cut is New survey by people-based marketer Viant promotes marketing to identified users in @Martech_Today.

What you’ll see there is a company trying to be good to users in a world where those users have no more power than marketers give them. And giving marketers that ability is what Viant does.

Curious… will Viant’s business persist after the GDPR trains heavy ordnance on it?

See, the GDPR forbids gathering personal data about an EU citizen without that person’s clear permission—no matter where that citizen goes in the digital world, meaning to any site or service anywhere. It arrives in full force, with fines of up to 4% of global revenues in the prior fiscal year, on 25 May of this year: about three months from now.

In case you’ve missed it, I’m not idle here.

To help give individuals fresh GDPR-fortified leverage, and to save the asses of companies like Viant (which probably has lawyers working overtime on GDPR compliance), I’m working with Customer Commons (on the board of which I serve) on terms individuals can proffer and companies can agree to, giving them a form of protection, and agreeable companies a path toward GDPR compliance. And companies should like to agree, because those terms will align everyone’s interests from the start.

I’m also working with Linux Journal (where I’ve recently been elevated to editor-in-chief) to make it one of the first publishers to agree to friendly terms its readers proffer. That’s why I posted Every User a Neo there. Other metaphors: turning everyone on the Net into an Archimedes, with levers to move the world, and turning the whole marketplace into a Marvel-like universe where all of us are enhanced.

If you want to help with any of that, talk to me.

 

In Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking, @JuliaAngwin and @ProPublica unpack what the subhead says well already: “Google is the latest tech company to drop the longstanding wall between anonymous online ad tracking and user’s names.”

So here’s a message from humanity to Google and all the other spy organizations in the surveillance economy: Tracking is no less an invasion of privacy in apps and browsers than it is in homes, cars, purses, pants and wallets.

That’s because our apps and browsers, like the devices on which we use them, are personal and private. Simple as that. (HT to @Apple for digging that fact.)

To help the online advertising business understand what ought to be obvious (but isn’t yet), let’s clear up some misconceptions:

  1. Tracking people without their clear and conscious permission is wrong. (Meaning The Castle Doctrine should apply online no less than it does in the physical world.)
  2. Assuming that using a browser or an app constitutes some kind of “deal” to allow tracking is wrong. (Meaning implied consent is not the real thing. See The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation, by Joseph Turow, Ph.D. and the Annenberg School for Communication at the University of Pennsylvania.)
  3. Claiming that advertising funds the “free” Internet is wrong. (The Net has been free for the duration. Had it been left up to the billing companies of the world, we never would have had it, and they never would have made their $trillions on it. More at New Clues.)

What’s right is civilization, which relies on manners. Advertisers, their agencies and publishers haven’t learned manners yet.

But they will.

At the very least, regulations will force companies harvesting personal data to obey those they harvest it from, with fines for not obeying. Toward that end, Europe’s General Data Protection Regulation already has compliance offices at large corporations shaking in their boots, for good reason: “a fine up to 20,000,000 EUR, or in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher (Article 83, Paragraph 5 & 6).” Those come into force in 2018. Stay tuned.
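
For a sense of what that formula means in practice, here it is in arithmetic form (a back-of-envelope sketch; the turnover figure is made up):

```python
# The GDPR fine ceiling quoted above (Article 83(5)): the greater of
# EUR 20 million or 4% of total worldwide annual turnover for the
# preceding financial year.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A hypothetical company with EUR 2 billion in turnover faces a
# ceiling of EUR 80 million.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```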

Companies harvesting personal data also shouldn’t be surprised to find themselves re-classified as fiduciaries, no less responsible than accountants, brokers and doctors for confidentiality on behalf of the people they collect data from. (Thank you, professors Balkin and Zittrain, for that legal and rhetorical hack. Brilliant, and well done. Or begun.)

The only way to fully fix publishing, advertising and surveillance-corrupted business in general is to equip individuals with terms they can assert in dealing with others online — and to do it at scale. Meaning we need terms that work the same way across all the companies we deal with. That’s why Customer Commons and Kantara are working on exactly those terms. For starters. And these will be our terms — not separate and different ones that live at each company we deal with. Those aren’t working now, and never will work, because they can’t. And they can’t because when you have to deal with as many different terms as there are parties supplying them, the problem becomes unmanageable, and you get screwed. That’s why —

There’s a new sheriff on the Net, and it’s the individual. Who isn’t a “user,” by the way. Or a “consumer.” With new terms of our own, we’re the first party. The companies we deal with are second parties. Meaning that they are the users, and the consumers, of our legal “content.” And they’ll like it too, because we actually want to do good business with good companies, and are glad to make deals that work for both parties. Those include expressions of true loyalty, rather than the coerced kind we get from every “loyalty” card we carry in our purses and wallets.

When we are the first parties, we also get scale. Imagine changing your terms, your contact info, or your last name, for every company you deal with — and doing that in one move. That can only happen when you are the first party.
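
To show what first-party terms could look like at that scale, here is a hypothetical sketch (not an actual Customer Commons or Kantara format) of one term, proffered identically to every company the person deals with:

```python
# A hypothetical machine-readable term proffered by the individual
# (the first party). Every second party sees the same record, which
# is what makes changing it once, for everyone, possible.
FIRST_PARTY_TERMS = {
    "term_id": "no-tracking-v1",
    "proffered_by": "first party (the individual)",
    "allows": ["advertising not based on tracking"],
    "forbids": ["third-party tracking", "resale of personal data"],
}

def practice_allowed(terms: dict, practice: str) -> bool:
    """A second party checks an intended practice against the terms."""
    return practice not in terms["forbids"]

print(practice_allowed(FIRST_PARTY_TERMS, "third-party tracking"))  # False
```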

So here’s a call to action.

If you want to help blow up the surveillance economy by helping develop much better ways for demand and supply to deal with each other, show up next week at the Computer History Museum for VRM Day and the Internet Identity Workshop, where there are plenty of people already on the case.

Then follow the work that comes out of both — as if your life depends on it. Because it does.

And so does the economy that will grow atop true privacy online and the freedoms it supports. Both are a helluva lot more leveraged than the ill-gotten data gains harvested by the Lumascape doing unwelcome surveillance.

Bonus links:

  1. All the great research Julia Angwin & ProPublica have been doing on a problem that data harvesting companies have been causing and can’t fix alone, even with government help. That’s why we’re doing the work I just described.
  2. What Facebook Knows About You Can Matter Offline, an OnPoint podcast featuring Julia, Cathy O’Neill and Ashkan Soltani.
  3. Everything by Shoshana Zuboff. From her home page: “’I’ve dedicated this part of my life to understanding and conceptualizing the transition to an information civilization. Will we be the masters of information, or will we be its slaves? There’s a lot of work to be done, if we are to build bridges to the kind of future that we can call “home.” My new book on this subject, Master or Slave? The Fight for the Soul of Our Information Civilization, will be published by Public Affairs in the U.S. and Eichborn in Germany in 2017.” Can’t wait.
  4. Don Marti’s good thinking and work with Aloodo and other fine hacks.

Here is a list of pieces I’ve written on what has come to be known as the “adblock wars.” That term applies most to #22 (written August of ’15) and those that follow. But the whole series works as a coherent whole that might make a good book if a publisher is interested.

  1. Why online advertising sucks, and is a bubble (31 October 2008)
  2. After the advertising bubble bursts (23 March 2009)
  3. The Data Bubble (31 July 2010)
  4. The Data Bubble II (30 October 2010)
  5. A sense of bewronging (2 April 2011)
  6. For personal data, use value beats sale value (13 February 2012)
  7. Stop making cows. Quit being calves. (21 February 2012)
  8. An olive branch to advertising (12 September 2012, on the ProjectVRM blog)
  9. What could/should advertising look like in 2020, and what do we need to do now for this future? (Wharton’s Future of Advertising project, 13 November 2012)
  10. Bringing manners to marketing (12 January 2013 in Customer Commons)
  11. Thoughts on Privacy (31 August 2013)
  12. What the ad biz needs is to evict direct marketing (6 October 2013)
  13. We are not fish and advertising is not food (23 January 2014 in Customer Commons)
  14. Earth to Mozilla: Come back home (12 April 2014)
  15. Why to avoid advertising as a business model (25 June 2014, re-running Open Letter to Meg Whitman, which ran on 15 October 2000 in my old blog)
  16. Time for digital emancipation (27 July 2014)
  17. Privacy is personal (2 July 2014 in Linux Journal)
  18. On marketing’s terminal addiction to data fracking and bad guesswork (10 January 2015)
  19. Thoughts on tracking based advertising (18 February 2015)
  20. Because freedom matters (26 March 2015)
  21. On taking personalized ads personally (27 March 2015)
  22. Captivity rules (29 March 2015)
  23. Separating advertising’s wheat and chaff (12 August 2015)
  24. Apple’s content blocking is chemo for the cancer of adtech (26 August 2015)
  25. Will content blocking push Apple into advertising’s wheat business? (29 August 2015)
  26. If marketing listened to markets, they’d hear what ad blocking is telling them (8 September 2015)
  27. Debugging adtext assumptions (18 September 2015)
  28. How adtech, not ad blocking, breaks the social contract (23 September 2015)
  29. A way to peace in the adblock war (21 September 2015, on the ProjectVRM blog)
  30. Beyond ad blocking — the biggest boycott in human history (28 September 2015)
  31. Dealing with Boundary Issues (1 October 2015 in Linux Journal)
  32. Helping publishers and advertisers move past the ad blockade (11 October 2015 on the ProjectVRM blog)
  33. How #adblocking matures from #NoAds to #SafeAds (22 October 2015)
  34. How Will the Big Data Craze Play Out (1 November 2015 in Linux Journal)
  35. Ad Blockers and the Next Chapter of the Internet (5 November 2015 in Harvard Business Review)
  36. At last, Cluetrain’s time has come (5 December 2015)
  37. The End of Internet Advertising as We’ve Known It (11 December 2015 in MIT Technology Review)
  38. More thoughts on privacy (13 December 2015)
  39. Why ad blocking is good (17 December 2015 talk at the U. of Michigan)
  40. What we can do with ad blocking’s leverage (1 January 2016 in Linux Journal)
  41. Rethinking John Wanamaker (18 January 2016)

There are others, but those will do for now.
