
What SSI needs


Self-sovereign identity (SSI) is hot stuff.  Look it up and see how many results you get. As of today, I get 627,000 on Google.  By that measure alone, SSI is the biggest thing in the VRM development world. Nothing I know has more promise to give individuals leverage for dealing with the organizations of the world, especially in business.

Here’s how SSI works: rather than presenting your “ID” when some other party wants to know something about you, you present a verifiable credential that tells them no more than they need to know.

In other words, if someone wants to know if you are over 18, a member of Costco, a college graduate, or licensed to drive a car, you present a verifiable credential that tells the other party no more than that, but in a way they can trust. The interaction also leaves a trail, so you can both look back and remember what credentials you presented, and how the credential was accepted.
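
As a concrete illustration, here is a minimal Python sketch of selective disclosure: an issuer signs each claim separately (an HMAC stands in for a real digital signature here), so the holder can present just one claim—say, "over 18"—and the verifier can check it without seeing anything else. The names and signing scheme are assumptions for illustration, not the actual W3C Verifiable Credentials format.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; a real system would use public-key signatures.
ISSUER_KEY = b"demo-issuer-key"

def _sign(claim, value):
    msg = json.dumps({claim: value}, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def issue_credential(claims):
    """Issuer signs each claim independently, so each one
    can be disclosed and verified on its own."""
    return {"claims": dict(claims),
            "proofs": {k: _sign(k, v) for k, v in claims.items()}}

def present(credential, requested):
    """Holder builds a presentation revealing only the requested claims."""
    return {"claims": {k: credential["claims"][k] for k in requested},
            "proofs": {k: credential["proofs"][k] for k in requested}}

def verify(presentation):
    """Verifier checks each disclosed claim against its proof."""
    return all(
        hmac.compare_digest(_sign(k, v), presentation["proofs"][k])
        for k, v in presentation["claims"].items()
    )

credential = issue_credential({"name": "Alice", "over_18": True,
                               "costco_member": True})
bar_check = present(credential, ["over_18"])
```

The verifier learns only that the holder is over 18—not her name or memberships—yet can still trust the issuer's word for it.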

So, how do you do this? With a tool.

The easiest tool to imagine is a wallet, or a wallet app (here’s one) with some kind of dashboard. That’s what I try to illustrate with the image above: a way to present credentials and to keep track of how those play in the relevant parts of your life.

What matters is that you need to be in charge of your verifiable credentials, how they’re presented,  and how the history of interactions is recorded and auditable. You’re not just a “user,” or a pinball in some company’s machine. You’re the independent and sovereign self, selectively interacting with others who need some piece of “ID.”

There is no need for this to be complicated—at least not at the UI level. In fact, most of it can be automated, especially if the business ends of Me2B engagements are ready to work with verifiable credentials.

As it happens, almost all development in the SSI world is at the business end. This is very good, but it’s not enough.

To me it looks like SSI development today is where the Web was in the early ’90s, before the invention of graphical browsers. Back then we knew the Web was there, but most of us couldn’t see or use it. We needed a graphical browser for that. (Mosaic was the first, in 1993.)

For SSI to work, it needs the equivalent of a graphical browser. Maybe that’s a wallet, or maybe it’s something else. (I have an idea; but I want to see how SSI developers respond to this post first.)

The individual’s tool or tools (those equivalents of a browser) also don’t need to have a business model. In fact, it will be best if they don’t.

It should help to remember that Microsoft beat Netscape in the browser business by giving Internet Explorer away while Netscape charged for Navigator. Microsoft did that because they knew a free browser would be generative. It also helped that browsers were substitutable, meaning you could choose among many different ones.

What you look for here are because effects. That’s when you make money because of something rather than with it. Examples are the open protocols and standards beneath the Internet and the Web, free and open source code, and patents (such as Ethernet’s) that developers are left free to ignore.

If we don’t get that tool (whatever we call it), and SSI remains mostly a B2B thing, it’s doomed to niches at best.

I can’t begin to count how many times VRM developers have started out wanting to empower individuals and have ended up selling corporate services to companies, because that’s all they could imagine or sell—or that investors wanted. Let’s not let that happen here.

Let’s give people the equivalent of a browser, and then watch SSI truly succeed.

We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’ summary of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point with this is that politics are his frames (or what he calls “embankments”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, absent of privacy, and reliant on surveillance—both by Amazon and by smart homes’ residents.  In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ‘hood.

Let’s visit three additional pieces. (And stay with me. There’s a call to action here, and I’m making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.) to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and surely awful outcomes.

Second, What does the Internet make of us, where I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here.  As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, rail transport, mass production, electricity, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that “works us over completely” while it’s busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it’s as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It’s also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it’s much easier to talk about Trump than about what made him possible. Think about it: absent of digital tech, would we have had Trump? Or even Obama? McLuhan’s blunt perspective may help. “People,” he said, “do not want to know why radio caused Hitler and Gandhi alike.”

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” done little to thwart unwelcome surveillance, and caused boundlessly numerous, insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying business market demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation to either do research, or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to at least start talking about research. Two possible topics: facial recognition and smart homes. Anyone game?

What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join in on that link. Thanks.

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline:

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged
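
To make the intentcasting bullet concrete, here is a hedged Python sketch of what a broadcast intent might look like: a buyer states what they want plus their own terms, and only retailers who accept those terms qualify. The field names and matching rules are illustrative assumptions, not any existing intentcasting company's format.

```python
from dataclasses import dataclass, field

@dataclass
class Intentcast:
    """A buyer's broadcast of intent, on the buyer's terms (illustrative)."""
    product: str
    max_price: float
    terms: list = field(default_factory=lambda: [
        "no-resale-of-this-data", "no-third-party-tracking"])

@dataclass
class Offer:
    retailer: str
    product: str
    price: float
    accepted_terms: list

def matches(intent, offer):
    """An offer qualifies only if it fits the product and price
    AND accepts every one of the buyer's terms."""
    return (offer.product == intent.product
            and offer.price <= intent.max_price
            and all(t in offer.accepted_terms for t in intent.terms))

intent = Intentcast(product="stroller", max_price=300.0)
offers = [
    Offer("StoreA", "stroller", 250.0,
          ["no-resale-of-this-data", "no-third-party-tracking"]),
    Offer("StoreB", "stroller", 240.0, []),  # ignores the buyer's terms
]
qualified = [o for o in offers if matches(intent, o)]
```

Note that the cheaper offer loses: in this model the buyer's terms are part of the deal, not an afterthought.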

All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see if there is any economics research and/or scholarship on personal scale and its leverage (such as personal operating systems, devices and browsers give us) in the digital world. Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can provide by themselves.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
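
A minimal sketch of capability (1)—a personal record of what we’ve opted into—might look like this in Python. The class and field names are assumptions, just enough to show that keeping an auditable consent record is technically simple once it lives on our side, with "no" as the default.

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Personal, append-only record of opt-ins (illustrative sketch)."""

    def __init__(self):
        self._entries = []

    def _record(self, party, purpose, granted):
        self._entries.append({
            "party": party, "purpose": purpose, "granted": granted,
            "when": datetime.now(timezone.utc).isoformat()})

    def opt_in(self, party, purpose):
        self._record(party, purpose, True)

    def revoke(self, party, purpose):
        self._record(party, purpose, False)

    def allowed(self, party, purpose):
        """Default is no: only an unrevoked opt-in permits use."""
        state = False
        for entry in self._entries:  # last matching entry wins
            if entry["party"] == party and entry["purpose"] == purpose:
                state = entry["granted"]
        return state

ledger = ConsentLedger()
ledger.opt_in("example-news.com", "personalized-ads")
ledger.revoke("example-news.com", "personalized-ads")
```

Because every grant and revocation is appended with a timestamp, the ledger doubles as the audit trail capability (2) calls for.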

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance based sausage, looking for clicks.


Every construction project has a punch list of to-be-done items. Since we’ve been at this for a dozen years, and have a rather long list of development works in progress on our wiki, now seems like a good time and place to list what still needs to be done, but from the individual’s point of view. In other words, things they need but don’t have yet.

So here is a punch list of those things, in the form of a static page rather than a post such as this one. There is also a shortcut to the punch list in the menu above.

For the record, here’s that list as it stands today:

  1. Make companies agree to our terms, rather than the other way around.
  2. Have real relationships with companies, based on open standards and code, rather than relationships trapped inside corporate silos, each with their own very different ways of managing customer relationships (CRM), “delivering” a “customer experience” (aka CX), leading us on a “journey” or having us “join the conversation.”
  3. Standardize the ways we relate to the service sides of companies, both for requesting service and for exchanging useful data in the course of owning a product or renting a service, so market intelligence flows both ways, and the “customer journey” becomes a virtuous cycle.
  4. Control our own self-sovereign identities, and give others what little they need to know about us on an as-needed basis.
  5. Get rid of logins and passwords.
  6. Change our personal details (surname, phone number, email or home address) in the records of all the organizations we deal with, in one move.
  7. Pay what we want, where we want, for whatever we want, in our own ways.
  8. Call for service or support in one simple and straightforward way of our own, rather than in as many ways as there are 800 numbers to call and numbers to punch into a phone before we wait on hold while bad music plays.
  9. Express loyalty in our own ways, which are genuine rather than coerced.
  10. Have an Internet of MY Things, which each of us controls for ourselves, and in which every thing we own has its own cloud, which we control as well.
  11. Own and control all our health and fitness records, and how others use them.
  12. Have wallets of our own, rather than only those provided by platforms.
  13. Have shopping carts of our own, which we can take from store to store and site to site online, rather than being tied to ones provided only by the stores themselves.
  14. Have personal devices of our own (such as this one) that aren’t cells in a corporate silo, or suction cups on corporate tentacles. (Alas, that’s what we still have with all Apple iOS phones and tablets, and all Android devices with embedded Google apps.)
  15. Remake education around the power we all have to teach ourselves and learn from each other, making optional at most the formal educational systems built more for maintaining bell curves than liberating the inherent genius of every student.

Please help us improve and correct it.

[The photo is from this collection.]


A few years ago I got a Withings bathroom scale: one that knows it’s me, and records my weight, body mass index and fat percentage on a graph updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from  community at two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that Nokia in May planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:


Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.

Our time has come

For the first time since we launched ProjectVRM, we have a wave we can ride to a shore.

That wave is the GDPR: Europe’s General Data Protection Regulation. Here’s how it looks to Google Trends:

It crests just eight days from now, on May 25th.

To prep for the GDPR (and to avoid its potentially massive fines), organizations everywhere are working like crazy to get ready, especially in Europe. (Note: the GDPR protects the privacy of EU citizens, and applies worldwide.)

Thanks to the GDPR, there’s a stink on surveillance capitalism, and companies everywhere that once feasted on big data are now going on starvation diets.

Here’s one measure of that wave: my post “GDPR will pop the adtech bubble” got more than 50,000 reads after it went up during the weekend, when it also hit #1 on Hacker News and Techmeme. And this Hacker News comment thread about the piece is more than 30,000 words long. So far.

The GDPR dominates all conversations here at KuppingerCole‘s EIC conference in Munich where my keynote Tuesday was titled How Customers Will Lead Companies to GDPR Compliance and Beyond. (That’s the video.)

Ten years ago at this same conference, KuppingerCole gave ProjectVRM an EIC award (there on the right) that was way ahead of its time.

Back then we really thought the world was ready for tools that would make individuals both independent and better able to engage—and that these tools would prove a thesis: that free customers are more valuable than captive ones.

But then social media happened, and platforms grew so big and powerful that it was hard to keep imagining a world online where each of us are truly free.

But we did more than imagine. We worked on customertech that would vastly increase personal agency for each of us, and turn the marketplace into a Marvel-like universe in which all of us are enhanced:

In this liberated marketplace, we would be able to

  1. Make companies agree to our terms, rather than the other way around.
  2. Control our own self-sovereign identities, and manage all the ways we are known to the administrative systems of the world. This means we will be able to —
  3. Get rid of logins and passwords, so we are simply known to others we grace with that privilege. Which we can also withdraw.
  4. Change our email or our home address in the records of every company we deal with, in one move.
  5. Pay what we want, where we want, for whatever we want, in our own ways.
  6. Call for service or support in one simple and straightforward way of our own, rather than in as many ways as there are 800 numbers to call and phone menus to punch through before we wait on hold while bad music plays.
  7. Express loyalty in our own ways, which are genuine rather than coerced.
  8. Have an Internet of MY Things, which each of us controls for ourselves, and in which every thing we own has its own cloud, which we control as well.
  9. Own and control all our health and fitness records, and how others use them.
  10. Help companies by generously sharing helpful facts about how we use their products and services — but in our own ways, through standard tools that work the same for every company we deal with.
  11. Have wallets of our own, rather than only those provided by platforms.
  12. Have shopping carts of our own, which we could take from store to store and site to site online, rather than ones provided only by the stores themselves.
  13. Have real relationships with companies, based on open standards and code, rather than relationships trapped inside corporate silos.
  14. Remake education around the power we all have to teach ourselves and learn from each other, making optional the formal educational systems built more for maintaining bell curves than for liberating the inherent genius of every student.

We’ve done a lot of work on most of those things. (Follow the links.) Now we need to work together to bring attention and interest to all our projects by getting behind what Customer Commons, our first and only spin-off, is doing over the next nine days.

First is a campaign to make an annual celebration of the GDPR, calling May 25th #Privmas.

As part of that (same link), Customer Commons is launching a movement to take control of personal privacy online by blocking third-party cookies. Hashtag #NoMore3rds. Instructions are here, for six browsers. (It’s easy. I’ve been doing it for weeks on all mine, with no ill effects.)

This is in addition to work following our Hack Day at MIT several weeks ago. Stay tuned for more on that.

Meanwhile, all hands on deck. We need more action than discussion here. Let’s finish getting started making VRM work for the world.

A positive look at Me2B

Somehow Martin Geddes and I were both at PIE2017 in London a few days ago and missed each other. That bums me because nobody in tech is more thoughtful and deep than Martin, and it would have been great to see him there. Still, we have his excellent report on the conference, which I highly recommend.

The theme of the conference was #Me2B, a perfect synonym (or synotag) for both #VRM and #CustomerTech, and hugely gratifying for us at ProjectVRM. As Martin says in his report,

This conference is an important one, as it has not sold its soul to the identity harvesters, nor rejected commercialism for utopian social visions by excluding them. It brings together the different parts and players, accepts the imperfection of our present reality, and celebrates the genuine progress being made.

Another pull-quote:

…if Facebook (and other identity harvesting companies) performed the same surveillance and stalking actions in the physical world as they do online, there would be riots. How dare you do that to my children, family and friends!

On the other hand, there are many people working to empower the “buy side”, helping people to make better decisions. Rather than identity harvesting, they perform “identity projection”, augmenting the power of the individual over the system of choice around them.

The main demand-side commercial opportunity at the moment is applications like price-comparison shopping. In the not-too-distant future it may transform how we eat, and drive a “food as medicine” model, paid for by life insurers to reduce claims.

The core issue is “who is my data empowering, and to what ends?”. If it is personal data, then there needs to be only one ultimate answer: it must empower you, and to your own benefit (where that is a legitimate intent, i.e. not fraud). Anything else is a tyranny to be avoided.

The good news is that these apparently irreconcilable views and systems can find a middle ground. There are technologies being built that allow for every party to win: the user, the merchant, and the identity broker. That these appear to be gaining ground, and removing the friction from the “identity supply chain”, is room for optimism.

Encouraging technologies that enable the individual to win is what ProjectVRM is all about. Same goes for Customer Commons, our nonprofit spin-off. Nice to know others (especially ones as smart and observant as Martin) see them gaining ground.

Martin also writes,

It is not merely for suppliers in the digital identity and personal information supply chain. Any enterprise can aspire to deliver a smart customer journey using smart contracts powered by personal information. All enterprises can deliver a better experience by helping customers to make better choices.


The only problem with companies delivering better experiences by themselves is that every one of them is doing it differently, often using the same back-end SaaS systems (e.g. from Salesforce, Oracle, IBM, et al.).

We need standard ways for customers to change personal data settings (e.g. name, address, credit card info), call for support, and supply useful intelligence to any of the companies they deal with, and to do any of those in one move.

See, just as companies need scale across all the customers they deal with, customers need scale across all the companies they deal with. I visit the possibilities for that here, here, here, and here.
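To make the “one move” idea concrete, here is a purely illustrative sketch of a customer-side tool broadcasting a single change of address to every company a person deals with. Everything here is hypothetical—the `CompanyEndpoint` class and its `update()` method are invented for illustration, since no such standard protocol exists yet:

```python
# Hypothetical sketch: one customer-side action, applied across every
# company relationship. CompanyEndpoint and update() are invented for
# illustration; no standard protocol like this exists today.

from dataclasses import dataclass


@dataclass
class CompanyEndpoint:
    """Stand-in for one company's (imagined) standard update endpoint."""
    name: str

    def update(self, field: str, value: str) -> str:
        # In a real system this would be an authenticated API call.
        return f"{self.name}: {field} set to {value}"


def change_everywhere(companies, field, value):
    """One move by the customer, fanned out to every relationship."""
    return [company.update(field, value) for company in companies]


results = change_everywhere(
    [CompanyEndpoint("Utility Co"), CompanyEndpoint("Insurer"), CompanyEndpoint("Bank")],
    "home_address",
    "123 New Street",
)
for line in results:
    print(line)
```

The point of the sketch is the shape, not the code: the customer holds one tool and one action, and scale across companies falls out of a shared standard rather than a hundred different account-settings pages.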

On the topic of privacy, here’s a bonus link.

And, since Martin takes a very useful identity angle in his report, I invite him to come to the next Internet Identity Workshop, which Phil Windley, Kaliya @IdentityWoman and I put on twice a year at the Computer History Museum. The next, our 26th, is 3-5 April 2018.



Good news for publishers and advertisers fearing the GDPR

The GDPR (General Data Protection Regulation) is the world’s most heavily weaponized law protecting personal privacy. It is aimed at companies that track people without asking, and its ordnance includes fines of up to 4% of worldwide revenues over the prior year.
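To put a number on that ordnance, here is a back-of-the-envelope sketch. The 4% cap and the €20 million floor come from the GDPR itself (Article 83 sets fines at up to €20 million or 4% of worldwide annual turnover, whichever is greater); the revenue figure is hypothetical:

```python
# Back-of-the-envelope GDPR fine exposure. The 4% cap and the 20 million
# euro floor are from GDPR Article 83; the revenue figure below is
# hypothetical.

def max_gdpr_fine(annual_revenue_eur: float) -> float:
    """Maximum possible GDPR fine for a given worldwide annual revenue:
    4% of revenue, or 20 million euros, whichever is greater."""
    return max(0.04 * annual_revenue_eur, 20_000_000)

# A company with 2 billion euros in worldwide revenue:
print(max_gdpr_fine(2_000_000_000))  # 80000000.0 — i.e., up to 80 million euros
```

Note that the floor matters too: a company with only €100 million in revenue still faces exposure of up to €20 million, since 4% of its revenue would be less than that.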

The law’s purpose is to blow away the (mostly US-based) surveillance economy, especially tracking-based “adtech,” which supports most commercial publishing online.

The deadline for compliance is 25 May 2018, just a couple hundred days from now.

There is no shortage of compliance advice online, much of it coming from the same suppliers that talked companies into harvesting lots of the “big data” that security guru Bruce Schneier calls a toxic asset. (Search for GDPR compliance and see whose ads come up.)

There is, however, an easy and 100% GDPR-compliant way for publishers to continue running ads and for companies to continue advertising. All the publisher needs to do is agree with this request from readers:

That request, along with its legal and machine-readable expressions, will live here:

The agreements themselves can be recorded anywhere.

There is not an easier way for publishers and advertisers to avoid getting fined by the EU for violating the GDPR. Agreeing to exactly what readers request puts both in full compliance.

Advertisers can also get some added PR by running what I suggest we call #SafeAds. If markets are conversations (as marketers have been yakking about since The Cluetrain Manifesto), #SafeAds will be a great GDPR conversation for everyone to have:

Here are some #SafeAds benefits that will make great talking points, especially for publishers and advertisers:

  1. Unlike adtech, which tracks eyeballs off a publisher’s site and then shoots ads at those eyeballs anywhere they can be found (including the Web’s cheapest and shittiest sites), #SafeAds actually sponsor the publisher. They say “we value this publication and the readers it brings to us.”
  2. Unlike adtech, #SafeAds carry no operational overhead for the publisher and no cognitive overhead for readers—because there are no worries for either party about where an ad comes from or what it’s doing behind the scenes. There’s nothing tricky about it.
  3. Unlike adtech, #SafeAds carry no fraud or malware, because they can’t. They go straight from the advertiser or its agency to the publication, avoiding the corrupt four-dimensional shell game adtech has become.
  4. #SafeAds carry full-power creative and economic signals, which adtech can’t do at all, for the reasons just listed. It’s no coincidence that nearly every major brand you can name was made by #SafeAds, while adtech has not produced a single one. In fact adtech has an ugly history of hurting brands by annoying people with advertising that is unwelcome, icky, or both.
  5. Perhaps best of all for publishers, advertisers will pay more for #SafeAds, because those ads are worth more.

#NoStalking and #SafeAds can also benefit social media platforms now in a world of wonder and hurt (example: this Zuckerberg hostage video). The easiest thing for them to do is go freemium, with few or no ads (and only safe ones) on the paid side, nothing but #SafeAds on the free side, and obedience to #NoStalking requests, whether expressed or not.

If you’re a publisher, an advertiser, a developer, an exile from the adtech world, or anybody else who wants to help out, talk to us. That deadline is a hard one, and it’s coming fast.

Customertech Will Turn the Online Marketplace Into a Marvel-Like Universe in Which All of Us are Enhanced


We’ve been thinking too small.

Specifically, we’ve been thinking about data as if it ought to be something big, when it’s just bits.

Your life in the networked world is no more about data than your body is about cells.

What matters most to us online is agency, not data. Agency is the capacity, condition, or state of acting or of exerting power (Merriam-Webster).

Nearly all the world’s martech and adtech assumes we have no more agency in the marketplace than marketing provides us, which is kind of the way ranchers look at cattle. That’s why bad marketers assume, without irony, that it’s their sole responsibility to provide us with an “experience” on our “journey” down what they call a “funnel.”

What can we do as humans online that isn’t a grace of Apple, Amazon, Facebook or Google?

Marshall McLuhan says every new technology is “an extension of ourselves.” Another of his tenets is “we shape our tools and thereafter our tools shape us.” Thus Customertech—tools for customers—will inevitably enlarge our agency and change us in the process.

For example, with customertech, we can—

Compared to what we have in the offline world, these are superpowers. When customertech gives us these superpowers, the marketplace will become a Marvel-like universe filled with enhanced individuals. Trust me: this will be just as good for business as it will be for each of us.

We can’t get there if all we’re thinking about is data.

By the way, I made this same case to Mozilla in December 2015, on the last day I consulted for the company that year. I did it through a talk called Giving Users Superpowers at an all-hands event called Mozlando. I don’t normally use slides, but this time I did, leveraging the very slides Mozilla keynoters showed earlier, which I shot with my phone from the audience. Download the slide deck here, and be sure to view it with the speaker’s notes showing. The advice I give in it is still good.

BTW, a big HT to @SeanBohan for the Superpowers angle, starting with the title (which he gave me) for the Mozlando talk.




© 2020 ProjectVRM
