Category: Customer Commons

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me and records my weight, body mass index and fat percentage on a graph, updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from community at email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that in May Nokia planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.

Why personal agency matters more than personal data

Lately a lot of thought, work and advocacy has been going into valuing personal data as a fungible commodity: one that can be made scarce, bought, sold, traded and so on.  That’s all fine, but I also think it steers attention away from a far more important issue it would be best to solve first: personal agency.

I see two reasons why personal agency matters more than personal data.

The first reason is that we have far too little agency in the networked world, mostly because we settled, way back in 1995, on a model for websites called client-server, which should have been called calf-cow or slave-master, because we’re always the weaker party. Fortunately the Net’s and the Web’s base protocols remain mostly peer-to-peer, by design. We can still build on those. It’s early.

A critical start in that direction is making each of us the first party rather than the second when we deal with the sites, services, companies and apps of the world—and doing that at scale across all of them.

Think about how much simpler and saner it is for websites to accept our terms and our privacy policies, rather than forcing each of us, all the time, to accept their terms, all expressed in their own different ways. (Because they are advised by different lawyers, equipped by different third parties, and generally confused anyway.)

Getting sites to agree to our own personal terms and policies is not a stretch, because that’s exactly what we have in the way we deal with each other in the physical world.

For example, the clothes that we wear are privacy technologies. We also have norms that discourage others from, for example, sticking their hands inside our clothes without permission.

The fact that adtech plants tracking beacons on our naked digital selves and tracks us like animals across the digital frontier may be a norm for now, but it is also morally wrong, massively rude and now illegal under the GDPR.

We can easily create privacy tech, personal terms and personal privacy policies that are normative and that scale for each of us across all the entities that deal with us. (This is what ProjectVRM’s nonprofit spin-off, Customer Commons, is all about.)
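To make that concrete, here’s a rough sketch in TypeScript of how one person’s terms might be expressed in machine-readable form so a site could read them and agree. Every field name and value is invented for illustration; this is not Customer Commons’ actual terms language or any published spec.

```typescript
// A minimal sketch (not a published Customer Commons spec) of a
// person's own terms, expressed so a site can read and agree to them
// automatically. Every field name and value here is hypothetical.
interface PersonalTerms {
  version: string;                                 // version of the person's terms
  party: "first";                                  // the individual is the first party
  purposes: string[];                              // what the site may use data for
  sharing: "none" | "anonymized" | "with-consent"; // whether data may leave the site
  retention: string;                               // e.g. an ISO 8601 duration like "P30D"
  tracking: boolean;                               // whether off-site tracking is permitted
}

const myTerms: PersonalTerms = {
  version: "0.1",
  party: "first",
  purposes: ["session", "site-analytics"],
  sharing: "none",
  retention: "P30D",
  tracking: false,
};

// A site that fetches and honors this is agreeing to the person's policy,
// rather than the person clicking "accept" on the site's.
console.log(JSON.stringify(myTerms, null, 2));
```

The point isn’t the particular fields; it’s that the individual expresses the policy once, and every site reads the same thing.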

Businesses can’t give us privacy if we’re always the second parties clicking “agree.” It doesn’t matter how well-meaning and GDPR-compliant those businesses are. Making people second parties is a design flaw in every standing “agreement” we “accept,” and we need to correct that.

The second reason agency matters more than data is that nearly the entire market for personal data today is adtech, and adtech is too dysfunctional, too corrupt, too drunk on the data it already has, and absolutely awful at doing what it harvests that data for: guessing at what we might want so machines can shoot “relevant” and “interest-based” ads at our tracked eyeballs.

Not only do tracking-based ads fail to convince us to do a damn thing 99.xx+% of the time; most of the time we aren’t shopping for anything in the first place.

As incentive alignments go, adtech’s failure to serve the actual interests of its targets verges on the absolute. (It’s no coincidence that more than a year ago, 1.7 billion people were already blocking ads online.)

And hell, what they do also isn’t really advertising, even though it’s called that. It’s direct marketing, which gives us junk mail and is the model for spam. (For more on this, see Separating Advertising’s Wheat and Chaff.)

Privacy is personal. That means privacy is an effect of personal agency, projected by personal tech and personal expressions of intent that others can respect without working at it. We have that in the offline world. We can have it in the online world too.

Privacy is not something given to us by companies or governments, no matter how well they do Privacy by Design or craft their privacy policies. That approach simply can’t work.

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control.  For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.


Our time has come

For the first time since we launched ProjectVRM, we have a wave we can ride to a shore.

That wave is the GDPR: Europe’s General Data Protection Regulation. Here’s how it looks to Google Trends:

It crests just eight days from now, on May 25th.

To prep for the GDPR (and to avoid its potentially massive fines), organizations everywhere are working like crazy to get ready, especially in Europe. (Note: the GDPR protects the privacy of people in the EU, and applies to organizations worldwide that handle their data.)

Thanks to the GDPR, there’s a stink on surveillance capitalism, and companies everywhere that once feasted on big data are now going on starvation diets.

Here’s one measure of that wave: my post “GDPR will pop the adtech bubble” drew more than 50,000 readers after it went up over the weekend, when it also hit #1 on Hacker News and Techmeme. And this Hacker News comment thread about the piece is more than 30,000 words long. So far.

The GDPR dominates all conversations here at KuppingerCole‘s EIC conference in Munich where my keynote Tuesday was titled How Customers Will Lead Companies to GDPR Compliance and Beyond. (The video is up, alas behind a registration wall. I’ll see if we can fix that.)

Ten years ago at this same conference, KuppingerCole gave ProjectVRM an EIC award (there on the right) that was way ahead of its time.

Back then we really thought the world was ready for tools that would make individuals both independent and better able to engage—and that these tools would prove a thesis: that free customers are more valuable than captive ones.

But then social media happened, and platforms grew so big and powerful that it was hard to keep imagining a world online where each of us are truly free.

But we did more than imagine. We worked on customertech that would vastly increase personal agency for each of us, and turn the marketplace into a Marvel-like universe in which all of us are enhanced:

In this liberated marketplace, we would be able to

  1. Make companies agree to our terms, rather than the other way around.
  2. Control our own self-sovereign identities, and manage all the ways we are known to the administrative systems of the world. This means we will be able to —
  3. Get rid of logins and passwords, so we are simply known to others we grace with that privilege. Which we can also withdraw.
  4. Change our email or our home address in the records of every company we deal with, in one move.
  5. Pay what we want, where we want, for whatever we want, in our own ways.
  6. Call for service or support in one simple and straightforward way of our own, rather than in as many ways as there are 800 numbers to call and punch numbers into a phone before we wait on hold while bad music plays.
  7. Express loyalty in our own ways, which are genuine rather than coerced.
  8. Have an Internet of MY Things, which each of us controls for ourselves, and in which every thing we own has its own cloud, which we control as well.
  9. Own and control all our health and fitness records, and how others use them.
  10. Help companies by generously sharing helpful facts about how we use their products and services — but in our own ways, through standard tools that work the same for every company we deal with.
  11. Have wallets of our own, rather than only those provided by platforms.
  12. Have shopping carts of our own, which we could take from store to store and site to site online, rather than ones provided only by the stores themselves.
  13. Have real relationships with companies, based on open standards and code, rather than relationships trapped inside corporate silos.

We’ve done a lot of work on most of those things. (Follow the links.) Now we need to work together to bring attention and interest to all our projects by getting behind what Customer Commons, our first and only spin-off, is doing over the next nine days.

First is a campaign to make an annual celebration of the GDPR, calling May 25th #Privmas.

As part of that (same link), Customer Commons is launching a movement to take control of personal privacy online by blocking third-party cookies. Hashtag #NoMore3rds. Instructions are here, for six browsers. (It’s easy. I’ve been doing it for weeks on all mine, with no ill effects.)

This is in addition to work following our Hack Day at MIT several weeks ago. Stay tuned for more on that.

Meanwhile, all hands on deck. We need more action than discussion here. Let’s finish getting started making VRM work for the world.

GDPR Hack Day at MIT

Our challenge in the near term is to make the GDPR work for us “data subjects” as well as for the “data processors” and “data controllers” of the world—and to start making it work before the GDPR’s “sunrise” on May 25th. That’s when the EU can start levying fines—big ones—on those data processors and controllers, but not on us mere subjects. After all, we’re the ones the GDPR protects.

Ah, but we can also bring some relief to those processors and controllers, by automating, in a way, our own consent to good behavior on their part, using a consent cookie of our own baking. That’s what we started working on at IIW on April 5th. Here’s the whiteboard:

Here are the session notes. And we’ll continue at a GDPR Hack Day, next Thursday, April 26th, at MIT. Read more about it and sign up here. You don’t need to be a hacker to participate.
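For a rough idea of what “a consent cookie of our own baking” might carry, here is a sketch in TypeScript. To be clear, this is only an illustration, not the design from the IIW session or anything decided for the Hack Day; the cookie name, payload shape and URL are all hypothetical.

```typescript
// A rough sketch of a "consent cookie of our own baking": a first-party
// cookie the visitor sets, pointing at the terms they proffer. The cookie
// name, payload shape and URL below are all hypothetical.
interface ConsentGrant {
  termsUrl: string;     // where the person's terms live
  termsVersion: string; // which version of those terms the grant refers to
  grantedAt: string;    // ISO 8601 timestamp
}

function bakeConsentCookie(grant: ConsentGrant): void {
  const value = encodeURIComponent(JSON.stringify(grant));
  // The site reads this cookie to learn which terms the visitor is
  // offering, and can then record its agreement to them.
  document.cookie = `cc-consent=${value}; path=/; SameSite=Lax; Secure`;
}

bakeConsentCookie({
  termsUrl: "https://example.org/my-terms", // hypothetical location
  termsVersion: "0.1",
  grantedAt: new Date().toISOString(),
});
```

The relief for processors and controllers is that the consent they need arrives with the visitor, in a form they can read and log, instead of having to be manufactured by yet another consent banner.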

The most leveraged VRM Day yet

VRM Day is coming up soon: Monday, 2 April.

Register at that link. Or, if it fails, this one. (Not sure why, but we get reports of fails with the first link on Chrome, but not other browsers. Go refigure.)

Why this one is more leveraged than any other, so far:

Thanks to the GDPR, there is more need than ever for VRM, and more interest than ever in solutions to compliance problems that can only come from the personal side.

For example, the GDPR invites this question: What can we as individuals do to put all the companies we deal with in compliance with the GDPR, because they’re in compliance with our terms and our privacy policies? We have some answers, and we’ll talk about those.

We also have two topics we need to dive deeply into, starting at VRM Day and continuing over the following three days at IIW, also at the Computer History Museum. These too are impelled by the GDPR.

First is lexicon, or what the techies call ontology: “a formal naming and definition of the types, properties, and interrelationships of the entities that really exist in a particular domain of discourse.” In other words, What are we saying in VRM that CRM can understand—and vice versa? We’re at that point now—where VRM meets CRM. On the table will be not just the tools and services customers will use to make themselves understood by the corporate systems of the world, but also the protocols, standard code bases, ontologies and other necessities that will intermediate between the two.
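To illustrate what that intermediation might look like, here is a toy TypeScript sketch that maps a couple of person-side intentions onto the kind of records a CRM system expects. All of the type, term and field names are made up for the example; nothing here is a proposed standard or an actual CRM schema.

```typescript
// A toy illustration of intermediation between VRM and CRM: mapping a
// person-side intention onto the kind of record a CRM system expects.
// Every type, term and field name here is invented for the example.
type VrmIntent =
  | { kind: "change-address"; newAddress: string }
  | { kind: "service-request"; product: string; issue: string };

interface CrmRecordUpdate {
  objectType: "Contact" | "Case";
  fields: Record<string, string>;
}

function translateIntent(intent: VrmIntent): CrmRecordUpdate {
  switch (intent.kind) {
    case "change-address":
      return { objectType: "Contact", fields: { MailingAddress: intent.newAddress } };
    case "service-request":
      return { objectType: "Case", fields: { Product: intent.product, Description: intent.issue } };
  }
}

console.log(translateIntent({ kind: "change-address", newAddress: "1 Example Lane" }));
```

The hard work of the lexicon conversation is agreeing on what the intents and fields are, so that one translation like this works for every company rather than one per vendor.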

Second is cooperation. The ProjectVRM wiki now has a page called Cooperative Work that needs to be substantiated by actual cooperation, now that the GDPR is approaching. How can we support each other?

Bring your answers.

See you there.

A positive look at Me2B

Somehow Martin Geddes and I were both at PIE2017 in London a few days ago and missed each other. That bums me because nobody in tech is more thoughtful and deep than Martin, and it would have been great to see him there. Still, we have his excellent report on the conference, which I highly recommend.

The theme of the conference was #Me2B, a perfect synonym (or synotag) for both #VRM and #CustomerTech, and hugely gratifying for us at ProjectVRM. As Martin says in his report,

This conference is an important one, as it has not sold its soul to the identity harvesters, nor rejected commercialism for utopian social visions by excluding them. It brings together the different parts and players, accepts the imperfection of our present reality, and celebrates the genuine progress being made.

Another pull-quote:

…if Facebook (and other identity harvesting companies) performed the same surveillance and stalking actions in the physical world as they do online, there would be riots. How dare you do that to my children, family and friends!

On the other hand, there are many people working to empower the “buy side”, helping people to make better decisions. Rather than identity harvesting, they perform “identity projection”, augmenting the power of the individual over the system of choice around them.

The main demand side commercial opportunity at the moment is applications like price comparison shopping. In the not too distant future it may transform how we eat, and drive a “food as medicine” model, paid for by life insurers to reduce claims.

The core issue is “who is my data empowering, and to what ends?”. If it is personal data, then there needs to be only one ultimate answer: it must empower you, and to your own benefit (where that is a legitimate intent, i.e. not fraud). Anything else is a tyranny to be avoided.

The good news is that these apparently unreconcilable views and systems can find a middle ground. There are technologies being built that allow for every party to win: the user, the merchant, and the identity broker. That these appear to be gaining ground, and removing the friction from the “identity supply chain”, is room for optimism.

Encouraging technologies that enable the individual to win is what ProjectVRM is all about. Same goes for Customer Commons, our nonprofit spin-off. Nice to know others (especially ones as smart and observant as Martin) see them gaining ground.

Martin also writes,

It is not merely for suppliers in the digital identity and personal information supply chain. Any enterprise can aspire to deliver a smart customer journey using smart contracts powered by personal information. All enterprises can deliver a better experience by helping customers to make better choices.

True.

The only problem with companies delivering better experiences by themselves is that every one of them is doing it differently, often using the same back-end SaaS systems (e.g. from Salesforce, Oracle, IBM, et al.).

We need standard ways for customers to change personal data settings (e.g. name, address, credit card info), call for support, and supply useful intelligence to any of the companies they deal with, and to do any of those in one move.
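Here is one way to picture that “one move,” sketched in TypeScript under the assumption of a hypothetical standard update endpoint that every company exposes. The URLs and payload shape are invented; no such standard exists yet, which is exactly the point.

```typescript
// A sketch of the "one move" idea, assuming a hypothetical standard
// update endpoint that every company exposes. The URLs and payload
// shape are invented; no such standard exists yet.
interface ProfileChange {
  field: "email" | "address";
  value: string;
}

// The companies this person deals with, each (hypothetically) accepting
// the same standard update request.
const vendorEndpoints = [
  "https://vendor-a.example/customer-update",
  "https://vendor-b.example/customer-update",
];

async function broadcastChange(change: ProfileChange): Promise<void> {
  await Promise.all(
    vendorEndpoints.map((url) =>
      fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(change),
      })
    )
  );
}

// One move: every company on the list learns the new address at once.
broadcastChange({ field: "address", value: "1 Example Lane" }).catch(console.error);
```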

See, just as companies need scale across all the customers they deal with, customers need scale across all the companies they deal with. I visit the possibilities for that here, here, here, and here.

On the topic of privacy, here’s a bonus link.

And, since Martin takes a very useful identity angle in his report, I invite him to come to the next Internet Identity Workshop, which Phil Windley, Kaliya @IdentityWoman and I put on twice a year at the Computer History Museum. The next, our 26th, is 3-5 April 2018.


Good news for publishers and advertisers fearing the GDPR

The GDPR (General Data Protection Regulation) is the world’s most heavily weaponized law protecting personal privacy. It is aimed at companies that track people without asking, and its ordnance includes fines of up to 4% of worldwide revenues over the prior year.

The law’s purpose is to blow away the (mostly US-based) surveillance economy, especially tracking-based “adtech,” which supports most commercial publishing online.

The deadline for compliance is 25 May 2018, just a couple hundred days from now.

There is no shortage of compliance advice online, much of it coming from the same suppliers that talked companies into harvesting lots of the “big data” that security guru Bruce Schneier calls a toxic asset. (Go to https://www.google.com/search?q=GDPR and see whose ads come up.)

There is, however, an easy and 100% GDPR-compliant way for publishers to continue running ads and for companies to continue advertising. All the publisher needs to do is agree with this request from readers:

That request, along with its legal and machine-readable expressions, will live here:

The agreements themselves can be recorded anywhere.
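To give a feel for how simple such a record could be, here’s a sketch in TypeScript of an agreement receipt. The shape, and the way the term is named, are hypothetical; this is not the machine-readable expression that will live at the link above.

```typescript
// A sketch only: one way an agreement to a reader's request might be
// recorded, wherever the parties choose to keep it. The shape is
// hypothetical, not Customer Commons' machine-readable expression.
interface AgreementReceipt {
  term: "#NoStalking"; // the term the publisher agreed to
  publisher: string;   // who agreed
  reader: string;      // an identifier the reader controls
  agreedAt: string;    // ISO 8601 timestamp
}

function recordAgreement(publisher: string, reader: string): AgreementReceipt {
  return {
    term: "#NoStalking",
    publisher,
    reader,
    agreedAt: new Date().toISOString(),
  };
}

// Both sides can keep a copy; either can produce it if compliance is questioned.
console.log(recordAgreement("publisher.example", "reader-1234"));
```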

There is no easier way for publishers and advertisers to avoid getting fined by the EU for violating the GDPR. Agreeing to exactly what readers request puts both in full compliance.

There’s some added PR for advertisers in running what I suggest they call #SafeAds. If markets are conversations (as marketers have been yakking about since The Cluetrain Manifesto), #SafeAds will be a great GDPR conversation for everyone to have:

Here are some #SafeAds benefits that will make great talking points, especially for publishers and advertisers:

  1. Unlike adtech, which tracks eyeballs off a publisher’s site and then shoots ads at those eyeballs anywhere they can be found (including the Web’s cheapest and shittiest sites), #SafeAds actually sponsor the publisher. They say “we value this publication and the readers it brings to us.”
  2. Unlike adtech, #SafeAds carry no operational overhead for the publisher and no cognitive overhead for readers—because there are no worries for either party about where an ad comes from or what it’s doing behind the scenes. There’s nothing tricky about it.
  3. Unlike adtech, #SafeAds carry no fraud or malware, because they can’t. They go straight from the advertiser or its agency to the publication, avoiding the corrupt four-dimensional shell game adtech has become.
  4. #SafeAds carry full-power creative and economic signals, which adtech can’t do at all, for the reasons just listed. It’s no coincidence that nearly every major brand you can name was made by #SafeAds, while adtech has not produced a single one. In fact adtech has an ugly history of hurting brands by annoying people with advertising that is unwelcome, icky, or both.
  5. Perhaps best of all for publishers, advertisers will pay more for #SafeAds, because those ads are worth more.

#NoStalking and #SafeAds can also benefit social media platforms now in a world of wonder and hurt (example: this Zuckerberg hostage video). The easiest thing for them to do is go freemium, with little or no advertising (and only safe ads) on the paid side, and nothing but #SafeAds on the free side, in obedience to #NoStalking requests, whether expressed or not.

If you’re a publisher, an advertiser, a developer, an exile from the adtech world, or anybody else who wants to help out, talk to us. That deadline is a hard one, and it’s coming fast.
