Category: VRM+CRM

Toward real market conversations

A friend pointed me to this video of a slide presentation by Bixy, because it looked to him kinda like VRM. I thought so too… at first. Here’s an image from the deck:

[Image: Bixy slide from the deck]

Here is what I wrote back, updated and improved a bit:

These are my notes on slides within the deck/video.

1) It looks to me like a CRM refresh rather than VRM. There have been many of these. And, while Bixy looks better than any others I can remember (partly because I can’t remember any… it’s all a blur), it’s still pitching into the CRM market. Nothing wrong with that: it’s a huge market, with side categories all around it. It’s just not VRM, which is the customer hand CRM shakes. (And no, a CRM system giving the customer a hand to shake the CRM’s with isn’t VRM. It’s just gravy on a loyalty card.)

2) The notion that customers  (I dislike the word “consumers”) want relationships with brands is a sell-side fantasy. Mostly customers are looking to buy something they’ve already searched for, or to keep what they already own working, or to replace one thing with another that won’t fail—and to get decent service when something does fail. (For more on this subject, I suggest reading the great Bob Hoffman, for example here.)

3) While it’s true that customers don’t want to be tracked, annoyed and manipulated, and that those practices have led to dislike of businesses and icky legislation (bullseye on all of those), and that “relationships are based on trust, value, attention, respect and communication,” none of those five things mean much to the customer if all of them are locked into a company’s one-to-many system, which is what we have with 100% of all CRM, CX and XX (pick your initialism) systems—all of them different, which means a customer needs to have as many different ways to trust, value, attend to, respect and communicate as there are company systems for providing the means.

4) Bixy’s idea here (and what the graphic above suggests) is that the customer can express likes and dislikes to many Brands’ Salesforce CRM systems. They call this “sharing for value in return.” But there is far less appetite for this than marketing thinks. Customers share as little as they can when they are fully required to do so, and would rather share zero when they go about their ordinary surfing online or shopping anywhere. Worse, marketing in general (follow the news)—and adtech/martech in particular—continue to believe that customers “share” data gathered about them by surveillance, and that this is “exchanged” for free services, discounts and other goodies. This is one of the worst rationalizations in the history of business.

5) “B2C conversations” that are “transparent, personalized and informative” are more a marketing fantasy than a customer desire. What customers would desire, if they were available, are tools that enhance them with superpowers. For example, the power to change their last name, email address or credit card for every company they deal with, in one move. This is real scale: customer scale. We call these superpowers customertech:

CRM is vendortech.

6) Some percentage of Adidas customers (the example in that video) may be willing to fill out a “conversational” form to arrive at a shoe purchase, but I suspect a far larger percentage would regard the whole exercise as a privacy-risking journey down a sales funnel that they’d rather not be in. So long as the world lacks standard ways for people to prevent surveillance of their private spaces and harvesting of personal data, to make non-coercive two-way agreements with others, and to monitor personal data use and agreement compliance, there is no way trustworthy “conversations” of the kind Bixy proposes can happen.

7) Incumbent “loyalty” programs are, on the whole, expensive and absurd.

Take Peet’s Coffee, a brand I actually do love. I’ve been a customer of Peet’s for, let’s see… 35 years. I have a high-end (like in a coffee shop) espresso machine at my house, with a high-end grinder to match. All I want from Peet’s here at home are two kinds of Peet’s beans: Garuda and Major Dickason Decaf. That’s it. I’ve sampled countless single-origin beans and blends from many sources, and those are my faves. I used to buy one-pound bags of those at Peet’s stores; but in COVID time I subscribe to have those delivered. Which isn’t easy, because Peet’s has made buying coffee online remarkably hard. Rather than just showing me all the coffees they have, they want to drag me every time through a “conversational” discovery process—and that’s after the customary (for every company) popover pitch to sign up as a member, which I already am, and a detour through a login-fail password-recovery ditch (with CAPTCHAs, over and over, clicking on buses and traffic lights and crosswalks) that shows up every. damn. time. On arrival at the membership home page, “My Dashboard” all but covers the home screen, and tells me I’m 8 points away from my next reward (always a free coffee, which is not worth the trouble, and not why I’m loyal). Under the Shop menu (the only one I might care about) there are no lists of coffee types. Instead there’s “Find Your Match,” which features two kinds of coffee I don’t want and a “take your quiz” game. Below that are “signature blends” that list nothing of ingredients but require one to “Find My Coffee” through a “flavor wheel” that gives one a choice of five flavors (“herbal/earthy,” “bright/citrus”…). I have to go waaay the hell down a well of unwanted and distracting choices to get to the damn actual coffee I know I like.

My point: here is a company that is truly loved (or hell, at least liked) by its customers, mostly because it’s better than Starbucks. They’re in a seller’s market. They don’t need a loyalty program, or the high operational and cognitive overhead involved (e.g. “checking in” at stores with a QR code on a phone app). They could make shopping online a lot simpler with a nice list of products and prices. But instead they decided, typically (for marketing), that they needed all this bullshit to suck customers down sales funnels. When they don’t. If Peet’s dumped its app and made their website and subscription system simpler, they wouldn’t lose one customer and they’d save piles of money.

Now, back to the Adidas example. I am sure anybody who plays sports or runs, or does anything in athletic shoes, would rather just freaking shop for shoes than be led by a robot through a conversational maze that more than likely will lead to a product the company is eager to sell instead of one the customer would rather buy.

8) I think most customers would be creeped out to reveal how much they like to run and other stuff like that, when they have no idea how that data will be used—which is also still the typical “experience” online. Please: just show them the shoes, say what they’re made of, what they’re good for, and (if it matters) what celeb jocks like them or have co-branded them.

9) The “value exchange” that really matters is money for goods. “Relationship” beyond that is largely a matter of reputation and appreciation, which is earned by the products and services themselves, and by human engagement. Not by marketing BS.

10) Bixy’s pitch about “90% of conversation” occurring “outside the app as digital widgets via publisher and marketer SDKs” and “omnichannel personalization” through “buy rewards, affiliate marketing, marketer insights, CRM & CDP, email, ads, loyalty, eCommerce personalization, brand & retailer apps and direct mail” is just more of the half-roboticized marketing world we have, only worse. (It also appears to require the kind of tracking the video says up front that customers don’t want.)

11) The thought of “licensing my personal information to brands for additional royalties and personalization” also creeps me out.

12) I don’t think this is “building relationships from the consumer point of view.” I think it’s a projection of marketing fantasy on a kind of customer that mostly doesn’t exist. I also don’t think “reducing the sales cycle” is any customer’s fantasy.

To sum up, I don’t mean to be harsh. In fact I’m glad to talk with Bixy if they’re interested in helping with what we’re trying to do here at ProjectVRM—or at Customer Commons, the Me2B Alliance and MyData.

I also don’t think Cluetrain’s first thesis (“Markets are conversations”) can be proven by tools offered only by sellers and made mostly to work for sellers. If we want real market conversations, we need to look at solving market problems from the customers’ side. Look here and here for ways to do that.

The true blue ocean

“Blue ocean strategy challenges companies to break out of the red ocean of bloody competition by creating uncontested market space that makes the competition irrelevant.”

That’s what W. Chan Kim and Renée Mauborgne say in the original preface to Blue Ocean Strategy: How to Create Uncontested Market Space and Make the Competition Irrelevant, published by Harvard Business Review Press in 2005. Since then the red/blue ocean metaphor has become business canon.

The problem with that canon is that it looks at customers the way a trawler looks at fish.

To understand the problem here, it helps to hear marketing talk to itself. Customers, it says, are targets to herd on a journey into a funnel through which they are acquired, managed, controlled and locked in.

This is the language of ranching and slavery. Not a way to talk about human beings.

Worse, every business is a separate trawler, and handles customers in its hold differently, even if they’re using the same CRM, CX and other systems to do all the stuff listed two paragraphs up. (Along with other mundanities: keeping records, following leads, forecasting sales, crunching numbers, producing analytics, and other stuff customers don’t care about until they’re forced to deal with it, usually when a problem shows up.)

In fact, these systems can’t help holding customers captive. Because the way these systems are sold and deployed means there are as many different ways for customers to “relate” to those companies as there are companies.

And, as long as companies are the only parties able to (as the GDPR puts it) operate as a “data controller” or “data processor,” the (literally) damned customer remains nothing more than a “data subject” in countless separate databases and name spaces, each with separate logins and passwords.

This is why, from the customer’s perspective, the whole ocean of CRM and CX is opaque with rutilance.

Worse, all CRM and CX systems operate on the assumption that it is up to them to know everything about a customer, a prospect, or a user. And most of that knowledge these days is obtained early in the (literally) damned “journey” through exactly the kind of tracking that has caused—

  1. Ad blocking, which (though it had been around since 2004) hockey-sticked in 2013, when the adtech fecosystem gave the middle finger to Do Not Track, and which by 2015 was the biggest boycott in world history
  2. Regulation, most notably the GDPR and the CCPA, which never would have happened had marketing not wanted to track everyone like marked animals
  3. Tracking protection, now getting built into browsers (e.g. Safari, Firefox, Brave, Edge) because the market (that big blue ocean) demands it

Stop and think for a minute how much the market actually knows—meaning how much customers actually know about what they own, use, want, wish for, regret, and the rest of it.

The simple fact is that companies’ customers and users know far more about the products and services they own and use than the companies do. Those people are also in a far better position to share that knowledge than any CRM, CX or other system for “relating” to customers can begin to guess at, much less comprehend. Especially when every company has its own separate and isolated ways of doing both.

But customers today still mostly lack ways of their own to share that knowledge, and do it selectively and safely. Those ways are in the category we call VRM (when it shakes hands with CRM), or Me2B  (when it’s dealing broadly across everything a company does with customers and users).

VRM and Me2B are what make us as free as can be, outside any company’s nets, funnels and teeming holds in trawlers’ hulls.

It’s also much bigger than the red ocean of CRM/CX by themselves, because it’s where customers share far more—and better—information than they can inside existing CRM/CX systems. Or will, once VRM and Me2B tools and services stand up.

For example, there’s—

  • What customers actually want to buy (rather than what companies can at best only guess at)
  • What customers already own, and how they’re actually using it (meaning what’s their Internet of their things)
  • What companies, products and services customers are actually loyal to, and why
  • How customers would  like to share their experiences
  • What relevant credentials they carry, for identity and other purposes. And who their preferred agents or intermediaries might be
  • What their terms, conditions and privacy policies are, and how compliance with those can be assured and audited
  • What their tools are, for making all those things work, across the board, with all the companies and other organizations they engage

The list is endless, because there is no limit to what customers can say to companies (or how they relate to companies) if companies are willing to deal with customers who have as much scale across corporate systems as those systems wish to have across all of their customers.

Being “customer centric” won’t cut it. That’s just a gloss on the same old thing. If companies wish to be truly customer-driven, they need to be dealing with free-range human beings. Not captives.

So: how?

There is already code for doing much of what’s listed in the seven bullets above.  Services too. (Examples.) There could be a lot more.

There are also nonprofits working to foster development in that big blue ocean. Customer Commons is ProjectVRM’s own spin-off. The Me2B Alliance is a companion effort. So are MyData and the Sovrin Foundation. All of them could use some funding.

What matters for business is that all of them empower free-range customers and give them scale: real leverage across companies and markets, for the good of all.

That’s the real blue ocean.

Without VRM and Me2B working there, the most a company can do with its CRM or CX system is look at it.

Bonus link. Pull quote: “People must own root authority, before a system transmutes your personal life into a consumer. Before you need the system to exist, you are whole.”

 

Where VRM fits

VRM is the hand CRM shakes.

That’s the simplest way of putting it. That’s what we wanted it to be when we started ProjectVRM in 2006, and that’s how we described it in 2011, when I gave this talk at SugarCRM’s SugarCon conference:

Those “ways” are tools that belong to each customer and give them global scale: meaning they should work the same way for every company’s CRM system. Just like the customer’s phone, email and browser shake hands with every company already.

This is, as the marketers say, positioning. And it’s important, now that a number of significant .orgs have stepped up to take care of other work we helped start with ProjectVRM. Most notable are Customer Commons (a ProjectVRM spin-off), the Me2B Alliance, and MyData Global. There are others, but those are foremost on the ProjectVRM list.

The space we’re building out here is immense, so there is not only room for everybody, but more work than even everybody can do. Meanwhile it is essential that we clarify what all the roles are. Hence this post.

Markets as conversations with robots

From the Google AI blog, Towards a Conversational Agent that Can Chat About…Anything:

In “Towards a Human-like Open-Domain Chatbot”, we present Meena, a 2.6 billion parameter end-to-end trained neural conversational model. We show that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots. Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.

A chat between Meena (left) and a person (right).

Meena
Meena is an end-to-end, neural conversational model that learns to respond sensibly to a given conversational context. The training objective is to minimize perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation). At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity.
 
Concretely, Meena has a single Evolved Transformer encoder block and 13 Evolved Transformer decoder blocks as illustrated below. The encoder is responsible for processing the conversation context to help Meena understand what has already been said in the conversation. The decoder then uses that information to formulate an actual response. Through tuning the hyper-parameters, we discovered that a more powerful decoder was the key to higher conversational quality.
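For readers who want to see what “minimize perplexity” means in practice, here is a minimal sketch under stated assumptions: a plain PyTorch nn.Transformer stands in for the Evolved Transformer (which this is not), with one encoder layer and thirteen decoder layers as the quote describes, and toy vocabulary sizes and random token ids in place of real conversation data.

```python
# Illustrative sketch only: a stock nn.Transformer stands in for Meena's
# Evolved Transformer; vocabulary, widths and the random "conversation"
# below are hypothetical placeholders, not Meena's real configuration.
import math
import torch
import torch.nn as nn

VOCAB, D_MODEL = 8000, 256                       # assumed toy sizes
embed = nn.Embedding(VOCAB, D_MODEL)
seq2seq = nn.Transformer(d_model=D_MODEL, nhead=8,
                         num_encoder_layers=1,   # 1 encoder block (as in the quote)
                         num_decoder_layers=13,  # 13 decoder blocks (as in the quote)
                         batch_first=True)
to_vocab = nn.Linear(D_MODEL, VOCAB)
loss_fn = nn.CrossEntropyLoss()

context = torch.randint(0, VOCAB, (1, 20))       # token ids for prior turns
response = torch.randint(0, VOCAB, (1, 12))      # token ids for the target reply

# Teacher forcing: predict each next token of the reply from the context.
# (A real training setup would also pass a causal tgt_mask; omitted here.)
out = seq2seq(embed(context), embed(response[:, :-1]))
logits = to_vocab(out)
loss = loss_fn(logits.reshape(-1, VOCAB), response[:, 1:].reshape(-1))
print("perplexity:", math.exp(loss.item()))      # the quantity Meena minimizes
```

Perplexity is just the exponential of the average next-token cross-entropy, so driving the loss down drives Meena’s headline metric down with it.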
So how about turning this around?

What if Google sold or gave a Meena model to people—a model Google wouldn’t be able to spy on—so people could use it to chat sensibly with robots or people at companies?

Possible?

If, in the future (which is now—it’s freaking 2020 already), people will have robots of their own, why not one for dealing with companies, which themselves are turning their sales and customer service systems over to robots anyway?

People are the real edge

You Need to Move from Cloud Computing to Edge Computing Now!, writes Sabina Pokhrel in Towards Data Science. The reason, says her subhead, is that “Edge Computing market size is expected to reach USD 29 billion by 2025.” (Source: Grand View Research.) The second person “You” in the headline is business. Not the people at the edge. At least not yet.

We need to fix that.

By we, I mean each of us—as independent individuals and as collected groups—and with full agency in both roles. The edge in edge computing is both.

The article  illustrates the move to Edge Computing this way:

The four items at the bottom (taxi, surveillance camera, traffic light, and smartphone) are at the edges of corporate systems. That’s what the Edge Computing talk is about. But one of those—the phone—is also yours. In fact it is primarily yours. And you are the true edge, because you are an independent actor.

More than any device in the world, that phone is the people’s edge, because no connected device is more personal. Our phones are, almost literally, extensions of ourselves—to a degree that being without one in the connected world is a real disability.

Given our phones’ importance to us, we need to be in charge of whatever edge computing happens there. Simple as that. We cannot be puppets at the ends of corporate strings.

I am sure that this is not a consideration for most of those working on cloud computing, edge computing, or moving computation from one to the other.

So we need to make clear that our agency over the computation in our personal devices is a primary design consideration. We need to do that with tech, with policy, and with advocacy.

This is not a matter of asking companies and governments to please give us some agency. We need to create that agency for ourselves, much as we’ve learned to walk, talk and act on our own. We don’t have “Walking as a Service” or “Talking as a Service.” Because those are only things an individual human being can do. Likewise there should be things only an individual human with a phone can do. On their own. At scale. Across all companies and governments.

Pretty much everything written here and tagged VRM describes that work and ways to approach that challenge.

Recently some of us (me included) have been working to establish Me2B as a better name for VRM than VRM.  It occurs to me, in reading this piece, that the e in Me2B could stand for edge. Just a thought.

If we succeed, there is no way edge computing gets talked about, or worked on, without respecting the Me’s of the world, and their essential roles in operating, controlling, managing and otherwise making the most of those edges—for the good of the businesses they deal with as well as themselves.

 

 

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not  a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline (a rough sketch of one follows the list):

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged
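To make the idea a little more concrete, here is a minimal sketch of what a personal shopping cart’s data might look like. Everything here—the field names, the intentcast shape, the one-move contact change—is a hypothetical illustration, not an existing standard.

```python
# Hypothetical sketch of a Personal Shopping Cart (PSC) record. Field names,
# the "intentcast" shape, and the one-move contact change are illustrative
# assumptions, not an existing standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CartItem:
    store: str             # which retailer the item is parked at
    sku: str               # the store's own product identifier
    price_seen: float      # price when added, so price changes can be noticed

@dataclass
class Intentcast:
    want: str                      # what the person intends to buy
    terms_url: str                 # the person's own terms for use of that info
    notify: List[str] = field(default_factory=list)   # trusted retailers only

@dataclass
class PersonalShoppingCart:
    owner_email: str
    items: List[CartItem] = field(default_factory=list)
    intentcasts: List[Intentcast] = field(default_factory=list)

    def change_email(self, new_email: str) -> List[str]:
        """Change contact info in one move; return the stores to notify."""
        self.owner_email = new_email
        return sorted({i.store for i in self.items})

cart = PersonalShoppingCart(owner_email="alice@example.com")
cart.items.append(CartItem("peets.com", "major-dickason-decaf", 17.95))
cart.items.append(CartItem("adidas.com", "running-shoe-42", 120.00))
cart.intentcasts.append(Intentcast(want="trail running shoes",
                                   terms_url="https://example.org/my-terms",
                                   notify=["adidas.com"]))
print(cart.change_email("alice@new.example.com"))   # -> ['adidas.com', 'peets.com']
```

The point is not this particular shape; it is that the cart, the intentcast and the change of address all live on the customer’s side and work the same way toward every store.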

All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see if there is any economics research and/or scholarship on personal scale and its leverage (such as personal operating systems, devices and browsers give us) in the digital world. Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can by themselves.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance based sausage, looking for clicks.

The only path from subscription hell to subscription heaven

I subscribe to Vanity Fair. I also get one of its newsletters, replicated on a website called The Hive. At the top of the latest Hive is this come-on: “For all that and more, don’t forget to sign up for our metered paywall, the greatest innovation since Nitroglycerin, the Allman Brothers, and the Hangzhou Grand Canal.”

When I clicked on the metered paywall link, it took me to a plain old subscription page. So I thought, “Hey, since they have tracking cruft appended to that link, shouldn’t it take me to a page that says something like, ‘Hi, Doc! Thanks for clicking, but we know you’re already a paying subscriber, so don’t worry about the paywall’?”

So I clicked on the Customer Care link to make that suggestion. This took me to a login page, where my password manager filled in the blanks with one of my secondary email addresses. That got me to my account, which says my Condé Nast subscriptions look like this:

Oddly, the email address at the bottom there is my primary one, not the one I just logged in with.  (Also oddly, I still get Wired.)

So I went to the Vanity Fair home page, found myself logged in there, and clicked on “My Account.” This took me to a page that said my email address was my primary one, and provided a way to change my password, to subscribe or unsubscribe to four newsletters, and a way to “Receive a weekly digest of stories featuring the players you care about the most.” The link below said “Start following people.” No way to check my account itself.

So I logged out from the account page I reached through the Customer Care link, and logged in with my primary email address, again using my password manager. That got me to an account page with the same account information you see above.

It’s interesting that I have two logins for one account. But that’s beside more important points, one of which I made with this message I wrote for Customer Care in the box provided for that:

Curious to know where I stand with this new “metered paywall” thing mentioned in the latest Hive newsletter. When I go to the link there — https://subscribe.condenastdigital.com/s… — I get an apparently standard subscription page. I’m guessing I’m covered, but I don’t know. Also, even as a subscriber I’m being followed online by 20 or more trackers (reports Privacy Badger), supposedly for personalized advertising purposes, but likely also for other purposes by Condé Nast’s third parties. (Meaning not just Google, Facebook and Amazon, but Parsely and indexww, which I’ve never heard of and don’t trust. And frankly I don’t trust those first three either.) As a subscriber I’d want to be followed only by Vanity Fair and Condé Nast for their own service-providing and analytic purposes, and not by who-knows-what by all those others. If you could pass that request along, I thank you. Cheers, Doc

When I clicked on the Submit button, I got this:

An error occurred while processing your request.An error occurred while processing your request.

Please call our Customer Care Department at 1-800-667-0015 for immediate assistance or visit Vanity Fair Customer Care online.

Invalid logging session ID (lsid) passed in on the URL. Unable to serve the servlet you’ve requested.

So there ya go: one among X zillion other examples of subscription hell, differing only in details.

Fortunately, there is a better way. Read on.

The Path

The only way to pave a path from subscription and customer service hell to the heaven we’ve never had is by  normalizing the ways both work, across all of business. And we can only do this from the customer’s side. There is no other way. We need standard VRM tools to deal with the CRM and CX systems that exist on the providers’ side.

We’ve done this before.

We fixed networking, publishing and mailing online with the simple and open standards that gave us the Internet, the Web and email. All those standards were easy for everyone to work with, supported boundless economic and social benefits, and began with the assumption that individuals are full-privilege agents in the world.

The standards we need here should make each individual subscriber the single point of integration for their own data, and the responsible party for changing that data across multiple entities. (That’s basically the heart of VRM.)

This will give each of us a single way to see and manage many subscriptions, see notifications of changes by providers, and make changes across the board with one move. VRM + CRM.
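As a rough illustration of “single point of integration,” here is a sketch of how one customer-side change could fan out to every provider a person subscribes to. The record format and the idea of a standard per-provider update message are assumptions on my part; no such standard exists yet, which is the point.

```python
# Hypothetical sketch: the subscriber holds one record of their own data
# and produces one standard update message per provider. Actual delivery
# would need a standard API that CRM systems agree to accept (assumed here).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Subscription:
    provider: str           # e.g. "condenast.com"
    plan: str               # e.g. "vanity-fair-digital"

@dataclass
class Subscriber:
    email: str
    subscriptions: List[Subscription]

    def change_email(self, new_email: str) -> List[Dict[str, str]]:
        """One move by the customer; one standard message per provider."""
        self.email = new_email
        return [{"provider": s.provider,
                 "field": "email",
                 "new_value": new_email} for s in self.subscriptions]

me = Subscriber(email="doc@example.com",
                subscriptions=[Subscription("condenast.com", "vanity-fair-digital"),
                               Subscription("peets.com", "coffee-subscription")])

for update in me.change_email("doc@new.example.com"):
    print(update)   # in practice, sent to each provider's (assumed) standard endpoint
```

One record, one move, many providers notified—instead of one login, one form and one password-recovery ditch per provider.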

The same goes for customer care service requests. These should be normalized the same way.

In the absence of normalizing how people manage subscription and customer care relationships, all the companies in the world with customers will have as many different ways of doing both as there are companies. And we’ll languish in the login/password hell we’re in now.

The VRM+CRM cost savings to those companies will also be enormous. For a sense of that, just multiply what I went through above by as many people as there are in the world with subscriptions, and multiply that result by the number of subscriptions those people have — and then do the same for customer service.

We can’t fix this inside the separate CRM systems of the world. There are too many of them, competing in too many silo’d ways to provide similar services that work differently for every customer, even when they use the same back-ends from Oracle, Salesforce, SugarCRM or whomever.

Fortunately, CRM systems are programmable. So I challenge everybody who will be at Salesforce’s Dreamforce conference next week to think about how much easier it will be when individual customers’ VRM meets Salesforce B2B customers’ CRM. I know a number of VRM people  who will be there, including Iain Henderson, of the bonus link below. Let me know you’re interested and I’ll make the connection.

And come work with us on standards. Here’s one.

Bonus link: Me-commerce — from push to pull, by Iain Henderson (@iaianh1)

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me, records my weight, body mass index and fat percentage on a graph updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from  community at email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that Nokia in May planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.

Why personal agency matters more than personal data

Lately a lot of thought, work and advocacy has been going into valuing personal data as a fungible commodity: one that can be made scarce, bought, sold, traded and so on. While there are good reasons to challenge whether or not data can be property (see Jefferson and Renieris), I want to focus on a different problem, one best solved first: the need for personal agency in the online world.

I see two reasons why personal agency matters more than personal data.

The first reason we have far too little agency in the networked world is that we settled, way back in 1995, on a model for websites called client-server, which should have been called calf-cow or slave-master, because we’re always the weaker party: dependent, subordinate, secondary. In defaulted regulatory terms, we clients are mere “data subjects,” and only server operators are privileged to be “data controllers,” “data processors,” or both.

Fortunately, the Net’s and the Web’s base protocols remain peer-to-peer, by design. We can still build on those. And it’s early.

A critical start in that direction is making each of us the first party rather than the second when we deal with the sites, services, companies and apps of the world—and doing that at scale across all of them.

Think about how much simpler and saner it is for websites to accept our terms and our privacy policies, rather than to force each of us, all the time, to accept their terms, all expressed in their own different ways. (Because they are advised by different lawyers, equipped by different third parties, and generally confused anyway.)

Getting sites to agree to our own personal terms and policies is not a stretch, because that’s exactly what we have in the way we deal with each other in the physical world.

For example, the clothes that we wear are privacy technologies. We also have  norms that discourage others from doing rude things, such as sticking their hands inside our clothes without permission.

We don’t yet have those norms online, because we have no clothing there. The browser should have been clothing, but instead it became an easy way for adtech and its dependents in digital publishing to plant tracking beacons on our naked digital selves, so they could track us like marked animals across the digital frontier. That this became normative is no excuse. Tracking people without their conscious and explicit invitation—or a court order—is morally wrong, massively rude, and now (at least hopefully) illegal under the GDPR and other privacy laws.

We can easily create privacy tech, personal terms and personal privacy policies that are normative and scale for each of us across all the entities that deal with us. (This is what ProjectVRM’s nonprofit spin-off, Customer Commons, is about.)
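To make “personal terms that scale” slightly more concrete, here is a minimal sketch of what a machine-readable term, and the person’s own record of a site accepting it, might look like. The field names and the example term are illustrative assumptions on my part, not Customer Commons’ actual format.

```python
# Hypothetical sketch of a personal term a site could accept and the person
# could log. Not an actual Customer Commons term; shape and names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class PersonalTerm:
    term_id: str            # pointer to a published, human-readable term
    allows: List[str]       # what the person permits
    forbids: List[str]      # what the person does not permit

@dataclass
class Acceptance:
    site: str
    term_id: str
    accepted_at: str        # the person's own record of who agreed to what, and when

my_term = PersonalTerm(
    term_id="example.org/terms/no-tracking-v1",          # hypothetical identifier
    allows=["ads not based on tracking", "first-party analytics"],
    forbids=["third-party tracking", "data resale"])

# If a site agrees, the person (not the site) keeps the receipt:
receipt = Acceptance(site="publisher.example",
                     term_id=my_term.term_id,
                     accepted_at=datetime.now(timezone.utc).isoformat())
print(receipt)
```

Note who holds the receipt: the person, not the site. That is what being the first party means in practice.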

It is the height of fatuity for websites and services to say their cookie notice settings are “your privacy choices” when you have no power to offer, or to make, your own privacy choices, with records of those choices that you keep.

The simple fact of the matter is that businesses can’t give us privacy if we’re always the second parties clicking “agree.” It doesn’t matter how well-meaning and GDPR-compliant those businesses are. Making people second parties in all cases is a design flaw in every standing “agreement” we “accept.” And we need to correct that.

The second reason agency matters more than data is that nearly the entire market for personal data today is adtech, and adtech is too dysfunctional, too corrupt, too drunk on the data it already has, and absolutely awful at doing what it has harvested that data for, which is so machines can guess at what we might want before they shoot “relevant” and “interest-based” ads at our tracked eyeballs.

Not only do tracking-based ads fail to convince us to do a damn thing 99.xx+% of the time, but most of the time we’re not out to buy anything in the first place.

As incentive alignments go, adtech’s failure to serve the actual interests of its targets verges on absolute. (It’s no coincidence that more than a year ago, up to 1.7 billion people were already blocking ads online.)

And hell, what they do also isn’t really advertising, even though it’s called that. It’s direct marketing, which gives us junk mail and is the model for spam. (For more on this, see Separating Advertising’s Wheat and Chaff.)

Privacy is personal. That means privacy is an effect of personal agency, projected by personal tech and by personal expressions of intent that others can respect without working at it. We have that in the offline world. We can have it in the online world too.

Privacy is not something given to us by companies or governments, no matter how well they do Privacy by Design or craft their privacy policies. Top-down privacy simply can’t work.

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. Good and helpful though it may be, it is the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

 
