
Is being less tasty vegetables our best strategy?

We are now being farmed by business. The pretense of the “customer is king” is now more like “the customer is a vegetable” — Adrian Gropper

That’s a vivid way to put the problem.

There are many approaches to solutions as well. One is suggested today in the latest by @_KarenHao in MIT Technology Review, titled

How to poison the data that Big Tech uses to surveil you:
Algorithms are meaningless without good data. The public can exploit that to demand change.

An excerpt:

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:
Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.
What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.

The sourced paper* is titled Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies, and concludes,

In this paper, we presented a framework for using “data leverage” to give the public more influence over technology company behavior. Drawing on a variety of research areas, we described and assessed the “data levers” available to the public. We highlighted key areas where researchers and policymakers can amplify data leverage and work to ensure data leverage distributes power more broadly than is the case in the status quo.

I am all for screwing with overlords, and the authors suggest some fun approaches. Hell, we should all be doing whatever it takes, lawfully (and there is a lot of easement around that) to stop rampant violation of our privacy—and not just by technology companies. The customers of those companies, which include every website that puts up a cookie notice that nudges visitors into agreeing to be tracked all over the Web (in observance of the letter of the GDPR, while screwing its spirit), are also deserving of corrective measures. Same goes for governments who harvest private data themselves, or gather it from others without our knowledge or permission.

My problem with the framing of the paper and the story is that both start with the assumption that we are all so weak and disadvantaged that our only choices are: 1) to screw with the status quo to reduce its harms; and 2) to seek relief from policymakers.  While those choices are good, they are hardly the only ones.

Some context: wanton privacy violations in our digital world have been going on for only a little more than a decade, and that world is itself barely more than a couple dozen years old (dating from the appearance of e-commerce in 1995). We will also remain digital as well as physical beings for the next few decades or centuries.

So we need more than these kinds of prescriptive solutions. For example, real privacy tech of our own, that starts with giving us the digital versions of the privacy protections we have enjoyed in the physical world for millennia: clothing, shelter, doors with locks, and windows with curtains or shutters.

We have been on that case with ProjectVRM since 2006, and there are many developments in progress. Some even comport with our Privacy Manifesto (a work in progress that welcomes improvement).

As we work on those, and think about throwing spanners into the works of overlords, it may also help to bear in mind one of Craig Burton‘s aphorisms: “Resistance creates existence.” What he means is that you can give strength to an opponent by fighting it directly. He applied that advice in the ’80s at Novell by embracing 3Com, Microsoft and other market opponents, inventing approaches that marginalized or obsolesced their businesses.

I doubt that will happen in this case. Resisting privacy violations has already had lots of positive results. But we do have a looong way to go.

Personally, I welcome throwing a Theia.


* The full list of authors is Nicholas Vincent, Hanlin Li (@hanlinliii), Nicole Tilly and Brent Hecht (@bhecht) of Northwestern University, and Stevie Chancellor (@snchencellor) of the University of Minnesota.

Let’s zero-base zero-party data

Forrester Research has gifted marketing with a hot buzzphrase: zero-party data, which they define as “data that a customer intentionally and proactively shares with a brand, which can include preference center data, purchase intentions, personal context, and how the individual wants the brand to recognize her.”

Salesforce, the CRM giant (that’s now famously buying Slack), is ambitious about the topic, and how it can “fuel your personalized marketing efforts.” The second-person “you” there is Salesforce’s corporate customer.

It’s important to unpack what Salesforce says about that fuel, because Salesforce is a tech giant that fully matters. So here’s text from that last link. I’ll respond to it in chunks. (Note that zero-, first-, and third-party data are all about you, no matter where they come from.)

What is zero-party data?

Before we define zero-party data, let’s back up a little and look at some of the other types of data that drive personalized experiences.

First-party data: In the context of personalization, we’re often talking about first-party behavioral data, which encompasses an individual’s site-wide, app-wide, and on-page behaviors. This also includes the person’s clicks and in-depth behavior (such as hovering, scrolling, and active time spent), session context, and how that person engages with personalized experiences. With first-party data, you glean valuable indicators into an individual’s interests and intent. Transactional data, such as purchases and downloads, is considered first-party data, too.

Third-party data: Obtained or purchased from sites and sources that aren’t your own, third-party data used in personalization typically includes demographic information, firmographic data, buying signals (e.g., in the market for a new home or new software), and additional information from CRM, POS, and call center systems.

Zero-party data, a term coined by Forrester Research, is also referred to as explicit data.

They then go on to quote Forrester’s definition, substituting “[them]” for “her.”

The first party in that definition is the site harvesting “behavioral” data about the individual. (It doesn’t square with the legal profession’s understanding of the term, so if you know that one, try not to be confused.)

It continues,

Why is zero-party data important?

Forrester’s Fatemeh Khatibloo, VP principal analyst, notes in a video interview with Wayin (now Cheetah Digital) that zero-party data “is gold. … When a customer trusts a brand enough to provide this really meaningful data, it means that the brand doesn’t have to go off and infer what the customer wants or what [their] intentions are.”

Sure. But what if the customer has her own way to be a precious commodity to a brand—one she can use at scale with all the brands she deals with? I’ll unpack that question shortly.

There’s the privacy factor to keep in mind too, another reason why zero-party data – in enabling and encouraging individuals to willingly provide information and validate their intent – is becoming a more important part of the personalization data mix.

Two things here.

First, again, individuals need their own ways to protect their privacy and project their intentions about it.

Second, having as many ways for brands to “enable and encourage” disclosure of private information as there are brands to provide them is hugely inefficient and annoying. But that is what Salesforce is selling here.

As industry regulations such as GDPR and the CCPA put a heightened focus on safeguarding consumer privacy, and as more browsers move to phase out third-party cookies and allow users to easily opt out of being tracked, marketers are placing a greater premium and reliance on data that their audiences knowingly and voluntarily give them.

Not if the way they “knowingly and voluntarily” agree to be tracked is by clicking “AGREE” on website home page popovers. Those only give those sites ways to adhere to the letter of the GDPR and the CCPA while also violating those laws’ spirit.

Experts also agree that zero-party data is more definitive and trustworthy than other forms of data since it’s coming straight from the source. And while that’s not to say all people self-report accurately (web forms often show a large number of visitors are accountants, by profession, which is the first field in the drop-down menu), zero-party data is still considered a very timely and reliable basis for personalization.

Self-reporting will be a lot more accurate if people have real relationships with brands, rather (again) than ones that are “enabled and encouraged” in each brand’s own separate way.

Here is a framework by which that can be done. Phil Windley provides some cool detail for operationalizing the whole thing here, here, here and here.

Even if the countless separate ways are provided by one company (e.g. Salesforce), every brand will use those ways differently, giving each brand scale across many customers, but giving those customers no scale across many companies. If we want that kind of scale, dig into the links in the paragraph above.

With great data comes great responsibility.

You’re not getting something for nothing with zero-party data. When customers and prospects give and entrust you with their data, you need to provide value right away in return. This could take the form of: “We’d love you to take this quick survey, so we can serve you with the right products and offers.”

But don’t let the data fall into the void. If you don’t listen and respond, it can be detrimental to your cause. It’s important to honor the implied promise to follow up. As a basic example, if you ask a site visitor: “Which color do you prefer – red or blue?” and they choose red, you don’t want to then say, “Ok, here’s a blue website.” Today, two weeks from now, and until they tell or show you differently, the website’s color scheme should be red for that person.

While this example is simplistic, the concept can be applied to personalizing content, product recommendations, and other aspects of digital experiences to map to individuals’ stated preferences.

This, and what follows in that Salesforce post, is a pitch for brands to play nice and use surveys and stuff like that to coax private information out of customers. It’s nice as far as it can go, but it gives no agency to customers—you and me—beyond what we can do inside each company’s CRM silo.

So here are some questions that might be helpful:

  • What if the customer shows up as somebody who already likes red and is ready to say so to trusted brands? Or, better yet, if the customer arrives with a verifiable claim that she is already a customer, or that she has good credit, or that she is ready to buy something?
  • What if she has her own way of expressing loyalty, and that way is far more genuine, interesting and valuable to the brand than the company’s current loyalty system, which is full of gimmicks, forms of coercion, and operational overhead?
  • What if the customer carries her own privacy policy and terms of engagement (ones that actually protect the privacy of both the customer and the brand, if the brand agrees to them)?

All those scenarios yield highly valuable zero-party data. Better yet, they yield real relationships with values far above zero.

Those questions suggest just a few of the places we can go if we zero-base customer relationships outside standing CRM systems: out in the open market where customers want to be free, independent, and able to deal with many brands with tools and services of their own, through their own CRM-friendly VRM—Vendor Relationship Management—tools.

VRM reaching out to CRM implies (and will create) a much larger middle market space than the closed and private markets isolated inside every brand’s separate CRM system.

We’re working toward that. See here.

 

Toward real market conversations

A friend pointed me to this video of a slide presentation by Bixy, because it looked to him kinda like VRM.  I thought so too… at first. Here’s an image from the deck:

bixy slide

Here is what I wrote back, updated and improved a bit:

These are my notes on slides within the deck/video.

1) It looks to me like a CRM refresh rather than VRM. There have been many of these. And, while Bixy looks better than any others I can remember (partly because I can’t remember any… it’s all a blur), it’s still pitching into the CRM market. Nothing wrong with that: it’s a huge market, with side categories all around it. It’s just not VRM, which is the customer hand CRM shakes. (And no, a CRM system giving the customer a hand to shake the CRM’s with isn’t VRM. It’s just gravy on a loyalty card.)

2) The notion that customers  (I dislike the word “consumers”) want relationships with brands is a sell-side fantasy. Mostly customers are looking to buy something they’ve already searched for, or to keep what they already own working, or to replace one thing with another that won’t fail—and to get decent service when something does fail. (For more on this subject, I suggest reading the great Bob Hoffman, for example here.)

3) While it’s true that customers don’t want to be tracked, annoyed and manipulated, and that those practices have led to dislike of businesses and icky legislation (bullseye on all of those), and that “relationships are based on trust, value, attention, respect and communication,” none of those five things mean much to the customer if all of them are locked into a company’s one-to-many system, which is what we have with 100% of all CRM, CX and XX (pick your initialism) systems—all of them different, which means a customer needs to have as many different ways to trust, value, attend to, respect and communicate as there are company systems for providing the means.

4) Bixy’s idea here (and what the graphic above suggests) is that the customer can express likes and dislikes to many brands’ Salesforce CRM systems. They call this “sharing for value in return.” But there is far less appetite for this than marketing thinks. Customers share as little as they can when they are required to do so, and would rather share zero when they go about their ordinary surfing online or shopping anywhere. Worse, marketing in general (follow the news)—and adtech/martech in particular—continue to believe that customers “share” data gathered about them by surveillance, and that this is “exchanged” for free services, discounts and other goodies. This is one of the worst rationalizations in the history of business.

5) “B2C conversations” that are “transparent, personalized and informative” are more a marketing fantasy than a customer desire.  What customers would desire, if they were available, are tools that enhance them with superpowers.  For example, the power to change their last name, email address or credit card for every company they deal with, in one move. This is real scale: customer scale.  We call these superpowers customertech:

CRM is vendortech.
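To make that “one move” superpower concrete, here is a minimal Python sketch: the customer holds a single record of her own, and every vendor’s system reads changes from her instead of keeping a separate stale copy. All names here (CustomerRecord, subscribe, update) are hypothetical; a real customertech tool would need open protocols and consented agreements, not an in-memory callback list.

```python
from typing import Callable

class CustomerRecord:
    """A toy stand-in for a customer-held record of her own data."""

    def __init__(self, **fields):
        self.fields = dict(fields)
        self.subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        """A vendor's system registers to receive the customer's changes."""
        self.subscribers.append(callback)

    def update(self, **changes) -> None:
        """One change by the customer reaches every vendor at once."""
        self.fields.update(changes)
        for notify in self.subscribers:
            notify(dict(changes))

# One customer, many vendors, each holding only what she pushes to them.
me = CustomerRecord(email="old@example.com")
vendor_copies = {"acme": {}, "globex": {}}
me.subscribe(vendor_copies["acme"].update)
me.subscribe(vendor_copies["globex"].update)

me.update(email="new@example.com")  # one move; every vendor is current
```

The point of the sketch is the direction of flow: scale belongs to the customer, who updates once, rather than to each vendor’s separate silo.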

6) Some percentage of Adidas customers (the example in that video) may be willing to fill out a “conversational” form to arrive at a shoe purchase, but I suspect a far larger percentage would regard the whole exercise as a privacy-risking journey down a sales funnel that they’d rather not be in. So long as the world lacks standard ways for people to prevent surveillance of their private spaces and harvesting of personal data, to make non-coercive two-way agreements with others, and ways to monitor personal data use and agreement compliance, there is no way trustworthy “conversations” of the kind Bixy proposes can happen.

7) Incumbent “loyalty” programs are, on the whole, expensive and absurd.

Take Peet’s Coffee, a brand I actually do love. I’ve been a customer of Peet’s for, let’s see… 35 years. I have a high-end (like in a coffee shop) espresso machine at my house, with a high-end grinder to match. All I want from Peet’s here at home are two kinds of Peet’s beans: Garuda and Major Dickason Decaf. That’s it. I’ve sampled countless single-origin beans and blends from many sources, and those are my faves. I used to buy one-pound bags of those at Peet’s stores; but in COVID time I subscribe to have those delivered. Which isn’t easy, because Peet’s has made buying coffee online remarkably hard. Rather than just showing me all the coffees they have, they want to drag me every time through a “conversational” discovery process—and that’s after the customary (for every company) popover pitch to sign up as a member, which I already am, and to detour through a login-fail password-recovery ditch (with CAPTCHAs, over and over, clicking on buses and traffic lights and crosswalks) that shows up every. damn. time. On arrival at the membership home page, “My Dashboard” all but covers the home screen, and tells me I’m 8 points away from my next reward (always a free coffee, which is not worth the trouble, and not why I’m loyal). Under the Shop menu (the only one I might care about) there are no lists of coffee types. Instead there’s “Find Your Match,” which features two kinds of coffee I don’t want and a “take your quiz” game. Below that are “signature blends” that list nothing of ingredients but require one to “Find My Coffee” through a “flavor wheel” that gives one a choice of five flavors (“herbal/earthy,” “bright/citrus”…). I have to go waaay the hell down a well of unwanted and distracting choices to get to the damn actual coffee I know I like.

My point: here is a company that is truly loved (or hell, at least liked) by its customers, mostly because it’s better than Starbucks. They’re in a seller’s market. They don’t need a loyalty program, or the high operational and cognitive overhead involved (e.g. “checking in” at stores with a QR code on a phone app). They could make shopping online a lot simpler with a nice list of products and prices. But instead they decided, typically (for marketing), that they needed all this bullshit to suck customers down sales funnels. When they don’t. If Peet’s dumped its app and made their website and subscription system simpler, they wouldn’t lose one customer and they’d save piles of money.

Now, back to the Adidas example. I am sure anybody who plays sports or runs, or does anything in athletic shoes, would rather just freaking shop for shoes than be led by a robot through a conversational maze that more than likely will lead to a product the company is eager to sell instead of one the customer would rather buy.

8) I think most customers would be creeped out to reveal how much they like to run and other stuff like that, when they have no idea how that data will be used—which is also still the typical “experience” online. Please: just show them the shoes, say what they’re made of, what they’re good for, and (if it matters) what celeb jocks like them or have co-branded them.

9) The “value exchange” that fully matters is money for goods. “Relationship” beyond that is largely a matter of reputation and appreciation, which is earned by the products and services themselves, and by human engagement. Not by marketing BS.

10) Bixy’s pitch about “90% of conversation” occurring “outside the app as digital widgets via publisher and marketer SDKs” and “omnichannel personalization” through “buy rewards, affiliate marketing, marketer insights, CRM & CDP, email, ads, loyalty, eCommerce personalization, brand & retailer apps and direct mail” is just more of the half-roboticized marketing world we have, only worse. (It also appears to require the kind of tracking the video says up front that customers don’t want.)

11) The thought of “licensing my personal information to brands for additional royalties and personalization” also creeps me out.

12) I don’t think this is “building relationships from the consumer point of view.” I think it’s a projection of marketing fantasy on a kind of customer that mostly doesn’t exist. I also don’t think “reducing the sales cycle” is any customer’s fantasy.

To sum up, I don’t mean to be harsh. In fact I’m glad to talk with Bixy if they’re interested in helping with what we’re trying to do here at ProjectVRM—or at Customer Commons, the Me2B Alliance and MyData.

I also don’t think Cluetrain‘s first thesis (“Markets are conversations“) can be proven by tools offered only by sellers and made mostly to work for sellers. If we want real market conversations, we need to look at solving market problems from the customers’ side. Look here and here for ways to do that.

What SSI needs

wallet

Self-sovereign identity (SSI) is hot stuff.  Look it up and see how many results you get. As of today, I get 627,000 on Google.  By that measure alone, SSI is the biggest thing in the VRM development world. Nothing I know has more promise to give individuals leverage for dealing with the organizations of the world, especially in business.

Here’s how SSI works: rather than presenting your “ID” when some other party wants to know something about you, you present a verifiable credential that tells them no more than they need to know.

In other words, if someone wants to know if you are over 18, a member of Costco, a college graduate, or licensed to drive a car, you present a verifiable credential that tells the other party no more than that, but in a way they can trust. The interaction also leaves a trail, so you can both look back and remember what credentials you presented, and how the credential was accepted.
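That selective-disclosure flow can be sketched in a few lines of Python. This is a toy model only: real SSI uses the W3C Verifiable Credentials data model with public-key signatures, not a shared HMAC secret, and every name below (issue, verify, present, trail) is hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Toy illustration: a real issuer would sign with a private key and
# verifiers would check against its public key, not a shared secret.
ISSUER_KEY = b"dmv-demo-key"

def issue(claim: dict) -> dict:
    """Issuer signs a minimal claim, e.g. {'age_over_18': True}."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify(credential: dict) -> bool:
    """Verifier checks the signature; it learns only the claim itself."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

trail = []  # the holder's own record of what was presented, to whom

def present(credential: dict, verifier_name: str) -> bool:
    """Holder presents the credential and logs the interaction."""
    accepted = verify(credential)
    trail.append({"to": verifier_name,
                  "claim": credential["claim"],
                  "accepted": accepted,
                  "at": datetime.now(timezone.utc).isoformat()})
    return accepted

cred = issue({"age_over_18": True})   # says nothing about birth date
print(present(cred, "liquor-store"))  # prints True
```

Note how the verifier learns only the single claim (not a birth date, name, or address), while the holder’s trail records what was presented, to whom, and whether it was accepted.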

So, how do you do this? With a tool.

The easiest tool to imagine is a wallet, or a wallet app (here’s one) with some kind of dashboard. That’s what I try to illustrate with the image above: a way to present credentials and to keep track of how those play in the relevant parts of your life.

What matters is that you need to be in charge of your verifiable credentials, how they’re presented, and how the history of interactions is recorded and auditable. You’re not just a “user,” or a pinball in some company’s machine. You’re the independent and sovereign self, selectively interacting with others who need some piece of “ID.”

There is no need for this to be complicated—at least not at the UI level. In fact, most of it can be automated, especially if the business ends of Me2B engagements are ready to work with verifiable credentials.

As it happens, almost all development in the SSI world is at the business end. This is very good, but it’s not enough.

To me it looks like SSI development today is where the Web was in the early ’90s, before the invention of graphical browsers. Back then we knew the Web was there; but most of us couldn’t see or use it. We needed a graphical browser for that.  (Mosaic was the first, in 1993.)

For SSI to work, it needs the equivalent of a graphical browser. Maybe it’s a wallet, or maybe it’s something else. (I have an idea; but I want to see how SSI developers respond to this post first.)

The individual’s tool or tools (those equivalents of a browser) also don’t need to have a business model. In fact, it will be best if they don’t.

It should help to remember that Microsoft beat Netscape in the browser business by giving Internet Explorer away while Netscape charged for Navigator. Microsoft did that because they knew a free browser would be generative. It also helped that browsers were substitutable, meaning you could choose among many different ones.

What you look for here are because effects. That’s when you make money because of something rather than with it. Examples are the open protocols and standards beneath the Internet and the Web, free and open source code, and patents (such as Ethernet’s) that developers are left free to ignore.

If we don’t get that tool (whatever we call it), and SSI remains mostly a B2B thing, it’s doomed to niches at best.

I can’t begin to count how many times VRM developers have started out wanting to empower individuals and have ended up selling corporate services to companies, because that’s all they could imagine or sell—or that investors wanted. Let’s not let that happen here.

Let’s give people the equivalent of a browser, and then watch SSI truly succeed.

The true blue ocean

“Blue ocean strategy challenges companies to break out of the red ocean of bloody competition by creating uncontested market space that makes the competition irrelevant.”

That’s what W. Chan Kim and Renee Mauborgne say in the original preface to Blue Ocean Strategy: How to Create Uncontested Market Space and Make the Competition Irrelevant, published by Harvard Business Review Press in 2005.  Since then the red/blue ocean metaphor has become business canon.

The problem with that canon is that it looks at customers the way a trawler looks at fish.

To understand the problem here, it helps to hear marketing talk to itself. Customers, it says, are targets to herd on a journey into a funnel through which they are acquired, managed, controlled and locked in.

This is the language of ranching and slavery. Not a way to talk about human beings.

Worse, every business is a separate trawler, and handles customers in its hold differently, even if they’re using the same CRM, CX and other systems to do all the stuff listed two paragraphs up. (Along with other mundanities: keeping records, following leads, forecasting sales, crunching numbers, producing analytics, and other stuff customers don’t care about until they’re forced to deal with it, usually when a problem shows up.)

In fact, these systems can’t help holding customers captive. Because the way these systems are sold and deployed means there are as many different ways for customers to “relate” to those companies as there are companies.

And, as long as companies are the only parties able to (as the GDPR puts it) operate as a “data controller” or “data processor,” the (literally) damned customer remains nothing more than a “data subject” in countless separate databases and name spaces, each with separate logins and passwords.

This is why, from the customer’s perspective, the whole ocean of CRM and CX are opaque with rutilance.

Worse, all CRM and CX systems operate on the assumption that it is up to them to know everything about a customer, a prospect, or a user. And most of that knowledge these days is obtained early in the (literally) damned “journey” through exactly the kind of tracking that has caused—

  1. Ad blocking, which (though it had been around since 2004) hockey-sticked in 2013, when the adtech fecosystem gave the middle finger to Do Not Track, and which by 2015 was the biggest boycott in world history
  2. Regulation, most notably the GDPR and the CCPA, which never would have happened had marketing not wanted to track everyone like marked animals
  3. Tracking protection, now getting built into browsers (e.g. Safari, Firefox, Brave, Edge) because the market (that big blue ocean) demands it

Stop and think for a minute how much the market actually knows—meaning how much customers actually know about what they own, use, want, wish for, regret, and the rest of it.

The simple fact is that companies’ customers and users know far more about the products and services they own and use than the companies do. Those people are also in a far better position to share that knowledge than any CRM, CX or other system for “relating” to customers can begin to guess at, much less comprehend. Especially when every company has its own separate and isolated ways of doing both.

But customers today still mostly lack ways of their own to share that knowledge, and do it selectively and safely. Those ways are in the category we call VRM (when it shakes hands with CRM), or Me2B  (when it’s dealing broadly across everything a company does with customers and users).

VRM and Me2B are what make customers as free as can be, outside any company’s nets, funnels and teeming holds in trawlers’ hulls.

It’s also much bigger than the red ocean of CRM/CX by themselves, because it’s where customers share far more—and better—information than they can inside existing CRM/CX systems. Or will, once VRM and Me2B tools and services stand up.

For example, there’s—

  • What customers actually want to buy (rather than what companies can at best only guess at)
  • What customers already own, and how they’re actually using it (meaning what’s their Internet of their things)
  • What companies, products and services customers are actually loyal to, and why
  • How customers would like to share their experiences
  • What relevant credentials they carry, for identity and other purposes. And who their preferred agents or intermediaries might be
  • What their terms, conditions and privacy policies are, and how compliance with those can be assured and audited
  • What their tools are, for making all those things work, across the board, with all the companies and other organizations they engage

The list is endless, because there is no limit to what customers can say to companies (or how they relate to companies) if companies are willing to deal with customers who have as much scale across corporate systems as those systems wish to have across all of their customers.

Being “customer centric” won’t cut it. That’s just a gloss on the same old thing. If companies wish to be truly customer-driven, they need to be dealing with free-range human beings. Not captives.

So: how?

There is already code for doing much of what’s listed in the seven bullets above.  Services too. (Examples.) There could be a lot more.

There are also nonprofits working to foster development in that big blue ocean. Customer Commons is ProjectVRM’s own spin-off. The Me2B Alliance is a companion effort. So are MyData and the Sovrin Foundation. All of them could use some funding.

What matters for business is that all of them empower free-range customers and give them scale: real leverage across companies and markets, for the good of all.

That’s the real blue ocean.

Without VRM and Me2B working there, the most a company can do with its CRM or CX system is look at it.

Bonus link. Pull quote: “People must own root authority, before a system transmutes your personal life into a consumer. Before you need the system to exist, you are whole.”

 

Where VRM fits

VRM is the hand CRM shakes.

That’s the simplest way of putting it. That’s what we wanted it to be when we started ProjectVRM in 2006, and that’s how we described it in 2011, when I gave this talk at SugarCRM‘s SugarCon conference:

Those “ways” are tools that belong to each customer and give them global scale: meaning they should work the same way for every company’s CRM system. Just like the customer’s phone, email and browser shake hands with every company already.

This is, as the marketers say, positioning. And it’s important, now that a number of significant .orgs have stepped up to take care of other work we helped start with ProjectVRM. Most notable are Customer Commons (a ProjectVRM spin-off), the Me2B Alliance, and MyData Global. There are others, but those are foremost on the ProjectVRM list.

The space we’re building out here is immense, so there is not only room for everybody, but more work than even everybody can do. Meanwhile it is essential that we clarify what all the roles are. Hence this post.

What if we called cookies “worms”?

While you ponder that, read Exclusive: New York Times phasing out all 3rd-party advertising data, by Sara Fischer in Axios.

The cynic in me translates the headline as “Leading publishers cut out the middle creep to go direct with tracking-based advertising.” In other words, same can, nicer worms.

But maybe that’s wrong. Maybe we’ll only be tracked enough to get put into one of those “45 new proprietary first-party audience segments” or  “at least 30 more interest segments.” And maybe only tracked on site.

But we will be tracked, presumably. Something needs to put readers into segments. What else will do that?

So, here’s another question: Will these publishers track readers off-site to spy on their “interests” elsewhere? Or will tracking be confined to just what the reader does while using the site?

Anyone know?

In a post on the ProjectVRM list, Adrian Gropper says this about the GDPR (in response to what I posted here): “GDPR, like HIPAA before it, fails because it allows an unlimited number of dossiers of our personal data to be made by unlimited number of entities. Whether these copies were made with consent or without consent through re-identification, the effect is the same, a lack of transparency and of agency.”

So perhaps it’s progress that these publishers (the Axios story mentions The Washington Post and Vox as well as the NYTimes) are only keeping limited dossiers on their readers alone.

But that’s not progress enough.

We need global ways to say to every publisher how little we wish them to know about us. Also ways to keep track of what they actually do with the information they have. (And we're working on those.)
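To make that idea concrete, here is a purely hypothetical sketch of what such a "global way" might look like: a personal agent serializing its privacy terms so it could proffer them identically to every publisher. The field names and values below are illustrative only, not any published standard, though Customer Commons terms and the IEEE 7012 work point in this direction.

```python
import json

# Hypothetical machine-readable privacy terms a personal agent might
# proffer identically to every publisher. All field names here are
# illustrative, not part of any published standard.
my_terms = {
    "term": "NoStalking",           # a Customer Commons-style term name
    "first_party_data": "session-only",
    "third_party_sharing": False,
    "off_site_tracking": False,
    "audit_log_requested": True,    # "keep track of what they actually do"
}

# Serialized compactly, this could ride along as an HTTP header value
# or be published at a well-known URL for sites to read.
header_value = json.dumps(my_terms, separators=(",", ":"))
print(header_value)
```

The point of the sketch is the scale property: one artifact, authored on the individual's side, presented the same way to every company, rather than a different consent dialog authored by each company.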

Being able to have one’s data back (e.g. via the CCPA) is a kind of progress (as is the law’s discouragement of collection in the first place), but we need technical as well as legal mechanisms for projecting personal agency online. (Models for this are Archimedes and Marvel heroes.)  Not just more ways to opt out of being observed more than we’d like—especially when we still lack ways to audit what others do with the permissions we give them.

That’s the only way we’ll get rid of the worms.

Bonus link.

Markets as conversations with robots

From the Google AI blog, Towards a Conversational Agent that Can Chat About…Anything:

In “Towards a Human-like Open-Domain Chatbot”, we present Meena, a 2.6 billion parameter end-to-end trained neural conversational model. We show that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots. Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.

A chat between Meena (left) and a person (right).

Meena
Meena is an end-to-end, neural conversational model that learns to respond sensibly to a given conversational context. The training objective is to minimize perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation). At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity.
 
Concretely, Meena has a single Evolved Transformer encoder block and 13 Evolved Transformer decoder blocks as illustrated below. The encoder is responsible for processing the conversation context to help Meena understand what has already been said in the conversation. The decoder then uses that information to formulate an actual response. Through tuning the hyper-parameters, we discovered that a more powerful decoder was the key to higher conversational quality.
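The training objective the excerpt describes, minimizing perplexity, is easy to make concrete. A minimal illustrative sketch (not Google's code): perplexity is the exponentiated average negative log-likelihood a model assigns to each actual next token, so a model that gives every next word probability 1/4 has perplexity 4, as if it were choosing among four equally likely options at each step.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood
    the model assigns to each actual next token in a sequence."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning probability 0.25 to each observed next token
# is, on average, "choosing among 4 options" per step.
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))  # → 4.0
```

Lower perplexity means the model is less surprised by real conversations, which is why the excerpt's finding that perplexity tracks the human-judged SSA metric matters: an automatic number stands in for an expensive human evaluation.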
So how about turning this around?

What if Google sold or gave a Meena model to people—a model Google wouldn’t be able to spy on—so people could use it to chat sensibly with robots or people at companies?

Possible?

If, in the future (which is now—it’s freaking 2020 already), people will have robots of their own, why not one for dealing with companies, which themselves are turning their sales and customer service systems over to robots anyway?

People are the real edge

You Need to Move from Cloud Computing to Edge Computing Now!, writes Sabina Pokhrel in Towards Data Science. The reason, says her subhead, is that “Edge Computing market size is expected to reach USD 29 billion by 2025.” (Source: Grand View Research.) The second person “You” in the headline is business. Not the people at the edge. At least not yet.

We need to fix that.

By we, I mean each of us—as independent individuals and as collected groups—and with full agency in both roles. The edge is both.

The article  illustrates the move to Edge Computing this way:

The four items at the bottom (taxi, surveillance camera, traffic light, and smartphone) are at the edges of corporate systems. That’s what the Edge Computing talk is about. But one of those—the phone—is also yours. In fact it is primarily yours. And you are the true edge, because you are an independent actor.

More than any device in the world, that phone is the people's edge, because no connected device is more personal. Our phones are, almost literally, extensions of ourselves—to a degree that being without one in the connected world is a real disability.

Given phones' importance to us, we need to be in charge of whatever edge computing happens there. Simple as that. We cannot be puppets at the ends of corporate strings.

I am sure that this is not a consideration for most of those working on cloud computing, edge computing, or moving computation from one to the other.

So we need to make clear that our agency over the computation in our personal devices is a primary design consideration. We need to do that with tech, with policy, and with advocacy.

This is not a matter of asking companies and governments to please give us some agency. We need to create that agency for ourselves, much as we’ve learned to walk, talk and act on our own. We don’t have “Walking as a Service” or “Talking as a Service.” Because those are only things an individual human being can do. Likewise there should be things only an individual human with a phone can do. On their own. At scale. Across all companies and governments.

Pretty much everything written here and tagged VRM describes that work and ways to approach that challenge.

Recently some of us (me included) have been working to establish Me2B as a better name for VRM than VRM.  It occurs to me, in reading this piece, that the e in Me2B could stand for edge. Just a thought.

If we succeed, there is no way edge computing gets talked about, or worked on, without respecting the Me’s of the world, and their essential roles in operating, controlling, managing and otherwise making the most of those edges—for the good of the businesses they deal with as well as themselves.

 

 

We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’  summary  of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point with this is that politics are his frames (or what he calls “embankments”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, absent of privacy, and reliant on surveillance—both by Amazon and by smart homes’ residents.  In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ‘hood.

Let's visit three additional pieces. (And stay with me. There's a call to action here, and I'm making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals  by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.)  to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and surely awful outcomes.

Second, What does the Internet make of us, where I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here.  As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, rail transport, mass production, electricity, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that "works us over completely" while it's busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it's as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It's also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it's much easier to talk about Trump than about what made him possible. Think about it: absent of digital tech, would we have had Trump? Or even Obama? McLuhan's blunt perspective may help. "People," he said, "do not want to know why radio caused Hitler and Gandhi alike."

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” has done little to thwart unwelcome surveillance, and has caused boundlessly numerous, insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying business market demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation to either do research, or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to at least start talking about research. Two possible topics: facial recognition and smart homes. Anyone game?


What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join in on that link. Thanks.


© 2021 ProjectVRM
