Category: Privacy

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes) the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008.  At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the  Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of Compuserve, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)
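Seen in protocol terms, that request could hardly have been smaller. Here’s a minimal sketch, in TypeScript on Node (my own illustration; nothing here comes from the spec beyond the DNT and Tk header names), of all that honoring it would have taken:

```typescript
import { createServer } from "http";

// Browsers with Do Not Track enabled sent "DNT: 1" with every request.
// Honoring it meant not setting tracking cookies and not handing the
// visitor along to third parties. That was the whole ask.
const server = createServer((req, res) => {
  const dnt = req.headers["dnt"] === "1";

  if (dnt) {
    // "Tk: N" ("not tracking") is the response defined by the W3C's
    // Tracking Preference Expression draft.
    res.setHeader("Tk", "N");
  }
  res.end(dnt ? "Not tracking you." : "No preference expressed.");
});

server.listen(8080);
```

That’s all the “polite request” ever amounted to.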

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this, and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR has been to push the websites of the world into throwing insincere and misleading cookie notices in front of visitors, usually to extract “consent” (which it isn’t) to exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that the simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay where personal privacy is concerned, are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, oblivious to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it). Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and gives companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works.

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
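Since the mechanics matter here, a hypothetical sketch of a beacon’s server side, in TypeScript on Node, may help. Everything in it (names, port, cookie format) is invented; the pattern is the point:

```typescript
import { createServer } from "http";
import { randomUUID } from "crypto";

// A "beacon" is just a tiny resource on a tracker's own domain, embedded in
// the pages you visit. Because your browser requests it, the tracker can set
// and read its own cookie and see which site referred you.
const tracker = createServer((req, res) => {
  // Reuse the ID set at an earlier site, or mint one. The same ID now
  // follows this browser to every page where the beacon is embedded.
  const cookie = req.headers["cookie"] ?? "";
  const id = /uid=([\w-]+)/.exec(cookie)?.[1] ?? randomUUID();

  console.log(`visitor ${id} seen at ${req.headers["referer"] ?? "unknown"}`);

  res.setHeader("Set-Cookie", `uid=${id}; Max-Age=31536000; SameSite=None; Secure`);
  res.setHeader("Content-Type", "image/gif");
  res.end(); // a 1x1 transparent GIF would normally be returned here
});

tracker.listen(8081);
```

Multiply that by the dozens of third parties Privacy Badger shows you on a single news page and you have the profiling business.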

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook comprises many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
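To make those two capabilities concrete, here is a minimal sketch of an opt-in ledger that lives on the person’s side, in a tool the person controls. It’s a thought experiment in TypeScript; every name in it is invented:

```typescript
// Capability 1: a user-held record of what has been opted into.
interface OptIn {
  site: string;                                     // who got permission
  purpose: "ads" | "analytics" | "personalization";
  grantedAt: Date;
  expiresAt: Date;                                  // grants lapse unless renewed
}

class ConsentLedger {
  private grants: OptIn[] = [];

  optIn(site: string, purpose: OptIn["purpose"], days = 30): void {
    const now = new Date();
    this.grants.push({
      site,
      purpose,
      grantedAt: now,
      expiresAt: new Date(now.getTime() + days * 86_400_000),
    });
  }

  // The default is no: absent an unexpired grant, tracking is not permitted.
  permits(site: string, purpose: OptIn["purpose"]): boolean {
    return this.grants.some(
      g => g.site === site && g.purpose === purpose && g.expiresAt > new Date()
    );
  }

  // Capability 2: the ledger itself is the audit trail.
  audit(): readonly OptIn[] {
    return this.grants;
  }
}
```

Note what’s reversed: the record of consent is ours, not scattered across every site’s database.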

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And customers will want a lot more from the market’s supply side than surveillance-based sausage looking for clicks.

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me, and records my weight, body mass index and fat percentage on a graph, updated over wi-fi. The graph lived in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from  community at email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that Nokia in May planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.

Why personal agency matters more than personal data

Lately a lot of thought, work and advocacy has been going into valuing personal data as a fungible commodity: one that can be made scarce, bought, sold, traded and so on. While there are good reasons to challenge whether data can be property at all (see Jefferson and Renieris), I want to focus on a different problem, one best solved first: the need for personal agency in the online world.

I see two reasons why personal agency matters more than personal data.

The first reason is that we have far too little agency in the networked world: we settled, way back in 1995, on a model for websites called client-server, which should have been called calf-cow or slave-master, because we’re always the weaker party: dependent, subordinate, secondary. Fortunately, the Net’s and the Web’s base protocols remain peer-to-peer, by design. We can still build on those. And it’s early.

A critical start in that direction is making each of us the first party rather than the second when we deal with the sites, services, companies and apps of the world—and doing that at scale across all of them.

Think about how much simpler and saner it is for websites to accept our terms and our privacy policies, rather than to force each of us, all the time, to accept theirs, each expressed in its own different way. (Because they are advised by different lawyers, equipped by different third parties, and generally confused anyway.)

Getting sites to agree to our own personal terms and policies is not a stretch, because that’s exactly what we have in the way we deal with each other in the physical world.

For example, the clothes that we wear are privacy technologies. We also have norms that discourage others from, for example, sticking their hands inside our clothes without permission.

The fact that adtech plants tracking beacons on our naked digital selves and tracks us like animals across the digital frontier may be a norm for now, but it is also morally wrong, massively rude and now illegal under the  GDPR.

We can easily create privacy tech, personal terms and personal privacy policies that are normative and that scale for each of us across all the entities that deal with us. (This is what ProjectVRM’s nonprofit spin-off, Customer Commons, is all about.)

Businesses can’t give us privacy if we’re always the second parties clicking “agree.” It doesn’t matter how well-meaning and GDPR-compliant those businesses are. Making people second parties is a design flaw in every standing “agreement” we “accept,” and we need to correct that.
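As a thought experiment, here’s what a machine-readable proffer might look like with the person as first party. The field names are invented for illustration; the real-world model I’m gesturing at is the kind of term Customer Commons is developing (#NoStalking, for instance: ads are fine, tracking is not):

```typescript
// A hypothetical first-party terms object, proffered by the person.
interface PersonalTerms {
  version: string;
  firstParty: string;               // the person, as first party
  noTracking: boolean;              // ads are acceptable; tracking is not
  shareWithThirdParties: boolean;
  auditableOnRequest: boolean;
}

const myTerms: PersonalTerms = {
  version: "0.1",
  firstParty: "did:example:alice",  // hypothetical identifier
  noTracking: true,
  shareWithThirdParties: false,
  auditableOnRequest: true,
};

// A site's agent evaluates the same terms for every visitor who presents
// them: one policy, readable by machines on both sides, instead of a
// different "agreement" behind every cookie notice.
function siteAgrees(terms: PersonalTerms): boolean {
  return terms.noTracking && !terms.shareWithThirdParties;
}

console.log(siteAgrees(myTerms) ? "agreement recorded" : "no deal");
```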

The second reason agency matters more than data is that nearly the entire market for personal data today is adtech, and adtech is too dysfunctional, too corrupt, too drunk on the data it already has, and absolutely awful at doing what it harvests that data for: making machines guess at what we might want before they shoot “relevant” and “interest-based” ads at our tracked eyeballs.

Not only do tracking-based ads fail to convince us to do a damn thing 99.xx+% of the time; most of the time we’re not even in the market for anything anyway.

As incentive alignments go, adtech’s failure to serve the actual interests of its targets verges on the absolute. (It’s no coincidence that more than a year ago, 1.7 billion people were already blocking ads online.)

And hell, what they do also isn’t really advertising, even though it’s called that. It’s direct marketing, which gives us junk mail and is the model for spam. (For more on this, see Separating Advertising’s Wheat and Chaff.)

Privacy is personal. That means privacy is an effect of personal agency, projected by personal tech and personal expressions of intent that others can respect without working at it. We have that in the offline world. We can have it in the online world too.

Privacy is not something given to us by companies or governments, no matter how well they do Privacy by Design or craft their privacy policies. Privacy granted that way simply can’t work.

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control.  For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

 

The most leveraged VRM Day yet

VRM Day is coming up soon: Monday, 2 April.

Register at that link. Or, if it fails, this one. (Not sure why, but we get reports of fails with the first link on Chrome, but not other browsers. Go refigure.)

Why this one is more leveraged than any other, so far:

Thanks to the GDPR, there is more need than ever for VRM, and more interest than ever in solutions to compliance problems that can only come from the personal side.

For example, the GDPR invites this question: What can we do as individuals to put all the companies we deal with in compliance with the GDPR, because they’re in compliance with our terms and our privacy policies? We have some answers, and we’ll talk about those.

We also have two topics we need to dive deeply into, starting at VRM Day and continuing over the following three days at IIW, also at the Computer History Museum. These too are impelled by the GDPR.

First is lexicon, or what the techies call ontology: “a formal naming and definition of the types, properties, and interrelationships of the entities that really exist in a particular domain of discourse.” In other words, What are we saying in VRM that CRM can understand—and vice versa? We’re at that point now—where VRM meets CRM. On the table will be not just the tools and services customers will use to make themselves understood by the corporate systems of the world, but the protocols, standard code bases, ontologies and other necessities that will intermediate between the two.
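To illustrate how small the needed agreement can be, here’s a toy sketch of the mapping such an ontology would pin down. The type names are invented, not a proposal:

```typescript
// What the customer's tool (the VRM side) emits:
interface CustomerIntent {
  kind: "service-request" | "data-update" | "terms-proffer";
  subject: string;   // e.g. an account or product identifier
  payload: unknown;
}

// What the company's system (the CRM side) must map it onto:
interface CrmCase {
  caseType: string;
  accountId: string;
  body: unknown;
}

// The ontology is, in effect, this mapping, agreed on both sides, so that
// every VRM tool can speak to every CRM system without bespoke translation.
function toCrmCase(intent: CustomerIntent): CrmCase {
  const caseTypes = {
    "service-request": "SUPPORT",
    "data-update": "PROFILE_CHANGE",
    "terms-proffer": "AGREEMENT_REVIEW",
  } as const;
  return {
    caseType: caseTypes[intent.kind],
    accountId: intent.subject,
    body: intent.payload,
  };
}
```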

Second is cooperation. The ProjectVRM wiki now has a page called Cooperative Work that needs to be substantiated by actual cooperation, now that the GDPR is approaching. How can we support each other?

Bring your answers.

See you there.

A positive look at Me2B

Somehow Martin Geddes and I were both at PIE2017 in London a few days ago and missed each other. That bums me because nobody in tech is more thoughtful and deep than Martin, and it would have been great to see him there. Still, we have his excellent report on the conference, which I highly recommend.

The theme of the conference was #Me2B, a perfect synonym (or synotag) for both #VRM and #CustomerTech, and hugely gratifying for us at ProjectVRM. As Martin says in his report,

This conference is an important one, as it has not sold its soul to the identity harvesters, nor rejected commercialism for utopian social visions by excluding them. It brings together the different parts and players, accepts the imperfection of our present reality, and celebrates the genuine progress being made.

Another pull-quote:

…if Facebook (and other identity harvesting companies) performed the same surveillance and stalking actions in the physical world as they do online, there would be riots. How dare you do that to my children, family and friends!

On the other hand, there are many people working to empower the “buy side”, helping people to make better decisions. Rather than identity harvesting, they perform “identity projection”, augmenting the power of the individual over the system of choice around them.

The main demand side commercial opportunity at the moment are applications like price comparison shopping. In the not too distant future it may transform how we eat, and drive a “food as medicine” model, paid for by life insurers to reduce claims.

The core issue is “who is my data empowering, and to what ends?”. If it is personal data, then there needs to be only one ultimate answer: it must empower you, and to your own benefit (where that is a legitimate intent, i.e. not fraud). Anything else is a tyranny to be avoided.

The good news is that these apparently unreconcilable views and systems can find a middle ground. There are technologies being built that allow for every party to win: the user, the merchant, and the identity broker. That these appear to be gaining ground, and removing the friction from the “identity supply chain”, is room for optimism.

Encouraging technologies that enable the individual to win is what ProjectVRM is all about. Same goes for Customer Commons, our nonprofit spin-off. Nice to know others (especially ones as smart and observant as Martin) see them gaining ground.

Martin also writes,

It is not merely for suppliers in the digital identity and personal information supply chain. Any enterprise can aspire to deliver a smart customer journey using smart contracts powered by personal information. All enterprises can deliver a better experience by helping customers to make better choices.

True.

The only problem with companies delivering better experiences by themselves is that every one of them is doing it differently, often using the same back-end SaaS systems (e.g. from Salesforce, Oracle, IBM, et al.).

We need standard ways for customers to change personal data settings (e.g. name, address, credit card info), call for support, and supply useful intelligence to any of the companies they deal with, and to do any of those in one move.

See, just as companies need scale across all the customers they deal with, customers need scale across all the companies they deal with. I visit the possibilities for that here, here, here, and here.
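Here’s a hypothetical sketch of what that customer-side scale could look like mechanically: one change, pushed to every company that has agreed to listen. The endpoint URLs and payload shape are invented for illustration:

```typescript
interface PersonalDataUpdate {
  field: "name" | "address" | "card";
  value: string;
  updatedAt: string;  // ISO 8601
}

// Companies the customer deals with, each exposing the same standard inbox.
const subscribers = [
  "https://crm.example-bank.com/vrm/inbox",
  "https://crm.example-store.com/vrm/inbox",
];

async function broadcast(update: PersonalDataUpdate): Promise<void> {
  await Promise.all(
    subscribers.map(url =>
      fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(update),
      })
    )
  );
}

// One move: change an address once, everywhere.
broadcast({
  field: "address",
  value: "10 New Lane",
  updatedAt: new Date().toISOString(),
}).catch(console.error);
```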

On the topic of privacy, here’s a bonus link.

And, since Martin takes a very useful identity angle in his report, I invite him to come to the next Internet Identity Workshop, which Phil Windley, Kaliya @IdentityWoman and I put on twice a year at the Computer History Museum. The next, our 26th, is 3-5 April 2018.

 

 

Our radical hack on the whole marketplace

In Disruption isn’t the whole VRM story, I visited the Tetrad of Media Effects, from Laws of Media: the New Science, by Marshall and Eric McLuhan. Every new medium (which can be anything from a stone arrowhead to a self-driving car), the McLuhans say, does four things, which they pose as questions that can have multiple answers, and they visualize this way:

[Image: the tetrad of media effects]

The McLuhans also famously explained their work with this encompassing statement: We shape our tools and thereafter they shape us.

This can go for institutions, such as businesses, and whole marketplaces, as well as people. We saw that happen in a big way with contracts of adhesion: those one-sided non-agreements we click on every time we acquire a new login and password, so we can deal with yet another site or service online.

These were named in 1943 by the law professor Friedrich “Fritz” Kessler in his landmark paper, “Contracts of Adhesion: Some Thoughts about Freedom of Contract.” Here is pretty much his whole case, expressed in a tetrad:

[Image: contracts of adhesion expressed as a tetrad]

Contracts of adhesion were tools that industry shaped, that in turn shaped industry, and that went on to shape the whole marketplace.

But now we have the Internet, which by design gives everyone on it a place to stand, and, like Archimedes with his lever, move the world.

We are now developing that lever, in the form of terms any one of us can assert, as a first party, and the other side—the businesses we deal with—can agree to, automatically. Which they’ll do because it’s good for them.

I describe our first two terms, both of which have the potential to drive enormous changes, in two similar posts put up elsewhere:

— What if businesses agreed to customers’ terms and conditions? 

— The only way customers come first

And we’ll work some of those terms this week, fittingly, at the Computer History Museum in Silicon Valley, starting tomorrow at VRM Day and then Tuesday through Thursday at the Internet Identity Workshop. I host the former and co-host the latter, our 24th. One is free and the other is cheap for a conference.

Here is what will come of our work:
[Image: personal terms]

Trust me: nothing you can do is more leveraged than helping make this happen.

See you there.

 

We’re done with Phase One

Here’s a picture that’s worth more than a thousand words:

[Image: a speaker from MAIF at MyData 2016]

He’s with MAIF, the French insurance company, speaking at MyData 2016 in Helsinki, a little over a month ago. Here’s another:

[Image: Sean Bohan speaking at MyData 2016]

That’s Sean Bohan, head of our steering committee, expanding on what many people at the conference already knew.

I was there too, giving the morning keynote on Day 2:

[Image: the author giving the Day 2 morning keynote]

It was an entirely new talk. Pretty good one too, especially since  I came up with it the night before.

See, by the end of Day 1, it was clear that pretty much everybody at the conference already knew how market power was shifting from centralized industries to distributed individuals and groups (including many inside centralized industries). It was also clear that most of the hundreds of people at the conference were also familiar with VRM as a market category. I didn’t need to talk about that stuff any more. At least not in Europe, where most of the VRM action is.

So, after a very long journey, we’re finally getting started.

In my own case, the journey began when I saw the Internet coming, back in the ’80s.  It was clear to me that the Net would change the world radically, once it allowed commercial activity to flow over its pipes. That floodgate opened on April 30, 1995. Not long after that, I joined the fray as an editor for Linux Journal (where I still am, by the way, more than 20 years later). Then, in 1999, I co-wrote The Cluetrain Manifesto, which delivered this “one clue” above its list of 95 Theses:

[Image: Cluetrain’s “one clue”]

And then, one decade ago last month, I started ProjectVRM, because that clue wasn’t yet true. Our reach did not exceed the grasp of marketers in the world. If anything, the Net extended marketers’ grasp a lot more than it did ours. (Shoshana Zuboff says their grasp has metastasized into surveillance capitalism.) In respect to Gibson’s Law, Cluetrain proclaimed an arrived future that was not yet distributed. Our job was to distribute it.

Which we have. And we can start to see results such as those above. So let’s call Phase One a done thing. And start thinking about Phase Two, whatever it will be.

To get that work rolling, here are a few summary facts about ProjectVRM and related efforts.

First, the project itself could hardly be more lightweight, at least administratively. It consists of:

Second, we have a spin-off: Customer Commons, which will do for personal terms of engagement (one each of us can assert online) what Creative Commons (another Berkman-Klein spinoff) did for copyright.

Third, we have a list of many dozens of developers, which seems to be concentrated in Europe and Australia/New Zealand. Two reasons for that, both speculative:

  1. Privacy. The concept is much more sensitive, and much more evolved, in Europe than in the U.S. The reason we most often hear goes, “Some of our governments once kept detailed records of people, and those records were used to track down and kill many of them.” There are also more evolved laws respecting privacy. In Australia, privacy laws have for several years required those collecting data about individuals to make it available to them, in forms the individual specifies. And in Europe there is the General Data Protection Regulation, which will impose severe penalties for unwelcome data gathering from individuals, starting in 2018.
  2. Enlightened investment. Meaning investors who want a startup to make a positive difference in the world, and not just give them a unicorn to ride out some exit. (Which seems to have become the default model in the U.S., especially Silicon Valley.)

What we lack is research. And by we I mean the world, and not just ProjectVRM.

Research is normally the first duty of a project at the Berkman Klein Center, which is chartered as a research organization. Research was ProjectVRM’s last duty, however, because we had nothing to research at first. Or, frankly, until now. That’s why we were defined as a development & research project rather than the reverse.

Where and how research on VRM and related efforts happens is a wide open question. What matters is that it needs to be done, starting soon, while the “before” state still prevails in most of the world, and the future is still on its way in delivery trucks. Who does that research matters far less than the research itself.

So we are poised at a transitional point now. Let the conversations about Phase Two commence.

 

 

 


VRM at MyData2016


As it happens I’m in Helsinki right now, for MyData2016, where I’ll be speaking on Thursday morning. My topic: The Power of the Individual. There is also a hackathon (led by DataBusiness.fi) going on during the show, starting at 4pm (local time) today. In no order of priority, here are just some of the subjects and players I’ll be dealing with,  talking to, and talking up (much as I can):

Please let me know what others belong on this list. And see you at the show.


Humanizing the Great Ad Machine

This is a comment I couldn’t publish under this post before my laptop died. (Fortunately I sent it to my wife first, so I’m posting it here, from her machine.)

OMMA’s theme is “Humanizing the Great Ad Machine.” Good one. Unfortunately, the agenda and speaker list suggest that industry players are the only ones in a position to do that. They aren’t.

The human targets of the Great Ad Machine are actually taking the lead—by breaking it.

Starting with ad blocking and tracking protection.

I see no evidence of respect for that fact, however, in the posts and tweets (at #MPOMMA) coming out of the conference so far. Maybe we can change that.

Let’s start by answering the question raised by the headline in Ad Blocking and DVRs: How Similar? I can speak as an operator of both technologies, and as a veteran marketer as well. So look at the rest of this post as the speech I’d give if I were there at OMMA…

Ad blocking and DVRs have four main things in common.

1) They are instruments of personal independence;

2) They answer demand for avoiding advertising. That demand exists because most advertising wastes time and space in people’s lives, and people value those two things more than whatever good advertising does for the “content” economy;

3) Advertising agents fail to grok this message, which is why—

4) Advertising agents and the “interactive” ad industry cry foul and blame the messengers (including the makers of ad blockers and other forms of tracking protection), rather than listening to, or respecting, what the market tells them, loudly and clearly.

Wash, rinse and repeat.

The first wash was VCRs. Those got rinsed out by digital TV. The second wash was DVRs. Those are being rinsed out right now by the Internet. The third wash is ad blocking.

The next rinse will happen after ad blocking succeeds as chemo for the cancer of ads that millions on the receiving end don’t want.

The next wash will be companies spending their marketing money on listening for better signals of demand from the marketplace, and better ways of servicing existing customers after the sale.

This can easily happen because damn near everybody is on the Net now, or headed there. Not trapped on TV or any other closed, one-way, top-down, industry-controlled distribution system.

On the Net, everybody has a platform of their own. There is no limit to what can be built on that platform, including much better instruments for expressing demand, and much better control over private personal spaces and the ways personal data are used by others. Ad blocking is just the first step in that direction.

The adtech industry (including dependent publishers) can come up with all the “solutions” they want to the ad blocking “problem.” All will fail, because ad blocking is actually a solution the market—hundreds of millions of real human beings—demands. Every one of adtech’s “solutions” is a losing game of whack-a-mole where the ones with hammers bang their own heads.

For help looking past that game, consider these:

1) The Internet as we know it is 21 years old. Commercial activity on it has only been possible since April 30, 1995. The history of marketing on the Net since then has been a series of formative moments and provisional systems, not a permanent state. In other words, marketing on the Net isn’t turtles all the way down, it’s scaffolding. Facebook, Google and the rest of the online advertising world exist by the grace of provisional models that have been working for only a few years, and can easily collapse if something better comes along. Which it will. Inevitably. Because…

2) When customers can signal demand better than adtech can manipulate it or guess at it, adtech will collapse like a bad soufflé.

3) Plain old brand advertising, which has always been aimed at populations rather than people, isn’t based on surveillance and has great brand-building value. It will carry on, free of adtech, doing what only it can do. (See the Ad Contrarian for more on that.)

In the long run (which may be short) winners will be customers and the companies that serve them respectfully. Not more clueless and manipulative surveillance-based marketing schemes.

Winning companies will respect customers’ independence and intentions. Among those intentions will be terms that specify what can be done with shared personal data. Those terms will be supplied primarily by customers, and companies will agree to those terms because they will be friendly, work well for both sides, and easily automated.

Having standard ways for signaling demand and controlling use of personal data will give customers the same kind of scale companies have always had across many customers. On the Net, scale can work in both directions.
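For one concrete (and entirely hypothetical) picture of such a demand signal, what the VRM community calls intentcasting, here’s a sketch; every field in it is invented:

```typescript
// A customer-originated demand signal: what I want, on my terms.
interface Intent {
  want: string;                    // what the customer is in the market for
  maxPrice?: number;
  terms: { noTracking: true };     // personal terms travel with the signal
  expires: string;                 // ISO 8601; demand signals are perishable
}

const intent: Intent = {
  want: "stroller, lightweight, folds to fit a hatchback",
  maxPrice: 300,
  terms: { noTracking: true },
  expires: new Date(Date.now() + 7 * 86_400_000).toISOString(),
};

// Sellers subscribe to a market of such signals and answer with offers.
// No surveillance is needed to learn what this customer wants: she said so.
console.log(JSON.stringify(intent, null, 2));
```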

Companies that continue to rationalize spying on and abusing people, at high costs to everybody other than those still making hay while the sun shines, will lose. The hay-makers will also lose as soon as the light of personal tolerance for abuse goes out, which will come when ad blocking and tracking protection together approach ubiquity.

But the hay-makers can still win if they start listening to high-value signals coming from customers. It won’t be hard, and it will pay off.

The market is people, folks. Everybody with a computer or a smart mobile device is on the Net now. They are no longer captive “consumers” at the far ends of one-way plumbing systems for “content.” The Net was designed in the first place for everybody, not just for marketers who build scaffolding atop customer dislike and mistake it for solid ground.

It should also help to remember that the only business calling companies “advertisers” is advertising. No company looks in the mirror and sees an advertiser there. That’s because no company goes into business just so they can advertise. They see a car maker, a shoe store, a bank, a brewer, or a grocer. Advertising is just overhead for them. I learned this lesson the hard way as a partner for 20 years in a very successful ad agency. Even if our clients loved us, they could cut their ad budget to nothing in an instant, or on a whim.

There’s a new world of marketing waiting to happen out there in the wide-open customer-driven marketplace. But it won’t grow out of today’s Great Ad Machine. It’ll grow out of new tech built on the customers’ side, with ad blocking and tracking protection as the first examples. Maybe some of that tech is visible at OMMA. Or at least maybe there’s an open door to it. If either is there, let’s see it. Hashtag: #VRM. (For more on that, see https://en.wikipedia.org/wiki/Vendor_relationship_management.)

If not, you can still find developers here.
