On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes), the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008.  At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled “I Visited 47 Sites. Hundreds of Trackers Followed Me.” He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of Compuserve, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.
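
To make that "tool we possess" concrete, here is a minimal sketch (every name here is hypothetical, not any existing standard) of a browser-side consent ledger: one record per site, held by the person rather than scattered across every site's own consent machinery.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ConsentRecord:
    site: str
    allowed: bool
    purpose: str
    when: float = field(default_factory=time.time)

class ConsentLedger:
    """A user-held record of tracking consents, one entry per site.

    The point: the record lives with the person, not with each site.
    """
    def __init__(self):
        self._records = {}

    def record(self, site, allowed, purpose="advertising"):
        self._records[site] = ConsentRecord(site, allowed, purpose)

    def is_allowed(self, site):
        rec = self._records.get(site)
        return rec.allowed if rec else False  # the default is no

ledger = ConsentLedger()
ledger.record("news.example", allowed=False, purpose="advertising")
print(ledger.is_allowed("news.example"))   # False: consent refused
print(ledger.is_allowed("other.example"))  # False: never asked
```

The design choice that matters is the last line of `is_allowed`: a site that never asked gets a no, which is the offline norm this post keeps returning to.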

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)
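
It helps to see just how little there was to Do Not Track: a single HTTP request header. The sketch below shows roughly the whole protocol; nothing in it compelled a site to honor the request, which is exactly the point.

```python
# Do Not Track was never more than one HTTP request header ("DNT: 1")
# that a browser could attach to every request it made.
def should_track(request_headers):
    """What a cooperating site would check before loading its trackers."""
    return request_headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: the polite request, honored
print(should_track({}))            # True: the default was to track away
```

Whether any site ran a check like this was entirely up to the site, which is why DNT stayed a request rather than a control.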

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is that it urges the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent at all, to exactly the kind of tracking the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that the simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay for respecting personal privacy are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, oblivious to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it). Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works:

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

VRM is Me2B

Most of us weren’t at the latest VRM Day or IIW (both of which happened in the week before last), so I’ll fill you in on a cool development there: a working synonym for VRM that makes a helluva lot more sense and may have a lot more box office.

That synonym is Me2B.

Credit for that goes to Lisa Lavasseur, who, in addition to everything behind that link, runs the new Me2B Alliance, which features the graphic there on the right (suggesting an individual in a driver’s seat). She is also the Vice Chair of the IEEE 7012 Standard for Machine Readable Personal Privacy Terms, a new effort with which some of us are also involved.

Lisa led many sessions at IIW, mostly toward solidifying what the Me2B Alliance will do. If you stay tuned to me2b.us, you can see how that work grows and evolves.

The main thing for me, in the here and now, is to share how much I like Me2B as a synonym for VRM.

It is also a synonym for C2B, of course; but it’s more personal. I also think it may have what it takes to imply Archimedes-grade leverage for individuals in the marketplace. For more on what I mean by that, see any or all of these:

I’m also putting this up to help me prep for mentioning Me2B tomorrow during this talk at the 2019 European Identity & Cloud Conference. It was at this same conference in 2008 that ProjectVRM won its first award. That’s it there on the right.

It’s becoming clear now that we were way ahead of a time that finally seems to be arriving.

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline:

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies.
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged.
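
The intentcasting item in the list above can be made concrete with a sketch. This is not any existing standard; every field name below is illustrative:

```python
import json

# A hypothetical intentcast: the customer broadcasts what they intend to
# buy, along with their own terms for how that signal may be used.
intentcast = {
    "intent": {"product": "stroller", "max_price": 300, "need_by": "2019-06-01"},
    "terms": {
        "use": "respond-to-this-intent-only",
        "retention_days": 30,
        "third_party_sharing": False,
    },
}

# Serialized for broadcast to every retailer the customer trusts...
wire = json.dumps(intentcast)

# ...and read back on the retailer's side, terms intact.
received = json.loads(wire)
print(received["intent"]["product"])             # stroller
print(received["terms"]["third_party_sharing"])  # False
```

The key reversal is that the terms ride with the demand signal: the customer states the restrictions, and a violation of them is detectable because they were declared up front.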

All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see whether there is any economics research and/or scholarship on personal scale and its leverage (such as personal operating systems, devices and browsers give us) in the digital world. Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can by themselves.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
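
Here is a toy version of the check a tool like Privacy Badger performs: of all the domains a page pulls resources from, which are not the site you actually chose to visit? (Domain names are made up; real blockers use curated lists and behavioral heuristics, not a bare suffix match.)

```python
def third_parties(first_party, requested_domains):
    """Flag every requested domain that isn't the visited site or a subdomain of it."""
    return [
        d for d in requested_domains
        if d != first_party and not d.endswith("." + first_party)
    ]

page_requests = [
    "news.example",            # the site itself
    "cdn.news.example",        # its own asset host
    "tracker.adtech.example",  # a third-party beacon
    "pixel.broker.example",    # another one
]
print(third_parties("news.example", page_requests))
# ['tracker.adtech.example', 'pixel.broker.example']
```

Every domain that survives that filter is a party you never chose to visit, yet one that now knows you were there.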

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance based sausage, looking for clicks.

A citizen-sovereign way to pay for news—or for any creative work

The Aspen Institute just published a 180-page report by the Knight Commission on Trust, Media and Democracy titled (in all caps) CRISIS IN DEMOCRACY: RENEWING TRUST IN AMERICA. Its Call to Action concludes,

This is good. Real good. Having Aspen and Knight endorse personal sovereignty as a necessity for solving the crises of democracy and trust also means they endorse what we’ve been pushing forward here for more than a dozen years.

Since the report says (under Innovation, on page 73) we need to “use technology to enhance journalism’s roles in fostering democracy,” and that “news companies need to embrace technology to support their mission and achieve sustainability,” it should help to bring up the innovation we proposed in an application for a Knight News Challenge grant in 2011. This innovation was, and still is, called EmanciPay. It’s a citizen-sovereign way to pay for news, plus all forms of creative production where there is both demand and failing or absent sources of funding.
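
Before getting to the proposal itself, here is a minimal sketch of what EmanciPay-style terms might look like as data. Nothing below is from the actual proposal; every field name is an assumption, in the Creative Commons spirit of terms that both software and people can read:

```python
# A hypothetical machine-readable expression of EmanciPay-style payment
# terms: the payer, not the publisher, sets them. Field names are
# illustrative only.
offer = {
    "label": "PayWhatILike/Monthly",   # short human-readable handle
    "payer_sets_amount": True,
    "amount": 3.00,
    "currency": "USD",
    "schedule": "monthly",
    "recipient": "daily-planet.example",
}

def describe(o):
    """Render the machine-readable terms as a sentence a person can read."""
    return (f"{o['currency']} {o['amount']:.2f} {o['schedule']} to "
            f"{o['recipient']}, amount chosen by the payer")

print(describe(offer))
# USD 3.00 monthly to daily-planet.example, amount chosen by the payer
```

The point of pairing the structured fields with a one-line rendering is the same as with Creative Commons licenses: one artifact, legible to code and to people alike.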

We have not only needed this for a long time, but it is for lack of it (or of any original and market-based approach to paying for creative work) that the EU is poised to further break our one Internet into four or more parts and destroy the open Web that has done so much to bring the world together and generate near-boundless forms of new wealth and productivity. The EU’s hammer for breaking the open Web is the EU Copyright Directive,  which has been under consideration and undergoing steady revision since 2016. Cory Doctorow, writing for the EFF, says The Final Version of the EU’s Copyright Directive Is the Worst One Yet. One offense (among too many to list here):

Under the final text, any online community, platform or service that has existed for three or more years, or is making €10,000,001/year or more, is responsible for ensuring that no user ever posts anything that infringes copyright, even momentarily. This is impossible, and the closest any service can come to it is spending hundreds of millions of euros to develop automated copyright filters. Those filters will subject all communications of every European to interception and arbitrary censorship if a black-box algorithm decides their text, pictures, sounds or videos are a match for a known copyrighted work. They are a gift to fraudsters and criminals, to say nothing of censors, both government and private.

There are much better ways of getting the supply and demand sides of creative markets together. EmanciPay is one of them, and deserves another airing.

Perhaps now that Knight and Aspen are cheering the citizen-sovereign bandwagon, it’s worth open-sourcing our original EmanciPay proposal.

So here it is, copied and pasted out of the last draft before we submitted it. Since much has changed since then (other than the original idea, which is the same as ever only more timely), I’ve added a bunch of notes at the end, and a call for action. Before reading it, please note two things: 1) we are not asking for money now (we were then, but not now); and 2) while this proposal addresses the challenge of paying for news, it applies much more broadly to all creative work.

10:00pm Monday 31 January 2011

Project Title:

EmanciPay: a user-driven system for generating revenue and managing relationships

Requested amount from Knight News Challenge

$325,000

Describe your project:

EmanciPay is the first user-driven payment system for news and information media. It is also the first system by which the consumers of media can create and participate in relationships with media — and the first system to reform the legal means by which those relationships are created and sustained.

With EmanciPay, users can easily choose to pay whatever they like, whenever they like, however they like — on their own terms and not just those controlled by the media’s supply side. EmanciPay will also provide means for building genuine two-way relationships, rather than relationships defined by each organization’s subscription and membership systems. As with Creative Commons, terms will be expressed in text and symbols that can be read easily by both software and people.

While there is no limit to payment choice options with EmanciPay, we plan to test these one at a time. The first planned trials are with Tipsy, which is being developed alongside EmanciPay, and which also has a Knight News Challenge application. The two efforts are cooperative and coordinated.

EmanciPay belongs to a growing family of VRM (Vendor Relationship Management) tools. Both EmanciPay and VRM grew out of work in ProjectVRM, which I launched in 2006, at the start of my fellowship with Harvard’s Berkman Center for Internet & Society. In the past four years the VRM development community has grown internationally and today involves many allied noncommercial and commercial efforts. Here is the current list of VRM development projects.

How will your project improve the delivery of news and information to geographic communities?:

Two ways.

First is with a new business model. Incumbent local and regional media currently have three business models: paid delivery (subscriptions and newsstand sales), advertising, and (in the case of noncommercial media) appeals for support. All of these have well-known problems and limitations. They are also controlled in a top-down way by media organizations. By reducing friction and lowering the threshold of payment, EmanciPay will raise the number of customers while also providing direct and useful intelligence about the size and nature of demand. This supports geographic customisation of news and information goods.

Second is by providing ways for both individuals and news organizations to create and sustain relationships that go beyond “membership” (which in too many cases means little more than “we gave money”). EmanciPay will also help consumers of news participate in the news development process. Because EmanciPay is based on open source code and open standards, it can be widely adopted and adapted to meet local needs. CRM (customer relationship management) software companies, many of which supply CRM systems to media organizations, are also awaiting VRM developments. (The cover and much of this CRM Magazine are devoted to VRM.)

What unmet need does your proposal answer?:

EmanciPay meets the need for maximum freedom and flexibility in paying for news and information, and for a media business model that does not depend only on advertising, membership systems, large donors or government grants. (This last one is of special interest at a time when cutting government funding of public broadcasting is a campaign pledge by many freshly elected members of Congress.)

Right now most news and information is free of charge on the Web, even when the same goods are sold on newsstands or through cable TV subscriptions. This fact, plus cumbersome and widely varied membership, pledging and payment systems, serves to discourage payment by media users. Even the membership systems of public broadcasting stations exclude vast numbers of people who would contribute “if it was easy”. EmanciPay overcomes these problems by making it easy for consumers of news to become customers of news. It also allows users to initiate real and productive relationships with news organizations, whether or not they pay those organizations.

How is your idea new?:

Equipping individuals with their own tools for choosing what and how to pay, and for creating and maintaining relationships, is a new idea. Nearly all other sustainability ideas involve creating new intermediators or working on improving services on the supply side.

Tying sustainability to meaningful relationships (rather than just “membership”) is also new. So is creating means by which individuals can assert their own “terms of service” — and match those terms with those on the supply side.

EmanciPay is also new in the scope of its ambition. Beyond creating a large new source of revenues, and scaffolding meaningful relationships between supply and demand, EmanciPay intends to remove legal frictions from the marketplace as well. What lawyers call contracts of adhesion (ones in which the dominant party is free to change what they please while the submissive party is nailed to whatever the dominant party dictates) have been pro forma on the Web since the invention of the cookie in 1995. EmanciPay is the first and only system intended to obsolete and replace these onerous “agreements” (which really aren’t).

Once in place and working, EmanciPay’s effects should exceed even those of Creative Commons, because EmanciPay addresses the demand as well as the supply side of the marketplace. And, like Creative Commons, EmanciPay does not require changes in standing law.

Finally, EmanciPay is new in the sense that it is not centralized, and does not require an intermediary. As with email (the protocols of which are open and decentralized, by design), EmanciPay supports both self-hosting and hosting in “the cloud.” It is also both low-level and flexible enough to provide base-level building material for any number of new businesses and services.

What will you have changed by the end of the project?:

First, we will have changed the habits and methods by which people pay for the media goods they receive, starting with news and information.

Second, we will have introduced relationship systems that are not controlled by the media, but driven instead by the individuals who are each at the centers of their own relationships with many different entities. Thus relationships will be user-driven and not just organization-driven.

Third, we will have created a new legal framework for agreements between buyers and sellers on the Web and in the networked world, eliminating many of the legal frictions involved in today’s e-commerce systems.

Fourth, we will have introduced to the world an intention economy, based on the actual intentions of buyers, rather than on guesswork by sellers about what customers might buy. (The latter is the familiar “attention economy” of advertising and promotion.)

Why are you the right person or team to complete this project?:

I know how to get ideas and code moving in the world. I’ve done that while running ProjectVRM for the last four years. As of today VRM tools are being developed in many places, by many programmers, in both commercial and noncommercial capacities, around the world. Those places include Boston, London, Johannesburg, Dubuque, Santiago, Belfast, Salt Lake City, Santa Barbara, Vienna, and Seattle. Much of this work has also been advanced at twice-yearly IIW (Internet Identity Workshop) events, which I co-founded in 2005 and continue to help organize.

As Senior Editor of Linux Journal, I’ve been covering open source code development since 1996, contributing to its understanding and widespread adoption. For that and related work, I received a Google-O’Reilly Open Source Award for “Best Communicator” in 2005.

I helped reform both markets and marketing as a co-author of The Cluetrain Manifesto, a business bestseller in 2000 that has since become part of the marketing canon. (As of today, Cluetrain is cited by more than 5300 other books.) I also coined Cluetrain’s most-quoted line, “Markets are conversations.”

I helped popularize blogging, a subject to which I have been contributing original thinking and writing since 1999. I also have more than 12,000 followers on Twitter.

EmanciPay is also my idea, and one I have been working on for some time. This includes collaboration with PRX and other members of the public radio community on ListenLog (the brainchild of Keith Hopper at NPR), which can be found today on the Public Radio Player, an iPhone app that has been downloaded more than 2 million times. I am also working on EmanciPay with students at MIT/CSAIL and King’s College London. The MIT/CSAIL collaboration is led by David Karger of the MIT faculty, and ties in with work he and students are doing with Haystack and Tipsy.

I’ve also contributed to other VRM development efforts — on identity and trust frameworks, on privacy assurance, on selective disclosure of personal data, and on personal data stores (PDSes), all of which will help support EmanciPay as it is deployed.

What terms best describe your project?:

Bold, original, practical, innovative and likely to succeed.

What tasks/benchmarks need to be accomplished to develop your project and by when will you complete them? (500 words)

1) Engaging of programmers at MIT and other institutions within two months.

2) Establishment of Customer Commons (similar to Creative Commons) within two months.

3) Getting EmanciPay adopted as a clinical case study in law school classes, one semester after the grant money arrives.

4) Beta-level code within six months.

5) Recruitment of first-round participating media entities (journals, sites, blogs, broadcast stations) — completed within six months.

6) Relationships established with PayPal, Google Checkout and other payment intermediators within six months.

7) Tipsy trials within three months after beta-level code is ready.

8) Full EmanciPay trials within six months after beta-level code is ready.

9) Research protocols completed by the time beta code is ready.

How will you measure progress?: (500 words)

1) Involvement in open source code development by programmers other than those already paid or engaged (for example, as students) for the work

2) Completion of code

3) Deployment in target software and devices

4) Cooperation by allied development .orgs and .coms

5) Adoption and use by individuals

6) Direct financial benefit for news organizations.

All are measurable. We can count programmers working on code bases, as well as patches and lines of code submitted and added. We can see completed code in downloadable and installable form in the appropriate places. We can see and document cooperation by organizations. We can count downloads and monitor activities by users (with their permission). And we can see measurable financial benefits to news and information organizations. Researching each of these will be part of the project. For example, we will need to provide on our website, or directly, descriptions of accounting methods for the organizations that will benefit directly from individual contributions.

Do you see any risk in the development of your project?: (500 words)

EmanciPay is likely to be seen as disruptive by organizations that are highly vested in existing forms of funding. One example is public broadcasting, which has relied on fund drives for decades.

There is also a fear that EmanciPay will raise the number of contributors while lowering the overall funding dollar amount spent by contributors. I don’t expect that to happen. What I do expect is for the market to decide — and for EmanciPay to provide the means. Fortunately, EmanciPay also provides means for non-monetary relationships to grow as well, which will raise the perception of value by users and customers, and the likelihood that more users will become customers.

How will people learn about what you are doing?: (500 words)

We will blog about it, talk about it at conferences, tweet about it, and use every other personal and social medium to spread the word. And we will use traditional media relations as well — which shouldn’t be too hard, since we will be working to bring more income to those media.

We have a good story about an important cause. I’m good at communicating and driving stories forward, and I have no doubt that the effort will succeed.

Is this a one-time experiment or do you think it will continue after the grant?: (500 words)

EmanciPay will continue after the grant because it will become institutionalized within the fabric of the economy, as will its allied efforts.

In addition to the Knight News Challenge, does your project rely on other revenue sources? (Choose all that apply):

[ ] Advertising
[ ] Paid Subscriptions
[ x ] Crowd-Funded
[ ] Earned Income
[ ] Syndication
[ ] Other

Here’s what happened after that.

  1. Customer Commons was incorporated as a California-based 501(c)(3) nonprofit shortly after this was submitted. (It is also currently cited in this CNN story and this one by Fox News.) Almost entirely bootstrapped, Customer Commons has established itself as a Creative Commons-like place where model personal privacy policies and terms of engagement that individuals proffer as first parties can live. Those terms are among a number of other tools for exercising citizen sovereign powers. “CuCo” also plans, immodestly, to be a worldwide membership organization, composed entirely of customers (possible slogan: “We’re the hundred percent”). In that capacity, it will hold events, publish, develop customer-side code that’s good for both customers and businesses (e.g. a shopping cart of your own that you can take from site to site), and lobby for policies that respect the natural sovereignty and power of customers in the digital world. After years of prep, and not much asking, Customer Commons is at last ready to accept funding, and to start scaling up. If you have money to invest in grassroots citizen-sovereign work, that’s a good place to do it.
  2. Commercial publishers, including nearly all the world’s websites (or so it seems) became deeply dependent on adtech—tracking based advertising—for income. (I reviewed that history here in 2015.) We’ve been fighting that. So have governments. Both the GDPR in Europe and the California Consumer Privacy Act were called to existence by privacy abuses funded by adtech. (Seriously, without adtech, those laws wouldn’t have happened.)
  3. The current VRM developments list is a large and growing one. So is our list of participants.
  4. Some of the allied projects mentioned in the proposal are gone or have morphed. But some are still there, and there are many other potential collaborators.
  5. Fintech has become a thing, along with blockchain, distributed ledgers and other person-driven solutions to the problem of excessive centralization.
  6. The word cluetrain is now mentioned in more than 13,000 books. And, twenty years since it was first uttered, cluetrain is also tweeted almost constantly.
  7. I am now editor-in-chief of Linux Journal, the first publication ready  to accept terms proffered by readers, starting with a Customer Commons one dubbed #NoStalking.

That list could go on, but it’s not what matters.

What matters is that EmanciPay was a great idea when we proposed it in the first place, and a better idea now. With the right backing, it can scale.

If you want to solve the problem of paying for news (or all of journalism), there is no more democratic, fair, trust-causing and potentially massive idea on the table for doing exactly that than EmanciPay. Nor is there one better positioned to address the many other problems and goals laid out in that Knight Commission report. One example: An immodest proposal for the music industry.

If you’re interested, talk to me.

VRM TBDs

Every construction project has a punch list of to-be-done items. Since we’ve been at this for a dozen years, and have a rather long list of development works in progress on our wiki, now seems like a good time and place to list what still needs to be done, but from the individual’s point of view. In other words, things individuals need but don’t have yet.

So  here is a  punch list of those things, in the form of a static page rather than a post such as this one. There is also a shortcut to the punch list in the menu above.

For the record, here’s that list as it stands today:

  1. Make companies agree to our terms, rather than the other way around.
  2. Have real relationships with companies, based on open standards and code, rather than relationships trapped inside corporate silos, each with their own very different ways of managing customer relationships (CRM), “delivering” a “customer experience” (aka CX), leading us on a “journey” or having us “join the conversation.”
  3. Standardize the ways we relate to the service sides of companies, both for requesting service and for exchanging useful data in the course of owning a product or renting a service, so market intelligence flows both ways, and the “customer journey” becomes a virtuous cycle.
  4. Control our own self-sovereign identities, and give others what little they need to know about us on an as-needed basis.
  5. Get rid of logins and passwords.
  6. Change our personal details (surname, phone number, email or home address) in the records of all the organizations we deal with, in one move.
  7. Pay what we want, where we want, for whatever we want, in our own ways.
  8. Call for service or support in one simple and straightforward way of our own, rather than in as many ways as there are 800 numbers to call and numbers to punch into a phone before we wait on hold while bad music plays.
  9. Express loyalty in our own ways, which are genuine rather than coerced.
  10. Have an Internet of MY Things, which each of us controls for ourselves, and in which every thing we own has its own cloud, which we control as well.
  11. Own and control all our health and fitness records, and how others use them.
  12. Have wallets of our own, rather than only those provided by platforms.
  13. Have shopping carts of our own, which we can take from store to store and site to site online, rather than being tied to ones provided only by the stores themselves.
  14. Have personal devices of our own (such as this one) that aren’t cells in a corporate silo, or suction cups on corporate tentacles. (Alas, that’s what we still have with all Apple iOS phones and tablets, and all Android devices with embedded Google apps.)
  15. Remake education around the power we all have to teach ourselves and learn from each other, making optional at most the formal educational systems built more for maintaining bell curves than liberating the inherent genius of every student.

Please help us improve and correct it.

[The photo is from this collection.]

The only path from subscription hell to subscription heaven

I subscribe to Vanity Fair. I also get one of its newsletters, replicated on a website called The Hive. At the top of the latest Hive is this come-on: “For all that and more, don’t forget to sign up for our metered paywall, the greatest innovation since Nitroglycerin, the Allman Brothers, and the Hangzhou Grand Canal.”

When I clicked on the metered paywall link, it took me to a plain old subscription page. So I thought, “Hey, since they have tracking cruft appended to that link, shouldn’t it take me to a page that says something like, ‘Hi, Doc! Thanks for clicking, but we know you’re already a paying subscriber, so don’t worry about the paywall’?”

So I clicked on the Customer Care link to make that suggestion. This took me to a login page, where my password manager filled in the blanks with one of my secondary email addresses. That got me to my account, which says my Condé Nast subscriptions look like this:

Oddly, the email address at the bottom there is my primary one, not the one I just logged in with.  (Also oddly, I still get Wired.)

So I went to the Vanity Fair home page, found myself logged in there, and clicked on “My Account.” This took me to a page that said my email address was my primary one, and provided a way to change my password, to subscribe or unsubscribe to four newsletters, and a way to “Receive a weekly digest of stories featuring the players you care about the most.” The link below said “Start following people.” No way to check my account itself.

So I logged out from the account page I reached through the Customer Care link, and logged in with my primary email address, again using my password manager. That got me to an account page with the same account information you see above.

It’s interesting that I have two logins for one account. But that’s beside more important points, one of which I made with this message I wrote for Customer Care in the box provided for that:

Curious to know where I stand with this new “metered paywall” thing mentioned in the latest Hive newsletter. When I go to the link there — https://subscribe.condenastdigital.com/s… — I get an apparently standard subscription page. I’m guessing I’m covered, but I don’t know. Also, even as a subscriber I’m being followed online by 20 or more trackers (reports Privacy Badger), supposedly for personalized advertising purposes, but likely also for other purposes by Condé Nast’s third parties. (Meaning not just Google, Facebook and Amazon, but Parsely and indexww, which I’ve never heard of and don’t trust. And frankly I don’t trust those first three either.) As a subscriber I’d want to be followed only by Vanity Fair and Condé Nast for their own service-providing and analytic purposes, and not by who-knows-what by all those others. If you could pass that request along, I thank you. Cheers, Doc

When I clicked on the Submit button, I got this:

An error occurred while processing your request.An error occurred while processing your request.

Please call our Customer Care Department at 1-800-667-0015 for immediate assistance or visit Vanity Fair Customer Care online.

Invalid logging session ID (lsid) passed in on the URL. Unable to serve the servlet you’ve requested.

So there ya go: one among .X zillion other examples of subscription hell, differing only in details.

Fortunately, there is a better way. Read on.

The Path

The only way to pave a path from subscription and customer service hell to the heaven we’ve never had is by  normalizing the ways both work, across all of business. And we can only do this from the customer’s side. There is no other way. We need standard VRM tools to deal with the CRM and CX systems that exist on the providers’ side.

We’ve done this before.

We fixed networking, publishing and mailing online with the simple and open standards that gave us the Internet, the Web and email. All those standards were easy for everyone to work with, supported boundless economic and social benefits, and began with the assumption that individuals are full-privilege agents in the world.

The standards we need here should make each individual subscriber the single point of integration for their own data, and the responsible party for changing that data across multiple entities. (That’s basically the heart of VRM.)

This will give each of us a single way to see and manage many subscriptions, see notifications of changes by providers, and make changes across the board with one move. VRM + CRM.
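The "one move, many entities" idea can be sketched in a few lines. Everything here — the class names, the update flow — is hypothetical, meant only to show the shape of the standard: the individual holds the master record, and every provider they have a relationship with receives changes from it:

```python
# Hypothetical sketch of the VRM-side "single point of integration".
# No real API is implied; the names are invented for illustration.

class Provider:
    """Stand-in for a publisher's CRM endpoint."""
    def __init__(self, name):
        self.name = name
        self.contact = {}

    def receive_update(self, contact):
        # In practice this would be an authenticated API call to the CRM.
        self.contact = dict(contact)

class PersonalDataStore:
    """The individual's own record -- the single source of truth."""
    def __init__(self, contact):
        self.contact = dict(contact)
        self.subscribers = []

    def subscribe(self, provider):
        self.subscribers.append(provider)
        provider.receive_update(self.contact)

    def change(self, **fields):
        # One move by the individual...
        self.contact.update(fields)
        # ...propagates to every subscribed provider at once.
        for p in self.subscribers:
            p.receive_update(self.contact)

me = PersonalDataStore({"email": "doc@example.com", "city": "Santa Barbara"})
vf = Provider("Vanity Fair")
wired = Provider("Wired")
me.subscribe(vf)
me.subscribe(wired)

me.change(email="new@example.com")   # one move, every provider updated
```

The design choice that matters is the direction of flow: changes originate with the individual and fan out, rather than being re-entered separately inside each company's silo.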

The same goes for customer care service requests. These should be normalized the same way.

In the absence of normalizing how people manage subscription and customer care relationships, all the companies in the world with customers will have as many different ways of doing both as there are companies. And we’ll languish in the login/password hell we’re in now.

The VRM+CRM cost savings to those companies will also be enormous. For a sense of that, just multiply what I went through above by the number of people in the world with subscriptions, and multiply that result by the number of subscriptions those people have — and then do the same for customer service.

We can’t fix this inside the separate CRM systems of the world. There are too many of them, competing in too many silo’d ways to provide similar services that work differently for every customer, even when they use the same back-ends from Oracle, Salesforce, SugarCRM or whomever.

Fortunately, CRM systems are programmable. So I challenge everybody who will be at Salesforce’s Dreamforce conference next week to think about how much easier it will be when individual customers’ VRM meets Salesforce B2B customers’ CRM. I know a number of VRM people  who will be there, including Iain Henderson, of the bonus link below. Let me know you’re interested and I’ll make the connection.

And come work with us on standards. Here’s one.

Bonus link: Me-commerce — from push to pull, by Iain Henderson (@iaianh1)

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me, and records my weight, body mass index and fat percentage on a graph, updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from  community at email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that in May Nokia planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.

Privacy = personal agency + respect by others for personal dignity

Privacy is a state each of us enjoys to the degrees others respect it.

And they respect what economists call signals. We send those signals through our behavior (hand signals, facial expressions) and technologies. Both are expressions of agency: the ability to act with effect in the world.

So, for example, we signal a need not to reveal our private parts  by wearing clothes. We signal a need not to have our private spaces invaded by buttoning our clothes, closing doors, setting locks on those doors, and pulling closed curtains or shades. We signal a need not to be known by name to everybody by not wearing name tags as we walk about the world. (That we are naturally anonymous is a civic grace, but a whole ‘nuther thread.)

All of this has been well understood in the physical world for as long as we’ve had civilization—and perhaps longer. It varies by culture, but has remained remarkably non-controversial—until we added the digital world to the physical one.

The digital world, like the physical one, came without privacy. We had to invent privacy in the physical world with technologies (clothing, shelter, doors, locks) and norms such as respect for the simple need for personal dignity.

We have not yet done the same in the digital world. We did, however, invent administrative identities for people, because administrative systems need to know who they’re interested in and dealing with.

These systems are not our own. They belong to administrative entities: companies, government agencies, churches, civic groups, whatever. Nearly 100% of conversation about both identity and privacy takes place inside the administrative context. All questions come down to “How can this system with ways of identifying us give us privacy?” Even Privacy By Design (PbD) is about administrative systems. It is not something you and I have. Not in the way we have clothes.

And that’s what we need: the digital equivalents of clothing and ways of signaling what’s okay and what’s not okay.  Norms should follow, and then laws and regulations restricting violations of those norms.

Unfortunately, we got the laws (e.g. the EU’s GDPR and California’s AB 375) before we got the tech and the norms.

But I’m encouraged about getting both, for two reasons. One is the work going on here among VRM-ish developers. The other is that @GregAEngineer gave a talk this morning on exactly this topic, at the IEEE #InDITA conference in Bangalore.

Oh, and lest we think privacy matters only to those in the fully privileged world, watch Privacy on the Line, a video just shared here.

Why personal agency matters more than personal data

Lately a lot of thought, work and advocacy has been going into valuing personal data as a fungible commodity: one that can be made scarce, bought, sold, traded and so on. While there are good reasons to challenge whether data can be property at all (see Jefferson and Renieris), I want to focus on a different problem, the one best solved first: the need for personal agency in the online world.

I see two reasons why personal agency matters more than personal data.

The first reason is that we have far too little agency in the networked world: we settled, way back in 1995, on a model for websites called client-server, which should have been called calf-cow or slave-master, because we’re always the weaker party: dependent, subordinate, secondary. Fortunately, the Net’s and the Web’s base protocols remain peer-to-peer, by design. We can still build on those. And it’s early.

A critical start in that direction is making each of us the first party rather than the second when we deal with the sites, services, companies and apps of the world—and doing that at scale across all of them.

Think about how much simpler and saner it is for websites to accept our terms and our privacy policies, rather than to force each of us, all the time, to accept their terms, all expressed in their own different ways. (Because they are advised by different lawyers, equipped by different third parties, and generally confused anyway.)

Getting sites to agree to our own personal terms and policies is not a stretch, because that’s exactly what we have in the way we deal with each other in the physical world.

For example, the clothes that we wear are privacy technologies. We also have norms that discourage others from, for example, sticking their hands inside our clothes without permission.

The fact that adtech plants tracking beacons on our naked digital selves and tracks us like animals across the digital frontier may be a norm for now, but it is also morally wrong, massively rude and now illegal under the  GDPR.

We can easily create privacy tech, personal terms and personal privacy policies that are normative and scale for each of us across all the entities that deal with us. (This is what ProjectVRM’s nonprofit spin-off, Customer Commons, is all about.)
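To make the "personal terms that scale" idea concrete, here is a minimal sketch of first-party terms matching: the person proffers a term (something in the spirit of Customer Commons' #NoStalking), the site states its practices, and software decides whether the two agree. Every name and field here is invented for illustration; none of it is a published Customer Commons format:

```python
# Hypothetical sketch of first-party terms matching.
# The term clauses and practice fields below are illustrative assumptions.

my_term = {
    "ads": "ok",              # plain, non-tracking-based ads are fine
    "tracking": "forbidden",  # no third-party tracking
    "data_use": "site-only",  # my data stays with the first party I chose
}

def agrees(site_practices, term=my_term):
    """Return True only if the site's stated practices satisfy every clause."""
    if site_practices.get("third_party_trackers", 0) > 0:
        return term["tracking"] != "forbidden"
    if site_practices.get("shares_data_outside", False):
        return term["data_use"] != "site-only"
    return True

# Two hypothetical sites stating their practices:
good_site = {"third_party_trackers": 0, "shares_data_outside": False}
adtech_site = {"third_party_trackers": 20, "shares_data_outside": True}
```

Because the term is the individual's and the matching logic is shared, the same proffered term scales across every site without per-site negotiation — the same way one Creative Commons license scales across every reuser.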

Businesses can’t give us privacy if we’re always the second parties clicking “agree.” It doesn’t matter how well-meaning and GDPR-compliant those businesses are. Making people second parties is a design flaw in every standing “agreement” we “accept,” and we need to correct that.

The second reason agency matters more than data is that nearly the entire market for personal data today is adtech, and adtech is too dysfunctional, too corrupt, too drunk on the data it already has, and absolutely awful at doing what it harvested that data for: guessing at what we might want so machines can shoot “relevant” and “interest-based” ads at our tracked eyeballs.

Not only do tracking-based ads fail to convince us to do a damn thing 99.xx+% of the time, but most of the time we’re not in the market to buy anything anyway.

As incentive alignments go, adtech’s failure to serve the actual interests of its targets verges on the absolute. (It’s no coincidence that more than a year ago, 1.7 billion people were already blocking ads online.)

And hell, what they do also isn’t really advertising, even though it’s called that. It’s direct marketing, which gives us junk mail and is the model for spam. (For more on this, see Separating Advertising’s Wheat and Chaff.)

Privacy is personal. That means privacy is an effect of personal agency, projected by personal tech and personal expressions of intent that others can respect without working at it. We have that in the offline world. We can have it in the online world too.

Privacy is not something given to us by companies or governments, no matter how well they do Privacy by Design or craft their privacy policies. That approach simply can’t work.

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control.  For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

 

© 2019 ProjectVRM