Author: Doc Searls

We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’ summary of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point with this is that politics are his frames (or what he calls “embankments”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, devoid of privacy, and reliant on surveillance—both by Amazon and by smart homes’ residents. In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ’hood.

Let’s visit three additional pieces. (And stay with me. There’s a call to action here, and I’m making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.) to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and surely awful outcomes.

Second, What does the Internet make of us, where I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here.  As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, rail transport, mass production, electricity, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that “works us over completely” while it’s busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it’s as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It’s also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it’s much easier to talk about Trump than about what made him possible. Think about it: absent digital tech, would we have had Trump? Or even Obama? McLuhan’s blunt perspective may help. “People,” he said, “do not want to know why radio caused Hitler and Gandhi alike.”

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” done little to thwart unwelcome surveillance, and caused countless insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying market demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation either to do research or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to at least start talking about research. Two possible topics: facial recognition and smart homes. Anyone game?


What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join in on that link. Thanks.

What law might clear the way for VRM/Me2B development?

VRM/Me2B developers shouldn’t have to wait for laws to pave the way through a wall-like status quo.  (And we say that in our Privacy Manifesto.) But a good law or two should help.

That was what I had hoped—even expected—the GDPR to do. Specifically, I called it “the world’s most heavily weaponized law protecting personal privacy,” said it was “aimed at companies that track people without asking” and that it would “blow away the (mostly US-based) surveillance economy, especially tracking-based ‘adtech,’ which supports most commercial publishing online.”

That hasn’t happened.

It has been sixteen months since the GDPR went into effect (May 2018), and violation of personal privacy online today remains as pervasive as ever. Worse, violators take advantage of a loophole* in the GDPR that allows them to continue tracking people by requiring (or appearing to require) “consent” to cookies and other means of tracking (so you can get “personalized,” “interest-based” or “relevant” advertising, the perpetrators say). As long as various EU countries’ Data Protection Authorities (who enforce the GDPR) fail to focus on the simple fact that nearly every website and its third parties are doing the same bad things Google and Facebook are accused of doing, the practice will continue, and the GDPR will remain a failure at stopping widespread spying-based adtech.

Meanwhile, many privacy advocates in the U.S. (including me) have invested hope in the California Consumer Privacy Act (CCPA), which will go into effect on January 1, 2020. I invite you to visit the operative language in that law, starting here. As legalese goes, it’s remarkably readable. Wikipedia compresses the law’s intentions rather well under the heading Intentions of the Act:

The intentions of the Act are to provide California residents with the right to:

  1. Know what personal data is being collected about them.
  2. Know whether their personal data is sold or disclosed and to whom.
  3. Say no to the sale of personal data.
  4. Access their personal data.
  5. Request a business to delete any personal information about a consumer collected from that consumer.
  6. Not be discriminated against for exercising their privacy rights.

Note that this presumes that nearly all agency resides on the data collectors’ side, and that the only agency possible on the individual’s side is asking to know, or saying no to, what those who collect personal data can do with it.

That’s not enough.

Making matters worse is that we are mere “consumers” to the CCPA, “data subjects” to the GDPR and “users” to the computer industry—in each case with no more freedom and agency than what potential violators of our privacy (e.g. the websites and services of the world) separately grant us, through their countless, lengthy and infinitely varied privacy policies, terms and “agreements.”

In other words, we’re still at Square Zero, and Square One is neither the CCPA nor the GDPR. Those are relevant in the ways that guard rails are relevant to a winding road; but we don’t have the road yet.

While I’ve made it clear elsewhere that we need tech more than policy (because tech of our own—VRM tech—gives us agency), it will sure help to have policy that guides the deployment of that tech.

So, what law might actually open the way for VRM development, preferably by simply giving individuals a new power they’ve been lacking, such as real control over just one aspect of their privacy: what Louis Brandeis and Samuel Warren called “the right to be let alone” when we’re online?

I like two.

First is the Do-Not-Track Act of 2019. It’s model legislation from DuckDuckGo, and is explained this way:

When you turn on the setting in your browser that says “Do Not Track”, you probably expect to no longer be tracked on most websites you visit. Right? Well, you would be wrong. But don’t worry, you’re not alone.

Our recent study on the Do Not Track (DNT) browser setting indicated that about a quarter of people have turned on this setting, and most were unaware big sites do not respect it. That means approximately 75 million Americans, 115 million citizens of the European Union, and many more people worldwide are, right now, broadcasting a DNT signal.

All of these people are actively asking the sites they visit to not track them. Unfortunately, no law requires websites to respect your Do Not Track signals, and the vast majority of sites, including most all of the big tech companies, sadly choose to simply ignore them.

Let’s change that now. Let’s put teeth behind this widely used browser setting by making a law that would align with current consumer expectations and empower people to more easily regain control of their online privacy.

While DuckDuckGo actively supports the passing of strong, comprehensive privacy laws, we also recognize that it will take time for them to take effect worldwide. In the meantime, governments can provide immediate relief by enacting separate, simpler Do Not Track legislation.

It is extremely rare to have such an exciting legislative opportunity like this, where the hardest work — coordinated mainstream technical implementation and widespread consumer adoption — is already done.

That’s why we’re announcing draft legislation that can serve as a starting point for legislators in America and beyond. It’s entitled the “Do-Not-Track Act of 2019” and, if it were to be enacted, would require sites to respect the Do Not Track browser setting in this manner:

  1. No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web — led by Google, Facebook, and Twitter — would no longer be able to collect and use your browsing history without your permission.
  2. No first-party tracking outside what the user expects. For example, if you use Whatsapp, its parent company (Facebook) wouldn’t be able to use your data from Whatsapp in unrelated situations (like for advertising on Instagram, also owned by Facebook). As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.

Under this proposed law, these restrictions would only come into play if a consumer has turned on the Do Not Track signal for their Internet traffic. To keep the Internet from breaking, these restrictions would have very narrowly tailored exceptions for debugging, auditing, security, non-commercial security research, and reporting, and further limited by mandated data-minimization requirements.

In particular, each of these narrow exceptions would only apply if a site adopts strict data-minimization practices, such as using the least amount of personal information needed, and anonymizing it whenever possible. And importantly, this draft legislation takes a more realistic view of what constitutes anonymous data vs. de-identified data. Legislators need to appreciate that users can be re-identified unless companies implement extra measures of protection.
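
The mechanics here are already deployed: a browser with Do Not Track switched on sends a `DNT: 1` header with every request. All the proposed law adds is a duty to honor it. Here is a minimal sketch, in TypeScript on Node, of what honoring the signal could look like server-side (the handler functions are hypothetical placeholders, not anything from DuckDuckGo’s draft):

```typescript
import { createServer, IncomingMessage, ServerResponse } from "http";

// Browsers with "Do Not Track" enabled send "DNT: 1" on every request.
function dntEnabled(req: IncomingMessage): boolean {
  return req.headers["dnt"] === "1";
}

// Hypothetical handler: a page stripped of third-party trackers and of
// first-party tracking beyond what the user expects.
function serveWithoutTrackers(res: ServerResponse): void {
  res.setHeader("Content-Type", "text/html");
  res.end("<html><body>No trackers here.</body></html>");
}

// Hypothetical handler: business as usual.
function serveAsUsual(res: ServerResponse): void {
  res.setHeader("Content-Type", "text/html");
  res.end("<html><body>Page with the usual trackers.</body></html>");
}

createServer((req, res) => {
  if (dntEnabled(req)) {
    serveWithoutTrackers(res);
  } else {
    serveAsUsual(res);
  }
}).listen(8080);
```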

Katherine Druckman and I also talked about this a bit with Gabriel Weinberg, CEO and founder of DuckDuckGo, on our Reality 2.0 podcast last month.

The other is Adrian Gropper‘s Patient Privacy Rights Information Governance Label. It says,

Patient Privacy Rights Information Governance Label
August 19, 2019

Note: 0-to-5 of the boxes to be checked by the application, device, or service provider.

1. No sharing: The data is never shared with any external entities. It is not even shared in de-identified form.

2. No aggregation: The data is never aggregated with other types of input or data from external sources. This includes mixing the data gathered via The Service with other data, such as patient-reported outcomes.

3. Always voluntary self-identification: The user of The Service is able to choose their own identity. The user does not need to have their identity verified unless required by law.

4. Digital agent support: The user is able to specify a digital agent, trustee, or equivalent information manager, and this specified agent will not be subject to certification or censorship.

5. No vendor lock-in: The Service is easily and conveniently substitutable, so the user can easily move their data to another vendor providing a similar service. This prevents vendor lock-in and is often accomplished using Open Standards.

Indications for Use: The five separately self-asserted statements on the PPR Information Governance Label are subject to legal enforcement as would the privacy policy associated with The Service.
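
Since the label amounts to five independent, self-asserted claims, it is easy to imagine in machine-readable form. Here is an illustrative sketch in TypeScript (my own structure; Patient Privacy Rights specifies no wire format):

```typescript
// Illustrative only: PPR specifies the five statements, not a format.
interface PPRGovernanceLabel {
  noSharing: boolean;           // 1. never shared, even de-identified
  noAggregation: boolean;       // 2. never mixed with external data
  voluntarySelfId: boolean;     // 3. user chooses their own identity
  digitalAgentSupport: boolean; // 4. user may name an uncensored agent
  noVendorLockIn: boolean;      // 5. data is portable via open standards
}

// A provider checks 0 to 5 of the boxes; each checked box is a
// self-assertion subject to legal enforcement, like a privacy policy.
const exampleLabel: PPRGovernanceLabel = {
  noSharing: true,
  noAggregation: true,
  voluntarySelfId: true,
  digitalAgentSupport: false,
  noVendorLockIn: true,
};
```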

While the label is not itself proposed as a law, it would be good to have a law that imposes those requirements, and leaves room for individuals to provide for exceptions, for example when they have working relationships with a service provider.

Maciej Ceglowski also has some good suggestions.


*Paragraph 1 of Article 6 of the GDPR, covering the “Lawfulness of processing,” says, “Processing shall be lawful only if and to the extent that at least one of the following applies: (a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes.” Hence the consent notices with an “accept” button in front of websites. These are most often presented as “cookie notices,” which are actually required by earlier EU law that was to some degree ignored until the GDPR came along.

Whether a notice on the front of a website mentions cookies or not, it usually means the site is obtaining your consent to being tracked “to personalize content and advertising” (or whatever) by spying on you. I’ve been told by GDPR experts that this really isn’t a loophole, and that most of these consent notices actually violate the GDPR’s letter and not just its spirit. That may be true; still, violation of the GDPR’s spirit remains normative.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes) the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008.  At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the  Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and to a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of Compuserve, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is that it urges the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent”—which isn’t consent at all—to exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that the simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The abilities to cover one’s private parts and to signal what’s okay and what’s not okay for respecting personal privacy are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, in oblivity to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it).  Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works.

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

VRM is Me2B

Most of us weren’t at the latest VRM Day or IIW (both of which happened in the week before last), so I’ll fill you in on a cool development there: a working synonym for VRM that makes a helluva lot more sense and may have a lot more box office.

That synonym is Me2B.

For that we can thank Lisa LeVasseur, who, in addition to everything behind that link, runs the new Me2B Alliance, which features the graphic there on the right (suggesting an individual in a driver’s seat). She is also the Vice Chair of the IEEE 7012 Standard for Machine Readable Personal Privacy Terms, a new effort with which some of us are also involved.

Lisa led many sessions at IIW, mostly toward solidifying what the Me2B Alliance will do. If you stay tuned to me2b.us, you can see how that work grows and evolves.

The main thing for me, in the here and now, is to share how much I like Me2B as a synonym for VRM.

It is also a synonym for C2B, of course; but it’s more personal. I also think it may have what it takes to imply Archimedes-grade leverage for individuals in the marketplace. For more on what I mean by that, see any or all of these.

I’m also putting this up to help me prep for mentioning Me2B tomorrow during this talk at the 2019 European Identity & Cloud Conference. It was at this same conference in 2008 that ProjectVRM won its first award. That’s it there on the right.

It’s becoming clear now that we were way ahead of a time that finally seems to be arriving.

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline:

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies.
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged.

All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.
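
To make that concrete, here is a thought experiment in TypeScript: what a portable cart record might contain if it were based on open standards. No such standard exists yet, so every name here is illustrative:

```typescript
// A thought experiment: no PSC standard exists yet.
interface CartItem {
  productId: string; // ideally a standard identifier, such as a GTIN
  store: string;     // the store where the item is parked
  priceSeen: number; // the price when the item was added
  addedAt: string;   // ISO 8601 timestamp
}

interface PersonalShoppingCart {
  owner: string;     // an identifier the user controls
  items: CartItem[]; // items parked across every store
  intents: string[]; // intentcasts: things the owner means to buy
  terms: string;     // the owner's terms for use of this data
}

// Because the cart lives on the user's side, "what do I have parked
// at this store?" is a walk over one list, not a login to each store.
function itemsAt(cart: PersonalShoppingCart, store: string): CartItem[] {
  return cart.items.filter((item) => item.store === store);
}
```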

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see if there is any economics research and/or scholarship on personal scale and its leverage (such as personal operating systems, devices and browsers give us) in the digital world. Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can provide by themselves.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
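
For a sense of the mechanics, here is roughly what one of those beacons does, sketched in browser TypeScript (the tracker domain is made up; real beacons are usually invisible images or scripts you never see):

```typescript
// Roughly how a third-party tracking beacon works. "tracker.example"
// is a made-up stand-in for a real adtech domain. Because the same
// third-party cookie rides along on every site that embeds the
// tracker, your visits can be correlated across all of them.
const beacon = new Image(1, 1); // an invisible 1x1 pixel
beacon.src =
  "https://tracker.example/pixel?" +
  new URLSearchParams({
    page: location.href,    // which page you are reading
    ref: document.referrer, // where you came from
    screen: `${screen.width}x${screen.height}`, // fingerprint material
  }).toString();
document.body.appendChild(beacon);
```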

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—whatever good those machines also do for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or that has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both, we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
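
The first capability is worth sketching, because it shows how little we’re asking for: a consent record kept by the user’s own tool, rather than scattered across every site’s database. This is illustrative TypeScript; nothing like it ships in today’s browsers:

```typescript
// Illustrative: a consent ledger on the user's side, so "what have I
// opted into?" has an answer that doesn't require visiting every site.
interface ConsentRecord {
  site: string;       // who asked
  purpose: string;    // e.g. "targeted advertising"
  grantedAt: string;  // ISO 8601 timestamp
  expiresAt?: string; // consent should be able to lapse
}

class ConsentLedger {
  private records: ConsentRecord[] = [];

  grant(site: string, purpose: string, expiresAt?: string): void {
    this.records.push({
      site,
      purpose,
      grantedAt: new Date().toISOString(),
      expiresAt,
    });
  }

  // Capability 1: knowing what we have opted into.
  grantsFor(site: string): ConsentRecord[] {
    return this.records.filter((r) => r.site === site);
  }
}
```

Capability 2, auditing what is actually done with the data, is the harder half; that’s why zero-basing is needed in the meantime.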

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance-based sausage looking for clicks.

A citizen-sovereign way to pay for news—or for any creative work

The Aspen Institute just published a 180-page report by the Knight Commission on Trust, Media and Democracy titled (in all caps) CRISIS IN DEMOCRACY: RENEWING TRUST IN AMERICA. Its Call to Action closes on the theme of personal sovereignty.

This is good. Real good. Having Aspen and Knight endorse personal sovereignty as a necessity for solving the crises of democracy and trust also means they endorse what we’ve been pushing forward here for more than a dozen years.

Since the report says (under Innovation, on page 73) we need to “use technology to enhance journalism’s roles in fostering democracy,” and that “news companies need to embrace technology to support their mission and achieve sustainability,” it should help to bring up the innovation we proposed in an application for a Knight News Challenge grant in 2011. This innovation was, and still is, called EmanciPay. It’s a citizen-sovereign way to pay for news, plus all forms of creative production where there is both demand and failing or absent sources of funding.

We have not only needed this for a long time, but it is for lack of it (or of any original and market-based approach to paying for creative work) that the EU is poised to further break our one Internet into four or more parts and destroy the open Web that has done so much to bring the world together and generate near-boundless forms of new wealth, inclusivity, equality, productivity and other good nouns ending in ty. The EU’s hammer for breaking the open Web is the EU Copyright Directive, which has been under consideration and undergoing steady revision since 2016. Cory Doctorow, writing for the EFF, says The Final Version of the EU’s Copyright Directive Is the Worst One Yet. One offense (among too many to list here):

Under the final text, any online community, platform or service that has existed for three or more years, or is making €10,000,001/year or more, is responsible for ensuring that no user ever posts anything that infringes copyright, even momentarily. This is impossible, and the closest any service can come to it is spending hundreds of millions of euros to develop automated copyright filters. Those filters will subject all communications of every European to interception and arbitrary censorship if a black-box algorithm decides their text, pictures, sounds or videos are a match for a known copyrighted work. They are a gift to fraudsters and criminals, to say nothing of censors, both government and private.

There are much better ways of getting the supply and demand sides of creative markets together. EmanciPay is one of them, and deserves another airing.

Perhaps now that Knight and Aspen are cheering the citizen-sovereign bandwagon, it’s worth sharing our original EmanciPay proposal in open source form.

So here it is, copied and pasted out of the last draft before we submitted it. Since much has changed since then (other than the original idea, which is the same as ever only more timely), I’ve added a bunch of notes at the end, and a call for action. Before reading it, please note two things: 1) we are not asking for money now (we were then, but not now); and 2) while this proposal addresses the challenge of paying for news, it applies much more broadly to all creative work.

10:00pm Monday 31 January 2011

Project Title:

EmanciPay: a user-driven system for generating revenue and managing relationships

Requested amount from Knight News Challenge

$325,000

Describe your project:

EmanciPay is the first user-driven payment system for news and information media. It is also the first system by which the consumers of media can create and participate in relationships with media — and the first system to reform the legal means by which those relationships are created and sustained.

With EmanciPay, users can easily choose to pay whatever they like, whenever they like, however they like — on their own terms and not just those controlled by the media’s supply side. EmanciPay will also provide means for building genuine two-way relationships, rather than relationships defined by each organization’s subscription and membership systems. As with Creative Commons, terms will be expressed in text and symbols that can be read easily by both software and people.
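
A sketch of what such machine-readable terms might look like, in TypeScript (illustrative only; the proposal specifies no data format, and every field here is an assumption):

```typescript
// Illustrative only: the proposal specifies no data format. The idea
// is Creative Commons-style terms, asserted from the user's side,
// that a publisher's systems can read and accept automatically.
interface EmanciPayTerms {
  payee: string;        // the news organization or creator
  amount: number;       // what the user chooses to pay
  currency: string;     // e.g. "USD"
  schedule: "once" | "monthly" | "per-article";
  conditions: string[]; // e.g. ["no-tracking", "no-data-resale"]
}

// A user-driven offer: pay this much, on these terms, whenever and
// however the user likes.
const myOffer: EmanciPayTerms = {
  payee: "example-news.org",
  amount: 3,
  currency: "USD",
  schedule: "monthly",
  conditions: ["no-tracking"],
};
```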

While there is no limit to payment choice options with EmanciPay, we plan to test these one at a time. The first planned trials are with Tipsy, which is being developed alongside EmanciPay, and which also has a Knight News Challenge application. The two efforts are cooperative and coordinated.

EmanciPay belongs to a growing family of VRM (Vendor Relationship Management) tools. Both EmanciPay and VRM grew out of work in ProjectVRM, which I launched in 2006, at the start of my fellowship with Harvard’s Berkman Center for Internet & Society. In the past four years the VRM development community has grown internationally and today involves many allied noncommercial and commercial efforts. Here is the current list of VRM development projects.

How will your project improve the delivery of news and information to geographic communities?:

Two ways.

First is with a new business model. Incumbent local and regional media currently have three business models: paid delivery (subscriptions and newsstand sales), advertising, and (in the case of noncommercial media) appeals for support. All of these have well-known problems and limitations. They are also controlled in a top-down way by media organizations. By reducing friction and lowering the threshold of payment, EmanciPay will raise the number of customers while also providing direct and useful intelligence about the size and nature of demand. This supports geographic customisation of news and information goods.

Second is by providing ways for both individuals and news organizations to create and sustain relationships that go beyond “membership” (which in too many cases means little more than “we gave money”). EmanciPay will also help consumers of news participate in the news development process. Because EmanciPay is based on open source code and open standards, it can be widely adopted and adapted to meet local needs. CRM (customer relationship management) software companies, many of which supply CRM systems to media organizations, are also awaiting VRM developments. (The cover and much of this issue of CRM Magazine are devoted to VRM.)

What unmet need does your proposal answer?:

EmanciPay meets the need for maximum freedom and flexibility in paying for news and information, and for a media business model that does not depend only on advertising, membership systems, large donors or government grants. (This last one is of special interest at a time when cutting government funding of public broadcasting is a campaign pledge by many freshly elected members of Congress.)

Right now most news and information is free of charge on the Web, even when the same goods are sold on newsstands or through cable TV subscriptions. This fact, plus cumbersome and widely varied membership, pledging and payment systems, serves to discourage payment by media users. Even the membership systems of public broadcasting stations exclude vast numbers of people who would contribute “if it was easy”. EmanciPay overcomes these problems by making it easy for consumers of news to become customers of news. It also allows users to initiate real and productive relationships with news organizations, whether or not they pay those organizations.

How is your idea new?:

Equipping individuals with their own tools for choosing what and how to pay, and for creating and maintaining relationships, is a new idea. Nearly all other sustainability ideas involve creating new intermediators or working on improving services on the supply side.

Tying sustainability to meaningful relationships (rather than just “membership”) is also new. So is creating means by which individuals can assert their own “terms of service” — and match those terms with those on the supply side.

EmanciPay is also new in the scope of its ambition. Beyond creating a large new source of revenues, and scaffolding meaningful relationships between supply and demand, EmanciPay intends to remove legal frictions from the marketplace as well. What lawyers call contracts of adhesion (ones in which the dominant party is free to change what they please while the submissive party is nailed to whatever the dominant party dictates) have been pro forma on the Web since the invention of the cookie in 1995. EmanciPay is the first and only system intended to obsolete and replace these onerous “agreements” (which really aren’t).

Once in place and working, EmanciPay’s effects should exceed even those of Creative Commons, because EmanciPay addresses the demand as well as the supply side of the marketplace. And, like Creative Commons, EmanciPay does not require changes in standing law.

Finally, EmanciPay is new in the sense that it is not centralized, and does not require an intermediary. As with email (the protocols of which are open and decentralized, by design), EmanciPay supports both self-hosting and hosting in “the cloud.” It is also both low-level and flexible enough to provide base-level building material for any number of new businesses and services.

What will you have changed by the end of the project?:

First, we will have changed the habits and methods by which people pay for the media goods they receive, starting with news and information.

Second, we will have introduced relationship systems that are not controlled by the media, but driven instead by the individuals who are each at the centers of their own relationships with many different entities. Thus relationships will be user-driven and not just organization-driven.

Third, we will have created a new legal framework for agreements between buyers and sellers on the Web and in the networked world, eliminating many of the legal frictions involved in today’s e-commerce systems.

Fourth, we will have introduced to the world an intention economy, based on the actual intentions of buyers, rather than on guesswork by sellers about what customers might buy. (The latter is the familiar “attention economy” of advertising and promotion.)

Why are you the right person or team to complete this project?:

I know how to get ideas and code moving in the world. I’ve done that while running ProjectVRM for the last four years. As of today VRM tools are being developed in many places, by many programmers, in both commercial and noncommercial capacities, around the world. Those places include Boston, London, Johannesburg, Dubuque, Santiago, Belfast, Salt Lake City, Santa Barbara, Vienna, and Seattle. Much of this work has also been advanced at twice-yearly IIW (Internet Identity Workshop) events, which I co-founded in 2005 and continue to help organize.

As Senior Editor of Linux Journal, I’ve been covering open source code development since 1996, contributing to its understanding and widespread adoption. For that and related work, I received a Google-O’Reilly Open Source Award for “Best Communicator” in 2005.

I helped reform both markets and marketing as a co-author of The Cluetrain Manifesto, a business bestseller in 2000 that has since become part of the marketing canon. (As of today, Cluetrain is cited by more than 5300 other books.) I also coined Cluetrain’s most-quoted line, “Markets are conversations.”

I helped popularize blogging, a subject to which I have been contributing original thinking and writing since 1999. I also have more than 12,000 followers on Twitter.

EmanciPay is also my idea, and one I have been working on for some time. This includes collaboration with PRX and other members of the public radio community on ListenLog (the brainchild of Keith Hopper at NPR), which can be found today on the Public Radio Player, an iPhone app that has been downloaded more than 2 million times. I am also working on EmanciPay with students at MIT/CSAIL and King’s College London. The MIT/CSAIL collaboration is led by David Karger of the MIT faculty, and ties in with work he and students are doing with Haystack and Tipsy.

I’ve also contributed to other VRM development efforts — on identity and trust frameworks, on privacy assurance, on selective disclosure of personal data, and on personal data stores (PDSes), all of which will help support EmanciPay as it is deployed.

What terms best describe your project?:

Bold, original, practical, innovative and likely to succeed.

What tasks/benchmarks need to be accomplished to develop your project and by when will you complete them? (500 words)

1) Engaging of programmers at MIT and other institutions within two months.

2) Establishment of Customer Commons (similar to Creative Commons) within two months.

3) Adoption of EmanciPay as a clinical case study in law school classes, one semester after the grant money arrives.

4) Beta-level code within six months.

5) Recruitment of first-round participating media entities (journals, sites, blogs, broadcast stations) — completed within six months.

6) Relationships established with PayPal, Google Checkout and other payment intermediators within six months.

7) Tipsy trials within three months after beta-level code is ready.

8) Full EmanciPay trials within six months after beta-level code is ready.

9) Research protocols completed by the time beta code is ready.

How will you measure progress?: (500 words)

1) Involvement in open source code development by programmers other than those already paid or engaged (for example, as students) for the work

2) Completion of code

3) Deployment in target software and devices

4) Cooperation by allied development .orgs and .coms

5) Adoption and use by individuals

6) Direct financial benefit for news organizations.

All are measurable. We can count programmers working on code bases, as well as patches and lines of code submitted and added. We can see completed code in downloadable and installable form in the appropriate places. We can see and document cooperation by organizations. We can count downloads and monitor activities by users (with their permission). And we can see measurable financial benefits to news and information organizations. Researching each of these will be part of the project. For example, we will need to provide on our website, or directly, descriptions of accounting methods for the organizations that will benefit directly from individual contributions.

Do you see any risk in the development of your project?: (500 words)

EmanciPay is likely to be seen as disruptive by organizations that are highly vested in existing forms of funding. One example is public broadcasting, which has relied on fund drives for decades.

There is also a fear that EmanciPay will raise the number of contributors while lowering the overall funding dollar amount spent by contributors. I don’t expect that to happen. What I do expect is for the market to decide — and for EmanciPay to provide the means. Fortunately, EmanciPay also provides means for non-monetary relationships to grow, which will raise the perception of value by users and customers, and the likelihood that more users will become customers.

How will people learn about what you are doing?: (500 words)

We will blog about it, talk about it at conferences, tweet about it, and use every other personal and social medium to spread the word. And we will use traditional media relations as well — which shouldn’t be too hard, since we will be working to bring more income to those media.

We have a good story about an important cause. I’m good at communicating and driving stories forward, and I have no doubt that the effort will succeed.

Is this a one-time experiment or do you think it will continue after the grant?: (500 words)

EmanciPay will continue after the grant because it will become institutionalized within the fabric of the economy, as will its allied efforts.

In addition to the Knight News Challenge, does your project rely on other revenue sources? (Choose all that apply):

[ ] Advertising
[ ] Paid Subscriptions
[ x ] Crowd-Funded
[ ] Earned Income
[ ] Syndication
[ ] Other

Here’s what happened after that.

  1. Customer Commons was incorporated as a California-based 501(c)(3) nonprofit shortly after this was submitted. (It is also currently cited in this CNN story and this one by Fox News.) Almost entirely bootstrapped, Customer Commons has established itself as a Creative Commons-like place where model personal privacy policies and terms of engagement that individuals proffer as first parties can live. Those terms are among a number of tools for exercising citizen sovereign powers. “CuCo” also plans, immodestly, to be a worldwide membership organization, composed entirely of customers (possible slogan: “We’re the hundred percent”). In that capacity, it will hold events, publish, develop customer-side code that’s good for both customers and businesses (e.g. a shopping cart of your own that you can take from site to site), and lobby for policies that respect the natural sovereignty and power of customers in the digital world. After years of prep, and not much asking, Customer Commons is at last ready to accept funding, and to start scaling up. If you have money to invest in grassroots citizen-sovereign work, that’s a good place to do it.
  2. Commercial publishers, including nearly all the world’s websites (or so it seems), became deeply dependent on adtech—tracking-based advertising—for income. (I reviewed that history here in 2015.) We’ve been fighting that. So have governments. Both the GDPR in Europe and the California Consumer Privacy Act were called into existence by privacy abuses funded by adtech. (Seriously, without adtech, those laws wouldn’t have happened.)
  3. The current VRM developments list is a large and growing one. So is our list of participants.
  4. Some of the allied projects mentioned in the proposal are gone or have morphed. But some are still there, and there are many other potential collaborators.
  5. Fintech has become a thing, along with blockchain, distributed ledgers and other person-driven solutions to the problem of excessive centralization.
  6. The word cluetrain is now mentioned in more than 13,000 books. And, twenty years since it was first uttered, cluetrain is also tweeted almost constantly.
  7. I am now editor-in-chief of Linux Journal, the first publication ready to accept terms proffered by readers, starting with a Customer Commons one dubbed #NoStalking.

That list could go on, but it’s not what matters.

What matters is that EmanciPay was a great idea when we proposed it in the first place, and a better idea now. With the right backing, it can scale.

If you want to solve the problem of paying for news (or all of journalism), there is no more democratic, fair, trust-causing, and potentially massive idea on the table for doing exactly that than EmanciPay. And nothing is better positioned to address the many other problems and goals laid out in that Knight Commission report. One example: An immodest proposal for the music industry.

If you’re interested, talk to me.

VRM TBDs

Every construction project has a punch list of to-be-done items. Since we’ve been at this for a dozen years, and have a rather long list of development works in progress on our wiki, now seems like a good time and place to list what still needs to be done, but from the individual’s point of view. In other words, things people need but don’t have yet.

So here is a punch list of those things, in the form of a static page rather than a post such as this one. There is also a shortcut to the punch list in the menu above.

For the record, here’s that list as it stands today:

  1. Make companies agree to our terms, rather than the other way around.
  2. Have real relationships with companies, based on open standards and code, rather than relationships trapped inside corporate silos, each with their own very different ways of managing customer relationships (CRM), “delivering” a “customer experience” (aka CX), leading us on a “journey” or having us “join the conversation.”
  3. Standardize the ways we relate to the service sides of companies, both for requesting service and for exchanging useful data in the course of owning a product or renting a service, so market intelligence flows both ways, and the “customer journey” becomes a virtuous cycle.
  4. Control our own self-sovereign identities, and give others what little they need to know about us on an as-needed basis.
  5. Get rid of logins and passwords.
  6. Change our personal details (surname, phone number, email or home address) in the records of all the organizations we deal with, in one move.
  7. Pay what we want, where we want, for whatever we want, in our own ways.
  8. Call for service or support in one simple and straightforward way of our own, rather than in as many ways as there are 800 numbers to call and numbers to punch into a phone before we wait on hold while bad music plays.
  9. Express loyalty in our own ways, which are genuine rather than coerced.
  10. Have an Internet of MY Things, which each of us controls for ourselves, and in which every thing we own has its own cloud, which we control as well.
  11. Own and control all our health and fitness records, and how others use them.
  12. Have wallets of our own, rather than only those provided by platforms.
  13. Have shopping carts of our own, which we can take from store to store and site to site online, rather than being tied to ones provided only by the stores themselves.
  14. Have personal devices of our own (such as this one) that aren’t cells in a corporate silo, or suction cups on corporate tentacles. (Alas, that’s what we still have with all Apple iOS phones and tablets, and all Android devices with embedded Google apps.)
  15. Remake education around the power we all have to teach ourselves and learn from each other, making optional at most the formal educational systems built more for maintaining bell curves than for liberating the inherent genius of every student.

Please help us improve and correct it.


The only path from subscription hell to subscription heaven

I subscribe to Vanity Fair. I also get one of its newsletters, replicated on a website called The Hive. At the top of the latest Hive is this come-on: “For all that and more, don’t forget to sign up for our metered paywall, the greatest innovation since Nitroglycerin, the Allman Brothers, and the Hangzhou Grand Canal.”

When I clicked on the metered paywall link, it took me to a plain old subscription page. So I thought, “Hey, since they have tracking cruft appended to that link, shouldn’t it take me to a page that says something like, ‘Hi, Doc! Thanks for clicking, but we know you’re already a paying subscriber, so don’t worry about the paywall’?”

So I clicked on the Customer Care link to make that suggestion. This took me to a login page, where my password manager filled in the blanks with one of my secondary email addresses. That got me to my account, which says my Condé Nast subscriptions look like this:

Oddly, the email address at the bottom there is my primary one, not the one I just logged in with.  (Also oddly, I still get Wired.)

So I went to the Vanity Fair home page, found myself logged in there, and clicked on “My Account.” This took me to a page that said my email address was my primary one, and provided a way to change my password, to subscribe or unsubscribe to four newsletters, and a way to “Receive a weekly digest of stories featuring the players you care about the most.” The link below said “Start following people.” No way to check my account itself.

So I logged out from the account page I reached through the Customer Care link, and logged in with my primary email address, again using my password manager. That got me to an account page with the same account information you see above.

It’s interesting that I have two logins for one account. But that’s beside more important points, one of which I made with this message I wrote for Customer Care in the box provided for that:

Curious to know where I stand with this new “metered paywall” thing mentioned in the latest Hive newsletter. When I go to the link there — https://subscribe.condenastdigital.com/subscribe/splits/vanityfair/ — I get an apparently standard subscription page. I’m guessing I’m covered, but I don’t know. Also, even as a subscriber I’m being followed online by 20 or more trackers (reports Privacy Badger), supposedly for personalized advertising purposes, but likely also for other purposes by Condé Nast’s third parties. (Meaning not just Google, Facebook and Amazon, but Parsely and indexww, which I’ve never heard of and don’t trust. And frankly I don’t trust those first three either.) As a subscriber I’d want to be followed only by Vanity Fair and Condé Nast for their own service-providing and analytic purposes, and not by who-knows-what by all those others. If you could pass that request along, I thank you. Cheers, Doc

When I clicked on the Submit button, I got this:

An error occurred while processing your request.

Please call our Customer Care Department at 1-800-667-0015 for immediate assistance or visit Vanity Fair Customer Care online.

Invalid logging session ID (lsid) passed in on the URL. Unable to serve the servlet you’ve requested.

So there ya go: one among a zillion other examples of subscription hell, differing only in details.

Fortunately, there is a better way. Read on.

The Path

The only way to pave a path from subscription and customer service hell to the heaven we’ve never had is by normalizing the ways both work, across all of business. And we can only do this from the customer’s side. There is no other way. We need standard VRM tools to deal with the CRM and CX systems that exist on the providers’ side.

We’ve done this before.

We fixed networking, publishing and mailing online with the simple and open standards that gave us the Internet, the Web and email. All those standards were easy for everyone to work with, supported boundless economic and social benefits, and began with the assumption that individuals are full-privilege agents in the world.

The standards we need here should make each individual subscriber the single point of integration for their own data, and the responsible party for changing that data across multiple entities. (That’s basically the heart of VRM.)

This will give each of us a single way to see and manage many subscriptions, see notifications of changes by providers, and make changes across the board with one move. VRM + CRM.
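
Here is a rough sketch of what “change once, update everywhere” could look like from the customer’s side. The endpoints and payload shape are hypothetical; a real standard would add authentication, authorization, and consent receipts.

```python
# Sketch of the individual as the single point of integration:
# one change of record, pushed to every provider's endpoint.
# URLs and payload fields are hypothetical illustrations.

import json
import urllib.request

PROVIDERS = [
    "https://crm.example-magazine.com/api/subscriber",
    "https://accounts.example-utility.com/api/customer",
]

change = {"field": "email", "new_value": "doc@newdomain.example"}

def broadcast_change(change: dict) -> None:
    body = json.dumps(change).encode()
    for url in PROVIDERS:
        request = urllib.request.Request(
            url, data=body, method="PATCH",
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(request) as response:
                print(url, response.status)
        except OSError as err:  # the example endpoints won't resolve
            print(url, "failed:", err)

broadcast_change(change)
```

The same pattern would serve customer care: one standard request format from the individual, fanned out to however many providers need to hear it.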

The same goes for customer care service requests. These should be normalized the same way.

In the absence of normalizing how people manage subscription and customer care relationships, all the companies in the world with customers will have as many different ways of doing both as there are companies. And we’ll languish in the login/password hell we’re in now.

The VRM+CRM cost savings to those companies will also be enormous. For a sense of that, just multiply what I went through above by as many people as there are in the world with subscriptions, and multiply that result by the number of subscriptions those people have — and then do the same for customer service.
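
For a back-of-envelope sense of that scale, here is the arithmetic with made-up but plausible numbers. None of these figures comes from any measurement; they only show how fast the waste compounds.

```python
# Back-of-envelope estimate. Every number here is an assumption,
# not data; the point is the multiplication, not the totals.

subscribers = 1_000_000_000      # people with at least one paid subscription
subs_each = 5                    # subscriptions per subscriber
minutes_per_hassle = 20          # one login/billing wrangle like the one above

total_hours = subscribers * subs_each * minutes_per_hassle / 60
print(f"{total_hours:,.0f} hours of customer and support time per year")
# -> 1,666,666,667 hours — before counting the same again for service calls
```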

We can’t fix this inside the separate CRM systems of the world. There are too many of them, competing in too many silo’d ways to provide similar services that work differently for every customer, even when they use the same back-ends from Oracle, Salesforce, SugarCRM or whomever.

Fortunately, CRM systems are programmable. So I challenge everybody who will be at Salesforce’s Dreamforce conference next week to think about how much easier it will be when individual customers’ VRM meets Salesforce B2B customers’ CRM. I know a number of VRM people who will be there, including Iain Henderson, of the bonus link below. Let me know you’re interested and I’ll make the connection.

And come work with us on standards. Here’s one.

Bonus link: Me-commerce — from push to pull, by Iain Henderson (@iaianh1)

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me and records my weight, body mass index and fat percentage on a graph, updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from community@email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that Nokia in May planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.
