
We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’s summary of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing, and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point here is that politics is his frame (or what he calls an “embankment”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, devoid of privacy, and reliant on surveillance—both by Amazon and by the smart homes’ residents. In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ’hood.

Let’s visit three additional pieces. (And stay with me. There’s a call to action here, and I’m making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.) to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and inevitably strange, unintended, and in some cases dangerous outcomes.

Second, What does the Internet make of us, which I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here. As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, rail transport, mass production, electricity, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that “works us over completely” while it’s busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it’s as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It’s also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it’s much easier to talk about Trump than about what made him possible. Think about it: absent digital tech, would we have had Trump? Or even Obama? McLuhan’s blunt perspective may help. “People,” he said, “do not want to know why radio caused Hitler and Gandhi alike.”

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” done little to thwart unwelcome surveillance, and caused a boundless number of insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying market demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR well. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation either to do research or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding, and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to start talking, at least, about research. Two possible topics: facial recognition and smart homes. Anyone game?


What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join the conversation there. Thanks.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes), the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008. At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and to a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of CompuServe, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)
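For the record, that polite request was literally a single HTTP request header: DNT: 1. The header itself is real; the little helper below is only an illustrative sketch of how a browser or tool might attach it to outgoing requests.

```python
# Do Not Track was nothing more than one request header, "DNT: 1",
# which the receiving site was free to honor or ignore.
def with_dnt(headers=None):
    """Return a copy of the request headers with the DNT signal set."""
    headers = dict(headers or {})
    headers["DNT"] = "1"  # 1 means "please do not track me"
    return headers

print(with_dnt({"User-Agent": "example-browser/1.0"}))
# → {'User-Agent': 'example-browser/1.0', 'DNT': '1'}
```

That was the whole protocol: a signal with no enforcement behind it, which is what made the middle finger so easy to give.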

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this, and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is urging the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent at all, to exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that the simple and universally shared personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online life as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay where personal privacy is concerned, are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, oblivious to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it). Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one more thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and gives companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works.

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

Privacy = personal agency + respect by others for personal dignity

Privacy is a state each of us enjoys to the degrees others respect it.

And they respect what economists call signals. We send those signals through our behavior (hand signals, facial expressions) and technologies. Both are expressions of agency: the ability to act with effect in the world.

So, for example, we signal a need not to reveal our private parts  by wearing clothes. We signal a need not to have our private spaces invaded by buttoning our clothes, closing doors, setting locks on those doors, and pulling closed curtains or shades. We signal a need not to be known by name to everybody by not wearing name tags as we walk about the world. (That we are naturally anonymous is a civic grace, but a whole ‘nuther thread.)

All of this has been well understood in the physical world for as long as we’ve had civilization—and perhaps longer. It varies by culture, but it remained remarkably non-controversial—until we added the digital world to the physical one.

The digital world, like the physical one, came without privacy. We had to invent privacy in the physical world with technologies (clothing, shelter, doors, locks) and norms such as respect for the simple need for personal dignity.

We have not yet done the same in the digital world. We did, however, invent administrative identities for people, because administrative systems need to know who they’re interested in and dealing with.

These systems are not our own. They belong to administrative entities: companies, government agencies, churches, civic groups, whatever. Nearly 100% of conversation about both identity and privacy takes place inside the administrative context. All questions come down to “How can this system with ways of identifying us give us privacy?” Even Privacy by Design (PbD) is about administrative systems. It is not something you and I have. Not in the way we have clothes.

And that’s what we need: the digital equivalents of clothing and ways of signaling what’s okay and what’s not okay. Norms should follow, and then laws and regulations restricting violations of those norms.

Unfortunately, we got the laws (e.g. the EU’s GDPR and California’s AB 375) before we got the tech and the norms.

But I’m encouraged about getting both, for two reasons. One is the work going on here among VRM-ish developers. The other is that @GregAEngineer gave a talk this morning on exactly this topic, at the IEEE #InDITA conference in Bangalore.

Oh, and lest we think privacy matters only to those in the fully privileged world, watch Privacy on the Line, a video just shared here.

Why personal agency matters more than personal data

Lately a lot of thought, work and advocacy has been going into valuing personal data as a fungible commodity: one that can be made scarce, bought, sold, traded and so on. While there are good reasons to challenge whether or not data can be property (see Jefferson and Renieris), I want to focus on a different problem, the one best solved first: the need for personal agency in the online world.

I see two reasons why personal agency matters more than personal data.

The first reason we have far too little agency in the networked world is that we settled, way back in 1995, on a model for websites called client-server, which should have been called calf-cow or slave-master, because we’re always the weaker party: dependent, subordinate, secondary. In default regulatory terms, we clients are mere “data subjects,” and only server operators are privileged to be “data controllers,” “data processors,” or both.

Fortunately, the Net’s and the Web’s base protocols remain peer-to-peer, by design. We can still build on those. And it’s early.

A critical start in that direction is making each of us the first party rather than the second when we deal with the sites, services, companies and apps of the world—and doing that at scale across all of them.

Think about how much simpler and saner it is for websites to accept our terms and our privacy policies, rather than to force each of us, all the time, to accept their terms, all expressed in their own different ways. (Because they are advised by different lawyers, equipped by different third parties, and generally confused anyway.)

Getting sites to agree to our own personal terms and policies is not a stretch, because that’s exactly what we have in the way we deal with each other in the physical world.

For example, the clothes that we wear are privacy technologies. We also have norms that discourage others from, for example, sticking their hands inside our clothes without permission.

The fact that adtech plants tracking beacons on our naked digital selves and tracks us like animals across the digital frontier may be a norm for now, but it is also morally wrong, massively rude, and now illegal under the GDPR.

We can easily create privacy tech, personal terms, and personal privacy policies that are normative and that scale for each of us across all the entities that deal with us. (This is what ProjectVRM’s nonprofit spin-off, Customer Commons, is all about.)
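To make the idea concrete, here is one hypothetical sketch of what a machine-readable personal-terms record, and a check of a site’s declared practices against it, might look like. None of this is an existing Customer Commons format; the field names and the acceptable_under helper are invented purely for illustration.

```python
# A hypothetical, minimal "personal terms" record. The individual is the
# first party; sites either meet these terms or they don't.
MY_TERMS = {
    "tracking_off_site": False,    # no following me away from your site
    "data_sale": False,            # no selling or sharing my data
    "purpose": ["site_operation"], # data use limited to running the service
}

def acceptable_under(terms, site_practices):
    """Check a site's declared practices against one person's terms."""
    if site_practices.get("tracks_off_site") and not terms["tracking_off_site"]:
        return False
    if site_practices.get("sells_data") and not terms["data_sale"]:
        return False
    # every purpose the site declares must be one the person allows
    return all(p in terms["purpose"] for p in site_practices.get("purposes", []))

print(acceptable_under(MY_TERMS, {"tracks_off_site": True}))         # → False
print(acceptable_under(MY_TERMS, {"purposes": ["site_operation"]}))  # → True
```

The point of the sketch is the direction of agreement: the site’s practices are evaluated against the person’s terms, not the other way around.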

Businesses can’t give us privacy if we’re always the second parties clicking “agree.” It doesn’t matter how well-meaning and GDPR-compliant those businesses are. Making people second parties is a design flaw in every standing “agreement” we “accept,” and we need to correct that.

The second reason agency matters more than data is that nearly the entire market for personal data today is adtech, and adtech is too dysfunctional, too corrupt, too drunk on the data it already has, and absolutely awful at doing what it harvests that data for: letting machines guess at what we might want so they can shoot “relevant” and “interest-based” ads at our tracked eyeballs.

Not only do tracking-based ads fail to convince us to do a damn thing 99.xx+% of the time, but most of the time we’re not shopping for anything anyway.

As incentive alignments go, adtech’s failure to serve the actual interests of its targets verges on the absolute. (It’s no coincidence that more than a year ago, 1.7 billion people were already blocking ads online.)

And hell, what adtech does isn’t really advertising, even though it’s called that. It’s direct marketing, which gives us junk mail and is the model for spam. (For more on this, see Separating Advertising’s Wheat and Chaff.)

Privacy is personal. That means privacy is an effect of personal agency, projected by personal tech and personal expressions of intent that others can respect without working at it. We have that in the offline world. We can have it in the online world too.

Privacy is not something given to us by companies or governments, no matter how well they do Privacy by Design or craft their privacy policies. That approach simply can’t work.

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.


2018: When Customers Finally Take Charge

In the spring of 2012, Harvard Business Review Press published The Intention Economy: When Customers Take Charge. Not long after that, word came from The Wall Street Journal that Robert James Thomson, then Managing Editor of the paper, wanted to use the opening chapter of the book as a cover essay for the paper’s Review section. Amazon at the time was already giving that chapter away as a teaser for book sales, so I ended up compressing the whole book to a single 2,000-word piece. Here’s how the cover looked:

I thought, “Holy shit, that looks like the cover of Dianetics or something.” Also, “I never would have used that headline.”

But that’s why they pay big bucks to headline writers. That one proved so terrific that I want to use it as the title of my next book, to follow up on The Intention Economy now that it’s finally about to happen.

The timing is right because tectonic shifts now shaking business were twelve years in the future when I started ProjectVRM (in Fall of 2006) and six years in the future when The Intention Economy came out.

Let’s frame those shifts with a pair of graphics from Larry Lessig‘s 1999 book Code and Other Laws of Cyberspace, and its successor in 2005, Code v2. The first is this dot, representing the individual:

The second is this graphic, representing four constraints on the individual:

Each of those four ovals, Larry wrote, constrains or regulates what the individual can do in the networked world.

With ProjectVRM, our work is about turning around those arrows, empowering individuals to exert influence—or agency (the power to operate with full effect)—in all four directions:

In other words, to be a god.

In Code, Larry explains the four constraints with the example of smoking:

If you want to smoke, what constraints do you face? What factors regulate your decision to smoke or not?

One constraint is legal. In some places at least, laws regulate smoking—if you are under eighteen, the law says that cigarettes cannot be sold to you…

But laws are not the most significant constraints on smoking. Smokers in the United States certainly feel their freedom regulated… Norms say that one doesn’t light a cigarette in a private car without first asking permission of the other passengers…

The market is also a constraint. The price of cigarettes is a constraint on your ability to smoke —change the price, and you change this constraint…

Finally, there are the constraints created by the technology of cigarettes, or by the technologies affecting their supply… How the cigarette is, how it is designed, how it is built —in a word, its architecture—affects the constraints faced by a smoker.

Thus, four constraints regulate this pathetic dot—the law, social norms, the market, and architecture—and the “regulation” of this dot is the sum of these four constraints. Changes in any one will affect the regulation of the whole… A complete view, therefore, must consider these four modalities together.

But the Internet was not designed for pathetic dots. By specifying little more than how data is addressed and moved between any two points in the world, across any variety of networks, the Internet gave every conscious entity on that world a lever so huge  Archimedes could only imagine it. I explain this in How tools for customers have more leverage than tools for business alone:

Archimedes said “Give me a place to stand and a lever long enough and I can move the world.”

Alas, Archimedes didn’t have that place. Now all of us do. It’s called the Internet.

Before the Internet, the best way to improve business was with better tools and services for businesses, or with new businesses to disrupt or compete with existing ones.

With the Internet, we can improve customers. In fact, that’s where we started when the Internet showed up in its current form, on 30 April 1995. (That’s when the Net could start supporting all forms of data traffic, including the commercial kind.) The three biggest tools giving customers leverage back then (and still today) were browsers, email and the ability to do anything any company could, starting with publishing.

But then we did what came most easily to business back in the Industrial Age: create new businesses and improve old ones. Nothing wrong with that, of course. Just something inadequate.

Worse, we created giant businesses that only gave customers leverage inside their walled gardens. By now we’ve lived so long inside Google, Apple, Facebook and Amazon (called GAFA in Europe) that we can hardly think outside their boxes.

But if we do, we can see again what the promise of the Net was in the first place: Archimedes-grade power for everybody. And there are a lot more customers than companies in that population.

This is why a bunch of us have been working, through ProjectVRM, on tools that make customers both independent and better able to engage with business.

Now let’s look at one changed constraint: Law.

The tectonic shift happening there is the General Data Protection Regulation, or GDPR. It was created by the European Union to constrain what  Shoshana Zuboff calls surveillance capitalism. Nearly all that surveillance is for the purpose of providing ways to aim ads at tracked eyeballs wherever they go. The GDPR forbids doing that, and imposes potentially massive fines for violations—up to 4% of global revenues over the prior year. I am sure Google, Facebook and lesser purveyors of advertising online will find less icky ways to stay in business; but it is becoming clear that next May 25, when the GDPR goes into full effect, will be an extinction-level event for tracking-based advertising (aka adtech) as a business model.

But there is a silver lining for advertising in the GDPR’s mushroom cloud, in the form of the oldest form of law in the world: contracts. These are agreements that any two parties can form with each other.

So, if an individual proffers a term to a publisher that says,

—and that publisher agrees to it, that publisher is compliant with the GDPR, plain and simple. (I unpack how this works in Good news for publishers and advertisers fearing the GDPR and in many other pieces in the People vs. Adtech series.)
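A proffered term of this kind is, at bottom, just structured data plus a record of agreement. Here is a minimal sketch of how one might be represented; the field names and the "#NoStalking" label are illustrative assumptions, not Customer Commons’ actual schema or the JLINC protocol’s wire format:

```python
# Hypothetical sketch of a machine-readable term an individual might
# proffer to a publisher. Field names and the "#NoStalking" label are
# illustrative only, not any actual Customer Commons schema.
no_stalking_term = {
    "term": "#NoStalking",
    "offer": "attention to ads",                     # what the individual offers
    "condition": "no tracking beyond the site itself",  # what the publisher must not do
    "proffered_by": "individual",
    "accepted_by": None,                             # filled in when the publisher agrees
}

def accept(term, publisher):
    """Record the publisher's agreement, forming a simple two-party contract record."""
    agreed = dict(term)          # leave the original proffer untouched
    agreed["accepted_by"] = publisher
    return agreed

agreement = accept(no_stalking_term, "example-publisher.com")
```

The point of the sketch is only that agreement is a two-party record either side can keep and verify, which is what makes contract (rather than regulation alone) the mechanism here.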

Those terms will live at Customer Commons, a non-profit spin-off of ProjectVRM. “CuCo” was created to do for personal terms what Creative Commons did, and still does, for personal copyright. (Creative Commons was a brainchild of Larry Lessig when he was a fellow at the Berkman Klein Center. We steal from the best.)

Our goal is to have our first agreement—the one two paragraphs up—working for both readers and publishers before the GDPR deadline in May. We have help toward that from the Cyberlaw Clinic at Harvard Law School and the Berkman Klein Center, from other friendly legal folk, and from equally friendly techies, such as those behind the JLINC protocol.

If publishers accept this olive branch from individuals (who are no longer mere “consumers” or “users”), it will demonstrate how existing law and a simple new architecture can alter both markets and norms in ways that make the world better for everybody.

In October 2016, I announced  the end of ProjectVRM’s Phase One and the start of Phase Two.

Making VRM happen in 2018  will complete Phase Two. At the end of it our original thesis—that free customers are more valuable than captive ones—will either prove out or wait for other projects to do the job. Either way we’ll be done. All projects need an end, and this will be ours.

I believe free customers will prove more valuable than captive ones—to themselves, and to everyone else—for two reasons. One is that the Internet was designed to prove it in the first place (and no amount of screwage by governments or service providers can stuff that genie back in the bottle). The other is what I just tweeted here:

Services providing countless different ways for countless different businesses to provide good “customer experiences” can’t answer the customer’s need for one way to deal with all of them. In fact, they only make things worse with every new login and “loyalty” program.

In other words, we need #customertech. Simple as that. That’s the lever that makes each of us an Archimedes. We’ll get it, from one or more of the projects and companies already on our developments list—and from others who will come along to answer a need that has been in the market since long before the Internet showed up.

So consider this is a recruitment post. We have a lot of work to do in a very short time.

 

 

How should customers look to business?

The world of business has a default symbol for customers: the ones they put on restroom doors.

Outside of those, there is no universal symbol for a customer.

When business talks to itself, it mostly uses generic cartoon images such as these (from a Bing search) and these (from a Google one):

I’m sure all of us identify more with the restroom symbols (and emojis) than we do with those things.

It’s interesting how, even though we comprise 100% of the marketplace, we remain a prevailing absence in nearly every business conference, business book and business school class.

The notion that customers can be independent and fully empowered agents of themselves, with scale across all the businesses they deal with, at best gets the intellectual treatment (seeing customers, for example, as “rational actors”).

At worst, customers are seen as creatures that go moo and squirt money if they’re held captive and squeezed the right ways. Listen to the talk. Typically customers are “targets” that businesses “acquire,” “manage,” “control” or “lock in,” as if we were cattle or slaves.

Often customers are simply ignored.

One example that showed up today was this press release announcing “an innovative initiative focused on the overhaul of open account trade finance infrastructure.” It’s from R3, which makes Corda, a “distributed ledger platform designed specifically for financial services,” and is “a joint undertaking between R3, TradeIX, and twelve financial institutions.” This network, says the release, will “improve access to open account trade for the global ecosystem of banks, buyers, suppliers, technology providers, insurers, and other parties, such as logistics companies, that are critical to facilitating global open account trade flows.”

Never mind that distributed ledgers have been hailed as the second coming (or even the first) of the customer-empowering peer-to-peer world. Instead note the absence of customers: people and institutions who entrust their money and assets to all the parties listed in that long sentence.

Our goal with ProjectVRM is to equip customers (not just “consumers,” or “end users”) to say We’re not just at the same table with you guys. We are that table. And we are much bigger and far more powerful than you can ever make us on your own.

In other words, our job here is to give customers superpowers.

There are lots of people arguing that more policy is the answer. But we already have the GDPR. Huge leverage there. Let’s use it to show how our own customer-empowering solutions put the companies that serve us in compliance.

In the last post we named one. That and many other forms of #customertech will be featured at VRM Day and IIW, later this month at the Computer History Museum in Silicon Valley. Looking forward to seeing many of you there.

Let’s make customers powerful. Then it won’t matter how they look to business, other than real.

 

“Disruption” isn’t the whole VRM story

The vast oeuvre of Marshall McLuhan contains a wonderful approach to understanding media called the tetrad (i.e. foursome) of media effects.  You can apply it to anything, from stone tools to robots. McLuhan unpacks it with four questions:

  1. What does the medium enhance?
  2. What does the medium make obsolete?
  3. What does the medium retrieve that had been obsolesced earlier?
  4. What does the medium reverse or flip into when pushed to extremes?

I suggest that VRM—

  1. Enhances CRM
  2. Obsoletes marketing guesswork, especially adtech
  3. Retrieves conversation
  4. Reverses or flips into the bazaar

Note that many answers are possible. That’s why McLuhan poses the tetrad as questions. Very clever and useful.

I bring this up for three reasons:

  1. The tetrad is also helpful for understanding every topic that starts with “disruption.” Because a new medium (or technology) does much more than just disrupt or obsolete an old one—yet not so much more that it can’t be understood inside a framework.
  2. The idea from the start with VRM has never been to disrupt or obsolete CRM, but rather to give it a hand to shake—and a way customers can pull it out of the morass of market-makers (especially adtech) that waste its time, talents and energies.
  3. After ten years of ProjectVRM, we still don’t have a single standardized base VRM medium (e.g. a protocol), even though we now have hundreds of developers doing work we call VRM in one way or another. Think of this missing medium as a single way, or set of ways, that VRM demand can interact with CRM supply, and give every customer scale across all the companies they deal with. We’ve needed that from the start. But perhaps, with this handy pedagogical tool, we can look through one framework toward both the causes and effects of what we want to make happen.

I expect this framework to be useful at VRM Day (May 1 at the Computer History Museum) and at IIW on the three days that follow there.


The distributed future is personal

The End of Cloud Computing is a prophetic presentation by Peter Levine of Andreessen Horowitz, and required viewing by anyone interested in making the distributed future happen.

His key point: “We are returning to an edge-intelligence distributed computing model that’s absolutely thematic with the trends in computing moving from centralized out to distributed,” which he illustrates this way:

back-to-the-future

Later he adds, “We are absolutely going to return to a peer-to-peer computing model where the edge devices connect together creating a network of end point devices not unlike what we sort of saw in the original distributed computing model.” Here’s a graphic for that one:

sensor-data-explosion

I added the face in the middle, because the edge is individuals and not just the technology and data occupying their lives.

Joe Andrieu wrote about this a decade ago in his landmark post VRM: The user as point of integration.  An excerpt:

User Centrism as System Architecture

Doc Searls shared a story about his experience getting medical care while at Harvard recently. As a fellow at the Berkman center, he just gave them his Harvard ID card and was immediately ushered into a doctor’s office–minimal paperwork, maximal service. They even called him a cab to go to Mass General and gave him a voucher for the ride. At the hospital, they needed a bit more paperwork, but as everything was in order, they immediately fixed him up. It was excellent service.

But what Doc noticed was that at every point where some sort of paperwork was done, there were errors. His name was spelled wrong. They got the wrong birthdate. Wrong employer. Something. As he shuffled from Berkman to the clinic to the cabbie to the hospital to the pharmacy, a paper (and digital) trail followed him through archaic legacy systems, with errors accumulating as he went. What became immediately clear to Doc was that for the files at the clinic, the voucher, the systems at the hospital, for all of these systems, he was the natural point of data integration… he was the only component guaranteed to contact each of these service providers. And yet, his physical person was essentially incidental to the entire data trail being created on his behalf.

User as Point of Integration

But what if those systems were replaced with a VRM approach? What if instead of individual, isolated IT departments and infrastructure, Doc, the user was the integrating agent in the system? That would not only assure that Doc had control over the propagation of his medical history, it would assure all of the service providers in the loop that, in fact, they had access to all of Doc’s medical history. All of his medications. All of his allergies. All of his past surgeries or treatments. His (potentially apocryphal) visits to new age homeopathic healers. His chiropractic treatments. His crazy new diet. All of these things could affect the judgment of the medical professionals charged with his care. And yet, trying to integrate all of those systems from the top down is not only a nightmare, it is a nightmare that apparently continues to fail despite massive federal efforts to re-invent medical care.

(See The Emergence of National Electronic Health Record Architectures in the United States and Australia: Models, Costs, and Questions and Difficulties Implementing an Electronic Medical Record for Diverse Healthcare Service Providers for excellent reviews of what is going on in this area, both pro and con.)

Profoundly Different

Doc’s insight–and that of user-centric systems–isn’t new. What’s new is the possibility to utilize the user-centric Identity meta-system to securely and efficiently provide seamless access to user-managed data stores. With that critical piece coming into place, we have the opportunity to completely re-think what it means to build out our IT infrastructure.

Which brings us to Peter Levine’s final point, and slide:

entireworld-it

That world will be composed of individuals operating with full agency, rather than as peripheral entities, and concerns, of centralized systems. Which is exactly what we’ve been fostering here at ProjectVRM from the start, ten years ago.

To obtain full agency, with control over the data and machine power suffusing our connected lives, we will need what’s been called first person or self-sovereign technologies. Not “personal power as a service” from some centralized system.

One immediate example is Adrian Gropper‘s Free Independent Health Records, which he’ll talk about on Thursday, January 26, at the Berkman Klein Center at Harvard University.  At that link: “Gropper’s research centers on self-sovereign technology for management of personal information both in control of the individual and as hosted or curated by others.”

For other efforts in the same direction, see our VRM Development Work page.

 

 


We’re done with Phase One

Here’s a picture that’s worth more than a thousand words:

maif-vrm

He’s with MAIF, the French insurance company, speaking at MyData 2016 in Helsinki, a little over a month ago. Here’s another:

sean-vrm

That’s Sean Bohan, head of our steering committee, expanding on what many people at the conference already knew.

I was there too, giving the morning keynote on Day 2.

It was an entirely new talk. Pretty good one too, especially since  I came up with it the night before.

See, by the end of Day 1, it was clear that pretty much everybody at the conference already knew how market power was shifting from centralized industries to distributed individuals and groups (including many inside centralized industries). It was also clear that most of the hundreds of people at the conference were also familiar with VRM as a market category. I didn’t need to talk about that stuff any more. At least not in Europe, where most of the VRM action is.

So, after a very long journey, we’re finally getting started.

In my own case, the journey began when I saw the Internet coming, back in the ’80s.  It was clear to me that the Net would change the world radically, once it allowed commercial activity to flow over its pipes. That floodgate opened on April 30, 1995. Not long after that, I joined the fray as an editor for Linux Journal (where I still am, by the way, more than 20 years later). Then, in 1999, I co-wrote The Cluetrain Manifesto, which delivered this “one clue” above its list of 95 Theses:

“we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.”

And then, one decade ago last month, I started ProjectVRM, because that clue wasn’t yet true. Our reach did not exceed the grasp of marketers in the world. If anything, the Net extended marketers’ grasp a lot more than it did ours. (Shoshana Zuboff says their grasp has metastasized into surveillance capitalism.) In respect to Gibson’s Law, Cluetrain proclaimed an arrived future that was not yet distributed. Our job was to distribute it.

Which we have. And we can start to see results such as those above. So let’s call Phase One a done thing. And start thinking about Phase Two, whatever it will be.

To get that work rolling, here are a few summary facts about ProjectVRM and related efforts.

First, the project itself could hardly be more lightweight, at least administratively. It consists of:

Second, we have a spin-off: Customer Commons, which will do for personal terms of engagement (one each of us can assert online) what Creative Commons (another Berkman-Klein spinoff) did for copyright.

Third, we have a list of many dozens of developers, who seem to be concentrated in Europe and Australia/New Zealand. Two reasons for that, both speculative:

  1. Privacy. The concept is far more sensitive and evolved in Europe than in the U.S. The reason we most often hear goes, “Some of our governments once kept detailed records of people, and those records were used to track down and kill many of them.” There are also more evolved laws respecting privacy. Australia has had privacy laws for several years requiring those collecting data about individuals to make it available to them, in forms the individual specifies. And in Europe there is the General Data Protection Regulation, which will impose severe penalties for unwelcome data gathering from individuals, starting in 2018.
  2. Enlightened investment. Meaning investors who want a startup to make a positive difference in the world, and not just give them a unicorn to ride out some exit. (Which seems to have become the default model in the U.S., especially Silicon Valley.)

What we lack is research. And by we I mean the world, and not just ProjectVRM.

Research is normally the first duty of a project at the Berkman Klein Center, which is chartered as a research organization. Research was ProjectVRM’s last duty, however, because we had nothing to research at first. Or, frankly, until now. That’s why we were defined as a development & research project rather than the reverse.

Where and how research on VRM and related efforts happens is a wide open question. What matters is that it needs to be done, starting soon, while the “before” state still prevails in most of the world, and the future is still on its way in delivery trucks. Who does that research matters far less than the research itself.

So we are poised at a transitional point now. Let the conversations about Phase Two commence.

 

 

 


The new frontier for CRM is CDL: Customer Driven Leads

Imagine customers diving, on their own, straight down to the bottom of the sales funnel.

Actually, don’t imagine it. Welcome it, because it’s coming, in the form of leads that customers generate themselves, when they’re ready to buy something. Here in the VRM world we call this intentcasting. At the receiving end, in the  CRM world, they’re CDLs, or Customer Driven Leads.

Because CDLs come from fully interested customers with cash in hand, they’re worth more than MQLs (Marketing Qualified Leads) or SQLs (Sales Qualified Leads), both of which need to be baited with marketing into the sales funnel.

CDLs are also free.  When the customer is ready to buy, she signals the market with an intentcast that CRM systems can hear as a fresh CDL. When the CRM system replies, an exchange of data and permissions follows, with the customer taking the lead.
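To make the exchange concrete, here is a purely illustrative sketch of the data an intentcast might carry. The field names are assumptions for illustration; they are not the JLINC protocol’s actual message format:

```python
# Purely illustrative sketch of an intentcast: a customer's signal of
# readiness to buy, with explicit data permissions attached.
# Field names are hypothetical, not the JLINC protocol's wire format.
intentcast = {
    "intent": "buy",
    "item": "roof rack for a 2014 Subaru Outback",
    "budget_max_usd": 350,
    "timeframe_days": 14,
    "share_with_seller": ["shipping_address"],   # data the customer permits
    "do_not_collect": ["browsing_history"],      # data the customer forbids
}

def is_fresh_cdl(cast):
    """A listening CRM would treat any well-formed buy intent as a fresh lead."""
    return cast.get("intent") == "buy" and "item" in cast
```

Note that the permissions travel with the signal itself, which is what puts the customer, rather than the marketer, in the lead.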

It’s a new dance, this one with the customer taking the lead. But it’s much more direct, efficient and friendly than the old dances in which customers were mere “targets” to be “acquired.”

The first protocol-based way to generate CDLs for CRM is described in At last, a protocol to connect VRM and CRM, posted here in August. It’s called JLINC. We’ll be demonstrating it working on a Salesforce system on VRM Day at the Computer History Museum in Silicon Valley, on Monday, October 24. VRM Day is free, but space is limited, so register soon, here.

We’ll also continue to work on CDL development over the next three days in the same location, at IIW, the Internet Identity Workshop. IIW is an unconference that’s entirely about getting stuff done. No keynotes, no panels. Just working sessions run by attendees. This next one will be our 23rd IIW since we started them in 2005. It remains, in my humble estimation, the most leveraged conference I know. (And I go to a lot of them, usually as a speaker.)

As an additional temptation, we’re offering a 25% discount on IIW to the next 20 people who register for VRM Day. (And if you’ve already registered, talk to me.)

Iain Henderson, who works with JLINC Labs, will demo CDLs on Salesforce. We also invite all the other CRM companies—IBM, Microsoft Dynamics, SAP, SugarCRM… you know who you are—to show up and participate as well. All CRM systems are programmable. And the level of programming required to hear intentcasts is simple and easy.
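How simple? As a rough sketch, the CRM side amounts to mapping an incoming intentcast onto a lead record. The fields below are hypothetical; a real system (Salesforce, Microsoft Dynamics, SugarCRM and the rest) would map them onto its own lead objects through its normal APIs:

```python
# Minimal sketch of the CRM side: "hearing" an intentcast and filing it
# as a Customer Driven Lead. Record fields are hypothetical, for
# illustration only; real CRMs would use their own lead schemas.
def intentcast_to_cdl(cast):
    """Convert an incoming intentcast into a lead record for the CRM."""
    return {
        "lead_type": "CDL",            # customer-driven, not marketing-qualified
        "source": "intentcast",
        "intent": cast["intent"],
        "item": cast["item"],
        "permitted_data": cast.get("share_with_seller", []),
    }

cdl = intentcast_to_cdl({
    "intent": "buy",
    "item": "winter tires",
    "share_with_seller": ["zip_code"],
})
```

The heavy lifting is in the protocol and the permissions, not in the CRM programming, which is the point.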

See you there!

 



© 2019 ProjectVRM
