fruit thought

If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?

Well, no.

Yet there is lately a widespread urge to claim personal data as personal property, and to create commodity markets for personal data, so people can start making money by selling or otherwise monetizing their own.

ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why the VRM Developments Work page of its wiki has a heading called Markets for Personal Data, with a list of such efforts under it.

So: respect.

Yet, while I salute these efforts’ respect for individuals, and their righteous urges to right the wrongs of wanton and rude harvesting of personal data from approximately everybody, I also think there are problems with this approach. And, since I’ve been asked lately to spell out those problems, I shall. Here goes.

The first problem is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. Here’s a table that may help (borrowed from this Linux Journal column):

|  | Excludable: yes | Excludable: no |
|---|---|---|
| Rivalrous: yes | Private good: e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers | Common-pool resource: e.g., the sea, rivers, forests, their edible inhabitants and other useful contents |
| Rivalrous: no | Club good: e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works | Public good: e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting |

 

The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation

Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is essential in a very human way.

The third problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data.

Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, even though it’s combustible.

Put another way, why would you want to make almost nothing (the likely price) selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist?

What makes us fully powerful as human beings is our ability to generate and share ideas and other combustible public goods, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.

Important note: I’m not knocking labor here. Most of us have to work for wages as parts of industrial machines, or as independent actors. I do too. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.

Many years ago JP Rangaswami (@jobsworth) and I made a distinction between making money with something and because of something. It’s a helpful one.

Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage.

Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.

The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.

Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is little interest in buying what can be had for free.

Worse, surveillance capitalism’s business is making guesses about you so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:

  1. Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
  2. Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
  3. The entrails of surveillance capitalism are fully infected with fraud and malware.
  4. Surveillance capitalism is also quite satisfied to soak up as much as 97% of an advertising spend before an ad’s publisher gets its 3% for pushing an ad at you.

Trying to get in on that business is just an awful proposition.

Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?

Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)

And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?

What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.

It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.

It’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The question is when, not if. It’s too absurd, corrupt, complex and annoying to live forever.

So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting our most human of powers to work.

The most basic form of agency in the digital world is control over how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.

Bonus links:

How would you feel if you had been told in the early days of the Web that in the year 2018 you would still need logins and passwords for damned near everything?

Your faith in the tech world would be deeply shaken, no?

And what if you had been told that in 2018 logins and passwords would now be required for all kinds of other shit, from applications on mobile devices to subscription services on TV?

Or worse, that in 2018 you would be robo-logged-out of sites and services frequently, whether you were just there or not, for security purposes — and that logging back in would often require “two factor” authentication, meaning you have to do even more work to log in to something, and that (also for security purposes) every password you use would not only have to be different, but be impossible for any human to remember, especially when the average connected human now has hundreds of login/password combinations, many of which change constantly?

Would you not imagine this to be a dystopian hell?

Welcome to now, folks. Our frog is so fully boiled that it looks like Brunswick stew.

Can we please fix this?

Please, please, please, tech world: move getting rid of logins and passwords to the top of your punch list, ahead of AI, ML, IoT, 5G, smart dust, driverless cars and going to Mars.

Your home planet thanks you.

[Addendum…] Early responses to this post suggest that I’m talking about fixing the problem at the superficial level of effects. So, to clarify, logins and passwords are an effect, and not a cause of anything other than inconvenience and annoyance. The causes are design and tech choices made long ago—choices that can be changed.

Not only that, but many people have been working on solving the identity side of this thing for many years. In fact we’re about to have our 27th Internet Identity Workshop in October at the Computer History Museum. If you want to work on this with other people who are doing the same, register here.

 

A few weeks ago, while our car honked its way through dense traffic in Delhi, I imagined an Onion headline: American Visitor Seeks To Explain What He’ll Never Understand About India.

By the norms of traffic laws in countries where people tend to obey them, vehicular and pedestrian traffic in the dense parts of Indian cities appears to be lawless. People do seem to go where they want, individually and collectively, oblivious to danger.

Yet there is clearly a system here, in the sense that one’s body has a circulatory system. Or a nervous system. Meaning it’s full of almost random stuff at the cellular traffic level, but also organic in a literal way. It actually works. Somehow. Some way. Or ways. Many of them. Alone and together. So yes, I don’t understand it and probably never will, but it does work.

For example, a four-lane divided highway will have traffic moving constantly, occasionally in both directions on both sides. It will include humans, dogs, cattle, rickshaws and bikes, some laden with bags of cargo that look like they belong in a truck, in addition to cars, trucks and motorcycles, all packed together and honking constantly.

Keeping me from explaining, or even describing, any more than I just did, are the opening sentences of Saul Bellow’s Mr. Sammler’s Planet:

Shortly after dawn, or what would have been dawn in a normal sky, Mr. Artur Sammler with his bushy eye took in the books and papers of his West Side bedroom and suspected strongly that they were the wrong books, the wrong papers. In a way it did not matter to a man of seventy-plus, and at leisure. You had to be a crank to insist on being right. Being right was largely a matter of explanations. Intellectual man had become an explaining creature. Fathers to children, wives to husbands, lecturers to listeners, experts to laymen, colleagues to colleagues, doctors to patients, man to his own soul, explained. The roots of this, the causes of the other, the source of events, the history, the structure, the reasons why. For the most part it went in one ear out the other. The soul wanted what it wanted. It had its own natural knowledge. It sat unhappily on superstructures of explanation, poor bird, not knowing which way to fly.

So I will disclaim being right about a damn thing here. But I will share some links from some brilliant people, each worthy of respect, who think they are right about some stuff we maybe ought to care about; and each of whom has, in their own very separate ways, advice and warnings for us. Here ya go:

Each author weaves a different handbasket in which we might travel to hell, but all make interesting reading. All are downbeat as hell, too.

My caution with readings that veer toward conspiracy (notably Martin’s) comes from one of the smartest things my smart wife ever said: “The problem with conspiracy theories is that they presume competence.”

So here’s what I’m thinking about every explanation of what’s going on in our still-new Digital Age: None of us has the whole story of what’s going on—and what’s going on may not be a story at all.

Likewise (or dislike-wise), I also think all generalizations, whatever they are, fail in the particulars, and that’s a feature of them. We best generalize when we know we risk being wrong in the details.  Reality wants wackiness in particulars. If you don’t find what’s wacky there, maybe you aren’t looking hard enough. Or believe too much in veracities.

Ed McCabe: “I have no use for rules. They only rule out the possibility of brilliant exceptions.”

We need to laugh. That means we need our ironies, our multiple meanings, our attentions misdirected, for the magic of life to work.

And life is magic. Pure misdirection, away from the facticity of non-existence.

Every laugh, every smile, is an ironic argument against the one non-ironic fact of life—and of everything—which is death. We all die. We all end. To “be” dead is not to be in a state of existence. It is not to be at all. Shakespeare was unimprovable on that point.

To some of us older people*, death isn’t a presence. It’s just the future absence of our selves in a world designed to discard everything with a noun, proper or not, eventually. Including the world itself. This is a feature, not a bug.

It’s also a feature among some of us to, as Gandhi said, “Live as if you were to die tomorrow. Learn as if you were to live forever”: always interested, always open to possibilities, always willing to vet what we at least think we know, always leaving the rest of existence to those better equipped to persist on the same mission. So I guess that’s my point here.

Basically it’s the same point as Bill Hicks’ “It’s just a ride.”

*I’m not old. I’ve just been young a long time. To obey Gandhi, you have to stay young. It’s the best way to learn. And perhaps to die as well.

Enforcing Data Protection: A Model for Risk-Based Supervision Using Responsive Regulatory Tools, a post by Dvara Research, summarizes Effective Enforcement of a Data Protection Regime, a deeply thought and researched paper by Beni Chugh (@BeniChugh), Malavika Raghavan (@teninthemorning), Nishanth Kumar (@beamboybeamboy) and Sansiddha Pani (@julupani). While it addresses proximal concerns in India, it provides useful guidance for data regulators everywhere.

An excerpt:

Any data protection regulator faces certain unique challenges. The ubiquitous collection and use of personal data by service providers in the modern economy creates a vast space for a regulator to oversee. Contraventions of a data protection regime may not immediately manifest and when they do, may not have a clear monetary or quantifiable harm. The enforcement perimeter is market-wide, so a future data protection authority will necessarily interface with other sectoral institutions.  In light of these challenges, we present a model for enforcement of a data protection regime based on risk-based supervision and the use of a range of responsive enforcement tools.

This forward-looking approach considers the potential for regulators to employ a range of softer tools before a breach to prevent it and after a breach to mitigate the effects. Depending on the seriousness of contraventions, the regulator can escalate up to harder enforcement actions. The departure from the focus on post-data breach sanctions (that currently dominate data protection regimes worldwide) is an attempt to consider how the regulatory community might act in coordination with entities processing data to minimise contraventions of the regime.

I hope European regulators are looking at this. Because, as I said in a headline to a post last month, without enforcement, the GDPR is a fail.

Bonus link from the IAPP (International Association of Privacy Professionals): When will we start seeing GDPR enforcement actions? We guess Feb. 22, 2019.

This is the situation at Newark Airport right now:

Those blobs are thunderstorms. The little racetrack in upstate New York is an inbound flight from Lisbon in a holding pattern.

Follow the link under that screen shot. Interesting to see, in close to real time, how flights on approach and departure dodge heavy weather.

I’ll be flying out of there in a few hours myself, to India, for the first time. Should be fun.

And the same goes for California’s AB-375 privacy bill.

The GDPR has been in force since May 25th, and it has done almost nothing to stop websites that make money from tracking-based advertising from participating in the tracking of readers. Instead, almost all we’ve seen so far are requests from websites to keep doing what they’re doing.

Only worse. Because now when you click “Accept” under an interruptive banner saying the site’s “cookies and other technologies collect data to enhance your experience and personalize the content and advertising you see,” you’ve just consented to being spied on. And they’re covered. They can carry on with surveillance-as-usual.

Score: Adtech 1, privacy 0.

Or so it seems. So far.

Are there any examples of publications that aren’t participating in #adtech’s spy game? Besides Linux Journal?

 

In Chatbots were the next big thing: what happened?, Justin Lee (@justinleejw) nicely unpacks how chatbots were overhyped to begin with and continue to fail their Turing tests, especially since humans in nearly all cases would  rather talk to humans than to mechanical substitutes.

There’s also a bigger and more fundamental reason why bots still aren’t a big thing: we don’t have them. If we did, they’d be our robot assistants, going out to shop for us, to get things fixed, or to do whatever.

Why didn’t we get bots of our own?

I can pinpoint the exact time and place where bots of our own failed to happen, and all conversation and development went sideways, away from the vector that takes us to bots of our own (hashtag: #booo), and instead toward big companies doing more than ever to deal with us robotically, mostly to sell us shit.

The time was April 2016, and the place was Facebook’s F8 conference. It was on stage there that Mark Zuckerberg introduced “messenger bots”. He began,

Now that Messenger has scaled, we’re starting to develop ecosystems around it. And the first thing we’re doing is exploring how you can all communicate with businesses.

Note his use of the second person you. He’s speaking to audience members as individual human beings. He continued,

You probably interact with dozens of businesses every day. And some of them are probably really meaningful to you. But I’ve never met anyone who likes calling a business. And no one wants to have to install a new app for every service or business they want to interact with. So we think there’s gotta be a better way to do this.

We think you should be able to message a business the same way you message a friend. You should get a quick response, and it shouldn’t take your full attention, like a phone call would. And you shouldn’t have to install a new app.

This promised pure VRM: a way for a customer to relate to a vendor. For example, to issue a service request, or to intentcast for bids on a new washing machine or a car.

So at this point Mark seemed to be talking about a new communication channel that could relieve the typical pains of being a customer while also opening the floodgates of demand notifying supply when it’s ready to buy. Now here’s where it goes sideways:

So today we’re launching Messenger Platform. So you can build bots for Messenger.

By “you” Zuck now means developers. He continues,

And it’s a simple platform, powered by artificial intelligence, so you can build natural language services to communicate directly with people. So let’s take a look.

See the shift there? Up until that last sentence, he seemed to be promising something for people, for customers, for you and me: a better way to deal with business. But alas, it’s just shit:

CNN, for example, is going to be able to send you a daily digest of stories, right into messenger. And the more you use it, the more personalized it will get. And if you want to learn more about a specific topic, say a Supreme Court nomination or the zika virus, you just send a message and it will send you that information.

And right there the opportunity was lost. And all the promise, up there at the top of the hype cycle. Note how Aaron Batalion uses the word “reach” in ‘Bot’ is the wrong name…and why people who think it’s silly are wrong, written not long after Zuck’s F8 speech: “In a micro app world, you build one experience on the Facebook platform and reach 1B people.”

What we needed, and still need, is for reach to go the other way: a standard bot design that would let lots of developers give us better ways to reach businesses. Today lots of developers compete to give us better ways to use the standards-based tools we call browsers and email clients. The same should be true of bots.

In Market intelligence that flows both ways, I describe one such approach, based on open source code, that doesn’t require locating your soul inside a giant personal data extraction business.

Here’s a diagram that shows how one person (me in this case) can relate to a company whose moccasins he owns:

[Diagram: VRM-CRM conduit]

The moccasins have their own pico: a cloud on the Net for a thing in the physical world: one that becomes a standard-issue conduit between customer and company.

A pico of this type might come into being when the customer assigns a QR code to the moccasins and scans it. The customer and company can then share records about the product, or notify each other when there’s a problem, a bargain on a new pair, or whatever. It’s tabula rasa: wide open.

The current code for this is called Wrangler. It’s open source and on GitHub. For the curious, Phil Windley explains how picos work in Reactive Programming With Picos.
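To make the conduit idea concrete, here is a minimal sketch, in Python rather than the KRL rulesets real picos are built from, of a product pico as a thing both parties can open channels into. Everything here (class and method names, the event shapes) is my own assumption for illustration, not Wrangler’s actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Toy model of a product pico acting as a conduit between customer and company.
# Illustrative only: real picos are event-driven rulesets running on a pico
# engine, not in-process Python objects.

@dataclass
class ProductPico:
    product: str
    channels: Dict[str, Callable[[dict], None]] = field(default_factory=dict)

    def open_channel(self, party: str, on_event: Callable[[dict], None]) -> None:
        # Each party (customer or company) gets its own channel into the pico.
        self.channels[party] = on_event

    def raise_event(self, sender: str, event: dict) -> None:
        # Relay an event raised by one party to every other party on the pico.
        for party, deliver in self.channels.items():
            if party != sender:
                deliver({"about": self.product, "from": sender, **event})

# Scanning the QR code is what, conceptually, brings the pico into being.
pico = ProductPico(product="moccasins")
pico.open_channel("customer", lambda e: print("customer sees:", e))
pico.open_channel("company", lambda e: print("company sees:", e))
pico.raise_event("customer", {"type": "service_request", "note": "seam is splitting"})
pico.raise_event("company", {"type": "offer", "note": "deal on a new pair"})
```

The point of the sketch is the shape of the relationship: neither party lives inside the other’s silo; both meet at a conduit that belongs to the product.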

It’s not bots yet, but it’s a helluva lot better place to start re-thinking and re-developing what bots should have been in the first place. Let’s start developing there, and not inside giant silos.

[Note: the image at the top is from this 2014 video by Capgemini explaining #VRM. Maybe now that Facebook is doing face-plants in the face of the GDPR, and privacy is finally a thing, the time is ripe, not only for #booos, but for the rest of the #VRM portfolio of unfinished and un-begun work on the personal side.]

This is what greets me when I go to the Washington Post site from here in Germany:

Washington Post greeting for Europeans

So you can see it too, wherever you are, here’s the URL I’m redirected to on Chrome, on Firefox, on Safari and on Brave. All look the same except for Brave, which shows a blank page.

Note that last item in the Premium EU Subscription column: “No on-site advertising or third-party tracking.”

Ponder for a moment how the Sunday (or any) print edition of the Post would look with no on-paper advertising. It would be woefully thin and kind of worthless-looking. Two more value-adds for advertising in the print edition:

  1. It doesn’t track readers, which is the sad and broken norm for newspapers and magazines in the online world—a norm now essentially outlawed by the GDPR, and surely the reason the Post is running this offer.
  2. It sponsors the Post. Tracking-based advertising, known in the trade as adtech, doesn’t sponsor anything. Instead it hunts down eyeballs its spyware already knows about, no matter where they go. In other words, if adtech can shoot a Washington Post reader between the eyes at the Skeevy Lake Tribune, and the Skeevy is cheaper, it might rather hit the reader over there.

So here’s the message I want the Post to hear from me, and from every reader who values what they do:

That’s what I get from the print edition, and that’s what I want from the online edition as well.

So I want two things here.

One is an answer to this question: Are ANY publishers in the online world selling old-fashioned ads that aren’t based on tracking and therefore worth more than the tracking kind? (And are GDPR-compliant as well, since the ads aren’t aimed by collected personal data.)

The other is to subscribe to the Post as soon as they show me they’re willing to do what I ask: give me those real ads again. And stop assuming that all ads need to be the tracking-based kind.

Thanks in advance.


In The Big Short, investor Michael Burry says “One hallmark of mania is the rapid rise in the incidence and complexity of fraud.” (Burry shorted the mania- and fraud-filled subprime mortgage market and made a mint in the process.)

One would be equally smart to bet against the mania for the tracking-based form of advertising called adtech.

Since tracking people took off in the late ’00s, adtech has grown to become a four-dimensional shell game played by hundreds (or, if you include martech, thousands) of companies, none of which can see the whole mess, or can control the fraud, malware and other forms of bad acting that thrive in the midst of it.

And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.

Without adtech, the EU’s GDPR (General Data Protection Regulation) would never have happened. But the GDPR did happen, and as a result websites all over the world are suddenly posting notices about their changed privacy policies, use of cookies, and opt-in choices for “relevant” or “interest-based” (translation: tracking-based) advertising. Email lists are doing the same kinds of things.

“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.

Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”

Of course what I just said greatly simplifies what the GDPR actually utters, in bureaucratic legalese. The GDPR is also full of loopholes only snakes can thread; but the spirit of the law is clear, and the snakes will be easy to shame, even if they don’t get fined. (And legitimate interest, an actual loophole in the GDPR, may prove hard to claim.)

Toward the aftermath, the main question is What will be left of advertising—and what it supports—after the adtech bubble pops?

Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.

First, advertising:

    1. Advertising isn’t personal, and doesn’t have to be. In fact, knowing it’s not personal is an advantage for advertisers. Consumers don’t wonder what the hell an ad is doing where it is, who put it there, or why.
    2. Advertising makes brands. Nearly all the brands you know were burned into your brain by advertising. In fact the term branding was borrowed by advertising from the cattle business. (Specifically by Procter and Gamble in the early 1930s.)
    3. Advertising carries an economic signal. Meaning that it shows a company can afford to advertise. Tracking-based advertising can’t do that. (For more on this, read Don Marti, starting here.)
    4. Advertising sponsors media, and those paid by media. All the big pro sports salaries are paid by advertising that sponsors game broadcasts. For lack of sponsorship, media—especially publishers—are hurting. @WaltMossberg learned why on a conference stage when an ad agency guy said the agency’s ads wouldn’t sponsor Walt’s new publication, recode. Walt: “I asked him if that meant he’d be placing ads on our fledgling site. He said yes, he’d do that for a little while. And then, after the cookies he placed on Recode helped him to track our desirable audience around the web, his agency would begin removing the ads and placing them on cheaper sites our readers also happened to visit. In other words, our quality journalism was, to him, nothing more than a lead generator for target-rich readers, and would ultimately benefit sites that might care less about quality.” With friends like that, who needs enemies?

Second, adtech:

    1. Adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media, and it causes negative associations with brands. Consider this: perhaps a $trillion or more has been spent on adtech, and not one brand known to the world has been made by it. (Bob Hoffman, aka the Ad Contrarian, is required reading on this.)
    2. Adtech wants to be personal. That’s why it’s tracking-based. Though its enthusiasts call it “interest-based,” “relevant” and other harmless-sounding euphemisms, it relies on tracking people. In fact it can’t exist without tracking people. (Note: while all adtech is programmatic, not all programmatic advertising is adtech. In other words, programmatic advertising doesn’t have to be based on tracking people. Same goes for interactive. Programmatic and interactive advertising will both survive the adtech crash.)
    3. Adtech spies on people and violates their privacy. By design. Never mind that you and your browser or app are anonymized. The ads are still for your eyeballs, and correlations can be made.
    4. Adtech is full of fraud and a vector for malware. @ACFou is required reading on this.
    5. Adtech incentivizes publications to prioritize “content generation” over journalism. More here and here.
    6. Intermediators take most of what’s spent on adtech. Bob Hoffman does a great job showing how as little as 3¢ of a dollar spent on adtech actually makes an “impression.” The most generous number I’ve seen is 12¢. (When I was in the ad agency business, back in the last millennium, clients complained about our 15% take. Media our clients bought got 85%.)
    7. Adtech gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.
    8. Adtech incentivizes hate speech and tribalism by giving both—and the platforms that host them—a business model too.
    9. Adtech relies on misdirection. See, adtech looks like advertising, and is called advertising; but it’s really direct marketing, which is descended from junk mail and a cousin of spam. Because of that misdirection, brands think they’re placing ads in media, while the systems they hire are actually chasing eyeballs to anywhere. (Pro tip: if somebody says every ad needs to “perform,” or that the purpose of advertising is “to get the right message to the right person at the right time,” they’re actually talking about direct marketing, not advertising. For more on this, read Rethinking John Wanamaker.)
    10. Compared to advertising, adtech is ugly. Look up best ads of all time. One of the top results is for the American Advertising Awards. The latest winners they’ve posted are the Best in Show for 2016. Tops there is an Allstate “Interactive/Online” ad pranking a couple at a ball game. Over-exposure of their lives online leads that well-branded “Mayhem” guy to invade and trash their house. In other words, it’s a brand ad about online surveillance.
    11. Adtech has caused the largest boycott in human history. By more than a year ago, 1.7+ billion human beings were already blocking ads online.

To get a sense of what will be left of adtech after GDPR Sunrise Day, start by reading a pair of articles in AdExchanger by @JamesHercher. The first reports on the Transparency and Consent Framework published by IAB Europe. The second reports on how Google is pretty much ignoring that framework and going direct with their own way of obtaining consent to tracking:

Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.

Specifically,

The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.

Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.

Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”

And that’s not all:

Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.

Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.

This is also Google’s way of offloading GDPR liability onto publishers.

Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Transparency and Consent Framework. Writes James,

The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.

The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…

Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Consent and Transparency Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.

In other words, good luck with that.

[Later, 26 May…] Well, Google caved on this one, so apparently Google is coming to IAB Europe’s table.

[And on 30 May…] Axel Springer is also going its own way.

One big upside for IAB Europe is that its Framework contains open source code and an SDK. For a full unpacking of what’s there see the Consent String and Vendor List Format: Transparency & Consent Framework on GitHub and IAB Europe’s own FAQ. More about this shortly.
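To get a sense of what that open source code traffics in, here is a minimal sketch, assuming the v1.1 consent string layout as I read it from that GitHub repo (treat the field offsets as my assumption and verify against the spec). It shows that the “consent” passed around the adtech supply chain is just a bit-packed, base64url-encoded blob:

```python
import base64

# Minimal sketch: peek inside a TCF v1.1 consent string. This is not the IAB
# SDK; the field offsets below are my reading of the spec, so verify them.

def consent_bits(consent_string: str) -> str:
    # The string is web-safe base64 without padding; restore padding,
    # decode to bytes, then expand to a string of '0'/'1' characters.
    raw = base64.urlsafe_b64decode(consent_string + "=" * (-len(consent_string) % 4))
    return "".join(f"{b:08b}" for b in raw)

def read_int(bits: str, offset: int, length: int) -> int:
    return int(bits[offset:offset + length], 2)

def decode_header(consent_string: str) -> dict:
    bits = consent_bits(consent_string)
    return {
        "version": read_int(bits, 0, 6),               # spec version
        "cmp_id": read_int(bits, 78, 12),              # which CMP collected consent
        "vendor_list_version": read_int(bits, 120, 12),
        "purposes_allowed": read_int(bits, 132, 24),   # 24-bit purpose bitfield
        "max_vendor_id": read_int(bits, 156, 16),      # size of the vendor section
    }

# Feeding decode_header() the value of the consent cookie a CMP sets yields
# plain integers; whatever a publisher "shares" downstream is packed into
# bits like these.
```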

Meanwhile, the adtech business surely knows the sky is falling. The main question is how far.

One possibility is 95% of the way to zero. That outcome is suggested by results published in PageFair last October by Dr. Johnny Ryan (@JohnnyRyan). Here’s the most revealing graphic in the bunch:

Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.

“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”

Pretty cynical, no?

The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.

The first clues that fully matter are the ones we—the individuals they’ve been calling “users”—will deliver. Look for details on that in another post.

Meanwhile:

Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.

Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.

Google can say it already has consent, and that it also has a legitimate interest (one of the six “lawful bases” for tracking) in the personal data it harvests from us.

Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.

Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)

Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.

For more on how that will work, read The Intention Economy: When Customers Take Charge. Six years after Harvard Business Review Press published that book, what it says will start to come true. Thank you, GDPR.

Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.

What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)

Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.

When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.

Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that can be advertised at anywhere. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:

  1. An easy fix for a broken advertising system (12 October 2017 in Medium and in my blog)
  2. Without aligning incentives, we can’t kill fake news or save journalism (15 September 2017 in Medium)
  3. Let’s get some things straight about publishing and advertising (9 September 2017 and the same day in Medium)
  4. Good news for publishers and advertisers fearing the GDPR (3 September 2017 in ProjectVRM and 7 October in Medium).
  5. Markets are about more than marketing (2 September 2017 in Medium).
  6. Publishers’ and advertisers’ rights end at a browser’s front door (17 June 2017 in Medium). It updates one of the 2015 blog posts below.
  7. How to plug the publishing revenue drain (9 June 2017 in Medium). It expands on the opening (#publishing) section of my Daily Tab for that date.
  8. How True Advertising Can Save Journalism From Drowning in a Sea of Content (22 January 2017 in Medium and 26 January 2017 in my blog.) It’s People vs. Advertising, not Publishers vs. Adblockers (26 August 2016 in ProjectVRM and 27 August 2016 in Medium)
  9. Why #NoStalking is a good deal for publishers (11 May 2016, and in Medium)
  10. How customers can debug business with one line of code (19 April 2016 in ProjectVRM and in Medium)
  11. An invitation to settle matters with @Forbes, @Wired and other publishers (15 April 2016 and in Medium)
  12. TV Viewers to Madison Avenue: Please quit driving drunk on digital (14 April 2016, and in Medium)
  13. The End of Internet Advertising as We’ve Known It (11 December 2015 in MIT Technology Review)
  14. Ad Blockers and the Next Chapter of the Internet (5 November in Harvard Business Review)
  15. How #adblocking matures from #NoAds to #SafeAds (22 October 2015)
  16. Helping publishers and advertisers move past the ad blockade (11 October 2015 on the ProjectVRM blog)
  17. Beyond ad blocking — the biggest boycott in human history (28 September 2015)
  18. A way to peace in the adblock war (21 September 2015, on the ProjectVRM blog)
  19. How adtech, not ad blocking, breaks the social contract (23 September 2015)
  20. If marketing listened to markets, they’d hear what ad blocking is telling them (8 September 2015)
  21. Apple’s content blocking is chemo for the cancer of adtech (26 August 2015)
  22. Separating advertising’s wheat and chaff (12 August 2015, and on 2 July 2016 in an updated version in Medium)
  23. Thoughts on tracking based advertising (18 February 2015)
  24. On marketing’s terminal addiction to data fracking and bad guesswork (10 January 2015)
  25. Why to avoid advertising as a business model (25 June 2014, re-running Open Letter to Meg Whitman, which ran on 15 October 2000 in my old blog)
  26. What the ad biz needs is to exorcize direct marketing (6 October 2013)
  27. Bringing manners to marketing (12 January 2013 in Customer Commons)
  28. What could/should advertising look like in 2020, and what do we need to do now for this future? (Wharton’s Future of Advertising project, 13 November 2012)
  29. An olive branch to advertising (12 September 2012, on the ProjectVRM blog)

I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)

Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.

Nature and the Internet both came without privacy.

The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.

When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.

In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.

Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”

Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board, and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that when personal tech gives each of us the power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.)

We acquired agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of it with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech.

I bring this up because we will be working on privacy tech over the next four days at the Computer History Museum, first at VRM Day, today, and then over the next three days at IIW: the Internet Identity Workshop.

On the table at both is work some of us, me included, are doing through Customer Commons on terms we can proffer as individuals, and that the sites and services of the world can agree to.

Those terms are examples of what we call customertech: tech that’s ours and not Facebook’s or Apple’s or Google’s or Amazon’s.

The purpose is to turn the connected marketplace into a Marvel-like universe in which all of us are enhanced. It’ll be interesting to see what kind of laws follow.*

But hey, let’s invent the tech we need first.

*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.

Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.

Bonus link.
