Last night I watched The Great Hack a second time. It’s a fine documentary, maybe even a classic. (A classic in literature, I learned on this Radio Open Source podcast, is a work that “can only be re-read.” If that’s so, then perhaps a classic movie is one that can only be re-watched.*)
The movie’s message could hardly be more loud and clear: vast amounts of private information about each of us are gathered constantly in the digital world, and are being weaponized so our minds and lives can be hacked by others for commercial or political gain. Or both. The movie’s star, Professor David Carroll of the New School (@profcarroll), has been delivering that message for many years, as have many others, including myself.
But to what effect?
Sure, we have policy moves such as the GDPR, the main achievement of which (so far) has been to cause every website to put confusing and (in most cases) insincere cookie notices on their index pages, meant (again, in most cases) to coerce “consent” (which really isn’t) to exactly the unwanted tracking the regulation was meant to stop.
Those don’t count.
Ennui does. Apathy does.
On seeing The Great Hack that second time, I had exactly the same feeling my wife had on seeing it for her first: that the very act of explaining the problem also trivialized it. In other words, the movie worsened the very problem it set out to expose. And it isn’t alone in this, because so has everything everybody has said, written or reported about it. Or so it sometimes seems. At least to me.
Okay, so: if I’m right about that, why might it be?
One reason is that there’s no story. See, every story requires three elements: character (or characters), problem (or problems), and movement toward resolution. (Find a more complete explanation here.) In this case, the third element—movement toward resolution—is absent. Worse, there’s almost no hope. “The Great Hack” concludes with a depressing summary that tends to leave one feeling deeply screwed, especially since the only victories in the movie are over the late Cambridge Analytica; and those victories were mostly within policy circles we know will either do nothing or give us new laws that protect yesterday from last Thursday… and then last another hundred years.
The bigger reason is that we are now in a media environment summarized by Marshall McLuhan in his book The Medium is the Massage: “every new medium works us over completely.” Our new medium is the Internet, which is a non-place absent of distance and gravity. The only institutions holding up there are ones clearly anchored in the physical world. Health care and law enforcement, for example. Others dealing in non-material goods, such as information and ideas, aren’t doing as well.
Journalism, for example. Worse, on the Internet it’s easy for everyone to traffic in thoughts and opinions, as well as in solid information. So now the worlds of thought and opinion, which preponderate on social media such as Twitter, Facebook and Instagram, are vast floods of everything from everybody. In the midst of all that, the news cycle, which used to be daily, now lasts about as long as a fart. Calling it all too much is a near-absolute understatement.
But David Carroll is right. Darkness is falling. I just wish all the light we keep trying to shed would do a better job of helping us all see that.
Among the many important things the Turing Institute is doing for us right now is highlighting, with that notice, exactly what’s wrong with the cookie system for remembering our choices, and the lack of them, as each of us uses the Web.
What these switches highlight is that the memory of your choices is theirs, not yours. The whole cookie system outsources your memory of cookie choices to the sites and services of the world. While the cookies themselves can be found somewhere deep in the innards of your computer, you have little or no knowledge of what they are or what they mean, and there are thousands of those in there already.
And yes, we do have browsers that protect us in various ways from unwelcome cookies, but they all do that differently, and none in standard ways that give us clear controls over how we deal with sites and how sites deal with us.
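The asymmetry is easy to see in code. Here is a minimal sketch (all function names hypothetical) contrasting the site-side model, where every site records your choice in its own opaque cookie format, with a user-side signal you set once and send identically everywhere. The real-world analogue of the latter is the Global Privacy Control `Sec-GPC` header, which some browsers can already send:

```python
# Site-side model: the site encodes YOUR choice in ITS format and hands
# it back as a cookie blob. The memory of the choice is theirs, and every
# site's blob looks different.
def site_consent_cookie(site: str, consented: bool) -> str:
    return f"{site}_consent={'1' if consented else '0'}; Max-Age=31536000"

# User-side model: one signal you control, sent the same way to every
# site (modeled here on the Global Privacy Control Sec-GPC header).
def user_side_signal(opt_out: bool) -> dict:
    return {"Sec-GPC": "1"} if opt_out else {}

# Two sites, two different blobs you can't easily read or manage...
print(site_consent_cookie("example.com", False))
print(site_consent_cookie("example.org", True))
# ...versus one user-side signal that covers both.
print(user_side_signal(True))
```

In the first model your choices end up scattered across thousands of site-defined blobs deep in your browser's innards; in the second they live in one place you control.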
One way to start thinking about this is as a need for cookies to go the other way:
Another way is to think about (and work toward) getting the sites and services of the world to agree to our terms, and to have standard ways of recording that on our side rather than theirs. Work on that is proceeding at Customer Commons, the IEEE, various Kantara initiatives and the Me2B Alliance.
Then we will need a dashboard, a cockpit (or the metaphor of your choice) through which we can see and control what’s going on as we move about the Web. This will give us personal scale that we should have had on Day One (specifically, in 1995, when graphical browsers took off). This too should be standardized.
There can be no solution that starts on the sites’ side. None. That’s a fail that in effect gives us a different browser for every site we visit. We need solutions of our own. Personal ones. Global ones. Ones with personal scale. It’s the only way.
So we respect that work. It is also essential to recognize the problems it faces.
The first problem is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. (Rivalrous means consumption or use by one party prevents the same by another, and excludable means you can prevent parties that don’t pay from access to it.) Here’s a table from a Linux Journal column I wrote a few years ago:
Private good: e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers
Common pool resource: e.g., sea, rivers, forests, their edible inhabitants and other useful contents
Club good: e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works
Public good: e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting
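The taxonomy above turns on just the two dimensions defined in the parenthetical, which a toy lookup (illustrative only) makes explicit:

```python
# Classify a good by the two economic dimensions discussed above:
# rivalrous (use by one party prevents use by another) and
# excludable (non-payers can be kept out).
def classify(rivalrous: bool, excludable: bool) -> str:
    return {
        (True,  True):  "private good",
        (True,  False): "common-pool resource",
        (False, True):  "club good",
        (False, False): "public good",
    }[(rivalrous, excludable)]

# Data, being non-rivalrous and non-excludable, lands in the last cell.
print(classify(rivalrous=False, excludable=False))  # public good
```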
The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation
Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is important for us to get our heads around amidst the rising chorus of voices insisting that data is a form of property.
The third problem is with the property framing itself, which others in the identity community have argued against directly:

Who owns your data? It’s a popular question of late in the identity community, particularly in the wake of Cambridge Analytica, numerous high-profile Equifax-style data breaches, and the GDPR coming into full force and effect. In our view, it’s not only the wrong question to be asking but it’s flat out dangerous when it frames the entire conversation. While ownership implies a property law model of our data, we argue that the legal framework for our identity-related data must also consider constitutional or human rights laws rather than mere property law rules…
Under common law, ownership in property is a bundle of five rights — the rights of possession, control, exclusion, enjoyment, and disposition. These rights can be separated and reassembled according to myriad permutations and exercised by one or more parties at the same time. Legal ownership or “title” of real property (akin to immovable property under civil law) requires evidence in the form of a deed. Similarly, legal ownership of personal property (i.e. movable property under civil law) in the form of commercial goods requires a bill of lading, receipt, or other document of title. This means that proving ownership or exerting these property rights requires backing from the state or sovereign, or other third party. In other words, property rights emanate from an external source and, in this way, can be said to be extrinsic rights. Moreover, property rights are alienable in the sense that they can be sold or transferred to another party.
Human rights — in stark contrast to property rights — are universal, indivisible, and inalienable. They attach to each of us individually as humans, cannot be divided into sticks in a bundle, and cannot be surrendered, transferred, or sold. Rather, human rights emanate from an internal source and require no evidence of their existence. In this way, they can be said to be intrinsic rights that are self-evident. While they may be codified or legally recognized by external sources when protected through constitutional or international laws, they exist independent of such legal documents. The property law paradigm for data ownership loses sight of these intrinsic rights that may attach to our data. Just because something is property-like, does not mean that it is — or that it should be — subject to property law.
In the physical realm, it is long settled that people and organs are not treated like property. Moreover, rights to freedom from unreasonable search and seizure, to associate and peaceably assemble with others, and the rights to practice religion and free speech are not property rights — rather, they are constitutional rights under U.S. law. Just as constitutional and international human rights laws protect our personhood, they also protect things that are property-like or exhibit property-like characteristics. The Fourth Amendment of the U.S. Constitution provides “the right of the people to be secure in their persons” but also their “houses, papers, and effects.” Similarly, the Universal Declaration of Human Rights and the European Convention on Human Rights protect the individual’s right to privacy and family life, but also her “home and correspondence”…
Obviously some personal data may exist in property-form just as letters and diaries in paper form may be purchased and sold in commerce. The key point is that sometimes these items are also defined as papers and effects and therefore subject to Fourth Amendment and other legal frameworks. In other words, there are some uses of (and interests in) our data that transform it from an interest in property to an interest in our personal privacy — that take it from the realm of property law to constitutional or human rights law. Location data, biological, social, communications and other behavioral data are examples of data that blend into personal identity itself and cross this threshold. Such data is highly revealing and the big-data, automated systems that collect, track and analyze this data make the need to establish proportional protections and safeguards even more important and more urgent. It is critical that we apply the correct legal framework.
The fourth problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data. Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, combustible or not.
Put another way, why would you want to make almost nothing (the likely price) from selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist, and where rights are fully understood and protected within existing legal frameworks?
What makes us fully powerful as human beings is our ability to generate and share ideas and other goods that are expansible over all space, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.
Important note: I’m not knocking labor here. Most of us have to work for wages, either as parts of industrial machines, or as independent actors. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.
Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage. JP and I called this way of making money a because effect. The entire Internet, the World Wide Web and the totality of free and open source code all have vast because effects in money made with products and services that depend on those graces. Each is a rising free tide that lifts all commercial boats. Non-commercial ones too.
Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.
The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.
Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is zero interest in buying what can be exploited for free.
Worse, surveillance capitalism’s business is making guesses about you, so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:
Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
Trying to get in on that business is an awful proposition.
Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?
Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)
And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?
It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.
But it’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The question is when, not if. The whole thing is too absurd, corrupt, complex and annoying to last forever.
So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting better human powers to work.
If we’re going to obsess over personal data, let’s look instead toward ways to regulate or control how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.
And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.
“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.
Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”
Toward the aftermath, the main question is What will be left of advertising—and what it supports—after the adtech bubble pops?
Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.
Advertising isn’t personal, and doesn’t have to be. In fact, knowing it’s not personal is an advantage for advertisers. Consumers don’t wonder what the hell an ad is doing where it is, who put it there, or why.
Advertising makes brands. Nearly all the brands you know were burned into your brain by advertising. In fact the term branding was borrowed by advertising from the cattle business. (Specifically by Procter and Gamble in the early 1930s.)
Advertising sponsors media, and those paid by media. All the big pro sports salaries are paid by advertising that sponsors game broadcasts. For lack of sponsorship, media—especially publishers—are hurting. @WaltMossberg learned why on a conference stage when an ad agency guy said the agency’s ads wouldn’t sponsor Walt’s new publication, recode. Walt: “I asked him if that meant he’d be placing ads on our fledgling site. He said yes, he’d do that for a little while. And then, after the cookies he placed on Recode helped him to track our desirable audience around the web, his agency would begin removing the ads and placing them on cheaper sites our readers also happened to visit. In other words, our quality journalism was, to him, nothing more than a lead generator for target-rich readers, and would ultimately benefit sites that might care less about quality.” With friends like that, who needs enemies?
Adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media, and it causes negative associations with brands. Consider this: perhaps a $trillion or more has been spent on adtech, and not one brand known to the world has been made by it. (Bob Hoffman, aka the Ad Contrarian, is required reading on this.)
Adtech wants to be personal. That’s why it’s tracking-based. Though its enthusiasts call it “interest-based,” “relevant” and other harmless-sounding euphemisms, it relies on tracking people. In fact it can’t exist without tracking people. (Note: while all adtech is programmatic, not all programmatic advertising is adtech. In other words, programmatic advertising doesn’t have to be based on tracking people. Same goes for interactive. Programmatic and interactive advertising will both survive the adtech crash.)
Adtech relies on misdirection. See, adtech looks like advertising, and is called advertising; but it’s really direct marketing, which is descended from junk mail and a cousin of spam. Because of that misdirection, brands think they’re placing ads in media, while the systems they hire are actually chasing eyeballs to anywhere. (Pro tip: if somebody says every ad needs to “perform,” or that the purpose of advertising is “to get the right message to the right person at the right time,” they’re actually talking about direct marketing, not advertising. For more on this, read Rethinking John Wanamaker.)
Compared to advertising, adtech is ugly. Look up best ads of all time. One of the top results is for the American Advertising Awards. The latest winners they’ve posted are the Best in Show for 2016. Tops there is an Allstate “Interactive/Online” ad pranking a couple at a ball game. Over-exposure of their lives online leads that well-branded “Mayhem” guy to invade and trash their house. In other words, it’s a brand ad about online surveillance.
Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.
The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.
Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.
Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”
And that’s not all:
Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.
Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.
Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Consent Management Framework. Writes James,
The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.
The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…
Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Consent and Transparency Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.
Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.
“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”
Pretty cynical, no?
The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.
The first clues that fully matter are the ones we—the individuals they’ve been calling “users”—will deliver. Look for details on that in another post.
Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.
Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.
Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.
Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)
Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.
Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.
What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)
Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.
When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.
Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that can be advertised at anywhere. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:
I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)
Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.
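Mechanically, “blocking third-party cookies” just means refusing Set-Cookie headers from any domain other than the one you’re visiting. A toy sketch of that rule (hypothetical function names, ignoring real-world subtleties such as the Public Suffix List):

```python
def is_third_party(page_domain: str, cookie_domain: str) -> bool:
    """A cookie is third-party when it is set by a domain other than
    the visited page's domain or one of its subdomains."""
    return not (cookie_domain == page_domain
                or cookie_domain.endswith("." + page_domain))

def filter_cookies(page_domain, set_cookies):
    """Keep only first-party (domain, cookie) pairs from a response set."""
    return [(d, c) for d, c in set_cookies
            if not is_third_party(page_domain, d)]

kept = filter_cookies("news.example", [
    ("news.example", "session=abc"),       # first party: kept
    ("tracker.adtech.example", "id=xyz"),  # third party (tracker): dropped
])
print(kept)  # [('news.example', 'session=abc')]
```

Since most adtech tracking rides on exactly those third-party cookies, this one filter cuts off most of it.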
Nature and the Internet both came without privacy.
The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.
When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.
In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.
Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”
Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that with personal tech that gives each of us power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.)
We acquired agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of it with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech.
*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.
Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.
That’s because a massive personal data extraction industry has grown up around the simple fact that our data is there for the taking. Or so it seems. To them. And their apologists.
As a result, we’re at a stage of wanton data extraction that looks kind of like the oil industry did in 1920 or so:
It’s a good metaphor, but for a horrible business. It’s a business we need to reform, replace, or both. What we need most are new industries that grow around who and what we are as individual human beings—and as a society that values what makes us human.
Our data is us. Each of us. It is our life online. Yes, we shed some data in the course of our activities there, kind of like we shed dandruff and odors. But that’s no excuse for the extractors to frack our lives and take what they want, just because it’s there, and they can.
Now think about what love is, and how it works. How we give it freely, and how worthwhile it is when others accept it. How taking it without asking is simply wrong. How it’s better to earn it than to demand it. How it grows when it’s given. How we grow when we give it as well.
True, all metaphors are wrong, but that’s how metaphors work. Time is not money. Life is not travel. A country is not a family. But all those things are like those other things, so we think and talk about each of them in terms of those others. (By saving, wasting and investing time; by arriving, departing, and moving through life; by serving our motherlands, and honoring our founding fathers.)
Oil made sense as a metaphor when data was so easy to take, and the resistance wasn’t there.
But now the resistance is there. More than half a billion people block ads online, most of which are aimed by extracted personal data. Laws like the GDPR have appeared, with heavy fines for taking personal data without clear permission.
I could go on, but I also need to go to bed. I just wanted to get this down while it was in the front of my mind, where it arrived while discussing how shitty and awful “data is the new oil” was when it first showed up in 2006, and how sadly popular it has become since then:
It’s time for a new metaphor that expresses what our personal data really is to us, and how much more it’s worth to everybody else if we keep, give and accept it on the model of love.
Synopsis—Advertising supported publishing in the offline world by sponsoring it. In the online world, advertising has been body-snatched by adtech, which tracks eyeballs via files injected into apps and browsers, then shoots those eyeballs with “relevant” ads wherever the eyeballs show up. Adtech has little or no interest in sponsoring a pub for the pub’s own worth. Worse, it incentivizes fake news (which is easier to produce than the real kind) and flooding the world with “content” rather than old-fashioned (and infinitely more worthwhile) editorial. When publishers agreed to funding by adtech, they sold their souls and their readers down a river full of fraud and malware, as well as indefensible manners. Fortunately, readers can bring both publishers and advertisers back into a soulful reunion. Helpfully, the GDPR makes it illegal not to, and that will be a huge issue as the deadline for compliance (next May 25th) approaches.
Do you think advertisers will pay enough for SafeAds to offset the losses publishers will have from selling fewer targeted ads due to privacy regs?
It’s a good question. (That’s what people say when they don’t have an answer, or can’t think of an easy one right away. But…) I thought about it, and replied with this:
Yes, and then some.
They’ll do it because there is more brand value to SafeAds.
The bigger question is for publishers: what business do they want to be in?
Do they want to operate barrels of “content” full of tracked fish baited there so adtech can shoot them with “interest-based” ads?
Or do they want to operate actual publications with good editorial that advertisers sponsor so their ads can be seen by readers who know those ads support the publication and are appropriate without being personal?
That’s the choice.
It helps that the second business — actual publishing — has been around for a couple hundred years, and even worked fine on the Web before publishers fell for the adtech sell.
Publishers sold a big piece of their soul when they consented to having their readers’ privacy violated, and with rampant impunity, by adtech. They also chose to ignore the fact that adtech is in the business of chasing eyeballs, not of sponsoring the good work publishers do, or of building brand reputation. (Which can’t be done by shooting people constantly with “interest-based” ads that mostly creep people out if they hit a bulls-eye.)
The GDPR, if it works like it should, will force publishers to fire adtech and normalize their relationship with readers. When that happens, publishers, advertisers, readers and agents for all three can start working out better business models than the creepy one we’ve had with adtech.
Ross quoted the first sentence of the second-to-last paragraph, which is probably the best one of the bunch he could have used. Most of the quotes he gathered from other folks in the biz were also very good. I study this topic a lot, and I still learned some new things. Hats off for that.
While I’m saluting what I just learned from Ross, however, I also want to visit some assumptions that surface in his piece. They aren’t his, but rather pretty much everybody’s, and that’s a problem. Here are four of them.
1) Consent can only go one way, meaning each of us should always be the ones consenting to terms proffered by sites and services. Here’s how Ross puts it:
The General Data Protection Regulation, which prevents brands from using a person’s data unless they have explicit permission to do so, could send more ad dollars to premium publishers that are more likely to obtain user consent than lower-quality publishers.
In fact consent can go the other way, meaning the publisher or advertiser can consent to our terms.
It is only because we made a Faustian bargain with client-server in 1995 that we remain stuck inside a model that assumes we “users” should always be second (and second-class) parties, with no choice but to agree as “clients” to terms proffered by server operators.
It helps that the Internet was designed so any of us can be peers. This is an especially good design feature in the age that (at least I hope) begins with the GDPR.
One reason why I’m encouraged about the GDPR is that it says each of us can be “data controllers” as well as “data subjects.” (White & Case have a good unpacking of that, here.)
Tracking is the reason ad blocking, which has been around since 2003, didn’t hockey-stick toward the sky until 2012. That was when publishers and advertisers, led by the IAB, gave the middle finger to Do Not Track, which was merely a polite request not to be tracked that people could express in their browsers.
3) The best advertising is the most measurable, and is looking for a response from an individual.
That’s not true for advertising, but it is for direct response marketing (the wheat and chaff I talk about in the last cited piece). Unfortunately, as I say in that piece, “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.”
The outlines of that alien replica can be seen in what Ross cites here:
Eric Berry, CEO of native ad platform TripleLift, said the GDPR could lead to a reduction in programmatic ad spend because ad buyers will struggle to measure whether their ads lead to purchases. There’s uncertainty about how the law will be enforced, but if users have to give consent to individual publishers, demand-side platforms and attribution vendors, the attribution companies won’t likely have enough data to make accurate measurements, which will lead ad buyers to shift their dollars to other marketing tactics. This would hurt publishers that rely on programmatic ad revenue, he said.
There is a reason perhaps a $trillion has been spent on adtech and not one worldwide brand everyone can name has been created by it, much less sustained or helped in any way.
As Don Marti says, only real advertising can carry the full economic and creative signals required to create and sustain a brand. And, as Bob Hoffman hammers home constantly (and very artfully) in The Ad Contrarian, the ad industry’s equation of “digital” with tracking is based entirely on bullshit. (His term, and the right one.)
Direct response marketing, which began as junk mail, and which looks to measure results for every message, wasn’t designed for that, and can’t do it.
Calling direct response marketing “advertising” was one of the biggest mistakes the ad industry ever made. It also masks the real problem the GDPR invites, which is that we risk throwing out the SafeAds baby with the FakeAds (adtech) bathwater.
If all the GDPR leads publishers to do is (as Ross says in his piece) “use intrusive messages — like pop-ups or interstitials — to get user consent,” and the EU fails to fine publishers and their adtech funders for violating the spirit as well as the letter of the GDPR, the GDPR will be as big a fail as the useless cookie consent notices people see on European sites.
4) There’s nothing really wrong with adtech.
Pretty much everything is wrong about adtech, but perhaps the wrongest of the wrong is the problem Siva Vaidhyanathan (@sivasaid) visits in a NY Times piece titled Facebook Wins, Democracy Loses. Here’s a pull quote:
A core principle in political advertising is transparency — political ads are supposed to be easily visible to everyone, and everyone is supposed to understand that they are political ads, and where they come from. And it’s expensive to run even one version of an ad in traditional outlets, let alone a dozen different versions. Moreover, in the case of federal campaigns in the United States, the 2002 McCain-Feingold campaign-finance act requires candidates to state they approve of an ad and thus take responsibility for its content.
The bold-face is mine (or actually my wife’s, who found and highlighted it for me).
The economic signaling value of an ad comes from what it costs. Only a brand with a lot of heft can afford to sponsor a publication or a mainstream broadcaster. But it’s super-cheap to run ads that narrowcast to just a few people. Or to put up a fake news site. (Both are big reasons why journalism is now drowning in a sea of content. Adtech is what paid publishing to trade journalism for “content generation.” This is a cancer on advertising, publishing and journalism, and makes adtech the Agent Smith of digital.)
What’s more, adtech has created environments where micro-targeted ads and adtech-funded fake news can work very effectively to destroy brands.
Consider this possibility: Trump and his sympathizers succeeded in destroying Hillary Clinton’s brand, and there wasn’t a damn thing any of her own big-budget and big-media branding efforts (#SafeAds all) could do about it. (And try, if you are a Trump sympathizer, to ignore whatever you think about how much Hillary brought it on herself or deserved it. In badness of the smear-worthy sort, she has plenty of company, especially Trump. In using modern adtech and fake news methods, the Trump campaign and those helping it were very smart and effective.)
As Siva says in his Times piece,
Ads on [Facebook] meant for, say, 20- to 30-year-old home-owning Latino men in Northern Virginia would not be viewed by anyone else, and would run only briefly before vanishing. The potential for abuse is vast. An ad could falsely accuse a candidate of the worst malfeasance a day before Election Day, and the victim would have no way of even knowing it happened. Ads could stoke ethnic hatred and no one could prepare or respond before serious harm occurs.
Can the GDPR address that problem?
Yes, by supporting individuals (not mere “users” or “consumers”) operating as first parties, getting the good publishers to agree not to run ads like the ones Siva describes, and to open the floodgates to brand ads that actually sponsor those publications, rather than regarding them as bait for shooting tracked eyeballs.
For today’s entries, I’m noting which linked pieces require you to turn off tracking protection, meaning tracking is required by those publishers. I’m also annotating entries with hashtags and organizing sections into bulleted lists.
The State of Ad Blocking and Online Ads: An Interview with Doc Searls (Matthew Maier in AdBlock) Pull quote: “What’s working is what has always worked: brand advertising in legacy print and broadcast media, and search advertising in the online world. Ads targeted at populations (rather than individuals) online also work, to the degrees that people are not bothered or creeped out by them. Not working is tracking-based ‘direct’ adtech, which succeeds because it’s called advertising, looks like advertising, and benefits from corporate appetites for the biggest possible data, and lots of maths to rationalize the expense.”
Apple reveals HomePod, a #privacy-focused smart #assistant (Zach Whittaker in ZDnet) Subhead: Throwing shade at its two data-hungry virtual assistant competitors, Amazon and Google, the iPhone maker said that nobody has “quite nailed it yet.” Pull-quote: “Apple’s logic is that, for the most part, it doesn’t want your data. Federighi reiterated that many of the advanced deep learning and artificial intelligence analysis — such as finding your location, facial recognition in photos, and setting calendar reminders — is done on the device, shutting Apple out of the loop — preventing anyone from asking Apple for data it doesn’t have. But for a company that doesn’t want your data — to make Siri better, it has increasingly been asking for it. Apple contends that it still doesn’t want to see your information.” Some #disambiguation is required there.
Why Does Apple Think It Can Get Away With Selling #Overpriced Stuff? (Mark Wilson in Co.Design) Pull-quote: “However, if Apple has any particular hope, it’s this: Amazon and Google are both invasive with consumer data. These companies track our activity largely with the goal of selling us something at just the right moment. Apple is far more transparent. It’s actively pushing machine learning to the device level by developing an on-device machine learning #API and working on a specialized machine learning chip to bring advanced AI to your phone, theoretically, without all your data going to a server, where it might be accessible by the government, advertisers, and more. It’s making cross-device #encryption a standard, which means a federal agent who seizes your phone at a border crossing—which happened during the Muslim ban—can’t as easily download its contents and read it all. And most of all, that new HomePod speaker, powered by Siri, will anonymize and encrypt everything you say. That means your private questions are not tied to your Apple ID for later reference. Such is not the case for Amazon’s and Google’s assistants. Apple has and will make trade-offs to protect consumer privacy. (Many of us, at the end of the day, get some value out of Google knowing our history of things we’ve searched, even if it’s constantly #creepy.) It might not work, but at least we’re getting a clear picture of Apple’s big gamble going into the next decade: that people will continue paying more than they should for hardware, with the hope that it’s not just nicely designed, but that it operates with discretion, too.” All fine, but he misses another reason people pay more for Apple stuff: customer #support, especially at Apple Stores. Amazon and Google can’t, and don’t, compete.
Uber may, in Uber’s sole discretion, permit you from time to time to submit, upload, publish or otherwise make available to Uber through the Services textual, audio, and/or visual content and information, including commentary and feedback related to the Services, initiation of support requests, and submission of entries for competitions and promotions (“User Content”). Any User Content provided by you remains your property. However, by providing User Content to Uber, you grant Uber a worldwide, perpetual, irrevocable, transferable, royalty-free license, with the right to sublicense, to use, copy, modify, create derivative works of, distribute, publicly display, publicly perform, and otherwise exploit in any manner such User Content in all formats and distribution channels now known or hereafter devised (including in connection with the Services and Uber’s business and on third-party sites and services), without further notice to or consent from you, and without the requirement of payment to you or any other person or entity.
The emphasis is mine. Interesting legal hack there: you own your data, but you license it to them, on terms that grant you nothing and grant them everything.
Talk about a deal breaker. Wow. (Except it’s also the old deal.)
At the very least, Lyft should make hay on this, if they actually do have an advantage in the degree to which they protect privacy. (Denise, below, says they don’t. But hey, maybe they could if they wanted to compete on privacy.)
Here’s what matters (and remains unchanged from Denise’s corrections):
We need our own terms. Meaning each of us should be the first party in agreements with service providers, not the second. Meaning they need to agree to our terms.
That’s Customer Commons’ reason for being. Just as Creative Commons is where you will find copyright terms you can assert as an artist, Customer Commons will be where you will find service terms you can assert as a customer.
With the wind of new EU and Australian privacy laws (e.g. the GDPR) at our backs, we stand a good chance of making this happen.
The question is how we can get some mojo behind it. Thoughts welcome. Shoulders to the wheel as well.