Doesn’t matter if you don’t. What does matter is that it ended. All business manias do.
That’s why we can expect the “platform economy” and “surveillance capitalism” to end. Sure, it’s hard to imagine that when we’re in the midst of the mania, but the end will come.
When it does, we can have a “privacy debate.” Meanwhile, there isn’t one. In fact there can’t be one, because we don’t have privacy in the online world.
We do have privacy in the offline world, and we’ve had it ever since we invented clothing, doors, locks and norms for signaling what’s okay and what’s not okay in respect to our personal spaces, possessions and information.
That we hardly have the equivalent in the networked world doesn’t mean we won’t. Or that we can’t. The Internet in its current form was only born in the mid-’90s. In the history of business and culture, that’s a blip.
Really, it’s still early.
So, the fact that websites, network services, phone companies, platforms, publishers, advertisers and governments violate our privacy with wanton disregard for it doesn’t mean we can’t ever stop them. It means we haven’t done it yet, because we don’t have the tech for it. (Sure, some wizards do, but muggles don’t. And most of us are muggles.)
And, since we don’t have privacy tech yet, we lack the simple norms that grow around technologies that give us ways to signal our privacy preferences. We’ll get those when we have the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.
This is what many of us have been working on at ProjectVRM, Customer Commons, the Me2B Alliance, MyData and other organizations whose mission is getting each of us the tech we need to operate at full agency when dealing with the companies and governments of the world.
Indeed. And we do need the “optimism and activism” he calls for. In the activism category is code. Specifically, code that gives us the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.
Some of those are in the works. Others are not—yet. But they will be. Inevitably. Especially now that it’s becoming clearer every day that we’ll never get them from any system with a financial interest in violating it*. Or from laws that fail at protecting it.
If you want to help, join one or more of the efforts in the links four paragraphs up. And, if you’re a developer already on the case, let us know how we can help get your solutions into each and all of our digital hands.
*Especially publishers such as Salon, which Privacy Badger tells me tries to pump 20 potential trackers into my browser while I read the essay cited above. In fact, according to WhoTracksMe.com, Salon tends to run 204 tracking requests per page load, and the vast majority of those are for tracking-based advertising purposes. And Salon is hardly unique. Despite the best intentions of the GDPR and the CCPA, surveillance capitalism remains fully defaulted on the commercial Web—and will continue to remain entrenched until we have the privacy tech we’ve needed from the start.
In the library of Earth’s history, there are missing books, and within books there are missing chapters, written in rock that is now gone. The greatest example of “gone” rock is what John Wesley Powell observed in 1869, on his expedition by boat through the Grand Canyon. Floating down the Colorado River, he saw the canyon’s mile-thick layers of reddish sedimentary rock resting on a basement of gray non-sedimentary rock, the layers of which sat at an odd angle to everything above. Observing this, he correctly assumed that the upper layers did not continue from the bottom one, because time had clearly passed between the basement rock and the floors of rock above it. He didn’t know how much time, and could hardly guess. The answer turned out to be more than a billion years. The walls of the Grand Canyon say nothing about what happened during that time. Geology calls that nothing an unconformity.
In the decades since Powell made his notes, the same gap has been found all over the world, and is now called the Great Unconformity. Because of that unconformity, geology knows close to nothing about what happened in the world through stretches of time up to 1.6 billion years long.
All of those stretches end abruptly with the Cambrian Explosion, which began about 541 million years ago, when the Cambrian period arrived, and with it an amplitude of history, written in stone.
Many theories attempt to explain what erased such a large span of Earth’s history, but the prevailing paradigm is perhaps best expressed in “Neoproterozoic glacial origin of the Great Unconformity”, published on the last day of 2018 by nine geologists writing for the National Academy of Sciences. Put simply, they blame snow. Lots of it: enough to turn the planet into one giant snowball, informally called Snowball Earth. A more accurate name for this time would be Glacierball Earth, because glaciers, all formed from accumulated snow, apparently covered most or all of Earth’s land during the Great Unconformity—and most or all of the seas as well.
The relevant fact about glaciers is that they don’t sit still. They push immensities of accumulated ice down on landscapes and then spread sideways, pulverizing and scraping against adjacent landscapes, abrading their way through mountains and across hills and plains like a trowel through wet cement. In this manner, glaciers scraped a vastness of geological history off the Earth’s continents and sideways into ocean basins, so plate tectonics could hide the evidence. (A fact little known outside geology is that nearly all the world’s ocean floors are young: born in spreading centers and killed by subduction under continents or piled up as debris on continental edges here and there. Example: the Bay Area of California is ocean floor that wasn’t subducted into a trench.) As a result, the stories of Earth’s missing history are partly told by younger rock that remembers only that a layer of moving ice had erased pretty much everything other than a signature on its work.
I bring all this up because I see something analogous to Glacierball Earth happening right now, right here, across our new worldwide digital sphere. A snowstorm of bits is falling on the virtual surface of our virtual sphere, which itself is made of bits even more provisional and temporary than the glaciers that once covered the physical Earth. Nearly all of this digital storm, vivid and present at every moment, is doomed to vanish, because it lacks even a glacier’s talent for accumulation.
There is nothing about a bit that lends itself to persistence, other than the media it is written on, if it is written at all. Form follows function, and right now, most digital functions, even those we call “storage”, are temporary. The largest commercial facilities for storing digital goods are what we fittingly call “clouds”. By design, these are built to remember no more of what they once contained than does an empty closet. Stop paying for cloud storage, and away goes your stuff, leaving no fossil imprints. Old hard drives, CDs and DVDs might persist in landfills, but people in the far future may look at a CD or a DVD the way a geologist today looks at Cambrian zircons: as hints that digital activities may have happened during an interval about which otherwise nothing is known. If those fossils speak of what’s happening now at all, it will be of a self-erasing Digital Earth that began in the late 20th century.
This isn’t my theory. It comes from my wife, who has long claimed that future historians will look on our digital age as an invisible one, because it sucks so royally at archiving itself.
Credit where due: the Internet Archive is doing its best to make sure that some stuff will survive. But what will keep that archive alive, when all the media we have for recalling bits—from spinning platters to solid state memory—are volatile by nature?
My own future unconformity is announced by the stack of books on my desk, propping up the laptop on which I am writing. Two of those books are self-published compilations of essays I wrote about technology in the mid-1980s, mostly for publications that are long gone. The originals are on floppy disks that can be read only by PCs and apps of that time, some of which are buried in lower strata of boxes in my garage. I just found a floppy with some of those essays. (It’s the one with a blue edge in the wood case near the right end of the photo above.) If those still retain readable files, I am sure there are ways to recover at least the raw ASCII text. But I’m still betting the paper copies of the books under this laptop will live a lot longer than will the floppies or my mothballed PCs, all of which are likely bricked by decades of un-use.
As for other media, the prospect isn’t any better.
At the base of my video collection is a stratum of VHS videotapes, atop which are strata of Video8 and Hi8 tapes, and then one of digital stuff burned onto CDs and stored in hard drives, most of which have been disconnected for years. Some of those drives have interfaces and connections no longer supported by any computers being made today. Although I’ve saved machines to play all of them, none I’ve checked still work. One choked to death on a CD I stuck in it. That was a failure that stopped me from making Christmas presents of family memories recorded on old tapes and DVDs. I meant to renew the project sometime before the following Christmas, but that didn’t happen. Next Christmas? Maybe.
Then there are my parents’ 8mm and 16mm movies filmed between the 1930s and the 1960s. In 1989, my sister and I had all of those copied over to VHS tape. We then recorded our mother annotating the tapes onto companion cassette tapes while we all watched the show. I still have the original film in a box somewhere, but I haven’t found any of the tapes. Mom died in 2003 at age 90, so her whole generation is now gone.
The base stratum of my audio past is a few dozen open reel tapes recorded in the 1950s and 1960s. Above those are cassette and micro-cassette tapes, plus many Sony MiniDiscs recorded in ATRAC, a proprietary compression algorithm now used by nobody, including Sony. Although I do have ways to play some (but not all) of those, I’m cautious about converting any of them to digital formats (Ogg, MPEG or whatever), because all digital storage media are likely to become obsolete, dead, or both—as will formats, algorithms and codecs. Already I have dozens of dead external hard drives in boxes and drawers. And, since no commercial cloud service is committed to digital preservation in perpetuity in the absence of payment, my files saved in clouds are sure to be flushed once neither I nor my heirs continue paying for their preservation.
Same goes for my photographs. My old photographs are stored in boxes and albums of photos, negatives and Kodak slide carousels. My digital photographs are spread across a mess of duplicated back-up drives totaling many terabytes, plus a handful of CDs. About 60,000 photos are exposed to the world on Flickr’s cloud, where I maintain two Pro accounts (here and here) for $50 a year apiece. More are in the Berkman Klein Center’s pro account (here) and Linux Journal‘s (here). It is currently unclear whether any of that will survive after any of those entities stop paying the yearly fee. SmugMug, which now owns Flickr, has said some encouraging things about photos such as mine, all of which are Creative Commons-licensed to encourage re-use. But, as Geoffrey West tells us, companies are mortal. All of them die.
As for my digital works as a whole (or anybody’s), there is great promise in what the Internet Archive and Wikimedia Commons do, but there is no guarantee that either will last for decades more, much less for centuries or millennia. And neither is able to archive everything that matters (much as they might like to).
It should also be sobering to recognize that nobody owns a domain on the internet. All those “sites” with “domains” at “locations” and “addresses” are rented. We pay a sum to a registrar for the right to use a domain name for a finite period of time. There are no permanent domain names or IP addresses. In the digital world, finitude rules.
So the historic progression I see, and try to illustrate in the photo at the beginning of this post, is from hard physical records through digital ones we hold for ourselves, and then up into clouds that go away. Everything digital is snow falling and disappearing on the waters of time.
Will there ever be a way to save for the very long term what we ironically call our digital “assets?” I mean, for more than a few dozen years? Or is all of it doomed by its own nature to disappear, leaving little more evidence of its passage than a Digital Unconformity, when everything was forgotten?
I can’t think of any technical questions more serious than those two.
Here’s the popover that greets visitors on arrival at Rolling Stone‘s website:
That policy is supplied by Rolling Stone’s parent (PMC) and weighs more than 10,000 words. In it the word “advertising” appears 68 times. Adjectives modifying it include “targeted,” “personalized,” “tailored,” “cookie-based,” “behavioral” and “interest-based.” All of that is made possible by, among other things—
Information we collect automatically:
Device information and identifiers such as IP address; browser type and language; operating system; platform type; device type; software and hardware attributes; and unique device, advertising, and app identifiers
Internet network and device activity data such as information about files you download, domain names, landing pages, browsing activity, content or ads viewed and clicked, dates and times of access, pages viewed, forms you complete or partially complete, search terms, uploads or downloads, the URL that referred you to our Services, the web sites you visit after this web site; if you share our content to social media platforms; and other web usage activity and data logged by our web servers, whether you open an email and your interaction with email content, access times, error logs, and other similar information. See “Cookies and Other Tracking Technologies” below for more information about how we collect and use this information.
Geolocation information such as city, state and ZIP code associated with your IP address or derived through Wi-Fi triangulation; and precise geolocation information from GPS-based functionality on your mobile devices, with your permission in accordance with your mobile device settings.
The “How We Use the Information We Collect” section says they will—
Personalize your experience to Provide the Services, for example to:
Customize certain features of the Services,
Deliver relevant content and to provide you with an enhanced experience based on your activities and interests
Send you personalized newsletters, surveys, and information about products, services and promotions offered by us, our partners, and other organizations with which we work
Customize the advertising on the Services based on your activities and interests
Create and update inferences about you and audience segments that can be used for targeted advertising and marketing on the Services, third party services and platforms, and mobile apps
Create profiles about you, including adding and combining information we obtain from third parties, which may be used for analytics, marketing, and advertising
Conduct cross-device tracking by using information such as IP addresses and unique mobile device identifiers to identify the same unique users across multiple browsers or devices (such as smartphones or tablets), in order to save your preferences across devices and analyze usage of the Service.
using inferences about your preferences and interests for any and all of the above purposes
For a look at what Rolling Stone, PMC and their third parties are up to, Privacy Badger’s browser extension “found 73 potential trackers on www.rollingstone.com”:
I’m in California, where the CCPA gives me the right to shake down the vampiretariat for all the information about me they’re harvesting, sharing, selling or giving away to or through those third parties.* But apparently Rolling Stone and PMC don’t care about that.
Others do, and I’ll visit some of those in later posts. Meanwhile I’ll let Rolling Stone and PMC stand as examples of bad acting by publishers that remains rampant, unstopped and almost entirely unpunished, even under these new laws.
I also suggest following and getting involved with the fight against the plague of data vampirism in the publishing world. These will help:
Reading Don Marti’s blog, where he shares expert analysis and advice on the CCPA and related matters. Also People vs. Adtech, a compilation of my own writings on the topic, going back to 2008.
Following what the browser makers are doing with tracking protection (alas, differently†). Shortcuts: Brave, Google’s Chrome, Ghostery’s Cliqz, Microsoft’s Edge, Epic, Mozilla’s Firefox.
The California Constitution grants a right of privacy. Existing law provides for the confidentiality of personal information in various contexts and requires a business or person that suffers a breach of security of computerized data that includes personal information, as defined, to disclose that breach, as specified.
This bill would enact the California Consumer Privacy Act of 2018. Beginning January 1, 2020, the bill would grant a consumer a right to request a business to disclose the categories and specific pieces of personal information that it collects about the consumer, the categories of sources from which that information is collected, the business purposes for collecting or selling the information, and the categories of 3rd parties with which the information is shared. The bill would require a business to make disclosures about the information and the purposes for which it is used. The bill would grant a consumer the right to request deletion of personal information and would require the business to delete upon receipt of a verified request, as specified. The bill would grant a consumer a right to request that a business that sells the consumer’s personal information, or discloses it for a business purpose, disclose the categories of information that it collects and categories of information and the identity of 3rd parties to which the information was sold or disclosed…
Don Marti has a draft letter one might submit to the brokers and advertisers who use all that personal data. (He also tweets a caution here.)
Last night I watched The Great Hack a second time. It’s a fine documentary, maybe even a classic. (A classic in literature, I learned on this Radio Open Source podcast, is a work that “can only be re-read.” If that’s so, then perhaps a classic movie is one that can only be re-watched.*)
The movie’s message could hardly be more loud and clear: vast amounts of private information about each of us are gathered constantly in the digital world, and are being weaponized so our minds and lives can be hacked by others for commercial or political gain. Or both. The movie’s star, Professor David Carroll of the New School (@profcarroll), has been delivering that message for many years, as have many others, including myself.
But to what effect?
Sure, we have policy moves such as the GDPR, the main achievement of which (so far) has been to cause every website to put confusing and (in most cases) insincere cookie notices on their index pages, meant (again, in most cases) to coerce “consent” (which really isn’t) to exactly the unwanted tracking the regulation was meant to stop.
Those don’t count.
Ennui does. Apathy does.
On seeing The Great Hack that second time, I had exactly the same feeling my wife had on seeing it for her first: that the very act of explaining the problem also trivialized it. In other words, the movie worsened the very problem it exposed. And it isn’t alone in this, because so has everything everybody has said, written or reported about it. Or so it sometimes seems. At least to me.
Okay, so: if I’m right about that, why might it be?
One reason is that there’s no story. See, every story requires three elements: character (or characters), problem (or problems), and movement toward resolution. (Find a more complete explanation here.) In this case, the third element—movement toward resolution—is absent. Worse, there’s almost no hope. “The Great Hack” concludes with a depressing summary that tends to leave one feeling deeply screwed, especially since the only victories in the movie are over the late Cambridge Analytica; and those victories were mostly within policy circles we know will either do nothing or give us new laws that protect yesterday from last Thursday… and then last another hundred years.
The bigger reason is that we are now in a media environment summarized by Marshall McLuhan in his book The Medium is the Massage: “every new medium works us over completely.” Our new medium is the Internet, which is a non-place absent of distance and gravity. The only institutions holding up there are ones clearly anchored in the physical world. Health care and law enforcement, for example. Others dealing in non-material goods, such as information and ideas, aren’t doing as well.
Journalism, for example. Worse, on the Internet it’s easy for everyone to traffic in thoughts and opinions, as well as in solid information. So now the world of thoughts and ideas, which preponderates on social media such as Twitter, Facebook and Instagram, is a vast flood of everything from everybody. In the midst of all that, the news cycle, which used to be daily, now lasts about as long as a fart. Calling it all too much is a near-absolute understatement.
But David Carroll is right. Darkness is falling. I just wish all the light we keep trying to shed would do a better job of helping us all see that.
Among the many important things the Turing Institute is doing for us right now is highlighting with that notice exactly what’s wrong with the cookie system for remembering choices, and lack of them, for each of us using the Web.
What these switches highlight is that the memory of your choices is theirs, not yours. The whole cookie system outsources your memory of cookie choices to the sites and services of the world. While the cookies themselves can be found somewhere deep in the innards of your computer, you have little or no knowledge of what they are or what they mean, and there are thousands of those in there already.
And yes, we do have browsers that protect us in various ways from unwelcome cookies, but they all do that differently, and none in standard ways that give us clear controls over how we deal with sites and how sites deal with us.
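To make the site-side nature of the thing concrete, here is a minimal sketch in Python, using the standard library’s http.cookies module. The names (`record_consent`, the `consent` cookie, its encoding) are all hypothetical, but the mechanics are how real consent cookies work: after you click the popover, the site sends a Set-Cookie header whose value is the site’s own encoding of your choices. Your browser dutifully stores the bytes, but only the site knows what they mean.

```python
# Sketch (hypothetical names): how a site, not the visitor, records a
# cookie-consent choice. The site builds an opaque token and sends it
# in a Set-Cookie header; the token's meaning lives in the site's code.
from http.cookies import SimpleCookie

def record_consent(choices: dict) -> str:
    """Build the Set-Cookie header a site might send after its consent popover."""
    cookie = SimpleCookie()
    # The value is the site's private encoding of your choices -- you
    # hold the bytes, but the site holds the meaning.
    cookie["consent"] = "-".join(
        f"{name}.{int(allowed)}" for name, allowed in sorted(choices.items())
    )
    cookie["consent"]["max-age"] = 60 * 60 * 24 * 365  # remembered for a year, by them
    cookie["consent"]["path"] = "/"
    return cookie["consent"].OutputString()

header = record_consent({"ads": False, "analytics": True})
print(header)  # e.g. consent=ads.0-analytics.1; Path=/; Max-Age=31536000
```

Notice that nothing in this exchange gives you a standard, site-independent record of what you agreed to; every site mints its own token, which is exactly the asymmetry described above.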
One way to start thinking about this is as a need for cookies that go the other way:
Another way is to think about (and work toward) getting the sites and services of the world to agree to our terms, and to have standard ways of recording that on our side rather than theirs. Work on that is proceeding at Customer Commons, the IEEE, various Kantara initiatives and the Me2B Alliance.
Then we will need a dashboard, a cockpit (or the metaphor of your choice) through which we can see and control what’s going on as we move about the Web. This will give us personal scale that we should have had on Day One (specifically, in 1995, when graphical browsers took off). This too should be standardized.
There can be no solution that starts on the sites’ side. None. That’s a fail that in effect gives us a different browser for every site we visit. We need solutions of our own. Personal ones. Global ones. Ones with personal scale. It’s the only way.
So we respect that work. We are sure to learn from it. But we also need to respect the structural problems it faces.
PROBLEM #1 is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. (Rivalrous means consumption or use by one party prevents the same by another, and excludable means you can prevent parties that don’t pay from access to it.) Here’s a table from a Linux Journal column I wrote a few years ago:
Private good (rivalrous, excludable): e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers
Common pool resource (rivalrous, non-excludable): e.g., the sea, rivers, forests, their edible inhabitants and other useful contents
Club good (non-rivalrous, excludable): e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works
Public good (non-rivalrous, non-excludable): e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting
PROBLEM #2 is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation
Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is important for us to get our heads around amidst the rising chorus of voices insisting that data is a form of property.
Who owns your data? It’s a popular question of late in the identity community, particularly in the wake of Cambridge Analytica, numerous high-profile Equifax-style data breaches, and the GDPR coming into full force and effect. In our view, it’s not only the wrong question to be asking but it’s flat out dangerous when it frames the entire conversation. While ownership implies a property law model of our data, we argue that the legal framework for our identity-related data must also consider constitutional or human rights laws rather than mere property law rules…
Under common law, ownership in property is a bundle of five rights — the rights of possession, control, exclusion, enjoyment, and disposition. These rights can be separated and reassembled according to myriad permutations and exercised by one or more parties at the same time. Legal ownership or “title” of real property (akin to immovable property under civil law) requires evidence in the form of a deed. Similarly, legal ownership of personal property (i.e. movable property under civil law) in the form of commercial goods requires a bill of lading, receipt, or other document of title. This means that proving ownership or exerting these property rights requires backing from the state or sovereign, or other third party. In other words, property rights emanate from an external source and, in this way, can be said to be extrinsic rights. Moreover, property rights are alienable in the sense that they can be sold or transferred to another party.
Human rights — in stark contrast to property rights — are universal, indivisible, and inalienable. They attach to each of us individually as humans, cannot be divided into sticks in a bundle, and cannot be surrendered, transferred, or sold. Rather, human rights emanate from an internal source and require no evidence of their existence. In this way, they can be said to be intrinsic rights that are self-evident. While they may be codified or legally recognized by external sources when protected through constitutional or international laws, they exist independent of such legal documents. The property law paradigm for data ownership loses sight of these intrinsic rights that may attach to our data. Just because something is property-like, does not mean that it is — or that it should be — subject to property law.
In the physical realm, it is long settled that people and organs are not treated like property. Moreover, rights to freedom from unreasonable search and seizure, to associate and peaceably assemble with others, and the rights to practice religion and free speech are not property rights — rather, they are constitutional rights under U.S. law. Just as constitutional and international human rights laws protect our personhood, they also protect things that are property-like or exhibit property-like characteristics. The Fourth Amendment of the U.S. Constitution provides “the right of the people to be secure in their persons” but also their “houses, papers, and effects.” Similarly, the Universal Declaration of Human Rights and the European Convention on Human Rights protect the individual’s right to privacy and family life, but also her “home and correspondence”…
Obviously some personal data may exist in property-form just as letters and diaries in paper form may be purchased and sold in commerce. The key point is that sometimes these items are also defined as papers and effects and therefore subject to Fourth Amendment and other legal frameworks. In other words, there are some uses of (and interests in) our data that transform it from an interest in property to an interest in our personal privacy — that take it from the realm of property law to constitutional or human rights law. Location data, biological, social, communications and other behavioral data are examples of data that blend into personal identity itself and cross this threshold. Such data is highly revealing and the big-data, automated systems that collect, track and analyze this data make the need to establish proportional protections and safeguards even more important and more urgent. It is critical that we apply the correct legal framework.
PROBLEM #4 is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data. Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, combustible or not.
Put another way, why would you want to make almost nothing (the likely price) from selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist, and where rights are fully understood and protected within existing legal frameworks?
What makes us fully powerful as human beings is our ability to generate and share ideas and other goods that are expansible over all space, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.
Important note: I’m not knocking labor here. Most of us have to work for wages, either as parts of industrial machines, or as independent actors. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.
Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage. JP and I called this way of making money a because effect. The entire Internet, the World Wide Web and the totality of free and open source code all have vast because effects in money made with products and services that depend on those graces. Each is a rising free tide that lifts all commercial boats. Non-commercial ones too.
Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.
The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.
Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is zero interest in buying what can be exploited for free.
Worse, surveillance capitalism’s business is making guesses about you, so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:
Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
Trying to get in on that business is an awful proposition.
Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?
Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)
And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?
It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.
But it’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The matter is when, not if. The whole thing is too absurd, corrupt, complex and annoying to last forever.
So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting better human powers to work.
If we’re going to obsess over personal data, let’s look instead toward ways to regulate or control how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.
And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.
“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.
Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”
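To make that standard concrete, here is a minimal sketch of what “real permission” means in code terms. The function name and record fields are hypothetical, not from any real consent library; the point is only that silence, pre-ticked boxes, and implied consent all default to “no”:

```python
# Hypothetical sketch of a GDPR-style consent gate.
# Tracking is allowed ONLY on an explicit, affirmative opt-in;
# no signal, or an ambiguous one, defaults to "no tracking".

def may_track(consent_record):
    """Return True only if the visitor explicitly opted in to tracking."""
    if consent_record is None:
        # The visitor was never asked: silence is not consent.
        return False
    if not consent_record.get("explicit", False):
        # Pre-ticked boxes and implied consent don't count.
        return False
    return consent_record.get("choice") == "opt_in"

assert may_track(None) is False                                    # never asked
assert may_track({"choice": "opt_in", "explicit": False}) is False # pre-ticked box
assert may_track({"choice": "opt_in", "explicit": True}) is True   # "Hell yeah"
```

Under this reading, a site that tracks by default and buries an opt-out three clicks deep fails the test before the first byte of personal data moves.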
Toward the aftermath, the main question is What will be left of advertising—and what it supports—after the adtech bubble pops?
Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.
Advertising isn’t personal, and doesn’t have to be. In fact, knowing it’s not personal is an advantage for advertisers. Consumers don’t wonder what the hell an ad is doing where it is, who put it there, or why.
Advertising makes brands. Nearly all the brands you know were burned into your brain by advertising. In fact the term branding was borrowed by advertising from the cattle business. (Specifically by Procter and Gamble in the early 1930s.)
Advertising sponsors media, and those paid by media. All the big pro sports salaries are paid by advertising that sponsors game broadcasts. For lack of sponsorship, media—especially publishers—are hurting. @WaltMossberg learned why on a conference stage when an ad agency guy said the agency’s ads wouldn’t sponsor Walt’s new publication, recode. Walt: “I asked him if that meant he’d be placing ads on our fledgling site. He said yes, he’d do that for a little while. And then, after the cookies he placed on Recode helped him to track our desirable audience around the web, his agency would begin removing the ads and placing them on cheaper sites our readers also happened to visit. In other words, our quality journalism was, to him, nothing more than a lead generator for target-rich readers, and would ultimately benefit sites that might care less about quality.” With friends like that, who needs enemies?
Adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media, and it causes negative associations with brands. Consider this: perhaps a $trillion or more has been spent on adtech, and not one brand known to the world has been made by it. (Bob Hoffman, aka the Ad Contrarian, is required reading on this.)
Adtech wants to be personal. That’s why it’s tracking-based. Though its enthusiasts call it “interest-based,” “relevant” and other harmless-sounding euphemisms, it relies on tracking people. In fact it can’t exist without tracking people. (Note: while all adtech is programmatic, not all programmatic advertising is adtech. In other words, programmatic advertising doesn’t have to be based on tracking people. Same goes for interactive. Programmatic and interactive advertising will both survive the adtech crash.)
Adtech relies on misdirection. See, adtech looks like advertising, and is called advertising; but it’s really direct marketing, which is descended from junk mail and a cousin of spam. Because of that misdirection, brands think they’re placing ads in media, while the systems they hire are actually chasing eyeballs to anywhere. (Pro tip: if somebody says every ad needs to “perform,” or that the purpose of advertising is “to get the right message to the right person at the right time,” they’re actually talking about direct marketing, not advertising. For more on this, read Rethinking John Wanamaker.)
Compared to advertising, adtech is ugly. Look up best ads of all time. One of the top results is for the American Advertising Awards. The latest winners they’ve posted are the Best in Show for 2016. Tops there is an Allstate “Interactive/Online” ad pranking a couple at a ball game. Over-exposure of their lives online leads that well-branded “Mayhem” guy to invade and trash their house. In other words, it’s a brand ad about online surveillance.
Google’s consent-gathering solution, like others, is basically a series of pop-up notifications that give publishers a mechanism for clear disclosure and consent in accordance with data regulations.
The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.
Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.
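The arithmetic behind that claim is simple enough to sketch. The 70% figure is the reported opt-out rate cited above, and the audience size here is a made-up illustration:

```python
# If ~70% of visitors decline tracking when clearly asked,
# the trackable share of an audience collapses accordingly.
audience = 1_000_000        # hypothetical monthly visitors
opt_out_rate = 0.70         # reported opt-out rate for clear consent prompts

trackable = round(audience * (1 - opt_out_rate))
print(trackable)            # 300000 — adtech loses 70% of its targets overnight
```

And that is before counting the visitors who never see the prompt because an ad blocker already stripped the trackers out.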
Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”
And that’s not all:
Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.
Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.
Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Consent Management Framework. Writes James,
The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.
The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…
Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Consent and Transparency Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.
Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.
“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”
Pretty cynical, no?
The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.
The first clues that fully matter are the ones we—the individuals they’ve been calling “users”—will deliver. Look for details on that in another post.
Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.
Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.
Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.
Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)
Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.
Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.
What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)
Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.
When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.
Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that can be advertised at anywhere. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:
I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)
Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.
To get real privacy in the online world, we need to get the tech horse in front of the policy cart.
So far we haven’t done that. Let me explain…
Nature and the Internet both came without privacy.
The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.
When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.
In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.
Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”
Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that with personal tech that gives each of us power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.) When enough of us get personal agency, we can also have collective agency, for social as well as personal results.
We acquired both personal and social agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of both with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech. And that tech must be personal.
*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.
Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.
That’s because a massive personal data extraction industry has grown up around the simple fact that our data is there for the taking. Or so it seems. To them. And their apologists.
As a result, we’re at a stage of wanton data extraction that looks kind of like the oil industry did in 1920 or so.
It’s a good metaphor, but for a horrible business. It’s a business we need to reform, replace, or both. What we need most are new industries that grow around who and what we are as individual human beings—and as a society that values what makes us human.
Our data is us. Each of us. It is our life online. Yes, we shed some data in the course of our activities there, kind of like we shed dandruff and odors. But that’s no excuse for the extractors to frack our lives and take what they want, just because it’s there, and they can.
Now think about what love is, and how it works. How we give it freely, and how worthwhile it is when others accept it. How taking it without asking is simply wrong. How it’s better to earn it than to demand it. How it grows when it’s given. How we grow when we give it as well.
True, all metaphors are wrong, but that’s how metaphors work. Time is not money. Life is not travel. A country is not a family. But all those things are like those other things, so we think and talk about each of them in terms of those others. (By saving, wasting and investing time; by arriving, departing, and moving through life; by serving our motherlands, and honoring our founding fathers.)
Oil made sense as a metaphor when data was so easy to take, and the resistance wasn’t there.
But now the resistance is there. More than half a billion people block ads online, most of which are aimed by extracted personal data. Laws like the GDPR have appeared, with heavy fines for taking personal data without clear permission.
I could go on, but I also need to go to bed. I just wanted to get this down while it was in the front of my mind, where it arrived while discussing how shitty and awful “data is the new oil” was when it first showed up in 2006, and how sadly popular it has become since then:
It’s time for a new metaphor that expresses what our personal data really is to us, and how much more it’s worth to everybody else if we keep, give and accept it on the model of love.
Synopsis—Advertising supported publishing in the offline world by sponsoring it. In the online world, advertising has been body-snatched by adtech, which tracks eyeballs via files injected into apps and browsers, then shoots those eyeballs with “relevant” ads wherever the eyeballs show up. Adtech has little or no interest in sponsoring a pub for the pub’s own worth. Worse, it incentivizes fake news (which is easier to produce than the real kind) and flooding the world with “content” rather than old-fashioned (and infinitely more worthwhile) editorial. When publishers agreed to funding by adtech, they sold their souls and their readers down a river full of fraud and malware, as well as indefensible manners. Fortunately, readers can bring both publishers and advertisers back into a soulful reunion. Helpfully, the GDPR makes it illegal not to, and that will be a huge issue as the deadline for compliance (next May 25th) approaches.
Do you think advertisers will pay enough for SafeAds to offset the losses publishers will have from selling fewer targeted ads due to privacy regs?
It’s a good question. (That’s what people say when they don’t have an answer, or can’t think of an easy one right away. But…) I thought about it, and replied with this:
Yes, and then some.
They’ll do it because there is more brand value to SafeAds.
The bigger question is for publishers: what business do they want to be in?
Do they want to operate barrels of “content” full of tracked fish baited there so adtech can shoot them with “interest-based” ads?
Or do they want to operate actual publications with good editorial that advertisers sponsor so their ads can be seen by readers who know those ads support the publication and are appropriate without being personal?
That’s the choice.
It helps that the second business — actual publishing — has been around for a couple hundred years, and even worked fine on the Web before publishers fell for the adtech sell.
Publishers sold a big piece of their soul when they consented to having their readers’ privacy violated, and with rampant impunity, by adtech. They also chose to ignore the fact that adtech is in the business of chasing eyeballs, not of sponsoring the good work publishers do, or of building brand reputation. (Which can’t be done by shooting people constantly with “interest-based” ads that mostly creep people out if they hit a bulls-eye.)
The GDPR, if it works like it should, will force publishers to fire adtech and normalize their relationship with readers. When that happens, publishers, advertisers, readers and agents for all three can start working out better business models than the creepy one we’ve had with adtech.
Ross quoted the first sentence of the second-to-last paragraph, which is probably the best one of the bunch he could have used. Most of the quotes he gathered from other folks in the biz were also very good. I study this topic a lot, and I still learned some new things. Hats off for that.
While I’m saluting what I just learned from Ross, however, I also want to visit some assumptions that surface in his piece. They aren’t his, but rather pretty much everybody’s, and that’s a problem. Here are four of them.
1) Consent can only go one way, meaning each of us should always be the ones consenting to terms proffered by sites and services. Here’s how Ross puts it:
The General Data Protection Regulation, which prevents brands from using a person’s data unless they have explicit permission to do so, could send more ad dollars to premium publishers that are more likely to obtain user consent than lower-quality publishers.
In fact consent can go the other way, meaning the publisher or advertiser can consent to our terms.
It is only because we made a Faustian bargain with client-server in 1995 that we remain stuck inside a model that assumes we “users” should always be second (and second-class) parties, with no choice but to agree as “clients” to terms proffered by server operators.
It helps that the Internet was designed so any of us can be peers. This is an especially good design feature in the age that (at least I hope) begins with the GDPR.
One reason why I’m encouraged about the GDPR is that it says each of us can be “data controllers” as well as “data subjects.” (White & Case have a good unpacking of that, here.)
Tracking is the reason ad blocking, which has been around since 2003, didn’t hockey-stick toward the sky until 2012. That was when publishers and advertisers, led by the IAB, gave the middle finger to Do Not Track, which was merely a polite request not to be tracked that people could express in their browsers.
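It’s worth remembering how little Do Not Track asked of anyone: it is just a single HTTP request header, `DNT: 1`. Here is a minimal sketch of what honoring it server-side would have looked like (the function name is hypothetical; the header itself is real):

```python
# Do Not Track arrives as one HTTP request header: "DNT: 1".
# Honoring it is trivial; the industry simply chose not to.

def should_track(request_headers):
    """Return False if the visitor sent DNT: 1 — i.e., asked not to be tracked."""
    return request_headers.get("DNT") != "1"

assert should_track({"DNT": "1"}) is False  # polite request honored: no tracking
assert should_track({}) is True             # no preference expressed
```

That two-line check is the entirety of the engineering burden publishers and advertisers refused to carry, which is why people reached for the blunter instrument of ad blocking instead.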
3) The best advertising is the most measurable, and is looking for a response from an individual.
That’s not true for advertising, but it is for direct response marketing (the wheat and chaff I talk about in the last cited piece). Unfortunately, as I say in that piece, “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.”
The outlines of that alien replica can be seen in what Ross cites here:
Eric Berry, CEO of native ad platform TripleLift, said the GDPR could lead to a reduction in programmatic ad spend because ad buyers will struggle to measure whether their ads lead to purchases. There’s uncertainty about how the law will be enforced, but if users have to give consent to individual publishers, demand-side platforms and attribution vendors, the attribution companies won’t likely have enough data to make accurate measurements, which will lead ad buyers to shift their dollars to other marketing tactics. This would hurt publishers that rely on programmatic ad revenue, he said.
There is a reason perhaps a $trillion has been spent on adtech and not one worldwide brand everyone can name has been created by it, much less sustained or helped in any way.
As Don Marti says, only real advertising can carry the full economic and creative signals required to create and sustain a brand. And, as Bob Hoffman hammers home constantly (and very artfully) in The Ad Contrarian, the ad industry’s equation of “digital” with tracking is based entirely on bullshit. (His term, and the right one.)
Direct response marketing, which began as junk mail, and which looks to measure results for every message, wasn’t designed for that, and can’t do it.
Calling direct response marketing “advertising” was one of the biggest mistakes the ad industry ever made. It also masks the real problem the GDPR invites, which is that we risk throwing out the SafeAds baby with the FakeAds (adtech) bathwater.
If all the GDPR leads publishers to do is (as Ross says in his piece) “use intrusive messages — like pop-ups or interstitials — to get user consent,” and the EU fails to fine publishers and their adtech funders for violating the spirit as well as the letter of the GDPR, the GDPR will be as big a fail as the useless cookie consent notices people see on European sites.
4) There’s nothing really wrong with adtech.
Pretty much everything is wrong about adtech, but perhaps the wrongest of the wrong is the problem Siva Vaidhyanathan (@sivasaid) visits in a NY Times piece titled Facebook Wins, Democracy Loses. Here’s a pull quote:
A core principle in political advertising is transparency — political ads are supposed to be easily visible to everyone, and everyone is supposed to understand that they are political ads, and where they come from. And it’s expensive to run even one version of an ad in traditional outlets, let alone a dozen different versions. Moreover, in the case of federal campaigns in the United States, the 2002 McCain-Feingold campaign-finance act requires candidates to state they approve of an ad and thus take responsibility for its content.
The bold-face is mine (or actually my wife’s, who found and highlighted it for me).
The economic signaling value of an ad comes from what it costs. Only a brand with a lot of heft can afford to sponsor a publication or a mainstream broadcaster. But it’s super-cheap to run ads that narrowcast to just a few people. Or to put up a fake news site. (Both are big reasons why journalism is now drowning in a sea of content. Adtech is what paid publishing to trade journalism for “content generation.” This is a cancer on advertising, publishing and journalism, and makes adtech the Agent Smith of digital.)
What’s more, adtech has created environments where micro-targeted ads and adtech-funded fake news can work very effectively to destroy brands.
Consider this possibility: Trump and his sympathizers succeeded in destroying Hillary Clinton’s brand, and there wasn’t a damn thing any of her own big-budget and big-media branding efforts (#SafeAds all) could do about it. (And try, if you are a Trump sympathizer, to ignore whatever you think about how much Hillary brought it on herself or deserved it. In badness of the smear-worthy sort, she has plenty of company, especially Trump. In using modern adtech and fake news methods, the Trump campaign and those helping it were very smart and effective.)
As Siva says in his Times piece,
Ads on [Facebook] meant for, say, 20- to 30-year-old home-owning Latino men in Northern Virginia would not be viewed by anyone else, and would run only briefly before vanishing. The potential for abuse is vast. An ad could falsely accuse a candidate of the worst malfeasance a day before Election Day, and the victim would have no way of even knowing it happened. Ads could stoke ethnic hatred and no one could prepare or respond before serious harm occurs.
Can the GDPR address that problem?
Yes, by supporting individuals (not mere “users” or “consumers”) operating as first parties, getting the good publishers to agree not to run ads like the ones Siva describes, and to open the floodgates to brand ads that actually sponsor those publications, rather than regarding them as bait for shooting tracked eyeballs.