problems


A Route of Evanescence,
With a revolving Wheel –
A Resonance of Emerald
A Rush of Cochineal –
And every Blossom on the Bush
Adjusts its tumbled Head –
The Mail from Tunis – probably,
An easy Morning’s Ride –

—Emily Dickinson
(via The Poetry Foundation)

While that poem is apparently about a hummingbird, it’s the one that comes first to my mind when I contemplate the form of evanescence that’s rooted in the nature of the Internet, where all of us are here right now, as I’m writing and you’re reading this.

Because, let’s face it: the Internet is no more about anything “on” it than air is about noise, speech or anything at all. Like air, sunlight, gravity and other useful graces of nature, the Internet is good for whatever can be done with it.

Same with the Web. While the Web was born as a way to share documents at a distance (via the Internet), it was never a library, even though we borrowed the language of real estate and publishing (domains and sites with pages one could author, edit, publish, syndicate, visit and browse) to describe it. While the metaphorical framing in all those words suggests durability and permanence, it belies the inherently evanescent nature of all we call content.

Think about the words memory, storage, upload, and download. All suggest that content in digital form has substance at least resembling the physical kind. But it doesn’t. It’s a representation, in a pattern of ones and zeros, recorded on a medium for as long as the responsible party wishes to keep it there, or the medium survives. All those states are volatile, and none guarantee that those ones and zeroes will last.

I’ve been producing digital content for the Web since the early 90s, and for much of that time I was lulled into thinking of the digital tech as something at least possibly permanent. But then my son Allen pointed out a distinction between the static Web of purposefully durable content and what he called the live Web. That was in 2003, when blogs were just beginning to become a thing. Since then the live Web has become the main Web, and people have come to see content as writing or projections on a World Wide Whiteboard. Tweets, shares, shots and posts are mostly of momentary value. Snapchat succeeded as a whiteboard where people could share “moments” that erased themselves after one view. (It does much more now, but evanescence remains its root.)

But, being both (relatively) old and (seriously) old-school about saving stuff that matters, I’ve been especially concerned with how we can archive, curate and preserve as much as possible of what’s produced for the digital world.

Last week, for example, I was involved in the effort to return Linux Journal to the Web’s shelves. (The magazine and site, which lived from April 1994 to August 2019, was briefly down, and with it all my own writing there, going back to 1996. That corpus is about a third of my writing in the published world.) Earlier, when it looked like Flickr might go down, I worried aloud about what would become of my many-dozen-thousand photos there. SmugMug saved it (Yay!); but there is no guarantee that any Website will persist forever, in any form. In fact, the way to bet is on the mortality of everything there. (Perspective: earlier today, over at doc.blog, I posted a brief think piece about the mortality of our planet, and the youth of the Universe.)

But the evanescent nature of digital memory shouldn’t stop us from thinking about how to take better care of what of the Net and the Web we wish to see remembered for the world. This is why it’s good to be in conversation on the topic with Brewster Kahle (of archive.org), Dave Winer and other like-minded folk. I welcome your thoughts as well.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The words we put knowledge into he calls explicit.

For an example of both at work, consider how, generally, we  don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4,000 decimals), what matters more than the words or the numbers is their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it) and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools and each other.

Louis Brandeis and Samuel Warren visited the same problem more than a century ago, when they became alarmed at the implications of recording and reporting technologies that were far more primitive than the kind we have today. In response to those technologies, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.” Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons; and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer. The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal.”

But it’s hard to argue for those rights in the digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there is no limit to the wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s DeepFace sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of ‘self-exclusion,’ whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

And facial recognition systems are getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%.

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and perhaps not just because they’re acquiescing to the inevitable.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported that BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users, had suffered a massive data leak. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials so the leak could be fixed.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the largest variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late model iPhones, which feature FaceID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But special cases such as that one haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for ‘progress.’ Perhaps over time appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow’s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”
  • Ban Facial Recognition

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional, because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

However, given the unlikelihood that the facial recognition genie will ever go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example to unlock phones or to sort through old photos. But the data they gather should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes).
  2. Facial recognition systems used to detect changing facial characteristics (such as emotions, age or wellness) should be required to forget what they see, right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted in to the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

 

Whither Linux Journal?

[16 August 2019…] Had a reassuring call yesterday with Ted Kim, CEO of London Trust Media. He told me the company plans to keep the site up as an archive at the LinuxJournal.com domain, and that if any problems develop around that, he’ll let us know. I told him we appreciate it very much—and that’s where it stands. I’m leaving up the post below for historical purposes.

On August 5th, Linux Journal‘s staff and contractors got word from the magazine’s parent company, London Trust Media, that everyone was laid off and the business was closing. Here’s our official notice to the world on that.

I’ve been involved with Linux Journal since before it started publishing in 1994, and have been on its masthead since 1996. I’ve also been its editor-in-chief since January of last year, when it was rescued by London Trust Media after nearly going out of business the month before. I say this to make clear how much I care about Linux Journal‘s significance in the world, and how grateful I am to London Trust Media for saving the magazine from oblivion.

London Trust Media can do that one more time, by helping preserve the Linux Journal website, with its 25 years of archives, so all its links remain intact, and nothing gets 404’d. Many friends, subscribers and long-time readers of Linux Journal have stepped up with offers to help with that. The decision to make that possible, however, is not in my hands, or in the hands of anyone who worked at the magazine. It’s up to London Trust Media. The LinuxJournal.com domain is theirs.

I have had no contact with London Trust Media in recent months. But I do know at least this much:

  1. London Trust Media has never interfered with Linux Journal‘s editorial freedom. On the contrary, it quietly encouraged our pioneering work on behalf of personal privacy online. Among other things, LTM published the first draft of a Privacy Manifesto now iterating at ProjectVRM, and recently published on Medium.
  2. London Trust Media has always been on the side of freedom and openness, which is a big reason why they rescued Linux Journal in the first place.
  3. Since Linux Journal is no longer a functioning business, its entire value is in its archives and their accessibility to the world. To be clear, these archives are not mere “content.” They are a vast store of damned good writing, true influence, and important history that search engines should be able to find where it has always been.
  4. While Linux Journal is no longer listed as one of London Trust Media’s brands, the website is still up, and its archives are still intact.

While I have no hope that Linux Journal can be rescued again as a subscriber-based digital magazine, I do have hope that the LinuxJournal.com domain, its (Drupal-based) website and its archives will survive. I base that hope on believing that London Trust Media’s heart has always been in the right place, and that the company is biased toward doing the right thing.

But the thing is up to them. It’s their choice whether or not to support the countless subscribers and friends who have stepped forward with offers to help keep the website and its archives intact and persistent on the Web. It won’t be hard to do that. And it’s the right thing to do.

“What’s the story?”

No question is asked more often by editors in newsrooms than that one. And for good reason: stories are what news is about.

I was just 22 when I got my first gig as a journalist, reporting for a daily newspaper in New Jersey. It was there that I first learned that all stories are built around just three elements:

  1. Character
  2. Problem
  3. Movement toward resolution

You need all three. Subtract one or more, and all you have is an item, or an incident. Not a story. So let’s unpack those a bit.

The character can be a person, a group, a team, a cause—anything with a noun. Mainly the character needs to be worth caring about in some way. You can love the character or hate it (or him, or her, or whatever). Either way, you have to care about the character enough to be interested.

The problem can be of any kind at all, so long as it causes conflict involving the character. All that matters is that the conflict keeps going, at least toward the possibility of resolution. If the conflict ends, the story is over. For example, if you’re at a sports event, and your team is up (or down) by forty points with five minutes left, the character you now care about is your own ass, and your problem is getting it out of the parking lot. If that struggle turns out to be interesting, it might be a story you tell later at a bar.

Movement toward resolution is nothing more than that. Bear in mind that many stories, and many characters in many conflicts around many problems in stories, never arrive at a conclusion. In fact, that may be part of the story itself. Soap operas work that way.

For a case-in-point of how this can go very wrong, we have the character now serving as President of the United States.

Set the politics aside and just look at the dude through the prism of Story.

Trump—clearly, deeply and instinctively—understands how stories work. He is experienced and skilled at finding or causing problems that generate conflict and enlarge his own character to maximum size in the process.

He does this through constant characterization of others, for example with nicknames: “Little Mario,” “Low Energy Jeb,” “Crooked Hillary,” “Sleepy Joe,” “Failing New York Times.”

He stokes the fires of conflict by staying on the attack at all times: a strategy he learned from Roy Cohn, whom Frank Rich felicitously calls “The worst human being who ever lived … the most evil, twisted, vicious bastard ever to snort coke at Studio 54.” Talk about character. Whoa. As Politico puts it here, “Cohn imparted an M.O. that’s been on searing display throughout Trump’s ascent, his divisive, captivating campaign, and his fraught, unprecedented presidency. Deflect and distract, never give in, never admit fault, lie and attack, lie and attack, publicity no matter what, win no matter what, all underpinned by a deep, prove-me-wrong belief in the power of chaos and fear.” There is genius to how Trump succeeds at this, especially in these early years of our new digital age, when the entire Internet is one big gossip mill.

Trump’s success at capturing the attention of everyone and the loyalty of many calls to mind The Mule in Isaac Asimov’s Foundation and Empire. (It was from noting this resemblance that I, along with Scott Adams, expected Trump to win in 2016.)

So that’s the first way journalism fails: its appetite for stories proves a weakness when it’s fed by a genius at hogging the stage.

Journalism’s second failing is not reporting what doesn’t fit the story format. Stories are inadequate ways to represent facts and truths. Too much of both gets excluded if it doesn’t fit “the narrative,” which is the modern way to talk about story.

There is a paradox here: that we need to know more than stories can tell, and yet stories are pretty much all human beings are interested in. Character, problem and movement give shape and purpose to every human life. We can’t correct for it.

Perhaps our best hope is to recognize something I learned from a shrink I was talking to at a party long ago. Most mental illness, she said, is just OCD: obsessive-compulsive disorder. And it’s hard to say exactly what’s a “disorder,” because all human accomplishments worth celebrating owe in large measure to obsessive and compulsive interests and behaviors. And, in some cases (for example our current president’s), to narcissism and other disorders as well.

I don’t know how to fix any of that, or if it can be fixed at all. I’m just sure that journalism as a discipline will benefit by looking more closely at how stories fail: at what they can’t do, as well as at what they can.

_________

*However, if you want good advice on how best to write stories about the guy, you can’t beat what @JayRosen_NYU tweets here. I suggest it also applies to the UK’s new prime minister.


In a press release, Amazon explained why it backed out of its plan to open a new headquarters in New York City:

For Amazon, the commitment to build a new headquarters requires positive, collaborative relationships with state and local elected officials who will be supportive over the long-term. While polls show that 70% of New Yorkers support our plans and investment, a number of state and local politicians have made it clear that they oppose our presence and will not work with us to build the type of relationships that are required to go forward with the project we and many others envisioned in Long Island City.

So, even if the economics were good, the politics were bad.

The hmm for me is why not New Jersey? Given the enormous economic and political overhead of operating in New York, I’m wondering why Amazon didn’t consider New Jersey first. Or if it’s thinking about it now.

New Jersey is cheaper and (so I gather) friendlier, at least tax-wise. It also has the country’s largest port (one that used to be in New York, bristling Manhattan’s shoreline with piers and wharves, making it look like a giant paramecium) and is a massive warehousing and freight forwarding hub. In fact Amazon already has a bunch of facilities there (perhaps including its own little port on Arthur Kill). I believe there are also many more places to build on the New Jersey side. (The photo above, shot on approach to Newark Airport, looks at New York across some of those build-able areas.)

And maybe that’s the plan anyway, without the fanfare.

As it happens, I’m in the midst of reading Robert Caro’s The Power Broker: Robert Moses and the Fall of New York. (Which is massive. There’s a nice summary in The Guardian here.) This helps me appreciate the power of urban planning, and how thoughtful and steel-boned opposition to some of it can be fully useful. One example of that is Jane Jacobs’ thwarting of Moses’ plan to run a freeway through Greenwich Village. He had earlier done the same through The Bronx, with the Cross Bronx Expressway. While that road today is an essential stretch of the northeast transport corridor, at the time it was fully destructive to urban life in that part of the city—and in many ways still is.

So I try to see both sides of an issue such as this. What’s constructive and what’s destructive in urban planning are always hard to pull apart.

For an example close to home, I often wonder if it’s good that Fort Lee is now almost nothing but high-rises. This is the town my grandfather helped build (he was the head carpenter for D.W. Griffith when Fort Lee was the first Hollywood), where my father grew up climbing the Palisades for fun, and where he later put his skills to work as a cable rigger, helping build the George Washington Bridge. The Victorian house Grandpa built for his family on Hoyt Avenue, and where my family lived when I was born, stood about as close to a giant new glass box called The Modern as I now am to the kitchen in the apartment where I’m writing this, a few blocks away from The Bridge on the other side of the Hudson. It’s paved now, by a road called Bruce Reynolds Boulevard. Remember Bridgegate? That happened right where our family home stood, in a pleasant neighborhood of which nothing remains.

Was the disappearance of that ‘hood a bad thing? Not by now, long after the neighborhood was erased and nearly everyone who lived there has died or has long since moved on. Thousands more live there now than ever did when it was a grid of nice homes on quiet, tree-lined streets.

All urban developments are omelettes made of broken eggs. If you’re an egg, you’ve got reason to complain. If you’re a cook, you’d better make a damn fine omelette.

On comment spam

We had a temporary plague of comment spam here. The original version of this post remarked on that.

But it’s gone now, so it’s safe to comment again. 🙂

Thanks for bearing with me in the meantime.


So I’m taking live notes at Blockchain in Journalism: Promise and Practice, happening at the Brown Institute for Media Innovation, in the Tow Center for Digital Journalism at the Columbia School of Journalism, to name the four Russian dolls whose innards I’m inhabiting here.

In advance of this gathering, Linux Journal, which I serve as editor-in-chief (but can’t use as a blog, meaning editing it live is do-able but not easy), published When the problem is the story. I wanted it up, on the outside chance that stories themselves, as journalism’s stock-in-trade, might get discussed. Because stories are a Hard Problem: maybe one we can’t solve.

Okay, now the live blogging commences:

“Token curated registries, aka TCRs.” Mike Goldin of AdChain is talking about those now. Looking him up. Links: Token Curated Registries 1.0 and #18 Mike Goldin, AdChain: Token-Curated Registries, An Emerging Cryptoeconomic Primitive.

Observation: blockchain is conceptually opaque, in ways the Internet (the way everything is connected) and the Web (a way to publish on the Internet) are not.

Not quite technically speaking, a blockchain is a distributed way of recording data in duplicate. Or something close enough to that. (Let’s not argue it.) What makes blockchain hard to grok is the “distributed” part. What it means is that an ever-expanding copy of the same record accumulates on many different computers, everywhere. Including yours. Your computer is going to have a copy of a blockchain, or many blockchains, for the good of the world—or the parts of the world that could use a distributed way of keeping an immutable record of whatever. See what I mean? (Yes and no are equally good answers to that question.)
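For readers who want the “immutable record” part made concrete, here’s a toy sketch in Python. It is not any real blockchain protocol (no consensus, no mining, no network), just the core trick: each block’s hash covers the previous block’s hash, so tampering with an old record breaks every link after it.

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, which include the previous block's hash,
    # so changing anything upstream invalidates everything downstream.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def add_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    block["hash"] = block_hash({"record": record, "prev": prev})
    chain.append(block)
    return chain

# Every participant keeps a full copy of the same chain.
chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")

# Tamper with the first record and the stored hash no longer matches.
tampered = [dict(b) for b in chain]
tampered[0]["record"] = "Alice pays Bob 500"
recomputed = block_hash({"record": tampered[0]["record"], "prev": tampered[0]["prev"]})
print(recomputed == tampered[0]["hash"])  # prints False
```

Because everyone holds a copy, a cheater would have to rewrite not just one record but every later block on most of those copies at once, which is what makes the record hard to fake.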

Mike Goldin just said that understanding blockchain is as big a cognitive leap as it took to grok the Internet way back when. Not so. Understanding blockchain is a shit-ton harder than understanding the Internet.

“Identity procreator type tool” just got uttered, I believe by @JarrodDicker of Po.et. My wife, who knows blockchain better than I do, just made two fists and whispered “Yes!”

RadioTopia just got some love from Manoush Zomorodi of ZigZag.

So let’s get to the title of this post.

Normally I’d be tweeting this, but right now I can’t. Nor can I write about it in Medium. Both are closed to me, because Twitter hates my @dsearls login, for reasons unknown, and my login to Medium uses my Twitter handle.

<gripe>

When I tried to troubleshoot my eviction from Twitter this morning, I went to the trouble of creating a new password, alas without help from Dashlane, my password manager, which for some reason wasn’t able to help by generating a new one. Dunno why.

Deeper background: I’m active on four different Twitter accounts, spread across four browsers. I tweet as myself on Chrome, and as @VRM, @CustomerCommons and @Cluetrain on the three other browsers. The latter three are ones where multiple people can also post.

(Yes, I know there are ways to post as different entities on single browsers or apps, but being different entities on different browsers is easier for me. Or was until this morning.)

So I decided to try getting onto Twitter on one of the other browsers. I logged out of @VRM on Firefox, failed to log in as myself, made up a new password through Twitter’s password-creation routine (because Dashlane couldn’t help on Firefox either), and wrote the new password down on a sticky.

Then, once I got @dsearls working on Firefox, I logged out, and tried to log in again as @vrm there. Twitter didn’t like that login and made me create a new password for that account too, again without Dashlane’s help. Now I had two passwords, for two accounts, on one sticky.

Then I got in the subway and came down to Columbia, ready to tweet about the #BlockchainJournalism from the audience at the Tow Center. But Twitter on Chrome wouldn’t let me in. Meanwhile, the new password was still on a sticky back at my apartment, and not remembered by Firefox. So I thought, hey, I’ll just create a new password again, now with Dashlane’s help. But I got stopped part way with this response from Twitter when I clicked on the new password making link: https://twitter.com/login/error?redirect… .

This kind of experience is why I posted Please let’s kill logins and passwords back in August, and the sentiment stands.

</gripe>

So now that I’m experiencing life without Twitter, on which much of journalism utterly depends, I’m beginning to think about how we’ll all work once Twitter is gone—either completely or just to hell. Also about my own dependence on it. And about how having Twitter as a constant steam valve has bled off energies I once devoted to doing full-force journalism. Or just to blogging. Such as now, here, when I can’t use Twitter.

A difference: tweets may persist somewhere, but they’re the journalistic equivalent of snow falling on water. Blog posts tend to persist in a findable form for as long as their publisher maintains their archive.

Interesting fact: back in the early ’00s, when I was kinda big in the (admittedly small) blogging world, I had many thousands of readers every day. Most of those subscribed to my RSS feed. Then, in ’06, Twitter and Facebook started getting big, most bloggers moved to those platforms, and readership of my own blog dropped eventually to dozens per day. So I got active on Twitter, where I now have 24.4k followers. But hey, so does the average parking space.

I guess where I’m going is toward where Hossein Derakhshan (@h0d3r) has been for some time, with The Web We Have to Save. That Web is ours, not Twitter’s or Facebook’s or any platform’s. (This is also what @DWeinberger and I said in the #NewClues addendum to The Cluetrain Manifesto back in ’15.) Journalism, or whatever it’s becoming, is far more at home there than in any silo, no matter how useful it may be.

 

 


If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?

Not yet. Or maybe not really.

Either way, that’s the idea behind the urge by some lately to claim personal data as personal property, and then to make money (in cash, tokens or cryptocurrency) by selling or otherwise monetizing it. The idea in all these cases is to somehow participate in existing (entirely extractive) commodity markets for personal data.

ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why the VRM Developments Work page of the ProjectVRM wiki includes a heading called Markets for Personal Data, listing a number of efforts in that space.

So we respect that work. It is also essential to recognize the problems it faces.

The first problem is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. (Rivalrous means consumption or use by one party prevents the same by another, and excludable means you can prevent parties that don’t pay from access to it.) Here’s a table from a Linux Journal column I wrote a few years ago:

Rivalrous and excludable (private good): food, clothing, toys, cars, products subject to value-adds between first sources and final customers

Rivalrous and non-excludable (common-pool resource): the sea, rivers, forests, their edible inhabitants and other useful contents

Non-rivalrous and excludable (club good): bridges, cable TV, private golf courses, controlled access to copyrighted works

Non-rivalrous and non-excludable (public good): data, information, law enforcement, national defense, fire fighting, public roads, street lighting
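The two-by-two above is just a lookup on two yes/no questions, which a toy sketch makes plain (the function name and labels here are mine, not from the column):

```python
def classify_good(rivalrous: bool, excludable: bool) -> str:
    # The classic 2x2 of economic goods: rivalry crossed with excludability.
    if rivalrous and excludable:
        return "private good"        # food, clothing, cars
    if rivalrous and not excludable:
        return "common-pool resource"  # the sea, rivers, forests
    if not rivalrous and excludable:
        return "club good"           # bridges, cable TV, golf courses
    return "public good"             # data, information, street lighting

# Data consumed by one party is still fully available to every other party,
# and keeping non-payers away from it is impractical, hence:
print(classify_good(rivalrous=False, excludable=False))  # prints public good
```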

 

The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation.

Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is important for us to get our heads around amidst the rising chorus of voices insisting that data is a form of property.

The third problem is that there are better legal frameworks than property law for protecting personal data. In Do we really want to “sell” ourselves? The risks of a property law paradigm for personal data ownership, Elizabeth Renieris and Dazza Greenwood write,

Who owns your data? It’s a popular question of late in the identity community, particularly in the wake of Cambridge Analytica, numerous high-profile Equifax-style data breaches, and the GDPR coming into full force and effect. In our view, it’s not only the wrong question to be asking but it’s flat out dangerous when it frames the entire conversation. While ownership implies a property law model of our data, we argue that the legal framework for our identity-related data must also consider constitutional or human rights laws rather than mere property law rules.

Under common law, ownership in property is a bundle of five rights — the rights of possession, control, exclusion, enjoyment, and disposition. These rights can be separated and reassembled according to myriad permutations and exercised by one or more parties at the same time. Legal ownership or “title” of real property (akin to immovable property under civil law) requires evidence in the form of a deed. Similarly, legal ownership of personal property (i.e. movable property under civil law) in the form of commercial goods requires a bill of lading, receipt, or other document of title. This means that proving ownership or exerting these property rights requires backing from the state or sovereign, or other third party. In other words, property rights emanate from an external source and, in this way, can be said to be extrinsic rights. Moreover, property rights are alienable in the sense that they can be sold or transferred to another party.

Human rights — in stark contrast to property rights — are universal, indivisible, and inalienable. They attach to each of us individually as humans, cannot be divided into sticks in a bundle, and cannot be surrendered, transferred, or sold. Rather, human rights emanate from an internal source and require no evidence of their existence. In this way, they can be said to be intrinsic rights that are self-evident. While they may be codified or legally recognized by external sources when protected through constitutional or international laws, they exist independent of such legal documents. The property law paradigm for data ownership loses sight of these intrinsic rights that may attach to our data. Just because something is property-like, does not mean that it is — or that it should be — subject to property law.

In the physical realm, it is long settled that people and organs are not treated like property. Moreover, rights to freedom from unreasonable search and seizure, to associate and peaceably assemble with others, and the rights to practice religion and free speech are not property rights — rather, they are constitutional rights under U.S. law. Just as constitutional and international human rights laws protect our personhood, they also protect things that are property-like or exhibit property-like characteristics. The Fourth Amendment of the U.S. Constitution provides “the right of the people to be secure in their persons” but also their “houses, papers, and effects.” Similarly, the Universal Declaration of Human Rights and the European Convention on Human Rights protect the individual’s right to privacy and family life, but also her “home and correspondence”…

Obviously some personal data may exist in property-form just as letters and diaries in paper form may be purchased and sold in commerce. The key point is that sometimes these items are also defined as papers and effects and therefore subject to Fourth Amendment and other legal frameworks. In other words, there are some uses of (and interests in) our data that transform it from an interest in property to an interest in our personal privacy — that take it from the realm of property law to constitutional or human rights law. Location data, biological, social, communications and other behavioral data are examples of data that blend into personal identity itself and cross this threshold. Such data is highly revealing and the big-data, automated systems that collect, track and analyze this data make the need to establish proportional protections and safeguards even more important and more urgent. It is critical that we apply the correct legal framework.

The fourth problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data. Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, combustible or not.

Put another way, why would you want to make almost nothing (the likely price) from selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist, and where rights are fully understood and protected within existing legal frameworks?

What makes us fully powerful as human beings is our ability to generate and share ideas and other goods that are expansible over all space, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.

Important note: I’m not knocking labor here. Most of us have to work for wages, either as parts of industrial machines, or as independent actors. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.

Many years ago JP Rangaswami (@jobsworth) and I made a distinction between making money with something and because of something.

Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage. JP and I called this way of making money a because effect. The entire Internet, the World Wide Web and the totality of free and open source code all have vast because effects in money made with products and services that depend on those graces. Each are rising free tides that lift all commercial boats. Non-commercial ones too.

Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.

The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.

Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is zero interest in buying what can be exploited for free.

Worse, surveillance capitalism’s business is making guesses about you, so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:

  1. Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
  2. Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
  3. The entrails of surveillance capitalism are fully infected with fraud and malware.
  4. Surveillance capitalism is also quite satisfied to soak up to 97% of an advertising spend before an ad’s publisher gets its 3% for pushing an ad at you.

Trying to get in on that business is an awful proposition.

Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?

Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)

And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?

What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.

It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.

But it’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The matter is when, not if. The whole thing is too absurd, corrupt, complex and annoying to keep living forever.

So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting better human powers to work.

If we’re going to obsess over personal data, let’s look instead toward ways to regulate or control how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.


How would you feel if you had been told in the early days of the Web that in the year 2018 you would still need logins and passwords for damned near everything?

Your faith in the tech world would be deeply shaken, no?

And what if you had been told that in 2018 logins and passwords would now be required for all kinds of other shit, from applications on mobile devices to subscription services on TV?

Or worse, that in 2018 you would be robo-logged-out of sites and services frequently, whether you were just there or not, for security purposes — and that logging back in would often require “two-factor” authentication, meaning you have to do even more work to log in to something, and that (also for security purposes) every password you use would not only have to be different, but impossible for any human to remember, especially when the average connected human now has hundreds of login/password combinations, many of which change constantly?

Would you not imagine this to be a dystopian hell?

Welcome to now, folks. Our frog is so fully boiled that it looks like Brunswick stew.

Can we please fix this?

Please, please, please, tech world: move getting rid of logins and passwords to the top of your punch list, ahead of AI, ML, IoT, 5G, smart dust, driverless cars and going to Mars.

Your home planet thanks you.

[Addendum…] Early responses to this post suggest that I’m talking about fixing the problem at the superficial level of effects. So, to clarify, logins and passwords are an effect, and not a cause of anything other than inconvenience and annoyance. The causes are design and tech choices made long ago—choices that can be changed.

Not only that, but many people have been working on solving the identity side of this thing for many years. In fact we’re about to have our 27th Internet Identity Workshop in October at the Computer History Museum. If you want to work on this with other people who are doing the same, register here.

 

In The Big Short, investor Michael Burry says “One hallmark of mania is the rapid rise in the incidence and complexity of fraud.” (Burry shorted the mania- and fraud-filled subprime mortgage market and made a mint in the process.)

One would be equally smart to bet against the mania for the tracking-based form of advertising called adtech.

Since tracking people took off in the late ’00s, adtech has grown to become a four-dimensional shell game played by hundreds (or, if you include martech, thousands) of companies, none of which can see the whole mess, or can control the fraud, malware and other forms of bad acting that thrive in the midst of it.

And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.

Without adtech, the EU’s GDPR (General Data Protection Regulation) would never have happened. But the GDPR did happen, and as a result websites all over the world are suddenly posting notices about their changed privacy policies, use of cookies, and opt-in choices for “relevant” or “interest-based” (translation: tracking-based) advertising. Email lists are doing the same kinds of things.

“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.

Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”

Of course what I just said greatly simplifies what the GDPR actually utters, in bureaucratic legalese. The GDPR is also full of loopholes only snakes can thread; but the spirit of the law is clear, and the snakes will be easy to shame, even if they don’t get fined. (And legitimate interest, an actual loophole in the GDPR, may prove hard to claim.)

Toward the aftermath, the main question is What will be left of advertising—and what it supports—after the adtech bubble pops?

Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.

First, advertising:

    1. Advertising isn’t personal, and doesn’t have to be. In fact, knowing it’s not personal is an advantage for advertisers. Consumers don’t wonder what the hell an ad is doing where it is, who put it there, or why.
    2. Advertising makes brands. Nearly all the brands you know were burned into your brain by advertising. In fact the term branding was borrowed by advertising from the cattle business. (Specifically by Procter and Gamble in the early 1930s.)
    3. Advertising carries an economic signal. Meaning that it shows a company can afford to advertise. Tracking-based advertising can’t do that. (For more on this, read Don Marti, starting here.)
    4. Advertising sponsors media, and those paid by media. All the big pro sports salaries are paid by advertising that sponsors game broadcasts. For lack of sponsorship, media—especially publishers—are hurting. @WaltMossberg learned why on a conference stage when an ad agency guy said the agency’s ads wouldn’t sponsor Walt’s new publication, recode. Walt: “I asked him if that meant he’d be placing ads on our fledgling site. He said yes, he’d do that for a little while. And then, after the cookies he placed on Recode helped him to track our desirable audience around the web, his agency would begin removing the ads and placing them on cheaper sites our readers also happened to visit. In other words, our quality journalism was, to him, nothing more than a lead generator for target-rich readers, and would ultimately benefit sites that might care less about quality.” With friends like that, who needs enemies?

Second, Adtech:

    1. Adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media, and it causes negative associations with brands. Consider this: perhaps a $trillion or more has been spent on adtech, and not one brand known to the world has been made by it. (Bob Hoffman, aka the Ad Contrarian, is required reading on this.)
    2. Adtech wants to be personal. That’s why it’s tracking-based. Though its enthusiasts call it “interest-based,” “relevant” and other harmless-sounding euphemisms, it relies on tracking people. In fact it can’t exist without tracking people. (Note: while all adtech is programmatic, not all programmatic advertising is adtech. In other words, programmatic advertising doesn’t have to be based on tracking people. Same goes for interactive. Programmatic and interactive advertising will both survive the adtech crash.)
    3. Adtech spies on people and violates their privacy. By design. Never mind that you and your browser or app are anonymized. The ads are still for your eyeballs, and correlations can be made.
    4. Adtech is full of fraud and a vector for malware. @ACFou is required reading on this.
    5. Adtech incentivizes publications to prioritize “content generation” over journalism. More here and here.
    6. Intermediators take most of what’s spent on adtech. Bob Hoffman does a great job showing how as little as 3¢ of a dollar spent on adtech actually makes an “impression.” The most generous number I’ve seen is 12¢. (When I was in the ad agency business, back in the last millennium, clients complained about our 15% take. Media our clients bought got 85%.)
    7. Adtech gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.
    8. Adtech incentivizes hate speech and tribalism by giving both—and the platforms that host them—a business model too.
    9. Adtech relies on misdirection. See, adtech looks like advertising, and is called advertising; but it’s really direct marketing, which is descended from junk mail and a cousin of spam. Because of that misdirection, brands think they’re placing ads in media, while the systems they hire are actually chasing eyeballs to anywhere. (Pro tip: if somebody says every ad needs to “perform,” or that the purpose of advertising is “to get the right message to the right person at the right time,” they’re actually talking about direct marketing, not advertising. For more on this, read Rethinking John Wanamaker.)
    10. Compared to advertising, adtech is ugly. Look up best ads of all time. One of the top results is for the American Advertising Awards. The latest winners they’ve posted are the Best in Show for 2016. Tops there is an Allstate “Interactive/Online” ad pranking a couple at a ball game. Over-exposure of their lives online leads that well-branded “Mayhem” guy to invade and trash their house. In other words, it’s a brand ad about online surveillance.
    11. Adtech has caused the largest boycott in human history. By more than a year ago, 1.7+ billion human beings were already blocking ads online.

To get a sense of what will be left of adtech after GDPR Sunrise Day, start by reading a pair of articles in AdExchanger by @JamesHercher. The first reports on the Transparency and Consent Framework published by IAB Europe. The second reports on how Google is pretty much ignoring that framework and going direct with their own way of obtaining consent to tracking:

Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.

Specifically,

The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.

Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.

Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”

And that’s not all:

Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.

Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.

This is also Google’s way of scraping off GDPR liability on publishers.

Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Consent Management Framework. Writes James,

The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.

The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…

Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Consent and Transparency Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.

In other words, good luck with that.

[Later, 26 May…] Well, Google caved on this one; apparently it is coming to IAB Europe’s table after all.

[And on 30 May…] Axel Springer is also going its own way.

One big upside for IAB Europe is that its Framework contains open source code and an SDK. For a full unpacking of what’s there see the Consent String and Vendor List Format: Transparency & Consent Framework on GitHub and IAB Europe’s own FAQ. More about this shortly.
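To make that open source code a bit more concrete: the Framework’s consent string is just a base64url-encoded run of bit fields. Here is a rough Python sketch of the leading fields in the v1 format, with a round trip to show how compact the whole thing is. Field names and widths are paraphrased from memory of the spec; treat the GitHub repo linked above as the authoritative layout.

```python
import base64

# Leading bit fields of a TCF v1 consent string (sketch; see the
# Transparency & Consent Framework repo for the authoritative layout).
FIELDS = [("version", 6), ("created", 36), ("last_updated", 36),
          ("cmp_id", 12), ("cmp_version", 12), ("consent_screen", 6),
          ("consent_language", 12), ("vendor_list_version", 12),
          ("purposes_allowed", 24)]

def encode(values):
    """Pack the leading fields into a base64url consent-string prefix."""
    bits = "".join(format(values[name], f"0{width}b") for name, width in FIELDS)
    bits += "0" * (-len(bits) % 8)  # pad to a whole number of bytes
    raw = int(bits, 2).to_bytes(len(bits) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode(s):
    """Unpack the leading fields from a base64url consent-string prefix."""
    raw = base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
    bits = "".join(format(b, "08b") for b in raw)
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = int(bits[pos:pos + width], 2)
        pos += width
    return out

fields = {"version": 1, "created": 0, "last_updated": 0, "cmp_id": 7,
          "cmp_version": 1, "consent_screen": 3, "consent_language": 0,
          "vendor_list_version": 8, "purposes_allowed": 0b111}
print(decode(encode(fields)) == fields)  # prints True
```

The point of the exercise: a visitor’s entire consent state rides along in a few dozen characters, which is why the question of *who* gets to read and pass that string is the whole fight.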

Meanwhile, the adtech business surely knows the sky is falling. The main question is how far.

One possibility is 95% of the way to zero. That outcome is suggested by results Dr. Johnny Ryan (@JohnnyRyan) published in PageFair last October. Here’s the most revealing graphic in the bunch:

Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.

“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’ that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”

Pretty cynical, no?

The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.

The first clues that fully matter are the ones we, the individuals they’ve been calling “users,” will deliver. Look for details on that in another post.

Meanwhile:

Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.

Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.

Google can say it already has consent, and that it also has a legitimate interest (one of the six “lawful bases” for tracking) in the personal data it harvests from us.

Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.

Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)

Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.

For more on how that will work, read The Intention Economy: When Customers Take Charge. Six years after Harvard Business Review Press published that book, what it says will start to come true. Thank you, GDPR.

Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.

What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)

Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.

When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.

Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that advertisers can chase anywhere. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:

  1. An easy fix for a broken advertising system (12 October 2017 in Medium and in my blog)
  2. Without aligning incentives, we can’t kill fake news or save journalism (15 September 2017 in Medium)
  3. Let’s get some things straight about publishing and advertising (9 September 2017 and the same day in Medium)
  4. Good news for publishers and advertisers fearing the GDPR (3 September 2017 in ProjectVRM and 7 October in Medium)
  5. Markets are about more than marketing (2 September 2017 in Medium)
  6. Publishers’ and advertisers’ rights end at a browser’s front door (17 June 2017 in Medium). It updates one of the 2015 blog posts below.
  7. How to plug the publishing revenue drain (9 June 2017 in Medium). It expands on the opening (#publishing) section of my Daily Tab for that date.
  8. How True Advertising Can Save Journalism From Drowning in a Sea of Content (22 January 2017 in Medium and 26 January 2017 in my blog)
  9. It’s People vs. Advertising, not Publishers vs. Adblockers (26 August 2016 in ProjectVRM and 27 August 2016 in Medium)
  10. Why #NoStalking is a good deal for publishers (11 May 2016, and in Medium)
  11. How customers can debug business with one line of code (19 April 2016 in ProjectVRM and in Medium)
  12. An invitation to settle matters with @Forbes, @Wired and other publishers (15 April 2016 and in Medium)
  13. TV Viewers to Madison Avenue: Please quit driving drunk on digital (14 April 2016, and in Medium)
  14. The End of Internet Advertising as We’ve Known It (11 December 2015 in MIT Technology Review)
  15. Ad Blockers and the Next Chapter of the Internet (5 November in Harvard Business Review)
  16. How #adblocking matures from #NoAds to #SafeAds (22 October 2015)
  17. Helping publishers and advertisers move past the ad blockade (11 October 2015 on the ProjectVRM blog)
  18. Beyond ad blocking — the biggest boycott in human history (28 September 2015)
  19. A way to peace in the adblock war (21 September 2015, on the ProjectVRM blog)
  20. How adtech, not ad blocking, breaks the social contract (23 September 2015)
  21. If marketing listened to markets, they’d hear what ad blocking is telling them (8 September 2015)
  22. Apple’s content blocking is chemo for the cancer of adtech (26 August 2015)
  23. Separating advertising’s wheat and chaff (12 August 2015, and on 2 July 2016 in an updated version in Medium)
  24. Thoughts on tracking based advertising (18 February 2015)
  25. On marketing’s terminal addiction to data fracking and bad guesswork (10 January 2015)
  26. Why to avoid advertising as a business model (25 June 2014, re-running Open Letter to Meg Whitman, which ran on 15 October 2000 in my old blog)
  27. What the ad biz needs is to exorcize direct marketing (6 October 2013)
  28. Bringing manners to marketing (12 January 2013 in Customer Commons)
  29. What could/should advertising look like in 2020, and what do we need to do now for this future? (Wharton’s Future of Advertising project, 13 November 2012)
  30. An olive branch to advertising (12 September 2012, on the ProjectVRM blog)

I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)

Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.
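For readers who want the mechanics: “third party” simply means the host setting the cookie belongs to a different site than the page you’re visiting. A toy Python sketch of that test follows. Real browsers consult the Public Suffix List to decide what counts as a “site”; the two-label heuristic here is an assumption for illustration only, and the hostnames are made up.

```python
def site(host):
    """Crude registrable-domain guess: the last two labels of a hostname.
    (Browsers use the Public Suffix List; this is a toy heuristic.)"""
    return ".".join(host.lower().split(".")[-2:])

def is_third_party(resource_host, page_host):
    """True when a cookie set by resource_host would be 'third party'
    relative to the page the visitor is actually reading."""
    return site(resource_host) != site(page_host)

print(is_third_party("ads.tracker.example", "news.example.com"))  # True
print(is_third_party("cdn.example.com", "www.example.com"))       # False
```

Blocking third-party cookies means refusing the first kind while keeping the second, which is why it stops most cross-site tracking without breaking the sites you actually chose to visit.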
