Twitter down a hole

So I’m taking live notes at Blockchain in Journalism: Promise and Practice, happening at the Brown Institute for Media Innovation, in the Tow Center for Digital Journalism at the Columbia School of Journalism, to name the four Russian dolls whose innards I’m inhabiting here.

In advance of this gathering, Linux Journal, which I serve as editor-in-chief (but which I can’t use as a blog, meaning editing it live is maybe do-able but not easy), published When the problem is the story. I wanted it up, on the outside chance that stories themselves, as journalism’s stock-in-trade, might get discussed. Because stories are a Hard Problem: maybe one we can’t solve.

The panels are interesting but so far tell me nothing I didn’t already know, though some of it is new to me at the jargon level.

Okay, here comes a new one: “token-curated registries,” aka TCRs. Mike Goldin of AdChain is talking about those now. Looking him up. Links: Token Curated Registries 1.0 and #18 Mike Goldin, AdChain: Token-Curated Registries, An Emerging Cryptoeconomic Primitive.

Observation: blockchain is conceptually opaque, in ways the Internet (the way everything is connected) and the Web (a way to publish on the Internet) are not.

Not quite technically speaking, a blockchain is a distributed way of recording data in duplicate. Or something close enough to that. (Let’s not argue it.) What makes blockchain hard to grok is the “distributed” part. What it means is an ever-expanding copy of the same record accumulates on many different computers distributed everywhere. Including yours. Your computer is going to have a copy of a blockchain, or many blockchains, for the good of the world—or the parts of the world that could use a distributed way of keeping an immutable record of whatever. See what I mean? (Yes and no are equally good answers to that question.)
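To make the “distributed, duplicated record” idea slightly more concrete, here is a minimal sketch in Python of the one mechanical trick underneath it all: each record carries the hash of the record before it, so any computer holding a copy can independently verify that nothing upstream has been altered. (This illustrates only the linking idea, not how Bitcoin or any production blockchain actually works; there is no consensus, mining, or network here, and all names are mine.)

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block commits to its data plus the hash of the previous block."""
    body = {"data": data, "prev_hash": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Any node holding a copy can re-check every link independently."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False
    return True

# Build a tiny chain: each block commits to everything before it.
genesis = make_block("genesis", "0" * 64)
second = make_block("some record", genesis["hash"])

print(chain_is_valid([genesis, second]))  # True

# Tamper with the earlier block and the link breaks everywhere at once:
genesis["hash"] = "deadbeef"
print(chain_is_valid([genesis, second]))  # False
```

The point of the sketch: because every copy of the record can be re-verified from the hashes alone, no single computer (including yours) has to be trusted.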

Mike Goldin just said that understanding blockchain is as big a cognitive leap as it took to grok the Internet way back when. Not so. Understanding blockchain is a shit-ton harder than understanding the Internet.

“Identity procreator type tool” just got uttered. My wife, who is not very technical but knows blockchain better than I do, just made two fists and whispered “Yes!” I believe @JarrodDicker just uttered it.

RadioTopia just got some love from Manoush Zomorodi of ZigZag.

So let’s get to the title of this post.

Normally I’d be tweeting this, but right now I can’t. Nor can I write about it in Medium. Both are closed to me, because Twitter hates my @dsearls login, for reasons unknown, and my login to Medium uses my Twitter handle.


When I tried to trouble-shoot my eviction from Twitter this morning, I went to the trouble of creating a new password, alas without help from Dashlane, my password manager, which for some reason couldn’t generate a new one for me. Dunno why.

Deeper background: I’m active on four different Twitter accounts, spread across four browsers. I tweet as myself on Chrome, and as @VRM, @CustomerCommons and @Cluetrain on the three other browsers. The latter three are ones where multiple people can also post.

(Yes, I know there are ways to post as different entities on single browsers or apps, but being different entities on different browsers is easier for me. Or was until this morning.)

So I decided to try getting onto Twitter in one of the other browsers. I logged out of @VRM on Firefox, failed to log in as myself, made up a new password through Twitter’s password-creation routine (because Dashlane couldn’t help on Firefox either), and wrote the new password down on a sticky.

Then, once I got @dsearls working on Firefox, I logged out, and tried to log in again as @vrm there. Twitter didn’t like that login and made me create a new password for that account too, again without Dashlane’s help. Now I had two passwords, for two accounts, on one sticky.

Then I got on the subway and came down to Columbia, ready to tweet about #BlockchainJournalism from the audience at the Tow Center. But Twitter on Chrome wouldn’t let me in. Meanwhile, the new password was still on a sticky back at my apartment, and not remembered by Firefox. So I thought, hey, I’ll just create a new password again, now with Dashlane’s help. But I got stopped partway, with this response from Twitter when I clicked on the new-password link: …

This kind of experience is why I posted Please let’s kill logins and passwords back in August, and the sentiment stands.


So now that I’m experiencing life without Twitter, on which much of journalism utterly depends, I’m beginning to think about how we’ll all work once Twitter is gone—either completely or just to hell. Also about my own dependence on it. And about how having Twitter as a constant steam valve has bled off energies I once devoted to doing full-force journalism. Or just to blogging. Such as now, here, when I can’t use Twitter.

A difference: tweets may persist somewhere, but they’re the journalistic equivalent of snow falling on water. Blog posts tend to persist in a findable form for as long as their publisher maintains their archive.

Interesting fact: back in the early ’00s, when I was kinda big in the (admittedly small) blogging world, I had many thousands of readers every day. Most of those subscribed to my RSS feed. Then, in ’06, Twitter and Facebook started getting big, most bloggers moved to those platforms, and readership of my own blog dropped eventually to dozens per day. So I got active on Twitter, where I now have 24.4k followers. But hey, so does the average parking space.

I guess where I’m going is toward where Hossein Derakhshan (@h0d3r) has been for some time, with The Web We Have to Save. That Web is ours, not Twitter’s or Facebook’s or any platform’s. (This is also what @DWeinberger and I said in the #NewClues addendum to The Cluetrain Manifesto back in ’15.) Journalism, or whatever it’s becoming, is far more at home there than in any silo, no matter how useful that silo may be.



We live in two worlds now: the natural one where we have bodies that obey the laws of gravity and space/time, and the virtual one where there is no gravity or distance (though there is time).

In other words, we are now digital as well as physical beings, and this is new to a human experience where, so far, we are examined and manipulated like laboratory animals by giant entities that are out of everybody’s control—including theirs.

The collateral effects are countless and boundless.

Take journalism, for example. That’s what I did in a TEDx talk I gave last month in Santa Barbara:

I next visited several adjacent territories with a collection of brilliant folk at the Ostrom Workshop on Smart Cities. (Which was live-streamed, but I’m not sure it’s archived yet. Need to check.)

Among those folk was Brett Frischmann, whose canonical work on infrastructure I covered here, and who in Re-Engineering Humanity (with Evan Selinger) explains exactly how giants in the digital infrastructure business are hacking the shit out of us—a topic I also visit in Engineers vs. Re-Engineering (my August editorial in Linux Journal).

Now also comes Bruce Schneier, with his perfectly titled book Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World, which Farhad Manjoo in The New York Times sources in A Future Where Everything Becomes a Computer Is as Creepy as You Feared. Pull-quote: “In our government-can’t-do-anything-ever society, I don’t see any reining in of the corporate trends.”

In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, a monumental work due out in January (and for which I’ve seen some advance galleys) Shoshana Zuboff makes both cases (and several more) at impressive length and depth.

Privacy plays in all of these, because we don’t have it yet in the digital world. Or not much of it, anyway.

In reverse chronological order, here’s just some of what I’ve said on the topic:

So here we are: naked in the virtual world, just like we were in the natural one before we invented clothing and shelter.

And that’s the challenge: to equip ourselves to live private and safe lives, and not just public and endangered ones, in our new virtual world.

Some of us have taken up that challenge too: with ProjectVRM, with Customer Commons, and with allied efforts listed here.

And I’m optimistic about our prospects.

I’ll also be detailing that optimism in the midst of a speech titled “Why adtech sucks and needs to be killed” next Wednesday (October 17th) at An Evening with Advertising Heretics in NYC. Being at the Anne L. Bernstein Theater on West 50th, it’s my off-Broadway debut. The price is a whopping $10.



fruit thought

If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?

Well, no.

Yet there is lately a widespread urge to claim personal data as personal property, and to create commodity markets for personal data, so people can start making money by selling or otherwise monetizing their own.

ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why the VRM Developments Work page of its wiki has a heading called Markets for Personal Data. Listed there are:

So: respect.

Yet, while I salute these efforts’ respect for individuals, and their righteous urges to right the wrongs of wanton and rude harvesting of personal data from approximately everybody, I also think there are problems with this approach. And, since I’ve been asked lately to spell out those problems, I shall. Here goes.

The first problem is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. Here’s a table that may help (borrowed from this Linux Journal column):

| | Excludable: yes | Excludable: no |
| --- | --- | --- |
| Rivalrous: yes | Private good: e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers | Common-pool resource: e.g., sea, rivers, forests, their edible inhabitants and other useful contents |
| Rivalrous: no | Club good: e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works | Public good: e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting |


The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation.

Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is essential in a very human way.

The third problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data.

Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, even though it’s combustible.

Put another way, why would you want to make almost nothing (the likely price) selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist?

What makes us fully powerful as human beings is our ability to generate and share ideas and other combustible public goods, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.

Important note: I’m not knocking labor here. Most of us have to work for wages as parts of industrial machines, or as independent actors. I do too. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.

Many years ago JP Rangaswami (@jobsworth) and I made a distinction between making money with something and because of something. It’s a helpful one.

Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage.

Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.

The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.

Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is little interest in buying what can be had for free.

Worse, surveillance capitalism’s business is making guesses about you so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:

  1. Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
  2. Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
  3. The entrails of surveillance capitalism are fully infected with fraud and malware.
  4. Surveillance capitalism is also quite satisfied to soak up as much as 97% of an advertising spend before an ad’s publisher gets its 3% for pushing an ad at you.

Trying to get in on that business is just an awful proposition.

Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?

Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)

And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?

What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.

It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.

It’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The matter is when, not if. It’s too absurd, corrupt, complex and annoying to live forever.

So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting our most human of powers to work.

The most basic form of agency in the digital world is control over how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.

Bonus links:





How would you feel if you had been told in the early days of the Web that in the year 2018 you would still need logins and passwords for damned near everything?

Your faith in the tech world would be deeply shaken, no?

And what if you had been told that in 2018 logins and passwords would now be required for all kinds of other shit, from applications on mobile devices to subscription services on TV?

Or worse, that in 2018 you would be robo-logged-out of sites and services frequently, whether you were just there or not, for security purposes — and that logging back in would often require “two-factor” authentication, meaning you have to do even more work to log in to something, and that (also for security purposes) every password you use would not only have to be different, but impossible for any human to remember, especially now that the average connected human has hundreds of login/password combinations, many of which change constantly?

Would you not imagine this to be a dystopian hell?

Welcome to now, folks. Our frog is so fully boiled that it looks like Brunswick stew.

Can we please fix this?

Please, please, please, tech world: move getting rid of logins and passwords to the top of your punch list, ahead of AI, ML, IoT, 5G, smart dust, driverless cars and going to Mars.

Your home planet thanks you.

[Addendum…] Early responses to this post suggest that I’m talking about fixing the problem at the superficial level of effects. So, to clarify, logins and passwords are an effect, and not a cause of anything other than inconvenience and annoyance. The causes are design and tech choices made long ago—choices that can be changed.

Not only that, but many people have been working on solving the identity side of this thing for many years. In fact we’re about to have our 27th Internet Identity Workshop in October at the Computer History Museum. If you want to work on this with other people who are doing the same, register here.


A few weeks ago, while our car honked its way through dense traffic in Delhi, I imagined an Onion headline: American Visitor Seeks To Explain What He’ll Never Understand About India.

By the norms of traffic laws in countries where people tend to obey them, vehicular and pedestrian traffic in the dense parts of Indian cities appears to be lawless. People do seem to go where they want, individually and collectively, oblivious to danger.

Yet there is clearly a system here, in the sense that one’s body has a circulatory system. Or a nervous system. Meaning it’s full of almost random stuff at the cellular traffic level, but also organic in a literal way. It actually works. Somehow. Some way. Or ways. Many of them. Alone and together. So yes, I don’t understand it and probably never will, but it does work.

For example, a four-lane divided highway will have traffic moving constantly, occasionally in both directions on both sides. It will include humans, dogs, cattle, rickshaws and bikes, some laden with bags of cargo that look like they belong in a truck, in addition to cars, trucks and motorcycles, all packed together and honking constantly.

Keeping me from explaining, or even describing, any more than I just did, are the opening sentences of Saul Bellow’s Mr. Sammler’s Planet:

Shortly after dawn, or what would have been dawn in a normal sky, Mr. Artur Sammler with his bushy eye took in the books and papers of his West Side bedroom and suspected strongly that they were the wrong books, the wrong papers. In a way it did not matter to a man of seventy-plus, and at leisure. You had to be a crank to insist on being right. Being right was largely a matter of explanations. Intellectual man had become an explaining creature. Fathers to children, wives to husbands, lecturers to listeners, experts to laymen, colleagues to colleagues, doctors to patients, man to his own soul, explained. The roots of this, the causes of the other, the source of events, the history, the structure, the reasons why. For the most part it went in one ear out the other. The soul wanted what it wanted. It had its own natural knowledge. It sat unhappily on superstructures of explanation, poor bird, not knowing which way to fly.

So I will disclaim being right about a damn thing here. But I will share some links from some brilliant people, each worthy of respect, who think they are right about some stuff we maybe ought to care about; and each of whom have, in their own very separate ways, advice and warnings for us. Here ya go:

Each author weaves a different handbasket in which we might travel to hell, but all make interesting reading. All are also downbeat as hell.

My caution with readings that veer toward conspiracy (notably Martin’s) is one of the smartest things my smart wife ever said: “The problem with conspiracy theories is that they presume competence.”

So here’s what I’m thinking about every explanation of what’s going on in our still-new Digital Age: None of us has the whole story of what’s going on—and what’s going on may not be a story at all.

Likewise (or dislike-wise), I also think all generalizations, whatever they are, fail in the particulars, and that’s a feature of them. We best generalize when we know we risk being wrong in the details.  Reality wants wackiness in particulars. If you don’t find what’s wacky there, maybe you aren’t looking hard enough. Or believe too much in veracities.

Ed McCabe: “I have no use for rules. They only rule out the possibility of brilliant exceptions.”

We need to laugh. That means we need our ironies, our multiple meanings, our attentions misdirected, for the magic of life to work.

And life is magic. Pure misdirection, away from the facticity of non-existence.

Every laugh, every smile, is an ironic argument against the one non-ironic fact of life—and of everything—which is death. We all die. We all end. To “be” dead is not to be in a state of existence. It is not to be at all. Shakespeare was unimprovable on that point.

To some of us older people*, death isn’t a presence. It’s just the future absence of our selves in a world designed to discard everything with a noun, proper or not, eventually. Including the world itself. This is a feature, not a bug.

It’s also a feature among some of us to, as Gandhi said, “Live as if you were to die tomorrow. Learn as if you were to live forever”: always interested, always open to possibilities, always willing to vet what we at least think we know, always leaving the rest of existence to those better equipped to persist on the same mission. So I guess that’s my point here.

Basically it’s the same point as Bill Hicks’ “It’s just a ride.”

*I’m not old. I’ve just been young a long time. To obey Gandhi, you have to stay young. It’s the best way to learn. And perhaps to die as well.

Enforcing Data Protection: A Model for Risk-Based Supervision Using Responsive Regulatory Tools, a post by Dvara Research, summarizes Effective Enforcement of a Data Protection Regime, a deeply thought and researched paper by Beni Chugh (@BeniChugh), Malavika Raghavan (@teninthemorning), Nishanth Kumar (@beamboybeamboy) and Sansiddha Pani (@julupani). While it addresses proximal concerns in India, it provides useful guidance for data regulators everywhere.

An excerpt:

Any data protection regulator faces certain unique challenges. The ubiquitous collection and use of personal data by service providers in the modern economy creates a vast space for a regulator to oversee. Contraventions of a data protection regime may not immediately manifest and when they do, may not have a clear monetary or quantifiable harm. The enforcement perimeter is market-wide, so a future data protection authority will necessarily interface with other sectoral institutions.  In light of these challenges, we present a model for enforcement of a data protection regime based on risk-based supervision and the use of a range of responsive enforcement tools.

This forward-looking approach considers the potential for regulators to employ a range of softer tools before a breach to prevent it and after a breach to mitigate the effects. Depending on the seriousness of contraventions, the regulator can escalate up to harder enforcement actions. The departure from the focus on post-data breach sanctions (that currently dominate data protection regimes worldwide) is an attempt to consider how the regulatory community might act in coordination with entities processing data to minimise contraventions of the regime.

I hope European regulators are looking at this. Because, as I said in a headline to a post last month, without enforcement, the GDPR is a fail.

Bonus link from the IAPP (International Association of Privacy Professionals): When will we start seeing GDPR enforcement actions? We guess Feb. 22, 2019.

This is the situation at Newark Airport right now:

Those blobs are thunderstorms. The little racetrack in upstate New York is an inbound flight from Lisbon in a holding pattern.

Follow the link under that screen shot. Interesting to see, in close to real time, how flights on approach and departure dodge heavy weather.

I’ll be flying out of there in a few hours myself, to India, for the first time. Should be fun.

And the same goes for California’s AB-375 privacy bill.

The GDPR has been in force since May 25th, and it has done almost nothing to stop websites that make money from tracking-based advertising from participating in the tracking of readers. Instead, almost all we’ve seen so far are requests from websites for consent to keep doing what they’re doing.

Only worse. Because now when you click “Accept” under an interruptive banner saying the site’s “cookies and other technologies collect data to enhance your experience and personalize the content and advertising you see,” you’ve just consented to being spied on. And they’re covered. They can carry on with surveillance-as-usual.

Score: Adtech 1, privacy 0.

Or so it seems. So far.

Are there any examples of publications that aren’t participating in #adtech’s spy game? Besides Linux Journal?


In Chatbots were the next big thing: what happened?, Justin Lee (@justinleejw) nicely unpacks how chatbots were overhyped to begin with and continue to fail their Turing tests, especially since humans in nearly all cases would  rather talk to humans than to mechanical substitutes.

There’s also a bigger and more fundamental reason why bots still aren’t a big thing: we don’t have them. If we did, they’d be our robot assistants, going out to shop for us, to get things fixed, or to do whatever.

Why didn’t we get bots of our own?

I can pinpoint the exact time and place where bots of our own failed to happen, and all conversation and development went sideways, away from the vector that takes us to bots of our own (hashtag: #booo), and instead toward big companies doing more than ever to deal with us robotically, mostly to sell us shit.

The time was April 2016, and the place was Facebook’s F8 conference. It was on stage there that Mark Zuckerberg introduced “messenger bots”. He began,

Now that Messenger has scaled, we’re starting to develop ecosystems around it. And the first thing we’re doing is exploring how you can all communicate with businesses.

Note his use of the second person you. He’s speaking to audience members as individual human beings. He continued,

You probably interact with dozens of businesses every day. And some of them are probably really meaningful to you. But I’ve never met anyone who likes calling a business. And no one wants to have to install a new app for every service or business they want to interact with. So we think there’s gotta be a better way to do this.

We think you should be able to message a business the same way you message a friend. You should get a quick response, and it shouldn’t take your full attention, like a phone call would. And you shouldn’t have to install a new app.

This promised pure VRM: a way for a customer to relate to a vendor. For example, to issue a service request, or to intentcast for bids on a new washing machine or a car.
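For a rough idea of what an intentcast might look like on the wire, here is a sketch in Python. There is no standard intentcast format being quoted here; every field name below is invented purely for illustration of the idea that the customer broadcasts demand, on the customer’s own terms, and vendors respond with bids.

```python
import json

# Hypothetical shape of an intentcast. All field names are invented
# for illustration; no real VRM wire format is being quoted.
intentcast = {
    "type": "intentcast",
    "from": "customer-123",                  # pseudonymous customer identifier
    "want": "washing machine",
    "requirements": {"capacity_kg": 8, "max_price_usd": 700},
    "respond_by": "2018-11-01",
    "terms": "no tracking; bids only",       # the customer sets the terms
}

# Serialized, this is what a vendor's bid-matching service might receive:
print(json.dumps(intentcast, indent=2))
```

The design point is the direction of flow: demand notifies supply, rather than supply guessing at demand through surveillance.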

So at this point Mark seemed to be talking about a new communication channel that could relieve the typical pains of being a customer while also opening the floodgates of demand notifying supply when it’s ready to buy. Now here’s where it goes sideways:

So today we’re launching Messenger Platform. So you can build bots for Messenger.

By “you” Zuck now means developers. He continues,

And it’s a simple platform, powered by artificial intelligence, so you can build natural language services to communicate directly with people. So let’s take a look.

See the shift there? Up until that last sentence, he seemed to be promising something for people, for customers, for you and me: a better way to deal with business. But alas, it’s just shit:

CNN, for example, is going to be able to send you a daily digest of stories, right into messenger. And the more you use it, the more personalized it will get. And if you want to learn more about a specific topic, say a Supreme Court nomination or the zika virus, you just send a message and it will send you that information.

And right there the opportunity was lost, along with all the promise up there at the top of the hype cycle. Note how Aaron Batalion uses the word “reach” in ‘Bot’ is the wrong name…and why people who think it’s silly are wrong, written not long after Zuck’s F8 speech: “In a micro app world, you build one experience on the Facebook platform and reach 1B people.”

What we needed, and still need, is for reach to go the other way: a standard bot design that would let lots of developers give us better ways to reach businesses. Today lots of developers compete to give us better ways to use the standards-based tools we call browsers and email clients. The same should be true of bots.

In Market intelligence that flows both ways, I describe one such approach, based on open source code, that doesn’t require locating your soul inside a giant personal data extraction business.

Here’s a diagram that shows how one person (me in this case) can relate to a company whose moccasins he owns:


The moccasins have their own pico: a cloud on the Net for a thing in the physical world: one that becomes a standard-issue conduit between customer and company.

A pico of this type might come into being when the customer assigns a QR code to the moccasins and scans it. The customer and company can then share records about the product, or notify each other when there’s a problem, a bargain on a new pair, or whatever. It’s tabula rasa: wide open.
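For flavor only, here is a toy sketch in Python of the conduit idea: a shared object that both customer and company can write events to and watch. Real picos (via Wrangler) use event channels and rulesets with their own APIs, documented by Phil Windley; the class and method names below are invented for this illustration.

```python
# Conceptual sketch only: real picos use event channels and rulesets,
# not this invented Python class. All names here are hypothetical.
class ProductPico:
    """A tiny stand-in for a 'cloud for a thing': a shared conduit
    that both the customer and the company can read and write."""

    def __init__(self, product_id):
        self.product_id = product_id
        self.events = []        # shared record, visible to both parties
        self.subscribers = []   # callbacks registered by either party

    def subscribe(self, callback):
        """Either party registers to hear about new events."""
        self.subscribers.append(callback)

    def raise_event(self, source, event_type, payload):
        """Either party appends to the shared record; all watchers hear it."""
        event = {"source": source, "type": event_type, "payload": payload}
        self.events.append(event)
        for notify in self.subscribers:
            notify(event)

# The customer scans a QR code, which binds the pico to the product...
pico = ProductPico("moccasins-1234")
pico.subscribe(lambda e: print("company sees:", e["type"]))
pico.raise_event("customer", "service_request", {"issue": "sole wearing thin"})
```

The design choice worth noticing: neither party owns the conduit, so neither party’s silo is in the middle.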

The current code for this is called Wrangler. It’s open source and on GitHub. For the curious, Phil Windley explains how picos work in Reactive Programming With Picos.

It’s not bots yet, but it’s a helluva lot better place to start re-thinking and re-developing what bots should have been in the first place. Let’s start developing there, and not inside giant silos.

[Note: the image at the top is from this 2014 video by Capgemini explaining #VRM. Maybe now that Facebook is doing face-plants in the face of the GDPR, and privacy is finally a thing, the time is ripe, not only for #booos, but for the rest of the #VRM portfolio of unfinished and un-begun work on the personal side.]

This is what greets me when I go to the Washington Post site from here in Germany:

Washington Post greeting for Europeans

So you can see it too, wherever you are, here’s the URL I’m redirected to on Chrome, on Firefox, on Safari and on Brave. All look the same except for Brave, which shows a blank page.

Note that last item in the Premium EU Subscription column: “No on-site advertising or third-party tracking.”

Ponder for a moment how the Sunday (or any) print edition of the Post would look with no on-paper advertising. It would be woefully thin and kind of worthless-looking. Two more value-adds for advertising in the print edition:

  1. It doesn’t track readers, which is the sad and broken norm for newspapers and magazines in the online world—a norm now essentially outlawed by the GDPR, and surely the reason the Post is running this offer.
  2. It sponsors the Post. Tracking-based advertising, known in the trade as adtech, doesn’t sponsor anything. Instead it hunts down eyeballs its spyware already knows about, no matter where they go. In other words, if adtech can shoot a Washington Post reader between the eyes at the Skeevy Lake Tribune, and the Skeevy is cheaper, it might rather hit the reader over there.

So here’s the message I want the Post to hear from me, and from every reader who values what they do:

That’s what I get from the print edition, and that’s what I want from the online edition as well.

So I want two things here.

One is an answer to this question: Are ANY publishers in the online world selling old-fashioned ads that aren’t based on tracking and therefore worth more than the tracking kind? (And are GDPR-compliant as well, since the ads aren’t aimed by collected personal data.)

The other is to subscribe to the Post as soon as they show me they’re willing to do what I ask: give me those real ads again. And stop assuming that all ads need to be the tracking-based kind.

Thanks in advance.

