
In a press release, Amazon explained why it backed out of its plan to open a new headquarters in New York City:

For Amazon, the commitment to build a new headquarters requires positive, collaborative relationships with state and local elected officials who will be supportive over the long-term. While polls show that 70% of New Yorkers support our plans and investment, a number of state and local politicians have made it clear that they oppose our presence and will not work with us to build the type of relationships that are required to go forward with the project we and many others envisioned in Long Island City.

So, even if the economics were good, the politics were bad.

The hmm for me is why not New Jersey? Given the enormous economic and political overhead of operating in New York, I’m wondering why Amazon didn’t consider New Jersey first. Or if it’s thinking about it now.

New Jersey is cheaper and (so I gather) friendlier, at least tax-wise. It also has the country’s largest port (one that used to be in New York, bristling Manhattan’s shoreline with piers and wharves, making it look like a giant paramecium) and is a massive warehousing and freight forwarding hub. In fact Amazon already has a bunch of facilities there (perhaps including its own little port on Arthur Kill). I believe there are also many more places to build on the New Jersey side. (The photo above, shot on approach to Newark Airport, looks at New York across some of those buildable areas.)

And maybe that’s the plan anyway, without the fanfare.

As it happens, I’m in the midst of reading Robert Caro‘s The Power Broker: Robert Moses and the Fall of New York. (Which is massive. There’s a nice summary in The Guardian here.) This helps me appreciate the power of urban planning, and how thoughtful and steel-boned opposition to some of it can be fully useful. One example of that is Jane Jacobs’ thwarting of Moses’ plan to run a freeway through Greenwich Village. He had earlier done the same through The Bronx, with the Cross Bronx Expressway. While that road today is an essential stretch of the northeast transport corridor, at the time it was fully destructive to urban life in that part of the city—and in many ways still is.

So I try to see both sides of an issue such as this. What’s constructive and what’s destructive in urban planning are always hard to pull apart.

For an example close to home, I often wonder whether it’s good that Fort Lee is now almost nothing but high-rises. This is the town my grandfather helped build (he was the head carpenter for D.W. Griffith when Fort Lee was the first Hollywood), where my father grew up climbing the Palisades for fun, and where he later put his skills to work as a cable rigger, helping build the George Washington Bridge. The Victorian house Grandpa built for his family on Hoyt Avenue, and where my family lived when I was born, stood about as close to a giant new glass box called The Modern as I am from the kitchen in the apartment where I’m writing this, a few blocks from The Bridge on the other side of the Hudson. The site is paved over now by a road called Bruce Reynolds Boulevard. Remember Bridgegate? That happened right where our family home stood, in a pleasant neighborhood of which nothing remains.

Was the disappearance of that ‘hood a bad thing? Not by now, long after the neighborhood was erased and nearly everyone who lived there has died or long since moved on. Thousands more live there now than ever did when it was a grid of nice homes on quiet, tree-lined streets.

All urban developments are omelettes made of broken eggs. If you’re an egg, you’ve got reason to complain. If you’re a cook, you’d better make a damn fine omelette.

This is a game for our time. I play it on New York and Boston subways, but you can play it anywhere everybody in a crowd is staring at their personal rectangle.

I call it Rectangle Bingo.

Here’s how you play. At the moment when everyone is staring down at their personal rectangle, you shoot a pano of the whole scene. Nobody will see you because they’re not present: they’re absorbed in rectangular worlds outside their present space/time.

Then you post your pano somewhere search engines will find it, and hashtag it #RectangleBingo.

Then, together, we’ll think up some way to recognize winners.



I want to point to three great posts.

First is Larry Lessig‘s Podcasting and the Slow Democracy Movement. A pull quote:

The architecture of the podcast is the precise antidote for the flaws of the present. It is deep where now is shallow. It is insulated from ads where now is completely vulnerable. It is a chance for thinking and reflection; it has an attention span an order of magnitude greater than the Tweet. It is an opportunity for serious (and playful) engagement. It is healthy eating for a brain-scape that now gorges on fast food.

If 2016 was the Twitter election — fast food, empty calorie content driving blood pressure but little thinking — then 2020 must be the podcast election — nutrient-rich, from every political perspective. Not sound bites driven by algorithms, but reflective and engaged humans doing what humans still do best: thinking with empathy about ideals that could make us better — as humans, not ad-generating machines.

There is hope here. We need to feed it.

I found that through a Radio Open Source email pointing to the show’s latest podcast, The New Normal. I haven’t heard that one yet; but I am eager to, because I suspect the “new normal” may be neither. And, as I might not with Twitter, I am forgoing judgment until I do hear it. The host is Chris Lydon, a friend whose podcast pioneering owes much to collaboration with Dave Winer, who invented the form of RSS used by nearly all the world’s podcasters, and who wrote my third recommended post, Working Together, in 2019. That one is addressed to Chris and everyone else bringing tools and material to the barns we’re raising together. The title says it all, but read it anyway.

Work is how we feed the hope Larry talks about.


The original pioneer in space-based telephony isn’t @ElonMusk (though he deserves enormous credit for his work in the field, the latest example of which is SpaceX‘s 7,518-satellite Starlink network, which has been making news lately). It’s the people behind the Iridium satellite constellation, the most driven and notorious of whom was Ed Staiano.

Much has been written about Iridium’s history, and Ed’s role in driving its satellites into space, most of it negative toward Ed. But I’ve always thought that was at least partly unfair. Watching the flow of news about Iridium at the time it was moving from ground to sky, it was clear to me that Iridium would have remained on the ground if Ed wasn’t a tough bastard about making it fly.

My ad agency, Hodskins Simone & Searls, worked for Ed when he was at Motorola in pre-Iridium days. He was indeed a tough taskmaster: almost legendarily demanding and impatient with fools. But I never had a problem with him. And I believe it’s a testimony to Ed’s vision and persistence that Iridium is still up there, doing the job he wanted it to do, and paving the way for Elon and others (including @IridiumComm itself).

So hats off, Ed. Hope you’re doing well.


We live in two worlds now: the natural one where we have bodies that obey the laws of gravity and space/time, and the virtual one where there is no gravity or distance (though there is time).

In other words, we are now digital as well as physical beings, and this is new to human experience. So far in the digital world we are examined and manipulated like laboratory animals by giant entities that are out of everybody’s control—including theirs.

The collateral effects are countless and boundless.

Take journalism, for example. That’s what I did in a TEDx talk I gave last month in Santa Barbara:

I next visited several adjacent territories with a collection of brilliant folk at the Ostrom Workshop on Smart Cities. (Which was live-streamed, but I’m not sure is archived yet. Need to check.)

Among those folk was Brett Frischmann, whose canonical work on infrastructure I covered here, and who in Re-Engineering Humanity (with Evan Selinger) explains exactly how giants in the digital infrastructure business are hacking the shit out of us—a topic I also visit in Engineers vs. Re-Engineering (my August editorial in Linux Journal).

Now also comes Bruce Schneier, with his perfectly titled book Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World, which Farhad Manjoo in The New York Times sources in A Future Where Everything Becomes a Computer Is as Creepy as You Feared. Pull-quote: “In our government-can’t-do-anything-ever society, I don’t see any reining in of the corporate trends.”

In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, a monumental work due out in January (and for which I’ve seen some advance galleys) Shoshana Zuboff makes both cases (and several more) at impressive length and depth.

Privacy plays in all of these, because we don’t have it yet in the digital world. Or not much of it, anyway.

In reverse chronological order, here’s just some of what I’ve said on the topic:

So here we are: naked in the virtual world, just like we were in the natural one before we invented clothing and shelter.

And that’s the challenge: to equip ourselves to live private and safe lives, and not just public and endangered ones, in our new virtual world.

Some of us have taken up that challenge too: with ProjectVRM, with Customer Commons, and with allied efforts listed here.

And I’m optimistic about our prospects.

I’ll also be detailing that optimism in the midst of a speech titled “Why adtech sucks and needs to be killed” next Wednesday (October 17th) at An Evening with Advertising Heretics in NYC. Being at the Anne L. Bernstein Theater on West 50th, it’s my off-Broadway debut. The price is a whopping $10.



fruit thought

If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?

Not yet. Or maybe not really.

Either way, that’s the idea behind the urge by some lately to claim personal data as personal property, and then to make money (in cash, tokens or cryptocurrency) by selling or otherwise monetizing it. The idea in all these cases is to somehow participate in existing (entirely extractive) commodity markets for personal data.

ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why on the VRM Developments Work page of the ProjectVRM wiki is a heading called Markets for Personal Data. Listed there are:

So we respect that work. We also need to recognize some problems it faces.

The first problem is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. (Rivalrous means consumption or use by one party prevents the same by another, and excludable means you can prevent parties that don’t pay from access to it.) Here’s a table from a Linux Journal column I wrote a few years ago:

                 Excludability: YES                      Excludability: NO
Rivalness: YES   Private good: e.g., food, clothing,     Common pool resource: e.g., sea,
                 toys, cars, products subject to         rivers, forests, their edible
                 value-adds between first sources        inhabitants and other useful
                 and final customers                     contents
Rivalness: NO    Club good: e.g., bridges, cable TV,     Public good: e.g., data, information,
                 private golf courses, controlled        law enforcement, national defense,
                 access to copyrighted works             fire fighting, public roads,
                                                         street lighting
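That 2×2 can also be written as a tiny lookup, which makes data’s place in the public-good quadrant easy to check. (A sketch; the function name is mine, and the category labels follow the table above.)

```python
def classify_good(rivalrous: bool, excludable: bool) -> str:
    """Classify a good by the two economic properties in the table above."""
    if rivalrous and excludable:
        return "private good"           # e.g., food, clothing, toys, cars
    if rivalrous and not excludable:
        return "common pool resource"   # e.g., sea, rivers, forests
    if not rivalrous and excludable:
        return "club good"              # e.g., cable TV, private golf courses
    return "public good"                # e.g., data, information, street lighting

# Data: one party's use doesn't prevent another's (non-rivalrous), and once
# divulged it can't practically be fenced off (non-excludable).
print(classify_good(rivalrous=False, excludable=False))  # public good
```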


The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation

Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is essential in a very human way.

The third problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data.

Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, even though it’s combustible.

Put another way, why would you want to make almost nothing (the likely price) selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist?

What makes us fully powerful as human beings is our ability to generate and share ideas and other combustible public goods, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.

Important note: I’m not knocking labor here. Most of us have to work for wages as parts of industrial machines, or as independent actors. I do too. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.

Many years ago JP Rangaswami (@jobsworth) and I made a distinction between making money with something and because of something. It’s a helpful one.

Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage.

Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.

The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.

Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is little interest in buying what can be had for free.

Worse, surveillance capitalism’s business is making guesses about you so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:

  1. Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
  2. Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
  3. The entrails of surveillance capitalism are fully infected with fraud and malware.
  4. Surveillance capitalism is also quite satisfied to soak up as much as 97% of an advertising spend before an ad’s publisher gets its 3% for pushing an ad at you.

Trying to get in on that business is just an awful proposition.

Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?

Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)

And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?

What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.

It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.

It’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The question is when, not if. It’s too absurd, corrupt, complex and annoying to live forever.

So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting our most human of powers to work.

The most basic form of agency in the digital world is control over how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.

Bonus links:





How would you feel if you had been told in the early days of the Web that in the year 2018 you would still need logins and passwords for damned near everything?

Your faith in the tech world would be deeply shaken, no?

And what if you had been told that in 2018 logins and passwords would now be required for all kinds of other shit, from applications on mobile devices to subscription services on TV?

Or worse, that in 2018 you would be robo-logged-out of sites and services frequently, whether you were just there or not, for security purposes — and that logging back in would often require “two-factor” authentication, meaning you have to do even more work to log in to something, and that (also for security purposes) every password you use would not only have to be different, but impossible for any human to remember, especially when the average connected human now has hundreds of login/password combinations, many of which change constantly?

Would you not imagine this to be a dystopian hell?

Welcome to now, folks. Our frog is so fully boiled that it looks like Brunswick stew.
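For a taste of why those passwords defeat human memory, here’s a minimal sketch using Python’s standard secrets module to generate the kind of string security policies now demand for every one of those hundreds of accounts. (The function name and character set are mine, for illustration.)

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate one cryptographically random password of the kind no human remembers."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different incomprehensible string every time -- now multiply by the
# hundreds of login/password combinations the average person maintains.
for _ in range(3):
    print(make_password())
```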

Can we please fix this?

Please, please, please, tech world: move getting rid of logins and passwords to the top of your punch list, ahead of AI, ML, IoT, 5G, smart dust, driverless cars and going to Mars.

Your home planet thanks you.

[Addendum…] Early responses to this post suggest that I’m talking about fixing the problem at the superficial level of effects. So, to clarify, logins and passwords are an effect, and not a cause of anything other than inconvenience and annoyance. The causes are design and tech choices made long ago—choices that can be changed.

Not only that, but many people have been working on solving the identity side of this thing for many years. In fact we’re about to have our 27th Internet Identity Workshop in October at the Computer History Museum. If you want to work on this with other people who are doing the same, register here.


In Chatbots were the next big thing: what happened?, Justin Lee (@justinleejw) nicely unpacks how chatbots were overhyped to begin with and continue to fail their Turing tests, especially since humans in nearly all cases would rather talk to humans than to mechanical substitutes.

There’s also a bigger and more fundamental reason why bots still aren’t a big thing: we don’t have them. If we did, they’d be our robot assistants, going out to shop for us, to get things fixed, or to do whatever.

Why didn’t we get bots of our own?

I can pinpoint the exact time and place where bots of our own failed to happen, and all conversation and development went sideways, away from the vector that takes us to bots of our own (hashtag: #booo), and instead toward big companies doing more than ever to deal with us robotically, mostly to sell us shit.

The time was April 2016, and the place was Facebook’s F8 conference. It was on stage there that Mark Zuckerberg introduced “messenger bots”. He began,

Now that Messenger has scaled, we’re starting to develop ecosystems around it. And the first thing we’re doing is exploring how you can all communicate with businesses.

Note his use of the second person you. He’s speaking to audience members as individual human beings. He continued,

You probably interact with dozens of businesses every day. And some of them are probably really meaningful to you. But I’ve never met anyone who likes calling a business. And no one wants to have to install a new app for every service or business they want to interact with. So we think there’s gotta be a better way to do this.

We think you should be able to message a business the same way you message a friend. You should get a quick response, and it shouldn’t take your full attention, like a phone call would. And you shouldn’t have to install a new app.

This promised pure VRM: a way for a customer to relate to a vendor. For example, to issue a service request, or to intentcast for bids on a new washing machine or a car.

So at this point Mark seemed to be talking about a new communication channel that could relieve the typical pains of being a customer, while also opening the floodgates of demand notifying supply when it’s ready to buy. Now here’s where it goes sideways:

So today we’re launching Messenger Platform. So you can build bots for Messenger.

By “you” Zuck now means developers. He continues,

And it’s a simple platform, powered by artificial intelligence, so you can build natural language services to communicate directly with people. So let’s take a look.

See the shift there? Up until that last sentence, he seemed to be promising something for people, for customers, for you and me: a better way to deal with business. But alas, it’s just shit:

CNN, for example, is going to be able to send you a daily digest of stories, right into messenger. And the more you use it, the more personalized it will get. And if you want to learn more about a specific topic, say a Supreme Court nomination or the zika virus, you just send a message and it will send you that information.

And right there the opportunity was lost, along with all the promise, up there at the top of the hype cycle. Note how Aaron Batalion uses the word “reach” in ‘Bot’ is the wrong name…and why people who think it’s silly are wrong, written not long after Zuck’s F8 speech: “In a micro app world, you build one experience on the Facebook platform and reach 1B people.”

What we needed, and still need, is for reach to go the other way: a standard bot design that would let lots of developers give us better ways to reach businesses. Today lots of developers compete to give us better ways to use the standards-based tools we call browsers and email clients. The same should be true of bots.

In Market intelligence that flows both ways, I describe one such approach, based on open source code, that doesn’t require locating your soul inside a giant personal data extraction business.

Here’s a diagram that shows how one person (me in this case) can relate to a company whose moccasins he owns:


The moccasins have their own pico: a cloud on the Net for a thing in the physical world: one that becomes a standard-issue conduit between customer and company.

A pico of this type might come into being when the customer assigns a QR code to the moccasins and scans it. The customer and company can then share records about the product, or notify the other party when there’s a problem, a bargain on a new pair, or whatever. It’s tabula rasa: wide open.
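Wrangler’s actual API is different (and worth reading about in Phil Windley’s writeups), but the core idea of a shared, product-centered conduit that both parties can write to can be sketched in a few lines of Python. All names here are hypothetical, not Wrangler’s:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProductPico:
    """A stand-in for a pico: a cloud on the Net for one thing in the physical world."""
    product: str
    records: list = field(default_factory=list)

    def post(self, party: str, kind: str, note: str) -> None:
        # Either the customer or the company can add a record or notification.
        self.records.append({
            "party": party,
            "kind": kind,    # e.g., "service request", "recall", "bargain"
            "note": note,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def feed(self, for_party: str) -> list:
        # Each party sees what the other posted: a two-way conduit.
        return [r for r in self.records if r["party"] != for_party]

# The pico might come into being when the customer scans a QR code on the product.
pico = ProductPico("moccasins")
pico.post("customer", "service request", "stitching coming loose on left toe")
pico.post("company", "bargain", "20% off a new pair for registered owners")
print([r["note"] for r in pico.feed("company")])  # what the customer posted
```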

The current code for this is called Wrangler. It’s open source and in Github. For the curious, Phil Windley explains how picos work in Reactive Programming With Picos.

It’s not bots yet, but it’s a helluva lot better place to start re-thinking and re-developing what bots should have been in the first place. Let’s start developing there, and not inside giant silos.

[Note: the image at the top is from this 2014 video by Capgemini explaining #VRM. Maybe now that Facebook is doing face-plants in the face of the GDPR, and privacy is finally a thing, the time is ripe, not only for #booos, but for the rest of the #VRM portfolio of unfinished and un-begun work on the personal side.]

Nature and the Internet both came without privacy.

The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.

When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.

In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.

Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”

Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that when personal tech gives each of us the power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.)

We acquired agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of it with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech.

I bring this up because we will be working on privacy tech over the next four days at the Computer History Museum, first at VRM Day, today, and then over the next three days at IIW: the Internet Identity Workshop.

On the table at both is work some of us, me included, are doing through Customer Commons on terms we can proffer as individuals, and the sites and services of the world can agree to.

Those terms are examples of what we call customertech: tech that’s ours and not Facebook’s or Apple’s or Google’s or Amazon’s.

The purpose is to turn the connected marketplace into a Marvel-like universe in which all of us are enhanced. It’ll be interesting to see what kind of laws follow.*

But hey, let’s invent the tech we need first.

*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.

Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.

Bonus link.

Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Irony Alert: the same is true for the Times, along with every other publication that lives off adtech: tracking-based advertising. These pubs don’t just open the kimonos of their readers. They bring readers’ bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of aiming “interest-based” advertising at those same readers, wherever those readers’ eyeballs may appear—or reappear in the case of “retargeted” advertising.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach, standard, experience or audit trail), and no blood valving by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data?

Answer: nobody knows, because the whole adtech “ecosystem” is a four-dimensional shell game with hundreds of players

or, in the case of “martech,” thousands:

For one among many views of what’s going on, here’s a compressed screen shot of what Privacy Badger showed going on in my browser behind Zeynep’s op-ed in the Times:

[Added later…] @ehsanakhgari tweets pointage to WhoTracksMe’s page on the NYTimes, which shows this:

And here’s more irony: a screen shot of the home page of RedMorph, another privacy protection extension:

That quote is from Free Tools to Keep Those Creepy Online Ads From Watching You, by Brian X. Chen and Natasha Singer, and published on 17 February 2016 in the Times.

The same irony applies to countless other correct and important reporting on the Facebook/Cambridge Analytica mess by other writers and pubs. Take, for example, Cambridge Analytica, Facebook, and the Revelations of Open Secrets, by Sue Halpern in yesterday’s New Yorker. Here’s what RedMorph shows going on behind that piece:

Note that I have that data leakage blocked by default.

Here’s a view through RedMorph’s controller pop-down:

And here’s what happens when I turn off “Block Trackers and Content”:

By the way, I want to make clear that Zeynep, Brian, Natasha and Sue are all innocents here, thanks both to the “Chinese wall” between the editorial and publishing functions of the Times, and the simple fact that the route any ad takes between advertiser and reader through any number of adtech intermediaries is akin to a ball falling through a pinball machine. Refresh your page while reading any of those pieces and you’ll see a different set of ads, no doubt aimed by automata guessing that you, personally, should be “impressed” by those ads. (They’ll count as “impressions” whether you are or not.)
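For the curious, the kind of census Privacy Badger and RedMorph perform can be approximated in a few lines: scan a page’s HTML for script and link hosts that aren’t the publisher’s own. This is a sketch against made-up markup, not the Times’ actual page or either extension’s real method:

```python
import re
from urllib.parse import urlparse

def third_party_hosts(html: str, first_party: str) -> set:
    """Collect hosts referenced by src/href attributes that aren't the page's own."""
    hosts = set()
    for url in re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(url).hostname or ""
        if host and not host.endswith(first_party):
            hosts.add(host)
    return hosts

# Hypothetical markup standing in for an ad-supported article page.
page = '''
<script src="https://cdn.doubleclick.example/ads.js"></script>
<script src="https://pixel.tracker.example/beacon.js"></script>
<img src="https://static.nytimes.com/logo.png">
<a href="https://www.nytimes.com/section/opinion">Opinion</a>
'''
print(sorted(third_party_hosts(page, "nytimes.com")))
# -> ['cdn.doubleclick.example', 'pixel.tracker.example']
```

Real pages load trackers dynamically from scripts as well, so a static scan like this undercounts; the extensions watch live network requests instead.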


What will happen when the Times, the New Yorker and other pubs own up to the simple fact that they are just as guilty as Facebook of leaking their readers’ data to other parties, for—in many if not most cases—God knows what purposes besides “interest-based” advertising? And what happens when the EU comes down on them too? It’s game-on after 25 May, when the EU can start fining violators of the General Data Protection Regulation (GDPR). Key fact: the GDPR protects the data blood of what it calls “EU data subjects” wherever those subjects’ necks are exposed in the borderless digital world.

To explain more about how this works, here is the (lightly edited) text of a tweet thread posted this morning by @JohnnyRyan of PageFair:

Facebook left its API wide open, and had no control over personal data once those data left Facebook.

But there is a wider story coming: (thread…)

Every single big website in the world is leaking data in a similar way, through “RTB bid requests” for online behavioural advertising #adtech.

Every time an ad loads on a website, the site sends the visitor’s IP address (indicating physical location), the URL they are looking at, and details about their device, to hundreds (often thousands) of companies. Here is a graphic that shows the process.

The website does this to let these companies “bid” to show their ad to this visitor. Here is a video of how the system works. In Europe this accounts for about a quarter of publishers’ gross revenue.

Once these personal data leave the publisher, via “bid request”, the publisher has no control over what happens next. I repeat that: personal data are routinely sent, every time a page loads, to hundreds/thousands of companies, with no control over what happens to them.
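For readers who want to see what a “bid request” actually carries, here is a minimal sketch in the style of an OpenRTB request. The field names follow the OpenRTB convention, but the page, IP address and IDs are all invented for illustration:

```python
import json

# A minimal, illustrative OpenRTB-style bid request. Field names follow
# the OpenRTB convention (id, site, device, user); all values are invented.
bid_request = {
    "id": "auction-12345",
    "site": {
        "page": "https://example-publisher.com/article-about-politics",
    },
    "device": {
        "ip": "203.0.113.7",           # visitor's IP, which implies location
        "ua": "Mozilla/5.0 (example)", # browser and OS details
    },
    "user": {
        "id": "cookie-synced-id-987",  # identifier matched across sites
    },
}

# This is the payload broadcast to the bidders competing for the ad slot,
# every time a page with that slot loads.
payload = json.dumps(bid_request)
print(payload)
```

Once that payload leaves the publisher’s page, every recipient has a copy of who was where, reading what, on which device.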

This means that every person, and what they look at online, is routinely profiled by companies that receive these data from the websites they visit. Where possible, these data are combined with offline data. These profiles are built up in “DMPs”.

Many of these DMPs (data management platforms) are owned by data brokers. (Side note: the FTC’s 2014 report on data brokers is shocking.) There is no functional difference between an #adtech DMP and Cambridge Analytica.

—Terrell McSweeny, Julie Brill and EDPS

None of this will be legal under the #GDPR. Publishers and brands need to take care to stop using personal data in the RTB system. Data connections to sites (and apps) have to be carefully controlled by publishers.

So far, #adtech’s trade body has been content to cover over this wholesale personal data leakage with meaningless gestures that purport to address the #GDPR (see my note on @IABEurope’s current actions). It is time for a more practical position.

And advertisers, who pay for all of this, must start to demand that safe, non-personal data take over in online RTB targeting. RTB works without personal data. Brands need to demand this to protect themselves – and all Internet users too. @dwheld @stephan_lo @BobLiodice

Websites need to control
1. which data they release in to the RTB system
2. whether ads render directly in visitors’ browsers (where DSPs’ JavaScript can drop trackers)
3. what 3rd parties get to be on their page
@jason_kint @epc_angela @vincentpeyregne @earljwilkinson 11/12
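Point 3 on that list has a concrete, browser-level mechanism behind it: a Content-Security-Policy header lets a publisher declare exactly which script hosts are allowed to run on its pages, and the browser blocks the rest. A minimal sketch (the partner domain is a placeholder, not a recommendation):

```python
# Sketch: building a Content-Security-Policy header value that only
# permits scripts from the publisher's own domain plus explicitly
# named partners. "ads.example-partner.com" is a placeholder.
def build_csp(allowed_script_hosts):
    sources = " ".join(["'self'"] + allowed_script_hosts)
    return f"script-src {sources}"

header = build_csp(["https://ads.example-partner.com"])
print(header)  # script-src 'self' https://ads.example-partner.com
```

A publisher sending that header has made an affirmative choice about which third parties get to be on its page, instead of letting the RTB chain decide.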

Let’s work together to fix this. 12/12

Those last three recommendations are all good, but they also assume that websites, advertisers and their third party agents are the ones with the power to do something. Not readers.

But there’s a lot readers will be able to do. More about that shortly. Meanwhile, publishers can get right with readers by dropping #adtech and going back to publishing the kind of high-value brand advertising they’ve run since forever in the physical world.

That advertising, as Bob Hoffman (@adcontrarian) and Don Marti (@dmarti) have been making clear for years, is actually worth a helluva lot more than adtech, because it delivers clear creative and economic signals and comes with no cognitive overhead (for example, wondering where the hell an ad comes from and what it’s doing right now).

As I explain here, “Real advertising wants to be in a publication because it values the publication’s journalism and readership” while “adtech wants to push ads at readers anywhere it can find them.”

Doing real advertising is the easiest fix in the world, but so far it’s nearly unthinkable for the ad industry because it has been defaulted for more than twenty years to an asymmetric power relationship between readers and publishers called client-server. I’ve been told that client-server was chosen as the name for this relationship because “slave-master” didn’t sound so good; but I think the best way to visualize it is calf-cow:

As I put it at that link (way back in 2012), “Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.”

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing to the Net that prevents each of us from having plenty of power on our own.

On the Net, we don’t need to be slaves, cattle or throbbing veins. We can be fully human. In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.

Customer Commons is working on exactly those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now the editor-in-chief. The first of those terms, #P2B1(beta), says “Just show me ads not based on tracking me,” and is hashtagged #NoStalking.

In Help Us Cure Online Publishing of Its Addiction to Personal Data, I explain how this models the way advertising ought to be done: by the grace of readers, with no spying.

Obeying readers’ terms also carries no risk of violating privacy laws, because every pub will have contracts with its readers to do the right thing. This is totally do-able. Read that last link to see how.

As I say there, we need help. Linux Journal still has a small staff, and Customer Commons (a California-based 501(c)(3) nonprofit) so far consists of five board members. What it aims to be is a worldwide organization of customers, as well as the place where terms we proffer can live, much as Creative Commons is where personal copyright licenses live. (Customer Commons is modeled on Creative Commons. Hats off to the Berkman Klein Center for helping bring both into the world.)

I’m also hoping other publishers, once they realize that they are no less a part of the surveillance economy than Facebook and Cambridge Analytica, will help out too.

[Later…] Not long after this post went up I talked about these topics on the Gillmor Gang. Here’s the video, plus related links.

I think the best push-back I got there came from Esteban Kolsky (@ekolsky), who (as I recall anyway) saw less than full moral equivalence between what Facebook and Cambridge Analytica did to screw with democracy and what the New York Times and other ad-supported pubs do by baring the necks of their readers to dozens of data vampires.

He’s right that they’re not equivalent, any more than apples and oranges are equivalent. The sins are different; but they are still sins, just as apples and oranges are still both fruit. Exposing readers to data vampires is simply wrong on its face, and we need to fix it. That it’s normative in the extreme is no excuse. Nor is the fact that it makes money. There are morally uncompromised ways to make money with advertising, and those are still available.

Another push-back is the claim by many adtech third parties that the personal data blood they suck is anonymized. While that may be so, correlation is still possible. See Study: Your anonymous web browsing isn’t as anonymous as you think, by Barry Levine (@xBarryLevine) in Martech Today, which cites De-anonymizing Web Browsing Data with Social Networks, a study by Jessica Su (@jessicatsu), Ansh Shukla (@__anshukla__) and Sharad Goel (@5harad) of Stanford, and Arvind Narayanan (@random_walker) of Princeton.
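The intuition behind that study can be sketched in a few lines of toy code: compare an “anonymous” browsing history against the links each candidate person has shared publicly, and the best overlap wins. This is a toy illustration of the correlation idea, not the paper’s actual method:

```python
# Toy illustration of history-to-profile matching: score each candidate
# by how many of the anonymous visitor's URLs they have publicly shared,
# then pick the highest-scoring candidate.
def best_match(history, public_links_by_user):
    scores = {
        user: len(set(history) & set(links))
        for user, links in public_links_by_user.items()
    }
    return max(scores, key=scores.get)

history = ["a.com/1", "b.com/2", "c.com/3"]   # "anonymous" browsing record
profiles = {
    "alice": ["a.com/1", "c.com/3", "d.com/9"],
    "bob": ["e.com/5"],
}
print(best_match(history, profiles))  # alice
```

With real data the candidate sets are huge and the scoring is statistical, but the point stands: “anonymous” histories carry enough signal to be tied back to names.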

(Note: Facebook and Google follow logged-in users by name. They also account for most of the adtech business.)

One commenter below noted that this blog also carries six trackers (most of which I block). Here is how those look on Ghostery:

So let’s fix this thing.

[Later still…] Lots of comments in Hacker News as well.

[Later again (8 April 2018)…] About the comments below (60+ so far): the version of commenting used by this blog doesn’t support threading. If it did, my responses to comments would appear below each one. Alas, some not only appear out of sequence, but others don’t appear at all. I don’t know why, but I’m trying to find out. Meanwhile, apologies.
