It’s misses like this that have people thinking there’s nothing to fear from AI.
We live in two worlds now: the natural one where we have bodies that obey the laws of gravity and space/time, and the virtual one where there is no gravity or distance (though there is time).
In other words, we are now digital as well as physical beings, and this is new to human experience. So far in the digital world we are examined and manipulated like laboratory animals by giant entities that are out of everybody's control, including their own.
The collateral effects are countless and boundless.
Take journalism, for example. That’s what I did in a TEDx talk I gave last month in Santa Barbara:
I next visited several adjacent territories with a collection of brilliant folk at the Ostrom Workshop on Smart Cities. (Which was live-streamed, but I’m not sure is archived yet. Need to check.)
Among those folk was Brett Frischmann, whose canonical work on infrastructure I covered here, and who in Re-Engineering Humanity (with Evan Selinger) explains exactly how giants in the digital infrastructure business are hacking the shit out of us—a topic I also visit in Engineers vs. Re-Engineering (my August editorial in Linux Journal).
Now also comes Bruce Schneier, with his perfectly titled book Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World, which Farhad Manjoo in The New York Times sources in A Future Where Everything Becomes a Computer Is as Creepy as You Feared. Pull-quote: “In our government-can’t-do-anything-ever society, I don’t see any reining in of the corporate trends.”
In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, a monumental work due out in January (and for which I’ve seen some advance galleys) Shoshana Zuboff makes both cases (and several more) at impressive length and depth.
Privacy plays in all of these, because we don’t have it yet in the digital world. Or not much of it, anyway.
In reverse chronological order, here's just some of what I've said on the topic:
So here we are: naked in the virtual world, just like we were in the natural one before we invented clothing and shelter.
And that’s the challenge: to equip ourselves to live private and safe lives, and not just public and endangered ones, in our new virtual world.
And I’m optimistic about our prospects.
I’ll also be detailing that optimism in the midst of a speech titled “Why adtech sucks and needs to be killed” next Wednesday (October 17th) at An Evening with Advertising Heretics in NYC. Being at the Anne L. Bernstein Theater on West 50th, it’s my off-Broadway debut. The price is a whopping $10.
If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?
Not yet. Or maybe not really.
Either way, that’s the idea behind the urge by some lately to claim personal data as personal property, and then to make money (in cash, tokens or cryptocurrency) by selling or otherwise monetizing it. The idea in all these cases is to somehow participate in existing (entirely extractive) commodity markets for personal data.
ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why on the VRM Developments Work page of the ProjectVRM wiki is a heading called Markets for Personal Data. Listed there are:
So we respect that work. It is also essential to recognize problems it faces.
The first problem is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. (Rivalrous means consumption or use by one party prevents the same by another, and excludable means you can prevent parties that don’t pay from access to it.) Here’s a table from a Linux Journal column I wrote a few years ago:
| | Excludability: YES | Excludability: NO |
| --- | --- | --- |
| Rivalness: YES | Private good: e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers | Common-pool resource: e.g., sea, rivers, forests, their edible inhabitants and other useful contents |
| Rivalness: NO | Club good: e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works | Public good: e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting |
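To make the taxonomy concrete, here's a toy sketch (my own illustration, not from the Linux Journal column) that maps the two yes/no questions onto the four quadrants:

```python
# Toy illustration of the economists' 2x2 taxonomy of goods.
# A good is classified by two yes/no questions:
#   rivalrous:  does use by one party prevent the same by another?
#   excludable: can non-paying parties be kept out?
QUADRANTS = {
    (True,  True):  "private good",          # food, clothing, cars
    (True,  False): "common-pool resource",  # fisheries, forests
    (False, True):  "club good",             # cable TV, golf courses
    (False, False): "public good",           # data, ideas, street lighting
}

def classify(rivalrous: bool, excludable: bool) -> str:
    return QUADRANTS[(rivalrous, excludable)]

# Data is neither rivalrous nor excludable, which is the whole point:
print(classify(rivalrous=False, excludable=False))  # public good
```
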
The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation
Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is important for us to get our heads around amidst the rising chorus of voices insisting that data is a form of property.
The third problem is that there are better legal frameworks than property law for protecting personal data. In Do we really want to “sell” ourselves? The risks of a property law paradigm for personal data ownership, Elizabeth Renieris and Dazza Greenwood write,
Who owns your data? It’s a popular question of late in the identity community, particularly in the wake of Cambridge Analytica, numerous high-profile Equifax-style data breaches, and the GDPR coming into full force and effect. In our view, it’s not only the wrong question to be asking but it’s flat out dangerous when it frames the entire conversation. While ownership implies a property law model of our data, we argue that the legal framework for our identity-related data must also consider constitutional or human rights laws rather than mere property law rules…
Under common law, ownership in property is a bundle of five rights — the rights of possession, control, exclusion, enjoyment, and disposition. These rights can be separated and reassembled according to myriad permutations and exercised by one or more parties at the same time. Legal ownership or “title” of real property (akin to immovable property under civil law) requires evidence in the form of a deed. Similarly, legal ownership of personal property (i.e. movable property under civil law) in the form of commercial goods requires a bill of lading, receipt, or other document of title. This means that proving ownership or exerting these property rights requires backing from the state or sovereign, or other third party. In other words, property rights emanate from an external source and, in this way, can be said to be extrinsic rights. Moreover, property rights are alienable in the sense that they can be sold or transferred to another party.
Human rights — in stark contrast to property rights — are universal, indivisible, and inalienable. They attach to each of us individually as humans, cannot be divided into sticks in a bundle, and cannot be surrendered, transferred, or sold. Rather, human rights emanate from an internal source and require no evidence of their existence. In this way, they can be said to be intrinsic rights that are self-evident. While they may be codified or legally recognized by external sources when protected through constitutional or international laws, they exist independent of such legal documents. The property law paradigm for data ownership loses sight of these intrinsic rights that may attach to our data. Just because something is property-like, does not mean that it is — or that it should be — subject to property law.
In the physical realm, it is long settled that people and organs are not treated like property. Moreover, rights to freedom from unreasonable search and seizure, to associate and peaceably assemble with others, and the rights to practice religion and free speech are not property rights — rather, they are constitutional rights under U.S. law. Just as constitutional and international human rights laws protect our personhood, they also protect things that are property-like or exhibit property-like characteristics. The Fourth Amendment of the U.S. Constitution provides “the right of the people to be secure in their persons” but also their “houses, papers, and effects.” Similarly, the Universal Declaration of Human Rights and the European Convention on Human Rights protect the individual’s right to privacy and family life, but also her “home and correspondence”…
Obviously some personal data may exist in property-form just as letters and diaries in paper form may be purchased and sold in commerce. The key point is that sometimes these items are also defined as papers and effects and therefore subject to Fourth Amendment and other legal frameworks. In other words, there are some uses of (and interests in) our data that transform it from an interest in property to an interest in our personal privacy — that take it from the realm of property law to constitutional or human rights law. Location data, biological, social, communications and other behavioral data are examples of data that blend into personal identity itself and cross this threshold. Such data is highly revealing and the big-data, automated systems that collect, track and analyze this data make the need to establish proportional protections and safeguards even more important and more urgent. It is critical that we apply the correct legal framework.
The fourth problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data. Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, combustible or not.
Put another way, why would you want to make almost nothing (the likely price) from selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist, and where rights are fully understood and protected within existing legal frameworks?
What makes us fully powerful as human beings is our ability to generate and share ideas and other goods that are expansible over all space, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.
Important note: I’m not knocking labor here. Most of us have to work for wages, either as parts of industrial machines, or as independent actors. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.
Example: I don’t make money with this blog. But I do make money because of it, and probably a lot more money than I would if this blog carried advertising or if I did it for a wage. JP and I called this way of making money a because effect. The entire Internet, the World Wide Web and the totality of free and open source code all have vast because effects in money made with products and services that depend on those graces. Each is a rising free tide that lifts all commercial boats. Non-commercial ones too.
Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.
The idea goes like this: there is a trillion dollars or more in business activity that trades in or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.
Worse, surveillance capitalism’s business is making guesses about you, so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). There are many reasons for that.
Trying to get in on that business is an awful proposition.
Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?
Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)
And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?
What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.
It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.
But it’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The question is when, not if. The whole thing is too absurd, corrupt, complex and annoying to last forever.
So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting better human powers to work.
If we’re going to obsess over personal data, let’s look instead toward ways to regulate and control how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.
Enforcing Data Protection: A Model for Risk-Based Supervision Using Responsive Regulatory Tools, a post by Dvara Research, summarizes Effective Enforcement of a Data Protection Regime, a deeply thought and researched paper by Beni Chugh (@BeniChugh), Malavika Raghavan (@teninthemorning), Nishanth Kumar (@beamboybeamboy) and Sansiddha Pani (@julupani). While it addresses proximal concerns in India, it provides useful guidance for data regulators everywhere.
Any data protection regulator faces certain unique challenges. The ubiquitous collection and use of personal data by service providers in the modern economy creates a vast space for a regulator to oversee. Contraventions of a data protection regime may not immediately manifest and when they do, may not have a clear monetary or quantifiable harm. The enforcement perimeter is market-wide, so a future data protection authority will necessarily interface with other sectoral institutions. In light of these challenges, we present a model for enforcement of a data protection regime based on risk-based supervision and the use of a range of responsive enforcement tools.
This forward-looking approach considers the potential for regulators to employ a range of softer tools before a breach to prevent it and after a breach to mitigate the effects. Depending on the seriousness of contraventions, the regulator can escalate up to harder enforcement actions. The departure from the focus on post-data breach sanctions (that currently dominate data protection regimes worldwide) is an attempt to consider how the regulatory community might act in coordination with entities processing data to minimise contraventions of the regime.
I hope European regulators are looking at this. Because, as I said in a headline to a post last month, without enforcement, the GDPR is a fail.
Bonus link from the IAPP (International Association of Privacy Professionals): When will we start seeing GDPR enforcement actions? We guess Feb. 22, 2019.
In Chatbots were the next big thing: what happened?, Justin Lee (@justinleejw) nicely unpacks how chatbots were overhyped to begin with and continue to fail their Turing tests, especially since humans in nearly all cases would rather talk to humans than to mechanical substitutes.
There’s also a bigger and more fundamental reason why bots still aren’t a big thing: we don’t have them. If we did, they’d be our robot assistants, going out to shop for us, to get things fixed, or to do whatever.
Why didn’t we get bots of our own?
I can pinpoint the exact time and place where bots of our own failed to happen, and all conversation and development went sideways, away from the vector that takes us to bots of our own (hashtag: #booo), and instead toward big companies doing more than ever to deal with us robotically, mostly to sell us shit.
The time was April 2016, and the place was Facebook’s F8 conference. It was on stage there that Mark Zuckerberg introduced “messenger bots”. He began,
Now that Messenger has scaled, we’re starting to develop ecosystems around it. And the first thing we’re doing is exploring how you can all communicate with businesses.
Note his use of the second person you. He’s speaking to audience members as individual human beings. He continued,
You probably interact with dozens of businesses every day. And some of them are probably really meaningful to you. But I’ve never met anyone who likes calling a business. And no one wants to have to install a new app for every service or business they want to interact with. So we think there’s gotta be a better way to do this.
We think you should be able to message a business the same way you message a friend. You should get a quick response, and it shouldn’t take your full attention, like a phone call would. And you shouldn’t have to install a new app.
So at this point Mark seemed to be talking about a new communication channel that could relieve the typical pains of being a customer while also opening the floodgates of demand notifying supply when it’s ready to buy. Now here’s where it goes sideways:
So today we’re launching Messenger Platform. So you can build bots for Messenger.
By “you” Zuck now means developers. He continues,
And it’s a simple platform, powered by artificial intelligence, so you can build natural language services to communicate directly with people. So let’s take a look.
See the shift there? Up until that last sentence, he seemed to be promising something for people, for customers, for you and me: a better way to deal with business. But alas, it’s just shit:
CNN, for example, is going to be able to send you a daily digest of stories, right into messenger. And the more you use it, the more personalized it will get. And if you want to learn more about a specific topic, say a Supreme Court nomination or the zika virus, you just send a message and it will send you that information.
And right there the opportunity was lost, along with all the promise up there at the top of the hype cycle. Note how Aaron Batalion uses the word “reach” in ‘Bot’ is the wrong name…and why people who think it’s silly are wrong, written not long after Zuck’s F8 speech: “In a micro app world, you build one experience on the Facebook platform and reach 1B people.”
What we needed, and still need, is for reach to go the other way: a standard bot design that would let lots of developers give us better ways to reach businesses. Today lots of developers compete to give us better ways to use the standards-based tools we call browsers and email clients. The same should be true of bots.
In Market intelligence that flows both ways, I describe one such approach, based on open source code, that doesn’t require locating your soul inside a giant personal data extraction business.
Here’s a diagram that shows how one person (me in this case) can relate to a company whose moccasins he owns:
The moccasins have their own pico: a cloud on the Net for a thing in the physical world: one that becomes a standard-issue conduit between customer and company.
A pico of this type might come into being when the customer assigns a QR code to the moccasins and scans it. The customer and company can then share records about the product, or notify the other party when there’s a problem, a bargain on a new pair, or whatever. It’s tabula rasa: wide open.
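To make the shape of that conduit concrete, here is a minimal sketch in Python. Everything in it (the class, field names, method names) is my own hypothetical illustration of the idea, not the actual pico API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProductPico:
    """Hypothetical sketch of a pico: a cloud on the Net for one thing
    in the physical world, acting as a shared conduit between the
    customer who owns it and the company that made it."""
    product: str
    serial: str
    events: List[dict] = field(default_factory=list)

    def post(self, sender: str, kind: str, detail: str) -> None:
        # Either party can notify the other through the pico.
        self.events.append({"from": sender, "kind": kind, "detail": detail})

    def history(self) -> List[dict]:
        # Both parties see the same shared record of the product's life.
        return list(self.events)

# The customer scans a QR code on the moccasins, which brings the pico
# into being; after that, either side can post to it.
pico = ProductPico(product="moccasins", serial="QR-12345")
pico.post("customer", "problem", "stitching coming loose on left shoe")
pico.post("company", "offer", "discount on a new pair")
print(len(pico.history()))  # 2
```

The point of the sketch is the symmetry: neither party owns the channel, and neither needs the other's app or silo to use it.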
It’s not bots yet, but it’s a helluva lot better place to start re-thinking and re-developing what bots should have been in the first place. Let’s start developing there, and not inside giant silos.
[Note: the image at the top is from this 2014 video by Capgemini explaining #VRM. Maybe now that Facebook is doing face-plants in the face of the GDPR, and privacy is finally a thing, the time is ripe, not only for #booos, but for the rest of the #VRM portfolio of unfinished and un-begun work on the personal side.]
In The Big Short, investor Michael Burry says “One hallmark of mania is the rapid rise in the incidence and complexity of fraud.” (Burry shorted the mania- and fraud-filled subprime mortgage market and made a mint in the process.)
One would be equally smart to bet against the mania for the tracking-based form of advertising called adtech.
Since tracking people took off in the late ’00s, adtech has grown to become a four-dimensional shell game played by hundreds (or, if you include martech, thousands) of companies, none of which can see the whole mess, or can control the fraud, malware and other forms of bad acting that thrive in the midst of it.
And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.
“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.
Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”
Of course what I just said greatly simplifies what the GDPR actually utters, in bureaucratic legalese. The GDPR is also full of loopholes only snakes can thread; but the spirit of the law is clear, and the snakes will be easy to shame, even if they don’t get fined. (And legitimate interest, an actual loophole in the GDPR, may prove hard to claim.)
Toward the aftermath, the main question is What will be left of advertising—and what it supports—after the adtech bubble pops?
Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.
To get a sense of what will be left of adtech after GDPR Sunrise Day, start by reading a pair of articles in AdExchanger by @JamesHercher. The first reports on the Transparency and Consent Framework published by IAB Europe. The second reports on how Google is pretty much ignoring that framework and going direct with their own way of obtaining consent to tracking:
Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.
The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.
Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.
Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”
And that’s not all:
Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.
Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.
This is also Google’s way of scraping off GDPR liability on publishers.
Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Consent Management Framework. Writes James,
The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.
The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…
Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Consent and Transparency Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.
In other words, good luck with that.
[Later, 26 May…] Well, Google caved on this one, so apparently Google is coming to IAB Europe’s table.
[And on 30 May…] Axel Springer is also going its own way.
One big upside for IAB Europe is that its Framework contains open source code and an SDK. For a full unpacking of what’s there see the Consent String and Vendor List Format: Transparency & Consent Framework on GitHub and IAB Europe’s own FAQ. More about this shortly.
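As a quick sketch of what that open code deals in: a v1 consent string is a base64url-encoded bit field that begins with a 6-bit Version field (per the spec on GitHub). Here's how one might read that field in Python; the example string is of the kind shown in the framework's documentation:

```python
import base64

def consent_string_version(consent: str) -> int:
    """Read the 6-bit Version field at the front of an IAB TCF v1
    consent string (base64url-encoded, transmitted without padding)."""
    # Restore padding to a multiple of 4 so base64 decoding succeeds.
    padded = consent + "=" * (-len(consent) % 4)
    raw = base64.urlsafe_b64decode(padded)
    bits = "".join(f"{byte:08b}" for byte in raw)
    return int(bits[:6], 2)

# An example consent string of the sort shown in the spec's docs:
print(consent_string_version("BOEFEAyOEFEAyAHABDENAI4AAAB9vABAASA"))  # 1
```

The rest of the string packs CMP ID, timestamps, purposes and per-vendor consent bits the same way, which is why the IAB ships an SDK rather than expecting everyone to hand-roll bit twiddling.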
Meanwhile, the adtech business surely knows the sky is falling. The main question is how far.
One possibility is 95% of the way to zero. That outcome is suggested by results published in PageFair last October by Dr. Johnny Ryan (@JohnnyRyan). Here’s the most revealing graphic in the bunch:
Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.
“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”
Pretty cynical, no?
The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.
The first clues that fully matter are the ones we, the individuals they’ve been calling “users,” will deliver. Look for details on that in another post.
Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.
Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.
Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.
Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)
Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.
For more on how that will work, read The Intention Economy: When Customers Take Charge. Six years after Harvard Business Review Press published that book, what it says will start to come true. Thank you, GDPR.
Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.
What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)
Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.
When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.
Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that can be advertised at anywhere else. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:
I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)
Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.
Nature and the Internet both came without privacy.
The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.
When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.
In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.
Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”
Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that with personal tech that gives each of us power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.) When enough of us get personal agency, we can also have collective agency, for social as well as personal results.
We acquired both personal and social agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of both with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech. And that tech must be personal.
I bring this up because we will be working on privacy tech over the next four days at the Computer History Museum, first at VRM Day, today, and then over the next three days at IIW: the Internet Identity Workshop. We hold both twice every year.
On the table at both are work some of us, me included, are doing through Customer Commons on terms we can proffer as individuals, and the sites and services of the world can agree to.
Those terms are examples of what we call customertech: tech that’s ours and not Facebook’s or Apple’s or Google’s or Amazon’s.
The purpose of customertech is to turn the connected marketplace into a Marvel-like universe in which all of us are enhanced. It’ll be interesting to see what kind of laws and social effects follow.*
But hey, let’s invent the tech we need first.
*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.
Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.
Power of the People is a great grabber of a headline, at least for me. But it’s a pitch for a report that requires filling out the form here on the right:
You see a lot of these: invitations to put one’s digital ass on a mailing list, just to get a report that should have been public in the first place, but isn’t, so personal data can be harvested and sold or given away to God knows who.
And you do more than just “agree to join” a mailing list. You are now what marketers call a “qualified lead” for countless other parties you’re sure to be hearing from.
Is the form above one of those “public areas”? Of course. What wouldn’t be? And are they not discouraging caution by requiring you to fill out all the personal data fields marked with a *? You betcha. See here:
III. How we use and share your information
A. To deliver services
In order to facilitate our delivery of advertising, analytics and other services, we may use and/or share the information we collect, including interest-based segments and user interest profiles containing demographic information, location information, gender, age, interest information and information about your computer, device, or group of devices, including your IP address, with our affiliates and third parties, such as our service providers, data processors, business partners and other third parties.
B. With third party clients and partners
Our online advertising services are used by advertisers, websites, applications and other companies providing online or internet connected advertising services. We may share information, including the information described in section III.A. above, with our clients and partners to enable them to deliver or facilitate the delivery of online advertising. We strive to ensure that these parties act in accordance with applicable law and industry standards, but we do not have control over these third parties. When you opt-out of our services, we stop sharing your interest-based data with these third parties. Click here for more information on opting out.
No need to bother opting out, by the way, because there’s this loophole too:
D. To complete a merger or sale of assets
Okay, let’s be fair: this is boilerplate. Every marketing company—hell, every company period—puts jive like this in their privacy policies.
And Viant isn’t one of marketing’s bad guys. Or at least that’s not how they see themselves. They do mean well, kinda, if you forget they see no alternative to tracking people.
If you want to see what’s in that report without leaking your ID info to the world, the short cut is New survey by people-based marketer Viant promotes marketing to identified users in @Martech_Today.
What you’ll see there is a company trying to be good to users in a world where those users have no more power than marketers give them. And giving marketers that ability is what Viant does.
Curious… will Viant’s business persist after the GDPR trains heavy ordnance on it?
See, the GDPR forbids gathering personal data about an EU citizen without that person’s clear permission—no matter where that citizen goes in the digital world, meaning to any site or service anywhere. It arrives in full force, with fines of up to 4% of global revenues in the prior fiscal year, on 25 May of this year: about three months from now.
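To make the scale of that concrete, here is the arithmetic on that fine ceiling, using a made-up revenue figure for illustration. (The GDPR’s upper tier is actually 4% of global revenue or €20 million, whichever is greater; the company and numbers below are hypothetical.)

```python
# GDPR Article 83(5): fines of up to 4% of global revenue in the
# prior fiscal year, or EUR 20 million, whichever is greater.
def max_gdpr_fine(global_revenue_eur: float) -> float:
    return max(0.04 * global_revenue_eur, 20_000_000.0)

# Hypothetical adtech company with EUR 2 billion in prior-year revenue:
big_fine = max_gdpr_fine(2_000_000_000.0)    # EUR 80 million exposure

# Even a small company faces the EUR 20 million floor:
small_fine = max_gdpr_fine(100_000_000.0)    # EUR 20 million exposure
```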
In case you’ve missed it, I’m not idle here.
To help give individuals fresh GDPR-fortified leverage, and to save the asses of companies like Viant (which probably has lawyers working overtime on GDPR compliance), I’m working with Customer Commons (on the board of which I serve) on terms individuals can proffer and companies can agree to, giving individuals a form of protection and agreeable companies a path toward GDPR compliance. And companies should like to agree, because those terms will align everyone’s interests from the start.
I’m also working with Linux Journal (where I’ve recently been elevated to editor-in-chief) to make it one of the first publishers to agree to friendly terms its readers proffer. That’s why I posted Every User a Neo there. Other metaphors: turning everyone on the Net into an Archimedes, with levers to move the world, and turning the whole marketplace into a Marvel-like universe where all of us are enhanced.
If you want to help with any of that, talk to me.
Linux Journal is folding.
Carlie Fairchild, who has run the magazine almost since it started in 1994, posted Linux Journal Ceases Publication today on the website. So far all of the comments have been positive, which they should be. Throughout its life, Linux Journal has been about as valuable as a trade pub can be, and it’s a damn shame to see it go. I just hope a way can be found to keep the site and the archives alive for the duration, as a living legacy.
I suppose a rescue might still be possible. But, as Carlie wrote in her post, “While we see a future like publishing’s past—a time when advertisers sponsor a publication because they value its brand and readers—the advertising world we have today would rather chase eyeballs, preferably by planting tracking beacons in readers’ browsers and zapping them with ads anywhere those readers show up. But that future isn’t here, and the past is long gone.”
I’m working hard at making that future happen (see the list below), and it bums me deeply that we didn’t succeed in time to save Linux Journal. But here we are.
My own history with Linux Journal began when Phil Hughes pulled me into an email discussion of his plan to start a free software magazine. That was in 1993: twenty-four years ago. Phil ended that discussion when he announced, to everyone else’s surprise, that he had found this kid who had written a new version of UNIX that would likely take over the world. The kid was Linus Torvalds and his operating system was called Linux. I thought, what? But, as he was about so many things, Phil was right. Our first issue came out in April 1994, when Linux hit version 1.0. Linux Journal’s editor for that issue was Bob Young, who left shortly after that to start Red Hat and much else. (I once asked Bob—by then a billionaire but no less a great guy—if Phil actually taught Bob how to spell Linux. Bob said yes.)
I first appeared on the masthead in 1996, and I haven’t left it since 1998. For many years I wrote the “Linux for Suits” column, and for many after that “EOF,” which ran inside the back cover. I also wrote a newsletter called “Suitwatch” and a spin-off blog called IT Garage (which you can still find at that link in the Internet Archive). I was the least technical of all Linux Journal‘s editors, but readers mostly seemed to appreciate my elevated but devoted perspective on Linux’s role in the world.
There were heady times in that history. Linux Journal succeeded fast, got fat during the dot-com craze in the late ’90s, and managed to survive the crash when many other rags went down. Remember Upside? Red Herring? The original FastCompany? (Tip your hat to Brewster Kahle and friends for the fossils of those you’ll still find in the Internet Archive.)
We can thank resourceful management and devoted subscribers for our persistence. And, of course, Linux itself. Today all 500 of the world’s top supercomputers run Linux. Since Android is built on Linux, most of the world’s smartphones run on Linux. Name a giant tech company (e.g. Google, Amazon, Akamai) and chances are the services it deploys run on Linux too. Month after month, Netcraft‘s Most Reliable Hosting Company Sites lists are either all-Linux or close enough. Linux is also embedded in countless devices, from clocks to wi-fi routers to flat-screen TVs.
In its own small but significant way, Linux Journal helped make that happen. Wish it could keep doing that, but alas.
So a hearty thanks to everyone who helped us through all those years. It’s been great, and will remain so.
Now, in hope that other publications might be saved, here are some of the posts and essays I’ve written toward that goal—and toward saving the advertising business from itself as well: