It’s misses like this that have people thinking there’s nothing to fear from AI.
Publishing and advertising both need to bend back toward where they came from, and what works. I see hope for that in the news today.
In Refinery29 Lays Off 10% of Staff as 2018 Revenue Comes Up Short, Todd Spangler (@xpangler) of Variety reports,
Digital media company Refinery29, facing a 5% revenue shortfall for the year, is cutting 10% of its workforce, or about 40 employees.
Company co-founders and co-CEOs Philippe von Borries and Justin Stefano announced the cuts in an internal memo. “While our 2018 revenue will show continued year-over-year growth, we are projecting to come in approximately 5% short of our goal,” they wrote. As a result of its financial pressures, “we will be parting ways with approximately 10% of our workforce.”
The latest cuts, first reported by the Wall Street Journal, come after New York-based Refinery29 laid off 34 employees in December 2017.
Refinery29, which targets a millennial female audience, is going to cut back on content “with a short shelf life,” according to the execs. “While this type of content has been driving views, it has not yielded a great monetization strategy to justify the same level of continued investment.” Von Borries and Stefano wrote that they see sustainable growth in “premium, evergreen” programming, and plan to produce more video (both short- and long-form) on that front.
I’ve boldfaced the important stuff. To explain why it’s important, dig this, from Refinery29 Lays Off 10% of Its Staff, Unifies Sales Team, by Melynda Fuller (@MGrace_Fuller) in MediaPost:
As part of the restructuring, Refinery29 will also consolidate its sales teams into a unified Customer Solutions Group, in addition to a Sales Planning and Operations Group.
This suggests that Refinery29 is becoming a high-integrity publication, and not just another content pump and eyeball-shooting gallery for adtech (tracking-based advertising). (This Digiday piece by @maxwillens may suggest the same.) If that’s so, then there is new hope: not just for publishing online, but for the kind of brand advertising that actually sponsors publications, and which has worked for both brands and publications since forever in the offline world.
By now pretty much all of online advertising is adtech, which doesn’t sponsor publishers. Instead it uses publishers to mark and track eyeballs wherever they might go. It does that by planting tracking beacons (mixed like poison blueberries into those cookies sites now require “consent” to) on readers’ browsers or phones, and then shoots the readers’ eyeballs with ads when they show up elsewhere on the Web, preferably on the cheapest possible site, so those eyeballs can be hit as often as possible within the budget the advertiser has paid adtech intermediaries. (To readers the most obvious example of this is “retargeting,” perfectly described by The Onion in Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)
Advertising, real advertising—the kind that makes brands and sponsors publications—doesn’t do any of that. Here’s how I explain the difference in GDPR will pop the adtech bubble:
By focusing less on “content-production” (that stuff with a short shelf life) and consolidating its sales staff, Refinery29 appears to be re-making itself as a publication that can attract actual sponsors—real brands, doing real branding—and not just eyeball-hunting intermediaries that deliver lots of data and numbers to advertisers but nothing with rich value.
[Later…] This Digiday piece may support that thesis.
If that’s the case, online publishing is starting to turn a corner, led by Refinery29, and heading back to what makes it valuable: to its readers, to its advertisers and to itself.
We live in two worlds now: the natural one where we have bodies that obey the laws of gravity and space/time, and the virtual one where there is no gravity or distance (though there is time).
In other words, we are now digital as well as physical beings, and this is new to a human experience where, so far, we are examined and manipulated like laboratory animals by giant entities that are out of everybody’s control—including theirs.
The collateral effects are countless and boundless.
Take journalism, for example. That’s what I did in a TEDx talk I gave last month in Santa Barbara:
I next visited several adjacent territories with a collection of brilliant folk at the Ostrom Workshop on Smart Cities. (Which was live-streamed, but I’m not sure is archived yet. Need to check.)
Among those folk was Brett Frischmann, whose canonical work on infrastructure I covered here, and who in Re-Engineering Humanity (with Evan Selinger) explains exactly how giants in the digital infrastructure business are hacking the shit out of us—a topic I also visit in Engineers vs. Re-Engineering (my August editorial in Linux Journal).
Now also comes Bruce Schneier, with his perfectly titled book Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World, which Farhad Manjoo in The New York Times sources in A Future Where Everything Becomes a Computer Is as Creepy as You Feared. Pull-quote: “In our government-can’t-do-anything-ever society, I don’t see any reining in of the corporate trends.”
In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, a monumental work due out in January (and for which I’ve seen some advance galleys) Shoshana Zuboff makes both cases (and several more) at impressive length and depth.
Privacy plays in all of these, because we don’t have it yet in the digital world. Or not much of it, anyway.
In reverse chronological order, here’s just some of what I’ve said on the topic:
So here we are: naked in the virtual world, just like we were in the natural one before we invented clothing and shelter.
And that’s the challenge: to equip ourselves to live private and safe lives, and not just public and endangered ones, in our new virtual world.
And I’m optimistic about our prospects.
I’ll also be detailing that optimism in the midst of a speech titled “Why adtech sucks and needs to be killed” next Wednesday (October 17th) at An Evening with Advertising Heretics in NYC. Being at the Anne L. Bernstein Theater on West 50th, it’s my off-Broadway debut. The price is a whopping $10.
If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?
Yet there is lately a widespread urge to claim personal data as personal property, and to create commodity markets for personal data, so people can start making money by selling or otherwise monetizing their own.
ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why the VRM Developments Work page of its wiki includes a heading called Markets for Personal Data. Listed there are:
Yet, while I salute these efforts’ respect for individuals, and their righteous urges to right the wrongs of wanton and rude harvesting of personal data from approximately everybody, I also think there are problems with this approach. And, since I’ve been asked lately to spell out those problems, I shall. Here goes.
|  | Excludability: YES | Excludability: NO |
|---|---|---|
| Rivalness: YES | Private good: e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers | Common pool resource: e.g., sea, rivers, forests, their edible inhabitants and other useful contents |
| Rivalness: NO | Club good: e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works | Public good: e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting |
The second problem is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation.
Of course Jefferson never heard of data. But what he says about “the thinking power called an idea,” and how ideas are like fire, is essential in a very human way.
The third problem is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data.
Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, even though it’s combustible.
Put another way, why would you want to make almost nothing (the likely price) selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist?
What makes us fully powerful as human beings is our ability to generate and share ideas and other combustible public goods, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.
Important note: I’m not knocking labor here. Most of us have to work for wages as parts of industrial machines, or as independent actors. I do too. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.
Many years ago JP Rangaswami (@jobsworth) and I made a distinction between making money with something and because of something. It’s a helpful one.
Example: I don’t make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage.
Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.
The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.
Worse, surveillance capitalism’s business is making guesses about you so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:
Trying to get in on that business is just an awful proposition.
Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?
Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)
And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?
What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.
It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.
It’s still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. The matter is when, not if. It’s too absurd, corrupt, complex and annoying to last forever.
So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting our most human of powers to work.
The most basic form of agency in the digital world is control over how our personal data might be used by others. There are lots of developers at work on this already. Here’s one list at ProjectVRM.
Enforcing Data Protection: A Model for Risk-Based Supervision Using Responsive Regulatory Tools, a post by Dvara Research, summarizes Effective Enforcement of a Data Protection Regime, a deeply thought and researched paper by Beni Chugh (@BeniChugh), Malavika Raghavan (@teninthemorning), Nishanth Kumar (@beamboybeamboy) and Sansiddha Pani (@julupani). While it addresses proximal concerns in India, it provides useful guidance for data regulators everywhere.
Any data protection regulator faces certain unique challenges. The ubiquitous collection and use of personal data by service providers in the modern economy creates a vast space for a regulator to oversee. Contraventions of a data protection regime may not immediately manifest and when they do, may not have a clear monetary or quantifiable harm. The enforcement perimeter is market-wide, so a future data protection authority will necessarily interface with other sectoral institutions. In light of these challenges, we present a model for enforcement of a data protection regime based on risk-based supervision and the use of a range of responsive enforcement tools.
This forward-looking approach considers the potential for regulators to employ a range of softer tools before a breach to prevent it and after a breach to mitigate the effects. Depending on the seriousness of contraventions, the regulator can escalate up to harder enforcement actions. The departure from the focus on post-data breach sanctions (that currently dominate data protection regimes worldwide) is an attempt to consider how the regulatory community might act in coordination with entities processing data to minimise contraventions of the regime.
I hope European regulators are looking at this. Because, as I said in a headline to a post last month, without enforcement, the GDPR is a fail.
Bonus link from the IAPP (International Association of Privacy Professionals): When will we start seeing GDPR enforcement actions? We guess Feb. 22, 2019.
The GDPR has been in force since May 25th, and it has done almost nothing to stop websites that make money from tracking-based advertising from participating in the tracking of readers. Instead almost all we’ve seen so far are requests from websites to keep doing what they’re doing.
Only worse. Because now when you click “Accept” under an interruptive banner saying the site’s “cookies and other technologies collect data to enhance your experience and personalize the content and advertising you see,” you’ve just consented to being spied on. And they’re covered. They can carry on with surveillance-as-usual.
Score: Adtech 1, privacy 0.
Or so it seems. So far.
Are there any examples of publications that aren’t participating in #adtech’s spy game? Besides Linux Journal?
This is what greets me when I go to the Washington Post site from here in Germany:
Note that last item in the Premium EU Subscription column: “No on-site advertising or third-party tracking.”
Ponder for a moment how the Sunday (or any) edition of the Post‘s print edition would look with no on-paper advertising. It would be woefully thin and kind of worthless-looking. Two more value-adds for advertising in the print edition:
So here’s the message I want the Post to hear from me, and from every reader who values what they do:
That’s what I get from the print edition, and that’s what I want from the online edition as well.
So I want two things here.
One is an answer to this question: Are ANY publishers in the online world selling old-fashioned ads that aren’t based on tracking and therefore worth more than the tracking kind? (And are GDPR-compliant as well, since the ads aren’t aimed by collected personal data.)
The other is to subscribe to the Post as soon as they show me they’re willing to do what I ask: give me those real ads again. And stop assuming that all ads need to be the tracking-based kind.
Thanks in advance.
In The Big Short, investor Michael Burry says “One hallmark of mania is the rapid rise in the incidence and complexity of fraud.” (Burry shorted the mania- and fraud-filled subprime mortgage market and made a mint in the process.)
One would be equally smart to bet against the mania for the tracking-based form of advertising called adtech.
Since tracking people took off in the late ’00s, adtech has grown to become a four-dimensional shell game played by hundreds (or, if you include martech, thousands) of companies, none of which can see the whole mess, or can control the fraud, malware and other forms of bad acting that thrive in the midst of it.
And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.
“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.
Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”
Of course what I just said greatly simplifies what the GDPR actually utters, in bureaucratic legalese. The GDPR is also full of loopholes only snakes can thread; but the spirit of the law is clear, and the snakes will be easy to shame, even if they don’t get fined. (And legitimate interest—an actual loophole in the GDPR—may prove hard to claim.)
Toward the aftermath, the main question is What will be left of advertising—and what it supports—after the adtech bubble pops?
Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.
To get a sense of what will be left of adtech after GDPR Sunrise Day, start by reading a pair of articles in AdExchanger by @JamesHercher. The first reports on the Transparency and Consent Framework published by IAB Europe. The second reports on how Google is pretty much ignoring that framework and going direct with their own way of obtaining consent to tracking:
Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.
The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.
Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.
Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”
And that’s not all:
Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.
Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.
This is also Google’s way of offloading GDPR liability onto publishers.
Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Consent Management Framework. Writes James,
The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.
The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…
Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Consent and Transparency Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.
In other words, good luck with that.
[Later, 26 May…] Well, Google caved on this one, so apparently Google is coming to IAB Europe’s table.
[And on 30 May…] Axel Springer is also going its own way.
One big upside for IAB Europe is that its Framework contains open source code and an SDK. For a full unpacking of what’s there see the Consent String and Vendor List Format: Transparency & Consent Framework on GitHub and IAB Europe’s own FAQ. More about this shortly.
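To give a concrete sense of what that open source code deals with: a TCF consent string is a base64url-encoded run of packed bit fields (version, timestamps, allowed purposes, vendor consents, and so on). The sketch below is not the IAB’s actual SDK, and the two-field layout it uses (a 6-bit version followed by a 24-bit purposes bitfield) is a simplified assumption for illustration only; it just shows the general encode-and-decode mechanics.

```python
import base64

def bits_from_b64url(s: str) -> str:
    """Decode a base64url string (padding stripped) into a string of '0'/'1' bits."""
    raw = base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
    return "".join(f"{byte:08b}" for byte in raw)

def read_int(bits: str, offset: int, width: int):
    """Read an unsigned int of `width` bits at `offset`; return (value, new offset)."""
    return int(bits[offset:offset + width], 2), offset + width

# Build a tiny synthetic token: a 6-bit version field followed by a
# 24-bit purposes bitfield (purpose 1 = most significant bit).
version = 1
purposes_bits = "101000000000000000000000"  # purposes 1 and 3 allowed
payload = f"{version:06b}" + purposes_bits
payload += "0" * (-len(payload) % 8)  # byte-align the bit stream
raw = bytes(int(payload[i:i + 8], 2) for i in range(0, len(payload), 8))
token = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Decode it back, the way a consent-reading party would
bits = bits_from_b64url(token)
ver, off = read_int(bits, 0, 6)
purposes_field, off = read_int(bits, off, 24)
allowed = [i + 1 for i in range(24) if purposes_field >> (23 - i) & 1]
print(ver, allowed)  # → 1 [1, 3]
```

The real format on GitHub packs many more fields (CMP ID, consent language, vendor ranges), but the principle is the same: fixed-width bit fields, byte-aligned and base64url-encoded so the whole consent record fits in a cookie value.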
Meanwhile, the adtech business surely knows the sky is falling. The main question is how far.
One possibility is 95% of the way to zero. That outcome is suggested by results published in PageFair last October by Dr. Johnny Ryan (@JohnnyRyan) there. Here’s the most revealing graphic in the bunch:
Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.
“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”
Pretty cynical, no?
The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.
The first clues that fully matter are the ones we—the individuals they’ve been calling “users”—will deliver. Look for details on that in another post.
Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.
Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.
Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.
Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)
Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.
For more on how that will work, read The Intention Economy: When Customers Take Charge. Six years after Harvard Business Review Press published that book, what it says will start to come true. Thank you, GDPR.
Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.
What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)
Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.
When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.
Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that can be advertised at anywhere. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:
I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)
Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.
Nature and the Internet both came without privacy.
The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.
When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.
In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.
Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”
Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that when personal tech gives each of us the power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.)
We acquired agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of it with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech.
I bring this up because we will be working on privacy tech over the next four days at the Computer History Museum, first at VRM Day, today, and then over the next three days at IIW: the Internet Identity Workshop.
On the table at both are work some of us, me included, are doing through Customer Commons on terms we can proffer as individuals, and the sites and services of the world can agree to.
Those terms are examples of what we call customertech: tech that’s ours and not Facebook’s or Apple’s or Google’s or Amazon’s.
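As a purely hypothetical illustration (the field names and values here are invented, not Customer Commons’ actual format), a term an individual proffers might be a small machine-readable record that a site’s server evaluates before any tracking happens:

```python
# A hypothetical machine-readable term an individual might proffer to a site.
# All field names are invented for illustration; real Customer Commons terms
# may be structured very differently.
my_term = {
    "term": "no-third-party-tracking",   # hypothetical term identifier
    "third_party_trackers": "prohibited",
    "data_retention_days": 30,
}

def site_accepts(term: dict, site_practices: dict) -> bool:
    """Return True only if the site's declared practices satisfy the term."""
    if (term["third_party_trackers"] == "prohibited"
            and site_practices.get("third_party_trackers") != "none"):
        return False
    if site_practices.get("retention_days", 0) > term["data_retention_days"]:
        return False
    return True

# A publisher that runs no third-party trackers and keeps logs for 14 days:
clean_site = {"third_party_trackers": "none", "retention_days": 14}
# A typical adtech-funded site:
adtech_site = {"third_party_trackers": "many", "retention_days": 365}

print(site_accepts(my_term, clean_site))   # → True
print(site_accepts(my_term, adtech_site))  # → False
```

The point of the sketch is the direction of the offer: the individual states the terms and the site agrees (or doesn’t), inverting the click-“agree” norm of Web 2.0.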
The purpose is to turn the connected marketplace into a Marvel-like universe in which all of us are enhanced. It’ll be interesting to see what kind of laws follow.*
But hey, let’s invent the tech we need first.
*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.
Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.