Research


Enforcing Data Protection: A Model for Risk-Based Supervision Using Responsive Regulatory Tools, a post by Dvara Research, summarizes Effective Enforcement of a Data Protection Regime, a deeply considered and well-researched paper by Beni Chugh (@BeniChugh), Malavika Raghavan (@teninthemorning), Nishanth Kumar (@beamboybeamboy) and Sansiddha Pani (@julupani). While it addresses proximal concerns in India, it provides useful guidance for data regulators everywhere.

An excerpt:

Any data protection regulator faces certain unique challenges. The ubiquitous collection and use of personal data by service providers in the modern economy creates a vast space for a regulator to oversee. Contraventions of a data protection regime may not immediately manifest and when they do, may not have a clear monetary or quantifiable harm. The enforcement perimeter is market-wide, so a future data protection authority will necessarily interface with other sectoral institutions.  In light of these challenges, we present a model for enforcement of a data protection regime based on risk-based supervision and the use of a range of responsive enforcement tools.

This forward-looking approach considers the potential for regulators to employ a range of softer tools before a breach to prevent it and after a breach to mitigate the effects. Depending on the seriousness of contraventions, the regulator can escalate up to harder enforcement actions. The departure from the focus on post-data breach sanctions (that currently dominate data protection regimes worldwide) is an attempt to consider how the regulatory community might act in coordination with entities processing data to minimise contraventions of the regime.

I hope European regulators are looking at this. Because, as I said in a headline to a post last month, without enforcement, the GDPR is a fail.

Bonus link from the IAPP (International Association of Privacy Professionals): When will we start seeing GDPR enforcement actions? We guess Feb. 22, 2019.

In The Big Short, investor Michael Burry says “One hallmark of mania is the rapid rise in the incidence and complexity of fraud.” (Burry shorted the mania- and fraud-filled subprime mortgage market and made a mint in the process.)

One would be equally smart to bet against the mania for the tracking-based form of advertising called adtech.

Since tracking people took off in the late ’00s, adtech has grown to become a four-dimensional shell game played by hundreds (or, if you include martech, thousands) of companies, none of which can see the whole mess, or can control the fraud, malware and other forms of bad acting that thrive in the midst of it.

And that’s on top of the main problem: tracking people without their knowledge, approval or a court order is just flat-out wrong. The fact that it can be done is no excuse. Nor is the monstrous sum of money made by it.

Without adtech, the EU’s GDPR (General Data Protection Regulation) would never have happened. But the GDPR did happen, and as a result websites all over the world are suddenly posting notices about their changed privacy policies, use of cookies, and opt-in choices for “relevant” or “interest-based” (translation: tracking-based) advertising. Email lists are doing the same kinds of things.

“Sunrise day” for the GDPR is 25 May. That’s when the EU can start smacking fines on violators.

Simply put, your site or service is a violator if it extracts or processes personal data without personal permission. Real permission, that is. You know, where you specifically say “Hell yeah, I wanna be tracked everywhere.”

Of course what I just said greatly simplifies what the GDPR actually utters, in bureaucratic legalese. The GDPR is also full of loopholes only snakes can thread; but the spirit of the law is clear, and the snakes will be easy to shame, even if they don’t get fined. (And legitimate interest, an actual loophole in the GDPR, may prove hard to claim.)

Looking toward the aftermath, the main question is: what will be left of advertising—and of what it supports—after the adtech bubble pops?

Answers require knowing the differences between advertising and adtech, which I liken to wheat and chaff.

First, advertising:

    1. Advertising isn’t personal, and doesn’t have to be. In fact, knowing it’s not personal is an advantage for advertisers. Consumers don’t wonder what the hell an ad is doing where it is, who put it there, or why.
    2. Advertising makes brands. Nearly all the brands you know were burned into your brain by advertising. In fact the term branding was borrowed by advertising from the cattle business. (Specifically by Procter and Gamble in the early 1930s.)
    3. Advertising carries an economic signal. Meaning that it shows a company can afford to advertise. Tracking-based advertising can’t do that. (For more on this, read Don Marti, starting here.)
    4. Advertising sponsors media, and those paid by media. All the big pro sports salaries are paid by advertising that sponsors game broadcasts. For lack of sponsorship, media—especially publishers—are hurting. @WaltMossberg learned why on a conference stage when an ad agency guy said the agency’s ads wouldn’t sponsor Walt’s new publication, recode. Walt: “I asked him if that meant he’d be placing ads on our fledgling site. He said yes, he’d do that for a little while. And then, after the cookies he placed on Recode helped him to track our desirable audience around the web, his agency would begin removing the ads and placing them on cheaper sites our readers also happened to visit. In other words, our quality journalism was, to him, nothing more than a lead generator for target-rich readers, and would ultimately benefit sites that might care less about quality.” With friends like that, who needs enemies?

Second, adtech:

    1. Adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media, and it causes negative associations with brands. Consider this: perhaps a $trillion or more has been spent on adtech, and not one brand known to the world has been made by it. (Bob Hoffman, aka the Ad Contrarian, is required reading on this.)
    2. Adtech wants to be personal. That’s why it’s tracking-based. Though its enthusiasts call it “interest-based,” “relevant” and other harmless-sounding euphemisms, it relies on tracking people. In fact it can’t exist without tracking people. (Note: while all adtech is programmatic, not all programmatic advertising is adtech. In other words, programmatic advertising doesn’t have to be based on tracking people. Same goes for interactive. Programmatic and interactive advertising will both survive the adtech crash.)
    3. Adtech spies on people and violates their privacy. By design. Never mind that you and your browser or app are anonymized. The ads are still for your eyeballs, and correlations can be made.
    4. Adtech is full of fraud and a vector for malware. @ACFou is required reading on this.
    5. Adtech incentivizes publications to prioritize “content generation” over journalism. More here and here.
    6. Intermediators take most of what’s spent on adtech. Bob Hoffman does a great job showing how as little as 3¢ of a dollar spent on adtech actually makes an “impression.” The most generous number I’ve seen is 12¢. (When I was in the ad agency business, back in the last millennium, clients complained about our 15% take. Media our clients bought got 85%.)
    7. Adtech gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.
    8. Adtech incentivizes hate speech and tribalism by giving both—and the platforms that host them—a business model too.
    9. Adtech relies on misdirection. See, adtech looks like advertising, and is called advertising; but it’s really direct marketing, which is descended from junk mail and a cousin of spam. Because of that misdirection, brands think they’re placing ads in media, while the systems they hire are actually chasing eyeballs to anywhere. (Pro tip: if somebody says every ad needs to “perform,” or that the purpose of advertising is “to get the right message to the right person at the right time,” they’re actually talking about direct marketing, not advertising. For more on this, read Rethinking John Wanamaker.)
    10. Compared to advertising, adtech is ugly. Look up best ads of all time. One of the top results is for the American Advertising Awards. The latest winners they’ve posted are the Best in Show for 2016. Tops there is an Allstate “Interactive/Online” ad pranking a couple at a ball game. Over-exposure of their lives online leads that well-branded “Mayhem” guy to invade and trash their house. In other words, it’s a brand ad about online surveillance.
    11. Adtech has caused the largest boycott in human history. By more than a year ago, 1.7+ billion human beings were already blocking ads online.

To get a sense of what will be left of adtech after GDPR Sunrise Day, start by reading a pair of articles in AdExchanger by @JamesHercher. The first reports on the Transparency and Consent Framework published by IAB Europe. The second reports on how Google is pretty much ignoring that framework and going direct with their own way of obtaining consent to tracking:

Google’s and other consent-gathering solutions are basically a series of pop-up notifications that provide a mechanism for publishers to provide clear disclosure and consent in accordance with data regulations.

Specifically,

The Google consent interface greets site visitors with a request to use data to tailor advertising, with equally prominent “no” and “yes” buttons. If a reader declines to be tracked, he or she sees a notice saying the ads will be less relevant and asking to “agree” or go back to the previous page. According to a source, one research study on this type of opt-out mechanism led to opt-out rates of more than 70%.

Meaning only 30% of site visitors will consent to being tracked. So, say goodbye to 70% of adtech’s eyeball targets right there.

Google’s consent gathering system, dubbed “Funding Choices,” also screws most of the hundreds of other adtech intermediaries fighting for a hunk of what’s left of their market. Writes James, “It restricts the number of supply chain partners a publisher can share consent with to just 12 vendors, sources with knowledge of the product tell AdExchanger.”

And that’s not all:

Last week, Google alerted advertisers it would sharply limit use of the DoubleClick advertising ID, which brands and agencies used to pull log files from DoubleClick so campaigns could be cohesively measured across other ad servers, incentivizing buyers to consolidate spend on the Google stack.

Google also raised eyebrows last month with a new policy insisting that all DFP publishers grant it status as a data controller, giving Google the right to collect and use site data, whereas other online tech companies – mere data processors – can only receive limited data assigned to them by the publisher, i.e., the data controller.

This is also Google’s way of shifting GDPR liability onto publishers.

Publishers and adtech intermediaries can attempt to avoid Google by using Consent Management Platforms (CMPs), a new category of intermediary defined and described by IAB Europe’s Transparency and Consent Framework. Writes James,

The IAB Europe and IAB Tech Lab framework includes a list of registered vendors that publishers can pass consent to for data-driven advertising. The tech companies pay a one-time fee between $1,000 and $2,000 to join the vendor list, according to executives from three participating companies…Although now that the framework is live, the barriers to adoption are painfully real as well.

The CMP category is pretty bare at the moment, and it may be greeted with suspicion by some publishers. There are eight initial CMPs: two publisher tech companies with roots in ad-blocker solutions, Sourcepoint and Admiral, as well as the ad tech companies Quantcast and Conversant and a few blockchain-based advertising startups…

Digital Content Next, a trade group representing online news publishers, is advising publishers to reject the framework, which CEO Jason Kint said “doesn’t meet the letter or spirit of GDPR.” Only two publishers have publicly adopted the Transparency and Consent Framework, but they’re heavy hitters with blue-chip value in the market: Axel Springer, Europe’s largest digital media company, and the 180-year-old Schibsted Media, a respected newspaper publisher in Sweden and Norway.

In other words, good luck with that.

[Later, 26 May…] Well, Google caved on this one, so apparently Google is coming to IAB Europe’s table.

[And on 30 May…] Axel Springer is also going its own way.

One big upside for IAB Europe is that its Framework contains open source code and an SDK. For a full unpacking of what’s there see the Consent String and Vendor List Format: Transparency & Consent Framework on GitHub and IAB Europe’s own FAQ. More about this shortly.
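
A note on what that open source actually handles: the framework’s consent string is a compact, base64url-encoded bitfield. Here is a minimal Python sketch of reading a few of its leading fields; the bit widths and offsets reflect my reading of the v1 format documented at the GitHub link above, so treat them as assumptions to check against the spec rather than as authoritative.

```python
import base64

def bits(data: bytes) -> str:
    """Render the decoded payload as a string of '0'/'1' characters."""
    return "".join(f"{byte:08b}" for byte in data)

def decode_consent_string(consent: str) -> dict:
    """Read a few leading fields of an IAB TCF v1 consent string.

    Field widths follow my reading of the Consent String format linked
    above (6-bit version, two 36-bit timestamps, 12-bit CMP id, ...,
    24-bit purposes bitfield starting at bit 132). Verify these offsets
    against the spec before relying on them.
    """
    # Consent strings are URL-safe base64 without padding.
    raw = base64.urlsafe_b64decode(consent + "=" * (-len(consent) % 4))
    b = bits(raw)
    return {
        "version": int(b[0:6], 2),
        "created_deciseconds": int(b[6:42], 2),  # timestamp, in tenths of a second
        "cmp_id": int(b[78:90], 2),
        "purposes_allowed": [i + 1 for i, bit in enumerate(b[132:156]) if bit == "1"],
    }

# Usage: pass in a consent string captured from a CMP cookie, e.g.
#   print(decode_consent_string(consent_string_from_cookie))
```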

Meanwhile, the adtech business surely knows the sky is falling. The main question is how far.

One possibility is 95% of the way to zero. That outcome is suggested by results published in PageFair last October by Dr. Johnny Ryan (@JohnnyRyan) there. Here’s the most revealing graphic in the bunch:

Note that this wasn’t a survey of the general population. It was a survey of ad industry people: “300+ publishers, adtech, brands, and various others…” Pause for a moment and look at that chart again. Nearly all those professionals in the business would not accept what their businesses do to other human beings.

“However,” Johnny adds, “almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR…”

Pretty cynical, no?

The good news for both advertising and publishing is that neither needs adtech. What’s more, people can signal what they want out of the sites they visit—and from the whole marketplace. In fact the Internet itself was designed for exactly that. The GDPR just made the market a lot more willing to start hearing clues from customers that have been lying in plain sight for almost twenty years.

The first clues that fully matter are the ones we—the individuals they’ve been calling “users”—will deliver. Look for details on that in another post.

Meanwhile::::

Pro tip #1: don’t bet against Google, except maybe in the short term, when sunrise will darken the whole adtech business.

Instead, bet against companies that stake their lives on tracking people, and doing that without the clear and explicit consent of the tracked. That’s most of the adtech “ecosystem” not called Google or Facebook.

Google can say it already has consent, and that it also has a legitimate interest (one of the six “lawful bases” for tracking) in the personal data it harvests from us.

Google can also live without the tracking. Most of its income comes from AdWords—its search advertising business—which is far more guided by what visitors are searching for than by whatever Google knows about those visitors.

Google is also relatively trusted, as tech companies go. Its parent, Alphabet, is also increasingly diversified. Facebook, on the other hand, does stake its life on tracking people. (I say more about Facebook’s odds here.)

Pro tip #2: do bet on any business working for customers rather than sellers. Because signals of personal intent will produce many more positive outcomes in the digital marketplace than surveillance-fed guesswork by sellers ever could, even with the most advanced AI behind it.

For more on how that will work, read The Intention Economy: When Customers Take Charge. Six years after Harvard Business Review Press published that book, what it says will start to come true. Thank you, GDPR.

Pro tip #3: do bet on developers building tools that give each of us scale in dealing with the world’s companies and governments, because those are the tools businesses working for customers will rely on to scale up their successes as well.

What it comes down to is the need for better signaling between customers and companies than can ever be possible in today’s doomed tracking-fed guesswork system. (All the AI and ML in the world won’t be worth much if the whole point of it is to sell us shit.)

Think about what customers and companies want and need about each other: interests, intentions, competencies, locations, availabilities, reputations—and boundaries.

When customers can operate both privately and independently, we’ll get far better markets than today’s ethically bankrupt advertising and marketing system could ever give us.

Pro tip #4: do bet on publishers getting back to what worked since forever offline and hardly got a chance online: plain old brand advertising that carries both an economic and a creative signal, and actually sponsors the publication rather than using the publication as a way to gather eyeballs that can be advertised at anywhere. The oeuvres of Don Marti (@dmarti) and Bob Hoffman (the @AdContrarian) are thick with good advice about this. I’ve also written about it extensively in the list compiled at People vs. Adtech. Some samples, going back through time:

  1. An easy fix for a broken advertising system (12 October 2017 in Medium and in my blog)
  2. Without aligning incentives, we can’t kill fake news or save journalism (15 September 2017 in Medium)
  3. Let’s get some things straight about publishing and advertising (9 September 2017 and the same day in Medium)
  4. Good news for publishers and advertisers fearing the GDPR (3 September 2017 in ProjectVRM and 7 October in Medium).
  5. Markets are about more than marketing (2 September 2017 in Medium).
  6. Publishers’ and advertisers’ rights end at a browser’s front door (17 June 2017 in Medium). It updates one of the 2015 blog posts below.
  7. How to plug the publishing revenue drain (9 June 2017 in Medium). It expands on the opening (#publishing) section of my Daily Tab for that date.
  8. How True Advertising Can Save Journalism From Drowning in a Sea of Content (22 January 2017 in Medium and 26 January 2017 in my blog)
  9. It’s People vs. Advertising, not Publishers vs. Adblockers (26 August 2016 in ProjectVRM and 27 August 2016 in Medium)
  10. Why #NoStalking is a good deal for publishers (11 May 2016, and in Medium)
  11. How customers can debug business with one line of code (19 April 2016 in ProjectVRM and in Medium)
  12. An invitation to settle matters with @Forbes, @Wired and other publishers (15 April 2016 and in Medium)
  13. TV Viewers to Madison Avenue: Please quit driving drunk on digital (14 April 2016, and in Medium)
  14. The End of Internet Advertising as We’ve Known It (11 December 2015 in MIT Technology Review)
  15. Ad Blockers and the Next Chapter of the Internet (5 November in Harvard Business Review)
  16. How #adblocking matures from #NoAds to #SafeAds (22 October 2015)
  17. Helping publishers and advertisers move past the ad blockade (11 October 2015 on the ProjectVRM blog)
  18. Beyond ad blocking — the biggest boycott in human history (28 September 2015)
  19. A way to peace in the adblock war (21 September 2015, on the ProjectVRM blog)
  20. How adtech, not ad blocking, breaks the social contract (23 September 2015)
  21. If marketing listened to markets, they’d hear what ad blocking is telling them (8 September 2015)
  22. Apple’s content blocking is chemo for the cancer of adtech (26 August 2015)
  23. Separating advertising’s wheat and chaff (12 August 2015, and on 2 July 2016 in an updated version in Medium)
  24. Thoughts on tracking based advertising (18 February 2015)
  25. On marketing’s terminal addiction to data fracking and bad guesswork (10 January 2015)
  26. Why to avoid advertising as a business model (25 June 2014, re-running Open Letter to Meg Whitman, which ran on 15 October 2000 in my old blog)
  27. What the ad biz needs is to exorcize direct marketing (6 October 2013)
  28. Bringing manners to marketing (12 January 2013 in Customer Commons)
  29. What could/should advertising look like in 2020, and what do we need to do now for this future? (Wharton’s Future of Advertising project, 13 November 2012)
  30. An olive branch to advertising (12 September 2012, on the ProjectVRM blog)

I expect, once the GDPR gets enforced, I can start writing about People + Publishing and even People + Advertising. (I have long histories in both publishing and advertising, by the way. So all of this is close to home.)

Meanwhile, you can get a jump on the GDPR by blocking third party cookies in your browsers, which will stop most of today’s tracking by adtech. Customer Commons explains how.

Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Irony Alert: the same is true for the Times, along with every other publication that lives off adtech: tracking-based advertising. These pubs don’t just open the kimonos of their readers. They bring readers’ bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of aiming “interest-based” advertising at those same readers, wherever those readers’ eyeballs may appear—or reappear in the case of “retargeted” advertising.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach, standard or experience), and no blood valving by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data?

Answer: nobody can, because the whole adtech “ecosystem” is a four-dimensional shell game with hundreds of players

or, in the case of “martech,” thousands:

For one among many views of what’s going on, here’s a compressed screen shot of what Privacy Badger showed going on in my browser behind Zeynep’s op-ed in the Times:

[Added later…] @ehsanakhgari tweets pointage to WhoTracksMe’s page on the NYTimes, which shows this:

And here’s more irony: a screen shot of the home page of RedMorph, another privacy protection extension:

That quote is from Free Tools to Keep Those Creepy Online Ads From Watching You, by Brian X. Chen and Natasha Singer, and published on 17 February 2016 in the Times.

The same irony applies to countless other correct and important reporting on the Facebook/Cambridge Analytica mess by other writers and pubs. Take, for example, Cambridge Analytica, Facebook, and the Revelations of Open Secrets, by Sue Halpern in yesterday’s New Yorker. Here’s what RedMorph shows going on behind that piece:

Note that I have the data leak toward Facebook.net blocked by default.

Here’s a view through RedMorph’s controller pop-down:

And here’s what happens when I turn off “Block Trackers and Content”:

By the way, I want to make clear that Zeynep, Brian, Natasha and Sue are all innocents here, thanks both to the “Chinese wall” between the editorial and publishing functions of the Times, and the simple fact that the route any ad takes between advertiser and reader through any number of adtech intermediaries is akin to a ball falling through a pinball machine. Refresh your page while reading any of those pieces and you’ll see a different set of ads, no doubt aimed by automata guessing that you, personally, should be “impressed” by those ads. (They’ll count as “impressions” whether you are or not.)

Now…

What will happen when the Times, the New Yorker and other pubs own up to the simple fact that they are just as guilty as Facebook of leaking their readers’ data to other parties, for—in many if not most cases—God knows what purposes besides “interest-based” advertising? And what happens when the EU comes down on them too? It’s game-on after 25 May, when the EU can start fining violators of the General Data Protection Regulation (GDPR). Key fact: the GDPR protects the data blood of what they call “EU data subjects” wherever those subjects’ necks are exposed in a borderless digital world.

To explain more about how this works, here is the (lightly edited) text of a tweet thread posted this morning by @JohnnyRyan of PageFair:

Facebook left its API wide open, and had no control over personal data once those data left Facebook.

But there is a wider story coming: (thread…)

Every single big website in the world is leaking data in a similar way, through “RTB bid requests” for online behavioural advertising #adtech.

Every time an ad loads on a website, the site sends the visitor’s IP address (indicating physical location), the URL they are looking at, and details about their device, to hundreds (often thousands) of companies. Here is a graphic that shows the process.

The website does this to let these companies “bid” to show their ad to this visitor. Here is a video of how the system works. In Europe this accounts for about a quarter of publishers’ gross revenue.

Once these personal data leave the publisher, via “bid request”, the publisher has no control over what happens next. I repeat that: personal data are routinely sent, every time a page loads, to hundreds/thousands of companies, with no control over what happens to them.

This means that every person, and what they look at online, is routinely profiled by companies that receive these data from the websites they visit. Where possible, these data are combined with offline data. These profiles are built up in “DMPs”.

Many of these DMPs (data management platforms) are owned by data brokers. (Side note: The FTC’s 2014 report on data brokers is shocking. See https://www.ftc.gov/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014.) There is no functional difference between an #adtech DMP and Cambridge Analytica.

—Terrell McSweeny, Julie Brill and EDPS

None of this will be legal under the #GDPR. (See one reason why at https://t.co/HXOQ5gb4dL). Publishers and brands need to take care to stop using personal data in the RTB system. Data connections to sites (and apps) have to be carefully controlled by publishers.

So far, #adtech’s trade body has been content to cover over this wholesale personal data leakage with meaningless gestures that purport to address the #GDPR (see my note on @IABEurope current actions here: https://t.co/FDKBjVxqBs). It is time for a more practical position.

And advertisers, who pay for all of this, must start to demand that safe, non-personal data take over in online RTB targeting. RTB works without personal data. Brands need to demand this to protect themselves – and all Internet users too. @dwheld @stephan_lo @BobLiodice

Websites need to control
1. which data they release into the RTB system
2. whether ads render directly in visitors’ browsers (where DSPs’ JavaScript can drop trackers)
3. what 3rd parties get to be on their page
@jason_kint @epc_angela @vincentpeyregne @earljwilkinson 11/12

Let’s work together to fix this. 12/12
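
To make the thread’s point about bid requests concrete, here is a rough sketch of the kind of payload an ad slot fires off when a page loads. The field names follow the public OpenRTB convention; the values, and the simplified shape, are invented for illustration, not copied from any particular exchange.

```python
# A rough sketch of an OpenRTB-style bid request, written as a Python dict.
# Field names follow the public OpenRTB 2.x convention (imp, site, device,
# user); every value here is invented for illustration.
bid_request = {
    "id": "example-auction-id",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot up for auction
    "site": {
        "domain": "example-publisher.com",
        "page": "https://example-publisher.com/the-article-you-are-reading",
    },
    "device": {
        "ip": "203.0.113.7",        # visitor's IP address, which indicates location
        "ua": "Mozilla/5.0 (...)",  # browser and operating system details
    },
    "user": {"id": "cookie-or-device-identifier"},
}

# Something very like this object is broadcast to the exchanges and DSPs
# bidding on the impression. Once it leaves the publisher, the publisher
# has no control over what any recipient does with it, which is the leak
# the thread describes.
```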

Those last three recommendations are all good, but they also assume that websites, advertisers and their third party agents are the ones with the power to do something. Not readers.

But there’s lots readers will be able to do. More about that shortly. Meanwhile, publishers can get right with readers by dropping #adtech and going back to publishing the kind of high-value brand advertising they’ve run since forever in the physical world.

That advertising, as Bob Hoffman (@adcontrarian) and Don Marti (@dmarti) have been making clear for years, is actually worth a helluva lot more than adtech, because it delivers clear creative and economic signals and comes with no cognitive overhead (for example, wondering where the hell an ad comes from and what it’s doing right now).

As I explain here, “Real advertising wants to be in a publication because it values the publication’s journalism and readership” while “adtech wants to push ads at readers anywhere it can find them.”

Going back to real advertising is the easiest fix in the world, but so far it’s nearly unthinkable because we’ve been defaulted for more than twenty years to an asymmetric power relationship between readers and publishers called client-server. I’ve been told that client-server was chosen as the name for this relationship because “slave-master” didn’t sound so good; but I think the best way to visualize it is calf-cow:

As I put it at that link (way back in 2012), Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing to the Net that prevents each of us from having plenty of power on our own.

On the Net, we don’t need to be slaves, cattle or blood bags. We can be human. In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.

Customer Commons is working on exactly those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now the editor-in-chief. The first of those terms will say “just show me ads not based on tracking me,” and is hashtagged #DoNotByte.

In Help Us Cure Online Publishing of Its Addiction to Personal Data, I explain how this models the way advertising ought to be done: by the grace of readers, with no spying.

Obeying readers’ terms also carries no risk of violating privacy laws, because every pub will have contracts with its readers to do the right thing. This is totally do-able. Read that last link to see how.

As I say there, we need help. Linux Journal still has a small staff, and Customer Commons (a California-based 501(c)(3) nonprofit) so far consists of five board members. What it aims to be is a worldwide organization of customers, as well as the place where terms we proffer can live, much as Creative Commons is where personal copyright licenses live. (Customer Commons is modeled on Creative Commons. Hats off to the Berkman Klein Center for helping bring both into the world.)

I’m also hoping other publishers, once they realize that they are no less a part of the surveillance economy than Facebook and Cambridge Analytica, will help out too.

[Later…] Not long after this post went up I talked about these topics on the Gillmor Gang. Here’s the video, plus related links.

I think the best push-back I got there came from Esteban Kolsky (@ekolsky), who (as I recall, anyway) saw less than full moral equivalence between what Facebook and Cambridge Analytica did to screw with democracy and what the New York Times and other ad-supported pubs do by baring the necks of their readers to dozens of data vampires.

He’s right that they’re not equivalent, any more than apples and oranges are equivalent. The sins are different; but they are still sins, just as apples and oranges are still both fruit. Exposing readers to data vampires is simply wrong on its face, and we need to fix it. That it’s normative in the extreme is no excuse. Nor is the fact that it makes money. There are morally uncompromised ways to make money with advertising, and those are still available.

Another push-back is the claim by many adtech third parties that the personal data blood they suck is anonymized. While that may be so, correlation is still possible. See Study: Your anonymous web browsing isn’t as anonymous as you think, by Barry Levine (@xBarryLevine) in Martech Today, which cites De-anonymizing Web Browsing Data with Social Networks, a study by Jessica Su (@jessicatsu), Ansh Shukla (@__anshukla__) and Sharad Goel (@5harad) of Stanford and Arvind Narayanan (@random_walker) of Princeton.

(Note: Facebook and Google follow logged-in users by name. They also account for most of the adtech business.)

One commenter below noted that this blog also carries six trackers (most of which I block). Here is how those look on Ghostery:

So let’s fix this thing.

[Later still…] Lots of comments in Hacker News as well.

[Later again (8 April 2018)…] About the comments below (60+ so far): the version of commenting used by this blog doesn’t support threading. If it did, my responses to comments would appear below each one. Alas, some not only appear out of sequence, but others don’t appear at all. I don’t know why, but I’m trying to find out. Meanwhile, apologies.

Just before it started, the geology meeting at the Santa Barbara Central Library on Thursday looked like this from the front of the room (where I also tweeted the same pano):

Geologist Ed Keller

Our speakers were geology professor Ed Keller of UCSB and Engineering Geologist Larry Gurrola, who also works and studies with Ed. That’s Ed in the shot below.

As a geology freak, I know how easily terms like “debris flow,” “fanglomerate” and “alluvial fan” can clear a room. But this gig was SRO. That’s because around 3:15 in the morning of January 9th, debris flowed out of canyons and deposited fresh fanglomerate across the alluvial fan that comprises most of Montecito, destroying (by my count on the map below) 178 buildings, damaging more than twice that many, and killing 23 people. Two of those—a 2-year-old girl and a 17-year-old boy—are still buried in the fresh fanglomerate and sought by cadaver dogs. The whole thing is beyond sad and awful.

The town was evacuated after the disaster so rescue and recovery work could proceed without interference, and infrastructure could be found and repaired: a job that required removing twenty thousand truckloads of mud and rocks. That work continues while evacuation orders are gradually lifted, allowing the town to repopulate itself to the very limited degree it can.

I talked today with a friend whose business is cleaning houses. Besides grieving the dead, some of whom were friends or customers, she reports that the cleaning work is some of the worst she has ever seen, even in homes that were spared the mud and rocks. Refrigerators and freezers, sitting closed and without electricity for weeks, reek of death and rot. Other customers won’t be back because their houses are gone.

Highway 101, one of just two freeways connecting Northern and Southern California, runs through town near the coast and more than two miles from the mountain front. Three debris flows converged on the highway and used it as a catch basin, filling its deep parts to the height of at least one bridge before spilling over its far side and continuing to the edge of the sea. It took two weeks of constant excavation and repair work before traffic could move again. Most exits remain closed. Coast Village Road, Montecito’s Main Street, is open for employees of stores there, but little is open for customers yet, since infrastructural graces such as water are not fully restored. (I saw the Honor Bar operating with its own water tank, and a water truck nearby.) Opening Upper Village will take longer. Some landmark institutions, such as San Ysidro Ranch and La Casa Santa Maria, will take years to restore. (From what I gather, San Ysidro Ranch, arguably the nicest hotel in the world, was nearly destroyed. Its website thanks firefighters for salvation from the Thomas Fire. But nothing, I gather, could have saved it from the huge debris flow that wiped out nearly everything on the flanks of San Ysidro Creek.) (All the top red dots along San Ysidro Creek in the map below mark lost buildings at the Ranch.)

Here is a map with final damage assessments. I’ve augmented it with labels for the canyons and creeks (with one exception: a parallel creek west of Toro Canyon Creek):

Click on the map for a closer view, or click here to view the original. On that one you can click on every dot and read details about it.

I should pause to note that Montecito is no ordinary town. Demographically, it’s Beverly Hills draped over a prettier landscape and attractive to people who would rather not live in Beverly Hills. (In fact the number of notable persons Wikipedia lists for Montecito outnumbers those it lists for Beverly Hills by a score of 77 to 71.) Culturally, it’s a village. Last Monday in The New Yorker, one of those notable villagers, T. Coraghessan Boyle, unpacked some other differences:

I moved here twenty-five years ago, attracted by the natural beauty and semirural ambience, the short walk to the beach and the Lower Village, and the enveloping views of the Santa Ynez Mountains, which rise abruptly from the coastal plain to hold the community in a stony embrace. We have no sidewalks here, if you except the business districts of the Upper and Lower Villages—if we want sidewalks, we can take the five-minute drive into Santa Barbara or, more ambitiously, fight traffic all the way down the coast to Los Angeles. But we don’t want sidewalks. We want nature, we want dirt, trees, flowers, the chaparral that did its best to green the slopes and declivities of the mountains until last month, when the biggest wildfire in California history reduced it all to ash.

Fire is a prerequisite for debris flows, our geologists explained. So is unusually heavy rain in a steep mountain watershed. There are five named canyons, each its own watershed, above Montecito, as we see on the map above. There are more to the east, above Summerland and Carpinteria, the next two towns down the coast. Those towns also took some damage, though less than Montecito.

Ed Keller put up this slide to explain conditions that trigger debris flows, and how they work:

Ed and Larry were emphatic about this: debris flows are not landslides, nor do many start that way (though one did in Rattlesnake Canyon 1100 years ago). They are also not mudslides, so we should stop calling them that. (Though we won’t.)

Debris flows require sloped soils left bare and hydrophobic—resistant to water—after a recent wildfire has burned off the chaparral that normally (as geologists say) “hairs over” the landscape. For a good look at what those soil surfaces look like, and how they are likely to respond to rain, look at the smooth slopes on the uphill side of 101 east of La Conchita. Notice how the surface is not only a smooth brown or gray, but has a crust on it. In a way, the soil surface has turned to glass. That’s why water runs off of it so rapidly.

Wildfires are common, and chaparral is adapted to them, becoming fuel for the next fire as it regenerates and matures. But rainfalls as intense as this one are not common. In just five minutes alone, more than half an inch of rain fell in the steep and funnel-like watersheds above Montecito (a rate of more than six inches per hour). This happens about once every few hundred years, or about as often as a tsunami.

It’s hard to generalize about the combination of factors required, but Ed has worked hard to do that, and this slide of his is one way of illustrating how debris flows happen eventually in places like Montecito and Santa Barbara:

From bottom to top, here’s what it says:

  1. Fires happen almost regularly, spreading most widely where chaparral has matured to become abundant fuel, as the firefighters like to call it.
  2. Flood events are more random, given the relative rarity of rain and the even rarer rains of “biblical” volume. But they do happen.
  3. Stream beds in the floors of canyons accumulate rocks and boulders that roll down the gradually eroding slopes over time. The depth of these is expressed as basin instability. Debris flows clear out the rocks and boulders when a big flood event comes right after a fire, and the basin becomes stable (relatively rock-free) again.
  4. The sediment yield in a flood (F) is maximum when a debris flow (DF) occurs.
  5. Debris flows tend to happen once every few hundred years. And you’re not going to get the big ones if you don’t have the canyon stream bed full of rocks and boulders.

About this set of debris flows in particular:

  1. Destruction down Oak Creek wasn’t as bad as on Montecito, San Ysidro, Buena Vista and Romero Creeks because the canyon feeding it is smaller.
  2. When debris flows hit an obstruction, such as a bridge, they seek out a new bed to flow on. This is one of the actions that creates an alluvial fan. From the map it appears something like that happened—
    1. Where the flow widened when it hit Olive Mill Road, fanning east of Olive Mill to destroy all three blocks between Olive Mill and Santa Elena Lane before taking the Olive Mill bridge across 101 and down to the Biltmore while also helping other flows fill 101 as well. (See Mac’s comment below, and his link to a topo map.)
    2. In the area between Buena Vista Creek and its East Fork, which come off different watersheds.
    3. Where a debris flow forked south of Mountain Drive after destroying San Ysidro Ranch, continuing down both Randall and El Bosque Roads.

For those who caught (or are about to catch) Ellen’s Facetime with Oprah visiting neighbors, that happened among the red dots at the bottom end of the upper destruction area along San Ysidro Creek, just south of East Valley Road. Oprah’s own place is in the green area beside it on the left, looking a bit like Versailles. (Credit where due, though: Oprah’s was a good and compassionate report.)

Big question: did these debris flows clear out the canyon floors? We (meaning our geologists, sedimentologists, hydrologists and other specialists) won’t know until they trek back into the canyons to see how it all looks. Meanwhile, we do have clues. For example, here are after-and-before photos of Montecito, shot from space. And here is my close-up of the latter, shot one day after the event, when everything was still bare streambeds in the mountains and fresh muck in town:

See the white lines fanning back into the mountains through the canyons (Cold Spring, San Ysidro, Romero, Toro) above Montecito? Ed explained that these appear to be the washed out beds of creeks feeding into those canyons. Here is his slide showing Cold Spring Creek before and after the event:

Looking back at Ed’s basin threshold graphic above, one might say that there isn’t much sediment left for stream beds to yield, and that those in the floors of the canyons have returned to stability, meaning there’s little debris left to flow.

But that photo was of just one spot. There are many miles of creek beds to examine back in those canyons.

Still, one might hope that Montecito has now had its required 200-year event, and a couple more centuries will pass before we have another one.

Ed and Larry caution against such conclusions, emphasizing that most of Montecito’s and Santa Barbara’s inhabited parts gain their existence, beauty or both by grace of debris flows. If your property features boulders, Ed said, a debris flow put them there, and did that not long ago in geologic time.

For an example of boulders as landscape features, here are some we quarried out of our yard more than a decade ago, when we were building a house dug into a hillside:

This is deep in the heart of Santa Barbara.

The matrix mud we now call soil here is likely a mix of Juncal and Cozy Dell shale, Ed explained. Both are poorly lithified silt and erode easily. The boulders are a mix of Matilija and Coldwater sandstone, which comprise the hardest and most vertical parts of the Santa Ynez mountains. The two are so similar that only a trained eye can tell them apart.

All four of those geological formations were established long after dinosaurs vanished. All also accumulated originally as sediments, mostly on ocean floors, probably not far from the equator.

To illustrate one chapter in the story of how those rocks and sediments got here, UCSB has a terrific animation of how the transverse (east-west) Santa Ynez Mountains came to be where they are. Here are three frames in that movie:

What it shows is how, when the Pacific Plate was grinding its way northwest about eighteen million years ago, a hunk of that plate about a hundred miles long and the shape of a bread loaf broke off. At the top end was the future Malibu hills and at the bottom end was the future Point Conception, then situated south of what’s now Tijuana. The future Santa Barbara was west of the future Newport Beach. Then, when the Malibu end of this loaf got jammed at the future Los Angeles, the bottom end of the loaf swept out, clockwise and intact. At the start it pointed at 5 o’clock; now it points at 9 o’clock, and the rotation isn’t over. This was, and remains, a sideshow off the main event: the continuing crash of the Pacific Plate and the North American one.

Here is an image that helps, from that same link:

Find more geology, with lots of links, in Making sense of what happened to Montecito. I put that post up on the 15th and have been updating it since then. It’s the most popular post in the history of this blog, which I started in 2007. There are also 58 comments, so far.

I’ll be adding more to this post after I visit as much as I can of Montecito (exclusion zones permitting). Meanwhile, I hope this proves useful. Again, corrections and improvements are invited.

30 January

 

This post continues the inquiry I started with Making sense of what happened to Montecito. That post got a record number of reads for this blog, and 57 comments as well.

I expect to learn more at the community meeting this evening with UCSB geologist Ed Keller in the Faulkner Room in the main library in Santa Barbara. Here’s the Library schedule. Note that the meeting will be streamed live on Facebook.

Meanwhile, to help us focus on the geology questions, here is the final post-mudslide damage inspection map of Montecito:

I left out Carpinteria, because of the four structures flagged there, three were blue (affected) and one was yellow (minor), and none were orange (major) or red (destroyed). I’m also guessing they were damaged by flooding rather than debris flow. I also want to make the map as legible as possible, so we can focus on where the debris flows happened, and how we might understand the community’s prospects for the future.

So here are my questions, all subject to revision and correction.

  1. How much of the damage was due to debris flow alone, and how much to other factors (e.g. rain-caused flooding, broken water pipes)?
  2. Was concentration of rain the main reason why we saw flows in the canyons above Montecito, but not (or less so) elsewhere?
  3. Where exactly did the debris flow from? And has the area been surveyed well enough to predict what future debris flows might happen if we get big rains this winter and ones to follow?
  4. Do we need bigger catch basins for debris, like they have at the base of the San Gabriels, above Los Angeles’ basin?
  5. How do the slopes above Montecito and Santa Barbara differ from other places (e.g. the San Gabriels) where debris flows (and rock falls) are far more common?
  6. What geology-advised changes in our infrastructure (especially water and gas) might we make, based on what we’ve learned so far?
  7. What might we expect (that most of us don’t now) in the form of other catastrophes that show up in the geologic record? For example, earthquakes and tsunamis. See here: “This earthquake was associated with by far the largest seismic sea wave ever reported for one originating in California. Descriptive accounts indicate that it may have reached elevations of 15 feet at Gaviota, 30 to 35 feet at Santa Barbara, and 15 feet or more in Ventura. It may have even shown visible effects in the San Francisco harbor.” There is also this, which links to questions about the former report. (Still, there have been a number of catastrophic earthquakes on or affecting the South Coast, and it has been 93 years since the 1925 quake — and the whole Pacific Coast is subject to tsunamis. Here are some photos of the quake.)

Note that I don’t want to ask Ed to play a finger-pointing role here. Laying blame isn’t his job, unless he’s blaming nature for being itself.

Additional reading:

  • Dan McCaslin: Rattlesnake Canyon Fine Now for Day Hiking (Noozhawk) Pull-quote: “Santa Barbara geologist Ed Keller has said that all of Santa Barbara is built on debris flows piled up during the past 60,000 years. Around 1100 A.D., a truly massive debris flow slammed through Rattlesnake Canyon into Mission Canyon, leaving large boulders as far down as the intersection of Alamar Avenue and State Street (go check). There were Chumash villages in the area, and they may have been completely wiped out then. While some saddened Montecitans claim that sudden flash floods and debris flows should have been forecast more accurately, this seems impossible.”
  • Those deadly mudslides you’ve read about? Expect worse in the future. (Wall Street Journal) Pull-quote: “Montecito is particularly at risk as the hill slopes above town are oversteepened by faulting and rapid uplift, and much of the town is built on deposits laid down by previous floods. Some debris basins were in place, but they were quickly overtopped by the hundreds of thousands of cubic yards of water and sediment. While high post-fire runoff and erosion rates could be expected, it was not possible to accurately predict the exact location and extreme magnitude of this particular storm and resulting debris flows.”
  • Evacuation Areas Map.
  • Thomas Fire: Forty Days of Devastation (LA Times) Includes what happened to Montecito. Excellent step-by-step 3D animation.

Take a look at this chart:

CryptoCurrency Market Capitalizations


As Neo said, Whoa.

To help me get my head fully around all that’s going on behind that surge, or mania, or whatever it is, I’ve composed a lexicon-in-process that I’m publishing here so I can find it again. Here goes:::

Bitcoin. “A cryptocurrency and a digital payment system invented by an unknown programmer, or a group of programmers, under the name Satoshi Nakamoto. It was released as open-source software in 2009. The system is peer-to-peer, and transactions take place between users directly, without an intermediary. These transactions are verified by network nodes and recorded in a public distributed ledger called a blockchain. Since the system works without a central repository or single administrator, bitcoin is called the first decentralized digital currency.” (Wikipedia.)

Cryptocurrency. “A digital asset designed to work as a medium of exchange using cryptography to secure the transactions and to control the creation of additional units of the currency. Cryptocurrencies are a subset of alternative currencies, or specifically of digital currencies. Bitcoin became the first decentralized cryptocurrency in 2009. Since then, numerous cryptocurrencies have been created. These are frequently called altcoins, as a blend of bitcoin alternative. Bitcoin and its derivatives use decentralized control as opposed to centralized electronic money/centralized banking systems. The decentralized control is related to the use of bitcoin’s blockchain transaction database in the role of a distributed ledger.” (Wikipedia.)

“A cryptocurrency system is a network that utilizes cryptography to secure transactions in a verifiable database that cannot be changed without being noticed.” (Tim Swanson, in Consensus-as-a-service: a brief report on the emergence of permissioned, distributed ledger systems.)

Distributed ledger. Also called a shared ledger, it is “a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions.” (Wikipedia, citing a report by the UK Government Chief Scientific Adviser: Distributed Ledger Technology: beyond block chain.) A distributed ledger requires a peer-to-peer network and consensus algorithms to ensure replication across nodes. The ledger is sometimes also called a distributed database. Tim Swanson adds that a distributed ledger system is “a network that fits into a new platform category. It typically utilizes cryptocurrency-inspired technology and perhaps even part of the Bitcoin or Ethereum network itself, to verify or store votes (e.g., hashes). While some of the platforms use tokens, they are intended more as receipts and not necessarily as commodities or currencies in and of themselves.”

Blockchain. “A peer-to-peer distributed ledger forged by consensus, combined with a system for ‘smart contracts’ and other assistive technologies. Together these can be used to build a new generation of transactional applications that establishes trust, accountability and transparency at their core, while streamlining business processes and legal constraints.” (Hyperledger.)

“To use conventional banking as an analogy, the blockchain is like a full history of banking transactions. Bitcoin transactions are entered chronologically in a blockchain just the way bank transactions are. Blocks, meanwhile, are like individual bank statements. Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in a system. The full copy of the blockchain has records of every Bitcoin transaction ever executed. It can thus provide insight about facts like how much value belonged to a particular address at any point in the past. The ever-growing size of the blockchain is considered by some to be a problem due to issues like storage and synchronization. On an average, every 10 minutes, a new block is appended to the block chain through mining.” (Investopedia.)

“Think of it as an operating system for marketplaces, data-sharing networks, micro-currencies, and decentralized digital communities. It has the potential to vastly reduce the cost and complexity of getting things done in the real world.” (Hyperledger.)
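
Since most of these definitions lean on the idea of blocks chained together by hashes, a tiny Python sketch of just that mechanism may help: each block commits to the hash of the block before it, so altering any earlier record invalidates everything after it. This illustrates the concept only; it is not Bitcoin’s actual block format.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev_hash: str, transactions: list) -> dict:
    """Create a block that commits to its predecessor's hash."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# A tiny three-block chain.
genesis = new_block(prev_hash="0" * 64, transactions=["genesis"])
second = new_block(prev_hash=block_hash(genesis), transactions=["alice->bob 1"])
third = new_block(prev_hash=block_hash(second), transactions=["bob->carol 0.5"])

# Tampering with the first block breaks the chain: the prev_hash stored
# in the second block no longer matches the recomputed hash.
genesis["transactions"] = ["genesis", "forged payment"]
assert second["prev_hash"] != block_hash(genesis)
```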

Permissionless system. “A permissionless system [or ledger] is one in which identity of participants is either pseudonymous or even anonymous. Bitcoin was originally designed with permissionless parameters although as of this writing many of the on-ramps and off-ramps for Bitcoin are increasingly permission-based.” (Tim Swanson.)

Permissioned system. “A permissioned system [or ledger] is one in which identity for users is whitelisted (or blacklisted) through some type of KYB or KYC procedure; it is the common method of managing identity in traditional finance.” (Tim Swanson.)

Mining. “The process by which transactions are verified and added to the public ledger, known as the blockchain. (It is) also the means through which new bitcoin are released. Anyone with access to the Internet and suitable hardware can participate in mining. The mining process involves compiling recent transactions into blocks and trying to solve a computationally difficult puzzle. The participant who first solves the puzzle gets to place the next block on the block chain and claim the rewards. The rewards, which incentivize mining, are both the transaction fees associated with the transactions compiled in the block as well as newly released bitcoin.” (Investopedia.)
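
The “computationally difficult puzzle” in that definition is, in Bitcoin’s case, proof of work: keep varying a nonce until the block’s hash falls below a target. Here is a toy Python sketch of that search loop; real mining hashes a binary block header with double SHA-256 against a numeric target, so take this as an illustration of the idea only.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce whose hash has `difficulty` leading zero hex digits.

    Toy illustration of proof of work, not Bitcoin's real header format.
    """
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block containing recent transactions")
print(f"nonce={nonce} hash={digest}")
```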

Ethereum. “An open-source, public, blockchain-based distributed computing platform featuring smart contract (scripting) functionality, which facilitates online contractual agreements. It provides a decentralized Turing-complete virtual machine, the Ethereum Virtual Machine (EVM), which can execute scripts using an international network of public nodes. Ethereum also provides a cryptocurrency token called “ether”, which can be transferred between accounts and used to compensate participant nodes for computations performed. Gas, an internal transaction pricing mechanism, is used to mitigate spam and allocate resources on the network. Ethereum was proposed in late 2013 by Vitalik Buterin, a cryptocurrency researcher and programmer. Development was funded by an online crowdsale during July–August 2014. The system went live on 30 July 2015, with 11.9 million coins “premined” for the crowdsale… In 2016 Ethereum was forked into two blockchains, as a result of the collapse of The DAO project. The two chains have different numbers of users, and the minority fork was renamed to Ethereum Classic.” (Wikipedia.)

Decentralized Autonomous Organization. This is “an organization that is run through rules encoded as computer programs called smart contracts. A DAO’s financial transaction record and program rules are maintained on a blockchain… The precise legal status of this type of business organization is unclear. The best-known example was The DAO, a DAO for venture capital funding, which was launched with $150 million in crowdfunding in June 2016 and was immediately hacked and drained of US$50 million in cryptocurrency… This approach eliminates the need to involve a bilaterally accepted trusted third party in a financial transaction, thus simplifying the sequence. The costs of a blockchain enabled transaction and of making available the associated data may be substantially lessened by the elimination of both the trusted third party and of the need for repetitious recording of contract exchanges in different records: for example, the blockchain data could in principle, if regulatory structures permitted, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration.” (Wikipedia.)

Initial Coin Offering. “A means of crowdfunding the release of a new cryptocurrency. Generally, tokens for the new cryptocurrency are sold to raise money for technical development before the cryptocurrency is released. Unlike an initial public offering (IPO), acquisition of the tokens does not grant ownership in the company developing the new cryptocurrency. And unlike an IPO, there is little or no government regulation of an ICO.” (Chris Skinner.)

“In an ICO campaign, a percentage of the cryptocurrency is sold to early backers of the project in exchange for legal tender or other cryptocurrencies, but usually for Bitcoin…During the ICO campaign, enthusiasts and supporters of the firm’s initiative buy some of the distributed cryptocoins with fiat or virtual currency. These coins are referred to as tokens and are similar to shares of a company sold to investors in an Initial Public Offering (IPO) transaction.” (Investopedia.)

Tokens. “In the blockchain world, a token is a tiny fraction of a cryptocurrency (bitcoin, ether, etc) that has a value usually less than 1/1000th of a cent, so the value is essentially nothing, but it can still go onto the blockchain…This sliver of currency can carry code that represents value in the real world — the ownership of a diamond, a plot of land, a dollar, a share of stock, another cryptocurrency, etc. Tokens represent ownership of the underlying asset and can be traded freely. One way to understand it is that you can trade physical gold, which is expensive and difficult to move around, or you can just trade tokens that represent gold. In most cases, it makes more sense to trade the token than the asset. Tokens can always be redeemed for their underlying asset, though that can often be a difficult and expensive process. Though technically they could be redeemed, many tokens are designed never to be redeemed but traded forever. On the other hand, a ticket is a token that is designed to be redeemed and may or may not be trade-able” (TokenFactory.)

“Tokens in the ethereum ecosystem can represent any fungible tradable good: coins, loyalty points, gold certificates, IOUs, in game items, etc. Since all tokens implement some basic features in a standard way, this also means that your token will be instantly compatible with the ethereum wallet and any other client or contract that uses the same standards.” (Ethereum.org/token.)
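The “basic features in a standard way” that the Ethereum docs mention boil down, in practice, to a small common interface for balances and transfers (the ERC-20 standard). Here is a hedged Python sketch of that idea as a plain ledger, just to show how little machinery a fungible token needs; the names and numbers are illustrative, and real tokens implement the interface in contract code, not Python.

```python
class ToyToken:
    """Minimal ERC-20-style fungible token ledger (illustrative sketch only)."""

    def __init__(self, name, symbol, supply, creator):
        self.name, self.symbol = name, symbol
        self.balances = {creator: supply}  # creator starts with the total supply

    def balance_of(self, account):
        return self.balances.get(account, 0)

    def transfer(self, sender, recipient, amount):
        # The core "standard feature": move value between accounts.
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount

gold = ToyToken("GoldGram", "GLD", 1_000_000, "issuer")
gold.transfer("issuer", "alice", 250)
print(gold.balance_of("alice"))  # 250
```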

“The most important takehome is that tokens are not equity, but are more similar to paid API keys. Nevertheless, they may represent a >1000X improvement in the time-to-liquidity and a >100X improvement in the size of the buyer base relative to traditional means for US technology financing — like a Kickstarter on steroids.” (Thoughts on Tokens, by Balaji S. Srinivasan.)

“A blockchain token is a digital token created on a blockchain as part of a decentralized software protocol. There are many different types of blockchain tokens, each with varying characteristics and uses. Some blockchain tokens, like Bitcoin, function as a digital currency. Others can represent a right to tangible assets like gold or real estate. Blockchain tokens can also be used in new protocols and networks to create distributed applications. These tokens are sometimes also referred to as App Coins or Protocol Tokens. These types of tokens represent the next phase of innovation in blockchain technology, and the potential for new types of business models that are decentralized – for example, cloud computing without Amazon, social networks without Facebook, or online marketplaces without eBay. However, there are a number of difficult legal questions surrounding blockchain tokens. For example, some tokens, depending on their features, may be subject to US federal or state securities laws. This would mean, among other things, that it is illegal to offer them for sale to US residents except by registration or exemption. Similar rules apply in many other countries.” (A Securities Law Framework for Blockchain Tokens.)

In fact tokens go back. All the way.

In Before Writing Volume I: From Counting to Cuneiform, Denise Schmandt-Besserat writes, “Tokens can be traced to the Neolithic period starting about 8000 B.C. They evolved following the needs of the economy, at first keeping track of the products of farming…The substitution of signs for tokens was the first step toward writing.” (For a compression of her vast scholarship on the matter, read Tokens: their Significance for the Origin of Counting and Writing.)

I sense that we are now at a threshold no less pregnant with possibilities than we were when ancestors in Mesopotamia rolled clay into shapes, made marks on them and invented t-commerce.

And here is a running list of sources I’ve visited, so far:

You’re welcome.

To improve it, that is.

 

Imagine you’re on a busy city street where everybody who disagrees with you disappears.

We have that city now. It’s called media—especially the social kind.

You can see how this works on The Wall Street Journal’s Blue Feed, Red Feed page. Here’s a screen shot of the feed for “Hillary Clinton” (one among eight polarized topics):

[Screenshot: the Journal’s Blue Feed and Red Feed for “Hillary Clinton,” side by side.]

Both invisible to the other.

We didn’t have that in the old print and broadcast worlds, and still don’t, where they persist. (For example, on news stands, or when you hit SCAN on a car radio.)

But we have it in digital media.

Here’s another difference: a lot of the stuff that gets shared is outright fake. There’s a lot of concern about that right now:

[Image: “fake news”]

Why? Well, there’s a business in it. More eyeballs, more advertising, more money, for more eyeballs for more advertising. And so on.

Those ads are aimed by tracking beacons planted in your phones and browsers, feeding data about your interests, likes and dislikes to robot brains that work as hard as they can to know you and keep feeding you more stuff that stokes your prejudices. Fake or not, what you’ll see is stuff you are likely to share with others who do the same. The business that pays for this is called “adtech,” also known as “interest-based” or “interactive” advertising. But those are euphemisms. Its science is all about stalking. Its practitioners can plausibly deny it’s personal. But it is.

The “social” idea is “markets as conversations” (a personal nightmare for me, gotta say). The business idea is to drag as many eyeballs as possible across ads that are aimed by the same kinds of creepy systems. The latter funds the former.

Rather than unpack that, I’ll leave it to the rest of y’all, with a few links:

 

I want all the help I can get unpacking this, because I’m writing about it in a longer form than I’m indulging in here. Thanks.

[3 December update: Here is a video of the panel.]

So I was on a panel at WebScience@10 in London (@WebScienceTrust, #WebSci10), where the first question asked was, “What are two aspects of ‘trust and the Web’ that you think are most relevant/important at the moment?” My answer went something like this:

1) The Net is young, and the Web with it.

Both were born in their current forms on 30 April 1995, when the NSFnet backed off its prohibition of commercial traffic on its pipes. This opened the whole Net to absolutely everything, exactly when the graphical Web browser became fully useful.

Twenty-one years in the history of a world is nothing. We’re still just getting started here.

2) The Internet, like nature, did not come with privacy. And privacy is personal. We need to start there.

We arrived naked in this new world, and — like Adam and Eve — still don’t have clothing and shelter.

The browser should have been a private tool in the first place, but it wasn’t; and it won’t be, so long as we leave improving it mostly up to companies with more interest in violating our privacy than providing it.

Just 21 years into this new world, we still need our own clothing, shelter, vehicles and private spaces. Browsers included. We will only get privacy if our tools provide it as a simple fact.

We also need to be the first parties, rather than the second ones, in our social and business agreements. In other words, others need to accept our terms, rather than vice versa. As first parties, we are independent. As second parties, we are dependent. Simple as that. Without independence, without agency, without the ability to initiate, without the ability to obtain agreement on our own terms, it’s all just more of the same old industrial model.

In the physical world, our independence earns respect, and that’s what we give to others as a matter of course. Without that respect, we don’t have civilization. This is why the Web we have today is still largely uncivilized.

We can only civilize the Net and the Web by inventing digital clothing and doors for people, and by providing standard agreements private individuals can assert in their dealings with others.

Inventing yet another wannabe unicorn to provide “privacy as a service” won’t do it. Nor will regulating the likes of Facebook and Google, or expecting them to become interested in building protections, when their businesses depend on the absence of those protections.

Fortunately, work has begun on personal privacy tools, and agreements we can each assert. And we can talk about those.

Who Owns the Mobile Experience? is a report by Unlockd on mobile advertising in the U.K. To clarify the way toward an answer, the report adds, “mobile operators or advertisers?”

The correct answer is neither. Nobody’s experience is “owned” by somebody else.

True, somebody else may cause a person’s experience to happen. But causing isn’t the same as owning.

We own our selves. That includes our experiences.

This is an essential distinction. For lack of it, both mobile operators and advertisers are delusional about their customers and consumers. (That’s an important distinction too. Operators have customers. Advertisers have consumers. Customers pay, consumers may or may not. That the former also qualifies as the latter does not mean the distinction should not be made. Sellers are far more accountable to customers than advertisers are to consumers.)

It’s interesting that Unlockd’s survey shows almost identically high levels of delusion by advertisers and operators…

  • 85% of advertisers and 82% of operators “think the mobile ad experience is positive for end users”
  • 3% of advertisers and 1% of operators admit “it could be negative”
  • Of the 85% of advertisers who think the experience is positive, 50% “believe it’s because products advertised are relevant to the end user”
  • “the reasons for this opinion is driven from the belief that users are served detail around products that are relevant to them.”

… while:

  • 47% of consumers think “the mobile phone ad experience (for them) is positive”
  • 39% of consumers “think ads are irrelevant”
  • 36% blame “poor or irritating format”
  • 40% “believe the volume of ads served to them are a main reason for the negative experience”

It’s amazing but not surprising to me that mobile operators apparently consider their business to be advertising more than connectivity. This mindset is also betrayed by AT&T charging a premium for privacy and Comcast wanting to do the same. (Advertising today, especially online, does not come with privacy. Quite the opposite, in fact. A great deal of it is based on tracking people. Shoshana Zuboff calls this surveillance capitalism.)

Years ago, when I consulted BT, JP Rangaswami (@jobsworth), then BT’s Chief Scientist, told me phone companies’ core competency was billing, not communications. Since those operators clearly wish to be in the “content” business now, and to make money the same way print and broadcast did for more than a century, it makes sense that they imagine themselves now to be one-way conduits for ad-fortified content, and not just a way people and things (including the ones called products and companies) can connect to each other.

The FCC and other regulators need to bear this in mind as they look at what operators are doing to the Internet. I mean, it’s good and necessary for regulators to care about neutrality and privacy of Internet services, but a category error is being made if regulators fail to recognize that the operators want to be “content distributors” on the models of commercial broadcasting (funded by advertising) and the post office (funded by junk mail, which is the legacy model of today’s personalized direct response advertising online).

I also have to question how consumers were asked by this survey about their mobile ad experiences. Let me see a show of hands: how many here consider their mobile phone ad experience “positive?” Keep your hands down if you are associated in any way with advertising, phone companies or publishing. When I ask this question, or one like it (e.g. “Who here wants to see ads on their phone?”) in talks I give, the number of raised hands is usually zero. If it’s not, the few parties with raised hands offer qualified responses, such as, “I’d like to see coupons when I’m in a store using a shopping app.”

Another delusion of advertisers and operators is that all ads should be relevant. They don’t need to be. In fact, the most valuable ads are not targeted personally, but across populations, so large populations can become familiar with advertised products and services.

It’s a simple fact that branding wouldn’t exist without massive quantities of ads being shown to people for whom the ads are irrelevant. Few of us would know the brands of Procter & Gamble, Unilever, L’Oreal, Coca-Cola, Nestlé, General Motors, Volkswagen, Mars or McDonald’s (the current top ten brand advertisers worldwide) if not for the massive amounts of money those companies spend advertising to people who will never buy their products but will damn sure know those products’ names. (Don Marti explains this well.)

A hard fact that the advertising industry needs to face is that there is very little appetite for ads on the receiving end. People put up with it on TV and radio, and in print, but for the most part they don’t like it. (The notable exceptions are print ads in fashion magazines and other high-quality publications. And classifieds.)

Appetites for ads, and all forms of content, should be consumers’ own. This means consumers need to be able to specify the kind of advertising they’re looking for, if any.

Even then, the far more valuable signal coming from consumers is (or will be) an actual desire for certain products and services. In marketing lingo, these signals are qualified leads. In VRM lingo, these signals are intentcasts. With intentcasting, the customers do the advertising, and are in full control of the process. And they are no longer mere consumers (which Jerry Michalski calls “gullets with wallets and eyeballs”).
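To make “intentcast” a little less abstract, here is a hypothetical sketch, in Python, of what such a customer-originated signal might look like as structured data. Every field name below is invented for illustration; actual intentcasting services each define their own formats and terms.

```python
import json

# Hypothetical sketch of an intentcast: a customer-originated, structured
# expression of demand (a "qualified lead" in marketing lingo). Field names
# are invented for illustration; real services define their own.
intentcast = {
    "want": "compact SUV, manual transmission",
    "where": "within 50 miles of Boston",
    "budget_usd": 22000,
    "valid_until": "2016-07-31",
    "terms": "no tracking; contact me only through my chosen agent",
}

# The customer (or their agent) broadcasts this to sellers they choose.
print(json.dumps(intentcast, indent=2))
```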

It helps that there are dozens of companies in this business already.

So it would be far more leveraged for operators to work with those companies than with advertising systems so disconnected from reality that they’ve caused hundreds of millions of people to block ads on their mobile devices — and are in such deep denial of the market’s clear messages that they deny the legitimacy of a clear personal choice, misdirecting attention toward the makers of ad blocking tools, and away from what’s actually happening: people asserting power over their own lives and private spaces (e.g. their browsers) online.

If companies actually believe in free markets, they need to believe in free customers. Those are people who, at the very least, are in charge of their own experiences in the networked world.

It didn’t happen in 2010, but it will in 2016.

This post ran on my blog almost six years ago. I was wrong about the timing, but not about the turning, because it’s about to happen this month at the Computer History Museum in Silicon Valley. More about that below the post.
_________________

The tide turned today. Mark it: 31 July 2010.

That’s when The Wall Street Journal published The Web’s Gold Mine: Your Secrets, subtitled A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series. It has ten links to other sections of today’s report.

It’s pretty freaking amazing — and amazingly freaky, when you dig down to the business assumptions behind it. Here’s the gist:

The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.

It gets worse:

In between the Internet user and the advertiser, the Journal identified more than 100 middlemen — tracking companies, data brokers and advertising networks — competing to meet the growing demand for data on individual behavior and interests. The data on Ms. Hayes-Beaty’s film-watching habits, for instance, is being offered to advertisers on BlueKai Inc., one of the new data exchanges. “It is a sea change in the way the industry works,” says Omar Tawakol, CEO of BlueKai. “Advertisers want to buy access to people, not Web pages.” The Journal examined the 50 most popular U.S. websites, which account for about 40% of the Web pages viewed by Americans. (The Journal also tested its own site, WSJ.com.) It then analyzed the tracking files and programs these sites downloaded onto a test computer. As a group, the top 50 sites placed 3,180 tracking files in total on the Journal’s test computer. Nearly a third of these were innocuous, deployed to remember the password to a favorite site or tally most-popular articles. But over two-thirds — 2,224 — were installed by 131 companies, many of which are in the business of tracking Web users to create rich databases of consumer profiles that can be sold.

Here’s what’s delusional about all this: There is no demand for tracking by individual customers. All the demand comes from advertisers — or from companies selling to advertisers. For now.

Here is the difference between an advertiser and an ordinary company just trying to sell stuff to customers: nothing. If a better way to sell stuff comes along — especially if customers like it better than this crap the Journal is reporting on — advertising is in trouble.

Here is the difference between an active customer who wants to buy stuff and a consumer targeted by secretive tracking bullshit: everything.

Two things are going to happen here. One is that we’ll stop putting up with it. The other is that we’ll find better ways for demand and supply to meet — ways that don’t involve tracking or the guesswork called advertising.

Improving a pain in the ass doesn’t make it a kiss. The frontier here is on the demand side, not the supply side.

Advertising may pay for lots of great stuff (such as search) that we take for granted, but advertising even at its best is guesswork. It flourishes in the absence of more efficient and direct demand-supply interactions.

The idea of making advertising perfectly personal has been a holy grail of the business since Day Alpha. Now that Day Omega is approaching, thanks to creepy shit like this, the advertising business is going to crash up against a harsh fact: “consumers” are real people, and most real people are creeped out by this stuff.

Rough impersonal guesswork is tolerable. Totally personalized guesswork is not.

Trust me, if I had exposed every possible action in my life this past week, including every word I wrote, every click I made, everything I ate and smelled and heard and looked at, the guesswork engine has not been built that can tell any seller the next thing I’ll actually want. (Even Amazon, widely regarded as the best at this stuff, sucks to some degree.)

Meanwhile I have money ready to spend on about eight things, right now, that I’d be glad to let the right sellers know, provided that information is confined to my relationship with those sellers, and that it doesn’t feed into anybody’s guesswork mill. I’m ready to share that information on exactly those conditions.

Tools to do that will be far more leveraged in the ready-to-spend economy than any guesswork system. (And we’re working on those tools.) Chris Locke put it best in Cluetrain eleven years ago. He said, if you only have time for one clue this year, this is the one to get…

Thanks to the Wall Street Journal, that dealing may finally come in 2010.

[Later…] Jeff Jarvis thinks the Journal is being silly. I love Jeff, and I agree that the Journal may be blurring some concerns, off-base on some of the tech and even a bit breathless; but I also think they’re on to something, and I’m glad they’re on it.

Most people don’t know how much they’re being followed, and I think what the Journal’s doing here really does mark a turning point.

I also think, as I said, that the deeper story is the market for advertising, which is actually threatened by absolute personalization. (The future market for real engagement, however, is enormous. But that’s a different business than advertising — and it’s no less thick with data… just data that’s voluntarily shared with trusted limits to use by others.)

[Later still…] TechCrunch had some fun throwing Eric Clemons and Danny Sullivan together. Steel Cage Debate On The Future Of Online Advertising: Danny Sullivan Vs. Eric Clemons, says the headline. Eric’s original is Why Advertising is Failing on the Internet. Danny’s reply is at that first link. As you might guess, I lean toward Eric on this one. But this post is a kind of corollary to Eric’s case, which is compressed here (at the first link again):

I stand by my earlier points:

  • Users don’t trust ads
  • Users don’t want to view ads
  • Users don’t need ads
  • Ads cannot be the sole source of funding for the internet
  • Ad revenue will diminish because of brutal competition brought on by an oversupply of inventory, and it will be replaced in many instances by micropayments and subscription payments for content.
  • There are numerous other business models that will work on the net, that will be tried, and that will succeed.

The last point, actually, seemed to be the most important. It was really the intent of the article, and the original title was “Business Models for Monetizing the Internet: Surely There Must Be Something Other Than Advertising.” This point got lost in the fury over the title of the article and in rage over the idea that online advertising might lose its importance.

My case is that advertisers themselves will tire of the guesswork business when something better comes along. Whether or not that “something better” funds Web sites and services is beside the points I am making, though it could hardly be a more important topic.

For what it’s worth, I believe that the Googles of the world are well positioned to take advantage of a new economy in which demand drives supply at least as well as supply drives demand. So, in fact, are some of those back-end data companies. (Disclosure: I currently consult one of them.)

Look at it this way…

  • What if all that collected data were yours and not just theirs?
  • What if you could improve that data voluntarily?
  • What if there were standard ways you could get that data back, and use it in your own ways?
  • What if those same companies were in the business of helping you buy stuff, and not just helping sellers target you?

Those questions are all on the table now.

___________________

9 April 2016 — The What They Know series ran in The Wall Street Journal until 2012. Since then the tracking economy has grown into a monster that Shoshana Zuboff calls The Big Other, and Surveillance Capitalism.

The tide against surveillance began to turn with the adoption of ad blockers and tracking blockers. But, while those provide a measure of relief, they don’t fix the problem. For that we need tools that engage the publishers and advertisers of the world, in ways that work for them as well.

They might think it’s working for them today; but it’s clearly not, and this has been apparent for a long time.

In Identity and the Independent Web, published in October 2010, John Battelle said “the fact is, the choices provided to us as we navigate are increasingly driven by algorithms modeled on the service’s understanding of our identity. We know this, and we’re cool with the deal.”

In The Data Bubble II (also in October 2010) I replied,

In fact we don’t know, we’re not cool with it, and it isn’t a deal.

If we knew, The Wall Street Journal wouldn’t have a reason to clue us in at such length.

We’re cool with it only to the degree that we are uncomplaining about it — so far.

And it isn’t a “deal” because nothing was ever negotiated.

To have a deal, both parties need to come to the table with terms the other can understand and accept. For example, we could come with a term that says, Just show me ads that aren’t based on tracking me. (In other words, Just show me the kind of advertising we’ve always had in the offline world — and in the online one before the surveillance-based “interactive” kind gave brain cancer to Madison Avenue.)

And that’s how we turn the tide. This month. We’ll prepare the work on VRM Day (25 April), and then hammer it into code at IIW (26–28 April). By the end of that week we’ll post the term and the code at Customer Commons (which was designed for that purpose, on the Creative Commons model).

Having this term (which needs a name — help us think of one) is a good deal for advertisers because non-tracking-based ads are not only perfectly understood and good at doing what they’ve always done, but actually worth more (thank you, Don Marti) than the tracking-based kind.

It’s a good deal for high-reputation publishers, because it gets them out of a shitty business that tracks their readers to low reputation sites where placing ads is cheaper. And it lets them keep publishing ads that readers can appreciate because the ads clearly support the publication. (Bet they can charge more for the ads too, simply because they are worth more.)

It’s even good for the “interactive” advertising business because it allows the next round of terms to support advertising based on tracking that the reader actually welcomes. If there is such a thing, however, it needs to be on terms the reader asserts, and not on labor-intensive industry-run opt-out systems such as Ad Choices.

If you have a stake in these outcomes, come to VRM Day and IIW and help us make it happen. VRM Day is free, and IIW is very cheap compared to most other conferences. It is also an unconference. That means it has no keynotes or panels. Instead it’s about getting stuff done, over three days of breakouts, all on topics chosen by you, me and anybody else who shows up.

When we’re done, the Data Bubble will start bursting for real. It won’t mean that data goes away, however. It will just mean that data gets put to better uses than the icky ones we’ve put up with for at least six years too long.

_________________

This post also appears in Medium.
