You are currently browsing the archive for the Harvard category.

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the late ’00s.

So why start to regulate what can be done with those companies from now on, right now?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment?” Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then, every is a was. And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is composed of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for itself in an advertising business that depends on all of that—while also trying to correct, while they are doing what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.

For some reason, many or most of the images in this blog don’t load in some browsers. The same goes for the ProjectVRM blog. This is new, and I don’t know exactly why it’s happening.

So far, I gather it happens only when the URL is https and not http.

Okay, here’s an experiment. I’ll add an image here in the WordPress (4.4.2) composing window, and center it in the process, all in Visual mode. Here goes:


Now I’ll hit “Publish,” and see what we get.

When the URL starts with https, it fails to show in—

  • Firefox (46.0.1)
  • Chrome (50.0.2661.102)
  • Brave (0.9.6)

But it does show in—

  • Opera (12.16)
  • Safari (9.1).

Now I’ll go back and edit the HTML for the image in Text mode, taking out class=”aligncenter size-full wp-image-10370” from between the img and src attributes, and bracketing the whole image with the <center> and </center> tags. Here goes:


Hmm… The <center> tags don’t work, and I see why when I look at the HTML in Text mode: WordPress removes them. That’s new. Thus another old-school HTML tag gets sidelined. 🙁

Okay, I’ll try again to center it, this time by taking out class=”aligncenter size-full wp-image-10370” in Text mode, and clicking on the centering icon in Visual mode. When I check back in Text mode, I see WordPress has put class=”aligncenter” between img and src. I suppose that attribute is understood by WordPress’ (or the theme’s) CSS while the old <center> tags are not. Am I wrong about that?

Now I’ll hit the update button, rendering this—


—and check back with the browsers.

Okay, it works with all of them now, whether the URL starts with https or http.

So the apparent culprit, at least by this experiment, is centering with anything other than class=”aligncenter”. The fix seems to require inserting a centered image in Visual mode, editing out size-full wp-image-whatever (note: whatever is a number that’s different for every image I put in a post) in Text mode, and then going back and centering it in Visual mode, which puts class=”aligncenter” in place of what I edited out in Text mode. Fun.
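
To make that before-and-after concrete, here is roughly what the markup change looks like in Text mode. (The filename and wp-image number are placeholders, not from an actual post.)

```html
<!-- Before: as the Visual editor inserts it. Images marked up this way
     fail to load for me when the page is served over https. -->
<img class="aligncenter size-full wp-image-10370" src="example.jpg" alt="example" />

<!-- After: size-full and wp-image-10370 edited out in Text mode, then
     re-centered in Visual mode, which leaves only aligncenter. -->
<img class="aligncenter" src="example.jpg" alt="example" />
```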

Here’s another interesting (and annoying) thing. When I’m editing in the composing window, the url is https. But when I “view post” after publishing or updating, I get the http version of the blog, where I can’t see what doesn’t load in the https version. But when anybody comes to the blog by way of an external link, such as a search engine or Twitter, they see the https version, where the graphics won’t load if I haven’t fixed them manually in the manner described above.

So https is clearly breaking old things, but I’m not sure if it’s https doing it, something in the way WordPress works, or something in the theme I’m using. (In WordPress it’s hard — at least for me — to know where WordPress ends and the theme begins.)

Dave Winer has written about how https breaks old sites, and here we can see it happening on a new one as well. WordPress, or at least the version provided for https://blogs.harvard.edu bloggers, may be buggy, or behind the times with the way it marks up images. But that’s a guess.

I sure hope there is some way to gang-edit all my posts going back to 2007. If not, I’ll just have to hope readers will know to take the s out of https and re-load the page. Which, of course, nearly all of them won’t.
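
For what it’s worth, the manual edit described above looks scriptable, given some way to get at the raw post HTML. Here’s a minimal sketch in Python, purely illustrative: it only mimics the class-trimming step, and says nothing about how to get posts in and out of WordPress.

```python
import re

def fix_image_classes(html):
    """Strip size-* and wp-image-* tokens from class attributes,
    mirroring the manual Text-mode edit described above
    (e.g. 'aligncenter size-full wp-image-10370' becomes
    just 'aligncenter')."""
    def trim(match):
        kept = [c for c in match.group(1).split()
                if not (c.startswith("wp-image-") or c.startswith("size-"))]
        return 'class="%s"' % " ".join(kept)
    return re.sub(r'class="([^"]*)"', trim, html)

before = '<img class="aligncenter size-full wp-image-10370" src="example.jpg" />'
print(fix_image_classes(before))
# → <img class="aligncenter" src="example.jpg" />
```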

It doesn’t help that all the browser makers now obscure the protocol, so you can’t see whether a site is http or https, unless you copy and paste it. They only show what comes after the // in the URL. This is a very unhelpful dumbing-down “feature.”

Brave is different. The location bar isn’t there unless you mouse over it. Then you see the whole URL, including the protocol to the left of the //. But if you don’t do that, you just see a little padlock (meaning https, I gather), then (with this post) “blogs.harvard.edu | Doc Searls Weblog • Help: why don’t images load in https?” I can see why they do that, but it’s confusing.

By the way, I probably give the impression of being a highly technical guy. I’m not. The only code I know is Morse. The only HTML I know is vintage. I’m lost with <span> and <div> and wp-image-whatever, don’t hack CSS or PHP, and don’t understand why <em> is now preferable to <i> if you want to italicize something. (Fill me in if you like.)

So, technical folks, please tell us wtf is going on here (or give us your best guess), and point to simple and global ways of fixing it.



Some answer links, mostly from the comments below:

That last one, which is cited in two comments, says this:

Chris Cree, who experienced the same problem, discovered that the WP_SITEURL and WP_HOME constants in the wp-config.php file were configured to structure URLs with http instead of https. Cree suggests users check their settings to see which URL type is configured. If both the WordPress address and Site URLs don’t show https, it’s likely causing issues with responsive images in WordPress 4.4.

Two things here:

  1. I can’t see where in Settings the URL type is mentioned, much less configurable. But Settings has a load of categories and choices within categories, so I may be missing it.
  2. I wonder what will happen to old posts I edited to make images responsive. (Some background on that. “Responsive design,” an idea that seems to have started here in 2010, has since led to many permutations of complications in code that’s mostly hidden from people like me, who just want to write something on a blog or a Web page. We all seem to have forgotten that it was us for whom Tim Berners-Lee designed HTML in the first place.) My “responsive” hack went like this: a) I would place the image in Visual mode; b) go into Text mode; and c) carve out the stuff between img and src and add new attributes for width and height. Those would usually be something like width=”50%” and height=”image”. This was an orthodox thing to do in HTML 4.01, but not in HTML 5. Browsers seem tolerant of this approach, so far, at least for pages viewed with the http protocol. I’ve checked old posts that have images marked up that way, and it’s not a problem. Yet. (Newer browser versions may not be so tolerant.) Nearly all images, however, fail to load in Firefox, Chrome and Brave when viewed through https.
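
For reference, the constants that fix describes live in wp-config.php, and would presumably need to read something like this. (The domain and path here are illustrative; I don’t know the actual settings, and on a hosted multi-user install like this one I can’t edit the file myself.)

```php
<?php
// In wp-config.php. If these are defined with http rather than https,
// WordPress 4.4's responsive-image markup can emit URLs that browsers
// refuse to load on a page served over https.
define('WP_HOME',    'https://blogs.harvard.edu/doc');
define('WP_SITEURL', 'https://blogs.harvard.edu/doc');
```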

So the main questions remaining are:

  1. Is this something I can correct globally with a hack in my own blogs?
  2. If so, is the hack within the theme, the CSS, the PHP, or what?
  3. If not, is it something the übergeeks at Harvard blogs can fix?
  4. If it’s not something they can fix, is my only choice to go back and change every image from the blogs’ beginnings (or just live with the breakage)?
  5. If that’s required, what’s to keep some new change in HTML 5, or WordPress, or the next “best practice” from breaking everything that came before all over again?

Thanks again for all your help, folks. Much appreciated. (And please keep it coming. I’m sure I’m not alone with this problem.)

In There Is No More Social Media — Just Advertising, Mike Proulx (@McProulx) begins,

Fifteen years ago, the provocative musings of Levine, Locke, Searls and Weinberger set the stage for a grand era of social media marketing with the publication of “The Cluetrain Manifesto” and their vigorous declaration of “the end of business as usual.”

For a while, it really felt like brands were beginning to embrace online communities as a way to directly connect with people as human beings. But over the years, that idealistic vision of genuine two-way exchange eroded. Brands got lazy by posting irrelevant content and social networks needed to make money.

Let’s call it what it is: Social media marketing is now advertising. It’s largely a media planning and buying exercise — emphasizing viewed impressions. Brands must pay if they really want their message to be seen. It’s the opposite of connecting or listening — it’s once again broadcasting.

Twitter’s Dick Costolo recently said that ads will “make up about one in 20 tweets.” It’s also no secret that Facebook’s organic reach is on life support, at best. And when Snapchat launched Discover, it was quick to point out that “This is not social media.”

The idealistic end to business as usual, as “The Cluetrain Manifesto” envisioned, never happened. We didn’t reach the finish line. We didn’t even come close. After a promising start — a glimmer of hope — we’re back to business as usual. Sure, there have been powerful advances in ad tech. Media is more automated, targeted, instant, shareable and optimized than ever before. But is there anything really social about it? Not below its superficial layer.

First, a big thanks to Mike and @AdAge for such a gracious hat tip toward @Cluetrain. It’s amazing and gratifying to see the old meme still going strong, sixteen years after the original manifesto went up on the Web. (And it’s still there, pretty much unchanged — since 24 March 1999.) If it weren’t for marketing and advertising’s embrace of #Cluetrain, it might have been forgotten by now. So a hat tip to those disciplines as well.

An irony is that Cluetrain wasn’t meant for marketing or advertising. It was meant for everybody, including marketing, advertising and the rest of business. (That’s why @DWeinberger and I recently appended #NewClues to the original.) Another irony is that Cluetrain gets some degree of credit for helping social media come along. Even if that were true, it wasn’t what we intended. What we were looking for was more independence and agency on the personal side — and for business to adapt.

When that didn’t happen fast enough to satisfy me, I started ProjectVRM in 2006, to help the future along. We are now many people and many development projects strong. (VRM stands for Vendor Relationship Management: the customer-side counterpart of Customer Relationship Management — a $20+ billion business on the sellers’ side.)

Business is starting to notice. To see how well, check out the @Capgemini videos I unpack here. Also see how some companies (e.g. @Mozilla) are hiring VRM folks to help customers and companies shake hands in more respectful and effective ways online.

Monday, at VRM Day (openings still available), Customer Commons (ProjectVRM’s nonprofit spinoff) will be vetting a VRM maturity framework that will help businesses and their advisors (e.g. @Gartner, @Forrester, @idc, @KuppingerCole and @Ctrl-Shift) tune in to the APIs (and other forms of signaling) of customers expressing their intentions through tools and services from VRM developers. (BTW, big thanks to KuppingerCole and Ctrl-Shift for their early and continuing support for VRM and allied work toward customer empowerment.)

The main purpose of VRM Day is prep toward discussions and coding that will follow over the next three days at the XXth Internet Identity Workshop, better known as IIW, organized by @Windley, @IdentityWoman and myself. IIW is an unconference: no panels, no keynotes, no show floor. It’s all breakouts, demos and productive conversation and hackery, with topics chosen by participants. There are tickets left for IIW too. Click here. Both VRM Day and IIW are at the amazing and wonderful Computer History Museum in downtown Silicon Valley.

Mike closes his piece by offering five smart things marketers can do to “make the most of this era of #NotReally social media marketing.” All good advice.

Here’s one more that leverages the competencies of agencies like Mike’s own (@HillHolliday): Double down on old-fashioned Madison Avenue-type brand advertising. It’s the kind of advertising that carries the strongest brand signal. It’s also the most creative, and the least corrupted by tracking and other jive that creeps people out. (That stuff doesn’t come from Madison Avenue, by the way. Its direct ancestor is direct marketing, better known as junk mail. I explain the difference here.) For more on why that’s good, dig what Don Marti has been saying.

(BTW & FWIW, I was also in the ad agency business, as a founder and partner in Hodskins Simone & Searls, which did kick-ass work from 1978 to 1998. More about that here.)

Bottom line: business as usual will end. Just not on any schedule.



— is happening this weekend in New York, San Francisco and elsewhere. Read all about it here, here and here.

I’ll be there to help start things off, at 10am tomorrow. (Registration starts at 9am.) My job on the opening panel is to make a 2-3 minute statement of what I’d like to see in the form of legal hackery. Here goes:

  1. Restore freedom of contract and obsolete contracts of adhesion by creating standardized terms individuals can assert. I have two chapters in The Intention Economy devoted to this. (The Cyberlaw Clinic at the Berkman Center is also working on these — and corresponding terms on the business side — for Customer Commons. What gets hacked this weekend can feed into that work.)
  2. Create better means for expressing personal policies and preferences (such as Do Not Track) than are currently available — and putting these in the individual’s own tool box, rather than appearing only as choices presented by others, such as browser makers.
  3. Create graphical elements (e.g. the r-button) for both the above.

On the panel I will advocate for individuals as independent entities with full agency, rather than merely “users” of others’ systems, or victims of privacy abuse awaiting policy relief. This means I will argue for thinking and hacking toward building and filling the individual’s own tool box, rather than just tweaking the broken technical and legal systems we already have. (Though doing that is good too. Others will be there to advocate and hack on that.)

It is essential that we think outside the browser for this. While the browser began as something like your car on the information superhighway, it has since become a shopping cart that gets re-skinned with every commercial site you visit, and infested at each with tracking beacons so you can be a subject of constant surveillance. This is even true of Firefox, which I love (and within which I am writing this), and which (through Mozilla) is providing space for the San Francisco hackathon.

Let me go a little deeper on this. An example of what’s right and wrong in the browser space right now can be found in Christian Heilmann’s post, Why “Just Use Adblock” Should Never Be a Professional Answer. In it he says many good things that I agree with, enthusiastically. But he also gets one big thing wrong:

Whether we like it or not, ads are what makes the current internet work. They are what ensures the “free” we crave and benefit from, and if you dig deep enough you will find that nobody working in web development or design is not in one way or another paid by income stemming from ad sales on the web.

Saying ads are what make the Internet work is like saying cities are what make geology work. Yes, the Internet supports commercial activity, but it is not reducible to it. For each of us to enjoy full agency on the Web, this distinction needs to be clear from the start.

Browser makers are stuck right now between many rocks (their users) and a hard place (advertising-supported websites). On the one hand they want to do right for users, and on the other they want to do right for what the ad industry now calls “publishers”. Since surveillance-fed “personalization” is big with those publishers, and lots of users don’t like it (AdBlock Plus is the top browser extension, by far), the browser makers are caught in the middle. You can see the trouble they have with this conflict in A User Personalization Proposal for Firefox, which was floated by Justin Fox of Mozilla last July. In it he writes,

We want to see even more personalization across the Web from large and small sites, but in a transparent way that retains user control. The team at Mozilla Labs is focused on exploring ways to move the Web forward, and has thought a lot about how the browser could play a role in making useful content personalization a reality.

The blowback in the comments was harsh and huge. One sample:

The last thing the internet needs is more “personalization” (read: “invasion of my privacy”). All your marketing jargon does nothing to hide the fact that this is just another tool to allow advertisers, website owners, the NSA, and others to track users online habits and, despite any good intentions you might have, it’s rife with the potential for abuse.

I’m not bringing this up to give Mozilla or the other browser makers a hard time, but to suggest that the solutions we need start outside the browser. (And seeing them that way may also be good for the browser folks.)

Simply put, what we need most are tools for ourselves, that help in our dealings with all other parties. Not just protections from bad actors, or ways to make bad practices less bad.

See ya there.

Obamacare matters. But the debate about it also misdirects attention away from massive collateral damage to patients. How massive? Dig To Make Hospitals Less Deadly, a Dose of Data, by Tina Rosenberg in The New York Times. She writes,

Until very recently, health care experts believed that preventable hospital error caused some 98,000 deaths a year in the United States — a figure based on 1984 data. But a new report from the Journal of Patient Safety using updated data holds such error responsible for many more deaths — probably around some 440,000 per year. That’s one-sixth of all deaths nationally, making preventable hospital error the third leading cause of death in the United States. And 10 to 20 times that many people suffer nonlethal but serious harm as a result of hospital mistakes.

The bold-facing is mine. In 2003, one of those statistics was my mother. I too came close in 2008, though the mistake in that case wasn’t a hospital’s, but rather a consequence of incompatibility between different silo’d systems for viewing MRIs, and an ill-informed rush into a diagnostic procedure that proved unnecessary and caused pancreatitis (which happens in 5% of those performed — I happened to be that one in twenty). That event, my doctors told me, increased my long-term risk of pancreatic cancer.

Risk is the game we’re playing here: the weighing of costs and benefits, based on available information. Thus health care is primarily the risk-weighing business we call insurance. For generations, the primary customers of health care — the ones who pay for the services — have been insurance companies. Their business is selling bets on outcomes to us, to our employers, or both. They play that game, to a large extent, by knowing more than we do. Asymmetrical knowledge R them.

Now think about the data involved. Insurance companies live in a world of data. That world is getting bigger and bigger. And yet, McKinsey tells us, it’s not big enough. In The big-data revolution in US health care: Accelerating value and innovation (subtitle: Big data could transform the health-care sector, but the industry must undergo fundamental changes before stakeholders can capture its full value), McKinsey writes,

Fiscal concerns, perhaps more than any other factor, are driving the demand for big-data applications. After more than 20 years of steady increases, health-care expenses now represent 17.6 percent of GDP—nearly $600 billion more than the expected benchmark for a nation of the United States’s size and wealth. To discourage overutilization, many payors have shifted from fee-for-service compensation, which rewards physicians for treatment volume, to risk-sharing arrangements that prioritize outcomes. Under the new schemes, when treatments deliver the desired results, provider compensation may be less than before. Payors are also entering similar agreements with pharmaceutical companies and basing reimbursement on a drug’s ability to improve patient health. In this new environment, health-care stakeholders have greater incentives to compile and exchange information.

While health-care costs may be paramount in big data’s rise, clinical trends also play a role. Physicians have traditionally used their judgment when making treatment decisions, but in the last few years there has been a move toward evidence-based medicine, which involves systematically reviewing clinical data and making treatment decisions based on the best available information. Aggregating individual data sets into big-data algorithms often provides the most robust evidence, since nuances in subpopulations (such as the presence of patients with gluten allergies) may be so rare that they are not readily apparent in small samples.

Although the health-care industry has lagged behind sectors like retail and banking in the use of big data—partly because of concerns about patient confidentiality—it could soon catch up. First movers in the data sphere are already achieving positive results, which is prompting other stakeholders to take action, lest they be left behind. These developments are encouraging, but they also raise an important question: is the health-care industry prepared to capture big data’s full potential, or are there roadblocks that will hamper its use?

The word “patient” appears nowhere in that long passage. The word “stakeholder” appears twice, plus eight more times in the whole piece. Still, McKinsey brooks some respect for the patient, though more as a metric zone than as a holder of a stake in outcomes:

Health-care stakeholders are well versed in capturing value and have developed many levers to assist with this goal. But traditional tools do not always take complete advantage of the insights that big data can provide. Unit-price discounts, for instance, are based primarily on contracting and negotiating leverage. And like most other well-established health-care value levers, they focus solely on reducing costs rather than improving patient outcomes. Although these tools will continue to play an important role, stakeholders will only benefit from big data if they take a more holistic, patient-centered approach to value, one that focuses equally on health-care spending and treatment outcomes.

McKinsey’s customers are not you and me. They are business executives, many of whom work in health care. As players in their game, we have zero influence. As voters in the democracy game, however, we have a bit more. That’s one reason we elected Barack Obama.

So, viewed from the level at which it plays out, the debate over health care, at least in the U.S., is between those who believe in addressing problems with business (especially the big kind) and those who believe in addressing problems with policy (especially the big kind, such as Obamacare).

Big business has been winning, mostly. This is why Obamacare turned out to be a set of policy tweaks on a business that was already highly regulated, mostly by captive lawmakers and regulators.

Meanwhile we have this irony to contemplate: while dying of bad data at a rate rivaling war and plague, our physical bodies are being doubled into digital ones. It is now possible to know one’s entire genome, including clear markers of risks such as cancer and dementia. That’s in addition to being able to know one’s quantified self (QS), plus one’s health care history.

Yet all of that data is scattered and silo’d. This is why it is hard to integrate all our available QS data, and nearly impossible to integrate all our health care history. After I left the Harvard University Health Services (HUHS) system in 2010, my doctor at the time (Richard Donohue, MD, whom I recommend highly) obtained and handed over to me the entirety of my records from HUHS. It’s not data, however. It’s a pile of paper, as thick as the Manhattan phone book. Its utility to other doctors verges on nil. Such is the nature of the bizarre information asymmetry (and burial) in the current system.

On top of that, our health care system incentivizes us to conceal our history, especially if any of that history puts us in a higher risk category, sure to pay more in health insurance premiums.

But what happens when we solve these problems, and our digital selves become fully knowable — by both our selves and our health care providers? What happens to the risk calculation business we have today, which rationalizes more than 400,000 snuffed souls per annum as collateral damage? Do we go to single-payer then, for the simple reason that the best risk calculations are based on the nation’s entire population?

I don’t know.

I do know the current system doesn’t want to go there, on either the business or the policy side. But it will. Inevitably.

At the end of whatever day this is, our physical selves will know our data selves better than any system built to hoard and manage our personal data for their interests more than for ours. When that happens the current system will break, and another one will take its place.

How many more of us will die needlessly in the meantime? And does knowing (or guessing at) that number make any difference? It hasn’t so far.

But that shouldn’t stop us. Hats off to leadership in the direction of actually solving these problems, starting with Adrian Gropper, ePatient Dave, Patient Privacy Rights, Brian Behlendorf, Esther Dyson, John Wilbanks, Tom Munnecke and countless other good people and organizations who have been pushing this rock up a hill for a long time, and aren’t about to stop. (Send me more names or add them in the comments below.)

On February 25, 2008, the FCC held a hearing on network management practices in the Ames Courtroom at Harvard Law School, hosted by the Berkman Center. In that hearing David P. Reed, one of the Internet’s founding scientists, used a plain envelope to explain how the Internet worked, and why it is wrong for anybody other than intended recipients to look inside the contents of the virtual envelopes in which communications are sent over the Internet. It was a pivotal moment in the debate, because the metaphor illustrated clearly how the Internet was designed to respect privacy.

Respect, that is. Not protect.

In the early days of postal communications, the flaps of envelopes were sealed with blobs of wax, usually imprinted by the sender with a symbol. These expressed the intent of the sender — that the contents of the letter were for the eyes of the recipient only. Yes, a letter could be opened without breaking the seal, but not without violating the wishes of the sender.

The other day I wrote, “clothing, for example, is a privacy technology. So are walls, doors, windows and shades.” In the physical world we respect the intentions behind those technologies as well, even though it might be easy to pull open the shirts of strangers, or to open closed doors without knocking on them.

The virtual world is far less civilized. Proof of that is in the pudding of privacy rights violations by agencies of the U.S. government, which are clearly acting at variance with the Fourth Amendment to the Constitution, which says,

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

I see three ways to approach these violations.

One is to rely on geeks and whistleblowers to pull the pants down on violators. Welcome to the end of secrecy argues that the very openness that invites privacy violations is our best protection against the secrecy concealing those violations.

Another is through the exercise of law. In The Only Way to Restore Trust in the NSA, security guru Bruce Schneier writes, “The public has no faith left in the intelligence community or what the president says about it. A strong, independent special prosecutor needs to clean up the mess.” And that’s on top of moves already being made by legislators, for example in South Africa. Given the scale of the offenses now coming to light, we’ll see a lot more of that, even if no special prosecutors get appointed. The law of the jungle will give way to a jungle of new laws. Count on it.

The third is through business — specifically, business modeled on postal services. For many generations, postal services have respected the closed envelope as a matter of course. Yes, we knew there were times and places when mail could be inspected for legitimate reasons. And there were also many things it was not legal to do, or to send, through postal systems. But, on the whole, we could trust them to keep our private communications private. And we paid for the service.

The Googles of the world — companies making their money on advertising — aren’t likely to take the lead here, because they have too much invested in surveillance (of the legal sort) already. But others will step forward. The market for privacy is clear and obvious, and will only become more so as the revelations of abuse continue to pour out.

Perhaps the businesses best positioned to offer secure communications are the postal services themselves. They’ve already been disrupted plenty. Maybe now is the time for them to do some positive disruption themselves.




I want to plug something I am very much looking forward to, and encourage you strongly to attend. It’s called The Overview Effect, and it’s the premiere of a film by that title. Here are the details:

Friday, December 7, 2012 – 5:30pm – 7:00pm
Askwith Lecture Hall
Longfellow Hall
13 Appian Way
Harvard University
Cambridge, MA

The world-premiere of the short documentary film Overview, directed by Guy Reid, edited by Steve Kennedy and photographed by Christoph Ferstad. The film details the cognitive shift in awareness reported by astronauts during spaceflight, when viewing the Earth from space.

Following the film screening, there will be a panel discussion with two NASA astronauts, Ronald J. Garan Jr. and Jeffrey A. Hoffman, discussing their experience with the filmmakers and with Douglas Trumbull, the visual effects producer on films such as 2001: A Space Odyssey, Close Encounters of the Third Kind, and Star Trek: The Motion Picture. The event will be moderated by Harvard Extension School instructor Frank White, author of the book The Overview Effect, which first looked at this phenomenon experienced by astronauts.

This event will take place on the 40th anniversary of the Blue Marble, one of the most famous pictures of Earth, which was taken by the crew of the Apollo 17 spacecraft on December 7, 1972.

Seating is limited and will be assigned on a first-come, first-served basis. The event will also be streamed live at http://alumni.extension.harvard.edu/.

The Overview Effect is something I experience every time I fly, and why I take so many photos to share the experience (and license them permissively so they can be re-shared).

The effect is one of perspective that transcends humanity’s ground-based boundaries. When I look at the picture above, of the south end of Manhattan, flanked by the Hudson and East Rivers, with Brooklyn below and New Jersey above, I see more than buildings and streets and bridges. I see the varying competence of the geology below, of piers and ports active and abandoned. I see the Palisades: a 200-million year old slab of rock that formed when North America and Africa were pulling apart, as Utah and California are doing now, stretching Nevada between them. I see what humans do to landscapes, covering them with roads and buildings, and celebrating them with parks and greenways. I see the glories of civilization, the race between construction and mortality, the certain risks of structures to tides and quakes. I see the Anthropocene — the geological age defined by human influence on the world — in full bloom, and the certainty that other ages will follow, as hundreds have in the past. I see the work of a species that has been from its start the most creative in the 4.65 billion year history of the planet, and a pestilence determined to raid the planet’s cupboards of all the irreplaceable goods that took millions or billions of years to produce. And when I consider how for dozens of years this scene was at the crosshairs of Soviet and terrorist weapons (with the effects of one attack still evident at the southern tip of Manhattan), I begin to see what the great poet Robinson Jeffers describes in The Eye, which he saw from his home in Carmel during WWII.

But it is astronauts who see it best, and this film is theirs. Hope it can help make their view all of ours.

Take a look at these screenshots of maps on my iPhone 4, running iOS 6:


On the left, maps.google.com, made mobile. On the right, Apple’s new Maps app, which comes with iOS 6. The location in both cases is Harvard Square, not far from where I am right now.

Note how the Apple app not only lacks the Harvard Square T stop (essential information for any map of this type), but traffic information as well, not to mention a bunch of other stuff, such as landmarks and street names. (Neither is perfect at the last two, but Google is way better.)

This is beyond inexcusable, especially now that it’s going on two months since Tim Cook apologized for Apple’s Maps fail and promised improvements. How hard can it be, just to add essential subway info? Very, apparently.

I go a bit deeper in this response to this post by Dave a few hours ago. To sum it up, I think only two things will save Apple’s bacon with maps. One is that Nokia/Navteq, Google and others provide maps on iOS that are better than Apple’s, saving Apple the trouble of doing it all. The other is crowd-sourcing the required data, simply because Apple by itself can’t replicate the effort both Google and Nokia/Navteq have put into what they’ve already got. But with the rest of us, Apple can actually do better. It’ll take a sex change for them to un-close their approach to mapping. But they’ll leapfrog the competition in the process, and win loyalty as well.

[Later…] Here is a screenshot that helps enlarge some points I make below in response to Droidkin’s comment:

Apple credits and feedback

Note how dim, dark and hidden the small print is here. “Data from TomTom, others” goes to this list of credits. Also “Report a Problem” is simplex, not duplex, far as I know. You can tell them something but it’s like dropping a pebble into the ocean. Who knows what happens to it?

Making the rounds is The Facebook Fallacy, a killer essay by Michael Wolff in MIT Technology Review. The gist:

At the heart of the Internet business is one of the great business fallacies of our time: that the Web, with all its targeting abilities, can be a more efficient, and hence more profitable, advertising medium than traditional media. Facebook, with its 900 million users, valuation of around $100 billion, and the bulk of its business in traditional display advertising, is now at the heart of the heart of the fallacy.

The daily and stubborn reality for everybody building businesses on the strength of Web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the Web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.

This is the first time I have read anything from a major media writer (and Michael is very much that — in fact I believe he is the best in the biz) that is in full agreement with The Advertising Bubble, my chapter on this very subject in The Intention Economy: When Customers Take Charge. A sample:

One might think all this personalized advertising must be pretty good, or it wouldn’t be such a hot new business category. But that’s only if one ignores the bubbly nature of the craze, or the negative demand on the receiving end for most of advertising’s goods.  In fact, the results of personalized advertising, so far, have been lousy for actual persons…

Tracking and “personalizing”—the current frontier of online advertising—probe the limits of tolerance. While harvesting mountains of data about individuals and signaling nothing obvious about their methods, tracking and personalizing together ditch one of the few noble virtues to which advertising at its best aspires: respect for the prospect’s privacy and integrity, which has long included a default assumption of anonymity.

Ask any celebrity about the price of fame and they’ll tell you: it’s anonymity. This wouldn’t be a Faustian bargain (or a bargain at all) if anonymity did not have real worth. Tracking, filtering and personalizing advertising all compromise our anonymity, even if no PII (Personally Identifiable Information) is collected.  Even if these systems don’t know us by name, their hands are still in our pants…

The distance between what tracking does and what users want, expect and intend is so extreme that backlash is inevitable. The only question is how much it will damage a business that is vulnerable in the first place.

The first section of the book opens with a retrospective view of the present from some point in the near future — say, five or ten years out. A relevant sample:

After the social network crash of 2013, when it became clear that neither friendship nor sociability were adequately defined or managed through proprietary and contained systems (no matter how large they might be), individuals began to assert their independence, and to zero-base their social networking using their own tools, and asserting their own policies regarding engagement.

Customers now manage relationships in their own ways, using standardized tools that embrace the complexities of relationship—including needs for privacy (and, in some cases, anonymity). Thus loyalty to vendors now has genuine meaning, and goes as deep as either party cares to go. In some (perhaps most) cases this isn’t very deep, while in others it can get quite involved.

When I first wrote that, I said 2012. But I decided that was too aggressive, and went with the following year. Maybe I was right in the first place. Time will tell.

Meanwhile, here’s what Michael says about the utopian exhaust Facebook and its “ecosystem” are smoking:

Well, it does have all this data. The company knows so much about so many people that its executives are sure that the knowledge must have value (see “You Are the Ad,” by Robert D. Hof, May/June 2011).

If you’re inside the Facebook galaxy (a constellation that includes an ever-expanding cloud of associated ventures) there is endless chatter about a near-utopian (but often quasi-legal or demi-ethical) new medium of marketing. “If we just … if only … when we will …” goes the conversation. If, for instance, frequent-flyer programs and travel destinations actually knew when you were thinking about planning a trip. Really we know what people are thinking about—sometimes before they know! If a marketer could identify the person who has the most influence on you … If a marketer could introduce you to someone who would relay the marketer’s message … get it? No ads, just friends! My God!

But so far, the sweeping, basic, transformative, and simple way to connect buyer to seller and then get out of the way eludes Facebook.

The buyer is a person. That person does not require either a social network or absolutely-informed guesswork to know who she is or what she wants to buy. Obviously advertising can help. It always has. But totally personalized advertising is icky and oxymoronic. And, after half a decade or more at the business of making maximally-personalized ads, the main result is what Michael calls “the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles.”

That’s one of mine on the right. It couldn’t be more wasted and wrong. Let’s take it from the top.

First, Robert Scoble is an old friend and a good guy. But I couldn’t disagree with him more on the subject of Facebook and the alleged virtues of the fully followed life. (Go to this Gillmor Gang, starting about an hour in, to see Robert and me go at it about this.) Clearly Facebook doesn’t know about that. Nor does any advertiser, I would bet. In any case, Robert likes so many things that his up-thumb has no value to me.

I have no interest in Social Referrals, and if Facebook followed what I’ve written on the subject of “social” (as defined by Facebook and its marketing cohorts), it wouldn’t imagine I would be interested in extole.com.

I’m 64, but married. “Boyfriend wanted” is a low-rent fail as well as an insult.

I get the old yearbook pitch every time I go on Facebook, which is as infrequently as I possibly can. (There are people I can only reach that way, which is why I bother.) I don’t even need to click on the ad to discover that, as I suspected, 60s.yearbookarchives.com is a front for the scammy Classmates.com.

I’ve never been fly fishing, and haven’t fished since I was a kid, many decades ago.

And I don’t want more credit cards, of any kind, regardless of Scoble’s position on Capital One.

In a subchapter of The Filter Bubble titled “A Bad Theory of You,” Eli Pariser calls both Facebook’s and Google’s data-based assumptions about us “pretty poor representations of who we are, in part because there is no one set of data that describes who we are.” He also says that at best they put us into the uncanny valley — a “place where something is lifelike but not convincingly alive, and it gives people the creeps.” But what you see on the right isn’t the best, and it’s not uncanny. It’s typical, and it sucks, even if it does bring Facebook a few $billion per year in click-through-based revenues.

The amazing thing here is that business keeps trying to improve advertising — and always by making it more personal — as if that’s the only way we can get to Michael’s “sweeping, basic, transformative, and simple way to connect buyer to seller and then get out of the way.” Three problems here:

  1. By its nature advertising — especially “brand” advertising — is not personal.
  2. Making advertising personal changes it into something else that is often less welcome.
  3. There are better ways to get to achieve Michael’s objective — ways that start on the buyer’s side, rather than the seller’s.

Don Marti, former Editor-in-Chief of Linux Journal and a collaborator on the advertising chapters in my book, nails the first two problems in a pair of posts. In the first, Ad targeting – better is worse? he says,

Now, as targeting for online advertising gets more and more accurate, the signal is getting lost. On the web, how do you tell a massive campaign from a well-targeted campaign? And if you can’t spot the “waste,” how do you pick out the signal?

I’m thinking about this problem especially from an IT point of view. Much of the value of an IT product is network value, and economics of scale mean that a product with massive adoption can have much higher ROI than a niche product…. So, better targeting means that online advertising carries less signal. You could be part of the niche on which your vendor is dumping its last batch of a “boat anchor” product. This is kind of a paradox: the better online advertising is, the less valuable it is. Companies that want to send a signal are going to have to find a less fake-out-able medium.

In the second, Perfectly targeted advertising would be perfectly worthless, which he wrote in response to Michael’s essay, he adds this:

The more targeted that advertising is, the less effective that it is. Internet technology can be more efficient at targeting, but the closer it gets to perfectly tracking users, the less profitable it has to become.

The profits are in advertising that informs, entertains, or creates a spectacle—because that’s what sends a signal. Targeting is a dead end. Maybe “Do Not Track” will save online advertising from itself.
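Marti's paradox — the better the targeting, the weaker the signal — can be put in a toy model. This sketch is mine, not from either essay, and all numbers are made up: treat an ad's signal as the spend a viewer can credibly infer, which is roughly cost per impression times the audience the viewer believes saw the ad. Perfect targeting shrinks that believable audience to a niche, so even a much higher CPM buys far less signal.

```python
def visible_commitment(cpm_dollars: float, impressions_seen_by_segment: int) -> float:
    """Spend a viewer can infer: cost per thousand impressions times the
    audience the viewer believes saw the same ad."""
    return cpm_dollars / 1000 * impressions_seen_by_segment

# Broadcast campaign: everyone knows everyone saw it.
broadcast = visible_commitment(cpm_dollars=5.0,
                               impressions_seen_by_segment=10_000_000)

# Perfectly targeted campaign: the viewer can only be sure a tiny niche saw it.
targeted = visible_commitment(cpm_dollars=50.0,
                              impressions_seen_by_segment=1_000)

print(broadcast)  # 50000.0 — a visible, costly commitment
print(targeted)   # 50.0 — indistinguishable from a cheap niche dump
```

That asymmetry is the whole argument: the broadcast advertiser demonstrably burned real money in front of everyone, while the targeted one signaled almost nothing, no matter how "efficient" the buy was.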

John Battelle, who is both a first-rate journalist and a leader in the online advertising industry, says this in Facebook’s real question: What’s the native model?:

Facebook makes 82% of its money by selling targeted display advertising – boxes on the top and right side of the site (it’s recently added ads at logout, and in newsfeeds). Not a particularly unique model on its face, but certainly unique underneath: Because Facebook knows so much about each person on its service, it can target in ways Google and others can only dream about. Over the years, Facebook has added new advertising products based on the unique identity, interest, and relationship data it owns: Advertisers can incorporate the fact that a friend of a friend “likes” a product, for example. Or they can incorporate their own marketing content into their ads, a practice known as “conversational marketing” that I’ve been on about for seven or so years (for more on that, see my post Conversational Marketing Is Hot – Again. Thanks Facebook!).

But as many have pointed out, Facebook’s approach to advertising has a problem: People don’t (yet) come to Facebook with the intention of consuming quality content (as they do with media sites), or finding an answer to a question (as they do at Google search). Yet Facebook’s ad system combines both those models – it employs a display ad unit (the foundation of brand-driven media sites) as well as a sophisticated ad-buying platform that’d be familiar to anyone who’s ever used Google AdWords.

I’m not sure how many advertisers use Facebook, but it’s probably a fair guess to say the number approaches or crosses the hundreds of thousands. That’s about how many used Overture and Google a decade ago. The big question is simply this: Do those Facebook ads work as well or better than other approaches? If the answer is yes, the question of valuation is rather moot. If the answer is no…Facebook’s got some work to do.

But Facebook isn’t the real issue here. Working only the sell side of the marketplace is the issue. It’s now time to work the buy side.

The simple fact is that we need to start equipping buyers with their own tools for connecting with sellers, and for engaging in respectful and productive ways. That is, to improve the ability of demand to drive supply, and not to constantly goose up supply to drive demand, and failing 99.x% of the time.

This is an old imperative.

In The Cluetrain Manifesto, which Chris Locke, David Weinberger, Rick Levine and I wrote in 1999, we laid into business — and marketing in particular — for failing to grok the fact that in networked markets, which the Internet gave us, individuals should lead, rather than just follow. So, since business failed to get Cluetrain’s message, I started ProjectVRM in mid-2006 at Harvard’s Berkman Center. The idea was to foster development of tools that make customers both independent of vendors, and better able to engage with vendors. That is, for demand to drive supply, personally. (VRM stands for Vendor Relationship Management.)

Imagine being able to:

  • name your own terms of service
  • define for yourself what loyalty is, what stores you are loyal to, and how
  • be able to gather and examine your own data
  • advertise (or “intentcast”) your own needs in an anonymous and secure way
  • manage your own relationships with all the vendors and other organizations you deal with
  • … and to do all that either on your own or with the help of fourth parties that work for you rather than for sellers (as most third parties do)
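The "intentcast" item above can be made concrete with a sketch. This is a hypothetical format of my own, not any VRM project's actual protocol: a buyer broadcasts a need under a throwaway pseudonym, stating her own terms, so vendors can respond competitively without learning who she is.

```python
import json
import secrets

def intentcast(category: str, terms: dict) -> str:
    """A buyer-side 'advertisement': a need broadcast under a throwaway
    handle, so vendors can bid without learning the buyer's identity."""
    message = {
        "handle": secrets.token_hex(8),  # fresh pseudonym per intent
        "category": category,
        "terms": terms,                  # the buyer's terms, not the seller's
    }
    return json.dumps(message)

cast = intentcast(
    "rental-car",
    {"city": "Boston", "dates": ["2012-06-01", "2012-06-04"], "max_per_day": 40},
)
parsed = json.loads(cast)
print(parsed["category"])   # rental-car
```

Note what is absent: no name, no tracking ID that persists across intents, no seller-imposed fields. Demand drives supply, and the buyer stays anonymous until she chooses otherwise.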

Today there are dozens of VRM developers working at all that stuff and more — to open floodgates of economic possibility when demand drives supply personally, rather than “socially” as part of some ad-funded Web giant’s wet dream. (And socially in the genuine sense, in which each of us knows who our friends, relatives and other associates really are, and in what contexts our actual social connections apply.) I report on those, and the huge implications of their work, in The Intention Economy.

Here’s the thing, and why now is the time to point this out: most of those developers have a hell of a time getting laid by VCs, which on the whole have their heads stuck in a calf-cow model of the Web, and can’t imagine a way to improve the marketplace that does not require breeding yet another cow, or creating yet another ranch for dependent customers. Maybe now that the bloom is off Facebook’s rose, and the Filter Bubble is ready to burst, they can start looking at possibilities over here on the demand side.

So this post is an appeal to investors. Start thinking outside the cow, and outside the ranch. If you truly believe in free markets, then start believing in free customers, and in the development projects that make them not only free, but able to drive sales at a 100% rate, and to form relationships that are worthy of the word.

Bonus links:

HT to John Salvador, for pointing to Life in the Vast Lane, where I kinda predicted some of the above in 2008.

Amazon is now shipping my new book, The Intention Economy. Yes, the Kindle version too. They even have the first chapter available for free. You can “look inside” as well.

Thanks to Amazon’s search, you can even find stuff that’s not in the index, such as the acknowledgements. Those include a lot of people, including everybody who has ever been active on the ProjectVRM list.

The book isn’t for me. It’s for customers. All customers, that is. Not just the ones buying the book. The first paragraph of the Introduction explains,

This book stands with the customer. This is out of necessity, not sympathy. Over the coming years customers will be emancipated from systems built to control them. They will become free and independent actors in the marketplace, equipped to tell vendors what they want, how they want it, where and when—even how much they’d like to pay—outside of any vendor’s system of customer control. Customers will be able to form and break relationships with vendors, on customers’ own terms, and not just on the take-it-or-leave-it terms that have been pro forma since Industry won the Industrial Revolution.

That’s what the VRM development community has been working toward since I launched ProjectVRM at the Berkman Center in 2006. Now that community is getting kinda large. Here at the European Identity and Cloud Conference (#EIC12) in Munich, I have met or learned about a bunch of VRM developers I hadn’t known  before. Pretty soon I won’t be able to keep up, and that’s a good thing.

The book has four main parts:

  1. Customer Captivity
  2. The Networked Marketplace
  3. The Liberated Customer
  4. The Liberated Vendor

In a way it follows up on work begun with The Cluetrain Manifesto. The subtitle there was The End of Business as Usual. The subhead for The Intention Economy is When Customers Take Charge. Hey, when one thing ends, another must begin. This is it.

We’re not there yet. If The Intention Economy speeds things up, it will do its job.



