
Just before it started, the geology meeting at the Santa Barbara Central Library on Thursday looked like this from the front of the room (where I also tweeted the same pano):

Geologist Ed Keller

Our speakers were geology professor Ed Keller of UCSB and Engineering Geologist Larry Gurrola, who also works and studies with Ed. That’s Ed in the shot below.

As a geology freak, I know how easily terms like “debris flow,” “fanglomerate” and “alluvial fan” can clear a room. But this gig was SRO. That’s because around 3:15 in the morning of January 9th, debris flowed out of canyons and deposited fresh fanglomerate across the alluvial fan that comprises most of Montecito, destroying (by my count on the map below) 178 buildings, damaging more than twice that many, and killing 23 people. Two of those—a 2-year-old girl and a 17-year-old boy—are still interred in the fresh fanglomerate and sought by cadaver dogs. The whole thing is beyond sad and awful.

The town was evacuated after the disaster so rescue and recovery work could proceed without interference, and infrastructure could be found and repaired: a job that required removing twenty thousand truckloads of mud and rocks. That work continues while evacuation orders are gradually lifted, allowing the town to repopulate itself to the very limited degree it can.

I talked today with a friend whose business is cleaning houses. Besides grieving the dead, some of whom were friends or customers, she reports that the cleaning work is some of the worst she has ever seen, even in homes that were spared the mud and rocks. Refrigerators and freezers, sitting closed and without electricity for weeks, reek of death and rot. Other customers won’t be back because their houses are gone.

Highway 101, one of just two freeways connecting Northern and Southern California, runs through town near the coast, more than two miles from the mountain front. Three debris flows converged on the highway and used it as a catch basin, filling its deep parts to the height of at least one bridge before spilling over its far side and continuing to the edge of the sea. It took two weeks of constant excavation and repair work before traffic could move again. Most exits remain closed. Coast Village Road, Montecito’s Main Street, is open for employees of stores there, but little is open for customers yet, since infrastructural graces such as water are not fully restored. (I saw the Honor Bar operating with its own water tank, and a water truck nearby.) Opening the Upper Village will take longer. Some landmark institutions, such as San Ysidro Ranch and La Casa de Maria, will take years to restore. (From what I gather, San Ysidro Ranch, arguably the nicest hotel in the world, was nearly destroyed. Its website thanks firefighters for salvation from the Thomas Fire, but nothing, I gather, could have saved it from the huge debris flow that wiped out nearly everything on the flanks of San Ysidro Creek. All the top red dots along San Ysidro Creek in the map below mark lost buildings at the Ranch.)

Here is a map with final damage assessments. I’ve augmented it with labels for the canyons and creeks (with one exception: a parallel creek west of Toro Canyon Creek):

Click on the map for a closer view, or click here to view the original. On that one you can click on every dot and read details about it.

I should pause to note that Montecito is no ordinary town. Demographically, it’s Beverly Hills draped over a prettier landscape and attractive to people who would rather not live in Beverly Hills. (In fact the number of notable persons Wikipedia lists for Montecito outnumbers those it lists for Beverly Hills by a score of 77 to 71.) Culturally, it’s a village. Last Monday in The New Yorker, one of those notable villagers, T. Coraghessan Boyle, unpacked some other differences:

I moved here twenty-five years ago, attracted by the natural beauty and semirural ambience, the short walk to the beach and the Lower Village, and the enveloping views of the Santa Ynez Mountains, which rise abruptly from the coastal plain to hold the community in a stony embrace. We have no sidewalks here, if you except the business districts of the Upper and Lower Villages—if we want sidewalks, we can take the five-minute drive into Santa Barbara or, more ambitiously, fight traffic all the way down the coast to Los Angeles. But we don’t want sidewalks. We want nature, we want dirt, trees, flowers, the chaparral that did its best to green the slopes and declivities of the mountains until last month, when the biggest wildfire in California history reduced it all to ash.

Fire is a prerequisite for debris flows, our geologists explained. So is unusually heavy rain in a steep mountain watershed. There are five named canyons, each its own watershed, above Montecito, as we see on the map above. There are more to the east, above Summerland and Carpinteria, the next two towns down the coast. Those towns also took some damage, though less than Montecito.

Ed Keller put up this slide to explain conditions that trigger debris flows, and how they work:

Ed and Larry were emphatic about this: debris flows are not landslides, nor do many start that way (though one did in Rattlesnake Canyon 1100 years ago). They are also not mudslides, so we should stop calling them that. (Though we won’t.)

Debris flows require sloped soils left bare and hydrophobic—resistant to water—after a recent wildfire has burned off the chaparral that normally (as geologists say) “hairs over” the landscape. For a good look at what these burned soil surfaces look like, and how they are likely to respond to rain, look at the smooth slopes on the uphill side of 101 east of La Conchita. Notice how the surface is not only a smooth brown or gray, but has a crust on it. In a way, the soil surface has turned to glass. That’s why water runs off of it so rapidly.

Wildfires are common, and chaparral is adapted to them, becoming fuel for the next fire as it regenerates and matures. But rainfalls as intense as this one are not common. In just five minutes, more than half an inch of rain fell in the steep and funnel-like watersheds above Montecito (a rate of more than six inches per hour). This happens about once every few hundred years, or about as often as a tsunami.

It’s hard to generalize about the combination of factors required, but Ed has worked hard to do that, and this slide of his is one way of illustrating how debris flows happen eventually in places like Montecito and Santa Barbara:

From bottom to top, here’s what it says:

  1. Fires happen almost regularly, spreading most widely where chaparral has matured to become abundant fuel, as the firefighters like to call it.
  2. Flood events are more random, given the relative rarity of rain and even more rare rains of “biblical” volume. But they do happen.
  3. Stream beds in the floors of canyons accumulate rocks and boulders that roll down the gradually eroding slopes over time. The depth of these is expressed as basin instability. Debris flows clear out the rocks and boulders when a big flood event comes right after a fire, and the basin becomes stable (relatively rock-free) again.
  4. The sediment yield in a flood (F) is maximum when a debris flow (DF) occurs.
  5. Debris flows tend to happen once every few hundred years. And you’re not going to get the big ones if you don’t have the canyon stream bed full of rocks and boulders.

About this set of debris flows in particular:

  1. Destruction down Oak Creek wasn’t as bad as on Montecito, San Ysidro, Buena Vista and Romero Creeks because the canyon feeding it is smaller.
  2. When debris flows hit an obstruction, such as a bridge, they seek out a new bed to flow on. This is one of the actions that creates an alluvial fan. From the map it appears something like that happened—
    1. Where the flow widened when it hit Olive Mill Road, fanning east of Olive Mill to destroy all three blocks between Olive Mill and Santa Elena Lane before taking the Olive Mill bridge across 101 and down to the Biltmore, while also helping other flows fill 101 as well. (See Mac’s comment below, and his link to a topo map.)
    2. In the area between Buena Vista Creek and its East Fork, which come off different watersheds
    3. Where a debris flow forked south of Mountain Drive after destroying San Ysidro Ranch, continuing down both Randall and El Bosque Roads.

For those who caught (or are about to catch) Ellen’s FaceTime with Oprah visiting neighbors, that happened among the red dots at the bottom end of the upper destruction area along San Ysidro Creek, just south of East Valley Road. Oprah’s own place is in the green area beside it on the left, looking a bit like Versailles. (Credit where due, though: Oprah’s was a good and compassionate report.)

Big question: did these debris flows clear out the canyon floors? We (meaning our geologists, sedimentologists, hydrologists and other specialists) won’t know until they trek back into the canyons to see how it all looks. Meanwhile, we do have clues. For example, here are before-and-after photos of Montecito, shot from space. And here is my close-up of the latter, shot one day after the event, when everything was still bare streambeds in the mountains and fresh muck in town:

See the white lines fanning back into the mountains through the canyons (Cold Spring, San Ysidro, Romero, Toro) above Montecito? Ed explained that these appear to be the washed out beds of creeks feeding into those canyons. Here is his slide showing Cold Spring Creek before and after the event:

Looking back at Ed’s basin threshold graphic above, one might say that there isn’t much sediment left for stream beds to yield, and that those in the floors of the canyons have returned to stability, meaning there’s little debris left to flow.

But that photo was of just one spot. There are many miles of creek beds to examine back in those canyons.

Still, one might hope that Montecito has now had its required 200-year event, and a couple more centuries will pass before we have another one.

Ed and Larry caution against such conclusions, emphasizing that most of Montecito’s and Santa Barbara’s inhabited parts gain their existence, beauty or both by grace of debris flows. If your property features boulders, Ed said, a debris flow put them there, and did that not long ago in geologic time.

For an example of boulders as landscape features, here are some we quarried out of our yard more than a decade ago, when we were building a house dug into a hillside:

This is deep in the heart of Santa Barbara.

The matrix mud we now call soil here is likely a mix of Juncal and Cozy Dell shale, Ed explained. Both are poorly lithified silt and erode easily. The boulders are a mix of Matilija and Coldwater sandstone, which comprise the hardest and most vertical parts of the Santa Ynez mountains. The two are so similar that only a trained eye can tell them apart.

All four of those geological formations were established long after dinosaurs vanished. All also accumulated originally as sediments, mostly on ocean floors, probably not far from the equator.

To illustrate one chapter in the story of how those rocks and sediments got here, UCSB has a terrific animation of how the transverse (east-west) Santa Ynez Mountains came to be where they are. Here are three frames in that movie:

What it shows is how, when the Pacific Plate was grinding its way northwest about eighteen million years ago, a hunk of that plate about a hundred miles long and the shape of a bread loaf broke off. At the top end were the future Malibu hills, and at the bottom end was the future Point Conception, then situated south of what’s now Tijuana. The future Santa Barbara was west of the future Newport Beach. Then, when the Malibu end of this loaf got jammed at the future Los Angeles, the bottom end of the loaf swept out, clockwise and intact. At the start it was pointing at 5 o’clock, and at the end (which hasn’t come yet) it points at 9 o’clock. This was, and remains, a sideshow off the main event: the continuing crash of the Pacific Plate and the North American one.

Here is an image that helps, from that same link:

Find more geology, with lots of links, in Making sense of what happened to Montecito. I put that post up on the 15th and have been updating it since then. It’s the most popular post in the history of this blog, which I started in 2007. There are also 58 comments, so far.

I’ll be adding more to this post after I visit as much as I can of Montecito (exclusion zones permitting). Meanwhile, I hope this proves useful. Again, corrections and improvements are invited.

30 January


In The Adpocalypse: What it Means, Vlogbrother Hank Green issues a humorous lament on the impending demise of online advertising. Please devote the next 3:54 of your life to watching that video, so you catch all his points and I don’t need to repeat them here.

Got them? Good.

All of Hank’s points are well-argued and make complete sense. They are also valid mostly inside the bowels of the Google beast where his video work has thrived for the duration, as well as inside the broadcast model that Google sort-of emulates. (That’s the one where “content creators” and “brands” live in some kind of partly-real and partly-imagined symbiosis.)

While I like and respect what the brothers are trying to do commercially inside Google’s belly, I also expect that they, and countless other “content creators,” will get partly or completely expelled after Google finishes digesting that market and obeys its appetite for lucrative new markets that obsolesce its current one.

We can see that appetite at work now that Google Contributor screams agreement with ad blockers (a business Google is also joining) and their half-billion human operators: advertising has negative value. This is at odds with the business model that has long sustained both YouTube and the “content creators” who make money there.

So it now appears that being a B2B creature that sells eyeballs to advertisers is Google’s larval stage, and that Google intends to emerge from its chrysalis as a B2C creature that sells content directly to human customers. (And stays hedged with search advertising, which is really more about query-based notifications than advertising, and doesn’t require unwelcome surveillance that will get whacked by the GDPR anyway a year from now.) 

Google will do this two ways: 1) through Contributor (an “ad removal pass” you buy) and 2) through subscriptions to YouTube TV (a $35/month cable TV replacement) and/or YouTube Red ($9.99/month for “uninterrupted music, ad-free videos, and more”).

Contributor is a way for Google to raise its share of the adtech duopoly it comprises with Facebook. The two paid video offerings are ways for Google to maximize its wedge of a subscription pie also sliced up by Apple, Amazon, Netflix, HBO, ShowTime, all the ISPs and every publication you can name—and to do that before we all hit Peak Subscription. (Which I’m sure most of us can see coming. I haven’t written about it yet, but I have touched hard on it here and here.)

I hope the Vlogbrothers make money from YouTube Red once they’re behind that paywall. Or that they can sell their inventory outside all the silos, like some other creators do. Maybe they’ll luck out if EmanciPay or some other new and open customer-based way of paying for creative goods works out. Whether or not that happens, one or more of the new blockchain/distributed ledger/token systems will provide countless new ways that stuff will get offered and paid for in the world’s markets. Brave Payments is already pioneering in that space. (Get the Brave browser and give it a try.)

It helps to recognize that the larger context (in fact the largest one) is the Internet, not the Web (which sits on top of the Net), and not apps (which are all basically on loan from their makers and the distribution systems of Apple and Google). The Internet cannot be contained in, or reduced to, the feudal castles of Facebook and Google, which mostly live on the Web. Those are all provisional and temporary. Money made by and within them is an evanescent grace.

All the Net does is connect end points and pass data between them through any available path. This locates us in a second world alongside the physical one, where the distance between everything it connects rounds to zero. This is new to human experience and at least as transformative as language, writing, printing and electricity—and no less essential than any of those, meaning it isn’t going to go away, no matter how well the ISPs, governments and corporate giants succeed in gobbling up and sphinctering business and populations inside their digestive tracts.

The Net is any-to-any, by any means, by design of its base protocols. This opens countless possibilities we have barely begun to explore, much less build out. It is also an experience for humanity that is not going to get un-experienced if some other base protocols replace the ones we have now.

I am convinced that we will find new ways in our connected environment to pay for goods and services, and to signal each other much more securely, efficiently and effectively than we do now. I am also convinced we will do all that in a two-party way rather than in the three-party ways that require platforms and bureaucracies. If this sounds like anarchy, well, maybe: yeah. I dunno. We already have something like that in many disrupted industries. (Some wise stuff got written about this by David Graeber in The Utopia of Rules.)

Not a day goes by that my mind isn’t blown by the new things happening that have not yet cohered into an ecosystem but still look like they can create and sustain many forms of economic and social life, new and old. I haven’t seen anything like this in tech since the late ’90s. And if that sounds like another bubble starting to form, yes it is. You see it clearly in the ICO market right now. (Look at what’s lined up so far. Wholly shit.)

But this one is bigger. It’s also going to bring down everybody whose business is guesswork filled with fraud and malware.

If you’re betting on which giants survive, hold Amazon and Apple. Short those other two.

The NYTimes says the Mandarins of language are demoting the Internet to a common noun. It is to be just “internet” from now on. Reasons:

Thomas Kent, The A.P.’s standards editor, said the change mirrored the way the word was used in dictionaries, newspapers, tech publications and everyday life.

“In our view, it’s become wholly generic, like ‘electricity’ or the ‘telephone,’” he said. “It was never trademarked. It’s not based on any proper noun. The best reason for capitalizing it in the past may have been that the word was new. But at one point, I’ve heard, ‘phonograph’ was capitalized.”

But we never called electricity “the Electricity.” And “the telephone” referred to a single thing of which there are billions of individual examples.

What was it about “the Internet” that made us want to capitalize it in the first place? Is usage alone reason enough to stop respecting that?

Some of my tech friends say the “Internet” we’ve had for all these years is just one prototype: the first and best-known of many other possible ones.

All due respect, but: bah.

There is only one Internet just like there is only one Universe. There are other examples of neither.

Formalizing the lower-case “internet,” for whatever reason, dismisses what’s transcendent and singular about the Internet we have: a whole that is more, and other, than a sum of parts.

I know it looks like the Net is devolving into many separate systems, isolated and silo’d to some degree. We see that with messaging, for example. Hundreds of different ones, most of them incompatible, on purpose. We have specialized mobile systems that provide variously open vs. sphinctered access (such as T-Mobile’s “binge” allowance for some content sources but not others), zero-rated not-quite-internets (such as Facebook’s Free Basics) and countries such as China, where many domains and uses are locked out.

Some questions…

Would we enjoy a common network by any name today if the Internet had been lower-case from the start?

Would makers or operators of any of the parts that comprise the Internet’s whole feel any fealty to what at least ought to be the common properties of that whole? Or would they have made sure that their parts only got along, at most, with partners’ parts? Would the first considerations by those operators not have been billing and tariffs agreed to by national regulators?

Hell, would the four of us have written The Cluetrain Manifesto? Would David Weinberger and I have written World of Ends or New Clues if the Internet had lacked upper-case qualities?

Would the world experience the absence of distance and cost across The Giant Zero in its midst were it not for the Internet’s founding design, which left out billing and proprietary routing on purpose?

Would we have anything resembling the Internet of today if designing and building it had been left up to phone and cable companies? Or to governments (even respecting the roles government activities did play in creating the Net we do have)?

I think the answer to all of those would be no.

In The Compuserve of Things, Phil Windley begins, “On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?”

Would he, or anybody, ask such questions, or aspire to such purposes, were it not for the respect many of us pay to the upper-cased-ness of “the Internet?”

How does demoting Internet from proper to common noun not risk (or perhaps even assure) its continued devolution to a collection of closed and isolated parts that lack properties (e.g. openness and commonality) possessed only by the whole?

I don’t know. But I think these kinds of questions are important to ask, now that the keepers of usage standards have demoted what the Net’s creators made — and ignore why they made it.

If you care at all about this, please dig Archive.org‘s Locking the Web open: a Call for a Distributed Web, Brewster Kahle’s post by the same title, covering more ground, and the Decentralized Web Summit, taking place on June 8-9. (I’ll be there in spirit. Alas, I have other commitments on the East Coast.)

A photo readers find among the most interesting of the 13,000+ aerial photos I've put on Flickr

This photo of the San Juan River in Utah is among the tens of thousands I’ve put on Flickr. It might be collateral damage if Yahoo dies or fails to sell the service to a worthy buyer.

Flickr is far from perfect, but it is also by far the best online service for serious photographers. At a time when the center of photographic gravity is drifting from arts & archives to selfies & social, Flickr remains both retro and contemporary in the best possible ways: a museum-grade treasure it would hurt terribly to lose.

Alas, it is owned by Yahoo, which is, despite Marissa Mayer’s best efforts, circling the drain.

Flickr was created and lovingly nurtured by Stewart Butterfield and Caterina Fake, from its creation in 2004 through its acquisition by Yahoo in 2005 and until their departure in 2008. Since then it’s had ups and downs. The latest down was the departure of Bernardo Hernandez in 2015.

I don’t even know who, if anybody, runs it now. It’s sinking in the ratings. According to Petapixel, it’s probably up for sale. Writes Michael Zhang, “In the hands of a good owner, Flickr could thrive and live on as a dominant photo sharing option. In the hands of a bad one, it could go the way of MySpace and other once-powerful Internet services that have withered away from neglect and lack of innovation.”

Naturally, the natives are restless. (Me too. I currently have 62,527 photos parked and curated there. They’ve had over ten million views and run about 5,000 views per day. I suppose it’s possible that nobody is more exposed in this thing than I am.)

So I’m hoping a big and successful photography-loving company will pick it up. I volunteer Adobe. It has the photo editing tools most used by Flickr contributors, and I expect it would do a better job of taking care of both the service and its customers than would Apple, Facebook, Google, Microsoft or other possible candidates.

Less likely, but more desirable, is some kind of community ownership. Anybody up for a kickstarter?

[Later…] I’m trying out 500px. Seems better than Flickr in some respects so far. Hmm… Is it possible to suck every one of my photos, including metadata, out of Flickr by its API and bring it over to 500px?
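For what it’s worth, pulling photos and their metadata out of Flickr by its API does look doable; getting them into 500px is a separate question I haven’t explored. Below is a minimal sketch in Python against Flickr’s public REST endpoint. The API_KEY and USER_ID values are placeholders, it only covers public photos (private ones would need OAuth), and the original-size URL shows up only when Flickr makes originals accessible.

```python
# Rough sketch: export public photos and their metadata from Flickr's REST API to a JSON file.
import json
import time
import requests

API_KEY = "your-flickr-api-key"      # placeholder
USER_ID = "12345678@N00"             # placeholder Flickr user ID (NSID)
REST = "https://api.flickr.com/services/rest/"

def fetch_page(page):
    """Fetch one page (up to 500 photos) of the user's public photostream."""
    params = {
        "method": "flickr.people.getPublicPhotos",
        "api_key": API_KEY,
        "user_id": USER_ID,
        "extras": "description,date_taken,tags,geo,url_o",
        "per_page": 500,
        "page": page,
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(REST, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    if data.get("stat") != "ok":
        raise RuntimeError(f"Flickr API error: {data}")
    return data["photos"]

def export_all(outfile="flickr_export.json"):
    page, pages, records = 1, 1, []
    while page <= pages:
        photos = fetch_page(page)
        pages = photos["pages"]
        for p in photos["photo"]:
            records.append({
                "id": p["id"],
                "title": p.get("title", ""),
                "taken": p.get("datetaken"),
                "tags": p.get("tags", ""),
                "description": p.get("description", {}).get("_content", ""),
                "original_url": p.get("url_o"),   # only present if originals are accessible
                "latitude": p.get("latitude"),
                "longitude": p.get("longitude"),
            })
        page += 1
        time.sleep(1)   # be gentle with the API
    with open(outfile, "w") as f:
        json.dump(records, f, indent=2)
    print(f"Exported {len(records)} photo records to {outfile}")

if __name__ == "__main__":
    export_all()
```

Downloading the originals is then just a matter of fetching each original_url; whether 500px can take them in bulk, metadata and all, depends on whatever import tools it offers.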

I also like Thomas Hawk‘s excellent defense of Flickr, here.




You won’t find an AM radio in a Tesla Model X. You also won’t find one in other electric cars, such as the BMW i3. One reason is that AM reception is trashed by electrical noise, which computing things constantly cause. Another is that the best AM reception requires a whip antenna outside the car: the longer the better. These days car makers hide antennas in windows and little shark fins on the roof. A third is that car makers have been cheaping out on the chips used in their AM radios for years, and the ones in home radios are even worse.

Demand for AM has been waning for decades anyway. AM doesn’t sound as good as FM or digital streams on laptops and mobile things. (Well, it can sound good with HD Radio, but that’s been a non-starter on both the transmitting and receiving sides for many years.) About the only formats left on AM that get ratings in the U.S. are sports and news, and sports is moving to FM too, even though coverage on FM in some markets, relatively speaking, sucks. (Compare WFAN/660am and 101.9fm, which simulcast.)

AM stations are back in the pack at best in the ratings. In Raleigh-Durham, WPTF/680 ruled “the book” for decades, and is now the top of the bottom-feeders, with just a 1.0% share. KGO/810, which was #1 for a lifetime in the Bay Area, is now #19 with a 2.0% share. Much of KGO’s talent has been fired, and there’s a Facebook page for disgruntled fans. Not that it matters.

In Europe, AM is being clear-cut like a diseased forest. Norway ended AM broadcasting a while back, and will soon kill FM too. Germany killed all AM broadcasting at the end of last year, just a few days ago. The American AFN (Armed Forces Network), which I used to love listening to over its 150,000-watt signal on 873KHz from Frankfurt, is also completely gone on AM in Germany. All transmitters are down. The legendary Marnach transmitter of Radio Luxembourg, “planet Earth’s biggest commercial radio station,” also shut down when 2016 arrived, and its towers will soon be down too.

Europe’s other AM band, LW or longwave, is also being abandoned. The advantage of longwave is coverage. Signals on longwave spread over enormous territories, and transmitters can run two million watts strong. But listening has gone steadily down, and longwave is even more vulnerable to electrical noise than AM/MW. And running megawatt transmitters is expensive. So now Germany’s monster signal at 153KHz is gone, and France’s at 162KHz (one of the two-million-watt ones) is due to go down later this year. And this report says all that’s keeping the BBC’s landmark Radio 4 signal going on 198KHz is a collection of giant vacuum tubes that are no longer made. Brazil is moving from AM to FM as well. For an almost daily report on the demise of AM broadcasting around the world, read MediumWave News.

FM isn’t safe either. The UK is slowly phasing out both AM and FM, while phasing in Digital Audio Broadcasting (DAB). Norway, the DAB pioneer, is following suit and will soon kill off FM. No other countries have announced the same plans, but the demographics of radio listening are shifting from FM to online anyway, just as they shifted from AM to FM in past decades. Streaming stats are only going up and up. So is podcasting. (Here are Pew’s stats from a year ago.)

Sure, there’s still plenty of over-the-air listening. But ask any college kid if he or she listens to over-the-air radio. Most, in my experience anyway, say no, or very little. They might listen in a car, but their primary device for listening — and watching video, which is radio with pictures — is their phone or tablet. So the Internet today is doing to FM what FM has been doing to AM for decades. Only faster.

Oh, and then there’s the real estate issue. AM/MW and LW transmission requires a lot of land. As stations lose value, often the land under their transmitters is worth more. (We saw this last year with WMAL/630 in Washington, which I covered here.) FM and TV transmission requires height, which is why their transmitters crowd the tops of buildings and mountains. The FCC is now auctioning off TV frequencies, since nearly everybody is now watching TV on cable, satellite or computing devices. And at some point it becomes cheaper and easier for radio stations, groups and networks to operate servers than to pay electricity and rent for transmitters.

This doesn’t mean radio goes away. It just goes online, where it will stay. It’ll suck that you can’t get stations where there isn’t cellular or wi-fi coverage, but that matters less than this: there are many fewer limits to broadcasting and listening online, obsoleting the “station” metaphor, along with its need for channels and frequencies. Those are just URLs now.

On the Internet band, anybody can stream or podcast to the whole world. The only content limitations are those set by (or for) rights-holders to music and video content. If you’ve ever wondered why there’s very little music on podcasts (they’re almost all talk), it’s because “clearing rights” for popular — or any — recorded music for podcasting ranges from awful to impossible. Streaming is easier, but no bargain. To get a sense of how complex streaming is, copyright-wise, dig David Oxenford’s Broadcast Law Blog. If all you want to do is talk, however, feel free, because you are. (A rough rule: talk is cheap, music is expensive.)

The key thing is that radio will remain what it has been from the start: the most intimate broadcast medium ever created. And it might become even more intimate than ever, once it’s clear and easy to everyone that anyone can do it. So rock on.

Bonus links:


(Cross-posted from this at Facebook)

In Snow on the Water I wrote about the “low threshold of death” for what media folks call “content” — which always seemed to me like another word for packing material. But it’s common parlance now.

For example, a couple days ago I heard a guy on WEEI, my fave sports station in Boston, yell “Coming up! Twenty-five straight minutes of content!”

Still, it’s all gone like snow on the water, melting at the speed of short term memory decay. Unless it’s in a podcast. And then, even if it’s saved, it’ll still get flushed or 404’d in the fullness of time.

So I think about content death a lot.

Back around the turn of the millennium, John Perry Barlow said “I didn’t start hearing the word ‘content’ until the container business felt threatened.” Same here. But the container business now looks more like plumbing than freight forwarding. Everything flows. But to where?

My Facebook timeline, standing in the vertical, looks like a core sample of glacier ice, drilled back to 1947, the year I showed up. Memory, while it lasts, is of old stuff which in the physical world would rot, dry, disintegrate, vanish or lithify from the bottom up.

But here we are on the Web, which was designed as a way to share documents, not to save them. It presumed a directory structure, inherited from Unix (e.g. domain.something/folder/folder/file.html). Amazingly, it’s still there. Whatever longevity “content” enjoys on the Web is largely owed to that structure, I believe.

But in practice most of what we pile onto the top of the Web is packed into silos such as Facebook. What happens to everything we put there if Facebook goes away? Bear in mind that Facebook isn’t even yet a decade old. It may be huge, but it’s no more permanent than a sand dune. Nothing on the Web is.

Everything on the Web, silo’d or not, flows outward from its sources like icebergs from glaciers, melting at rates of their own.

The one exception to that rule is the Internet Archive, which catches as much as it can of all that flow. Huge thanks to Brewster Kahle and friends for giving us that.

Anyway, just wanted to share some thoughts on digital mortality this morning.

As you were. Or weren’t. Or will be. Or not.

Bonus link: Locking the Web open.

Many years ago, Craig Burton shared the best metaphor for the Internet that I have ever heard, or seen in my head. He called it a hollow sphere: a giant three-dimensional zero. He called it that because a sphere’s geometry best illustrates a system in which every end, regardless of its physical location, is functionally zero distance away from every other end. Across the nothing in the Net’s hollow sphere, every point can “see” every other point, and connect to it, as if distance were not there. And at no cost.

It doesn’t matter that the Net’s base protocol, TCP/IP, is not perfect, that there are costs and latencies involved in the operation of connections and routers between end points — and that many people in the world still do not enjoy the Net’s graces. What matters is that our species’ experience of the Net, and of the world it creates, is of zero distance and cost. You and I can publish posts like this one, or send emails to each other, or even have live video conference calls, with little if any regard for distance and cost.

Our experience of this is as essential to our future as the discovery of language and fire was to our ancestors. The Net has already become as essential to human agency — the capacity to act with effect in the world — as the wheel and movable type. We are not going to un-discover it.

Yes, companies and governments can control our access to the Net, and sphincter what passes through it; but it’s too late for anybody or anything to keep our species from knowing what it’s like to be zero distance apart at zero cost. We now have that experience, and we will use it to change life on Earth. Hopefully for the better.

The Giant Zero of the Net has an analogue with the physical world, whose gravity pulls us all toward an invisible center we can’t see but know is there. As with the Net’s zero, we live on Earth’s surface. The difference is that, on the Earth’s zero, distance matters. So does the inverse square law. Sound, sight and radio waves fade across distances. We need to be close to hear and see each other. Not so on the Net.

The Giant Zero is also the title of my next book. Until then, if you dig the metaphor, you might also source World of Ends or NewClues, both of which are co-written by David Weinberger. For now I just want to post this so I can source something simple about The Giant Zero in one link.

HT to @dweinberger: every hyperlink travels across the zero. And thanks to Hugh McLeod for the image above. Way back in 2004, I asked him to draw me the Internet, and that’s what he did. I haven’t seen anything better since.

Look where Meerkat and Periscope point. I mean, historically. They vector toward a future where anybody anywhere can send live video out to the glowing rectangles of the world.

If you’ve looked at the output of either, several things become clear about their inevitable evolutionary path:

  1. Mobile phone/data systems will get their gears stripped, in both directions. And it will get worse before it gets better.
  2. Stereo sound recording is coming. Binaural recording too. Next…
  3. 3D. Mobile devices in a generation or two will include two microphones and two cameras pointed toward the subject being broadcast. Next…
  4. VR, or virtual reality.

Since walking around like a dork holding a mobile in front of you shouldn’t be the only way to produce these videos, glasses like these are inevitable:


(That’s a placeholder design in the public domain, so it has no IP drag, other than whatever submarine patents already exist, and I am sure there are some.)

Now pause to dig Facebook’s 10-year plan to build The Matrix. How long before Facebook buys Meerkat and builds it into Oculus Rift? Or buys Twitter, just to get Periscope and do the same?

Whatever else happens, the rights clearing question gets very personal. Do you want to be recorded by others and broadcast to the world or not? What are the social and device protocols for that? (Some are designed into the glasses above. Hope they help.)

We should start zero-basing some answers today, while the inevitable is in sight but isn’t here yet.

It should help to remember that all copyright laws were created in times when digital life was unimaginable (e.g. Statute of Anne, ASCAP), barely known (Act of 1976), or highly feared (WIPO, CTEA, DMCA).

How would we write new laws for the new video age that has barely started? Or why start with laws at all? (Remember that nearly all regulation protects yesterday from last Thursday — and is often written by know-nothings.)

We’ve only been living the networked life since graphical browsers and ISPs arrived in the mid-90’s. Meanwhile we’ve had thousands of years to develop civilization in the physical world.

Relatively speaking, digital networked life is Eden, which also didn’t come with privacy. That’s why we made clothing and shelter, and eventually put both on hooves and wheels.

How will we create the digital equivalents of the privacy technologies we call clothing, shelter, buttons, zippers, doors, windows, shades, blinds and curtains? Are the first answers technical or policy ones? Or both? (I favor the technical, fwiw. Code is Law and all that.)

Protecting the need for artists to make money is part of the picture. But it’s not the only part. And laws are only one way to protect artists, or anybody.

Manners come first, and we don’t have those yet. Meaning we also lack civilization, which is built on, and with, manners of many kinds. Think about how much manners are lacking in the digital world. So far.

None of the big companies that dominate our digital lives have fully thought out how to protect anybody’s privacy. Those that come closest are ones we pay directly, and are therefore accountable to us (to a degree). Apple and Microsoft, for example, are doing more and more to isolate personal data to spaces the individual controls and the company can’t see — and to keep personal data away from the advertising business that sustains Google and Facebook, which both seem to regard personal privacy as a bug in civilization, rather than a feature of it. Note that we also pay those two companies nothing for their services. (We are mere consumers, whose lives are sold to the company’s actual customers, which are advertisers.)

Bottom line: the legal slate is covered in chalk, but the technical one is close to clean. What do we want to write there?

Start here: privacy is personal. We need to be able to signal our intentions about privacy — both as people doing the shooting, and the people being shot. A red light on a phone indicating recording status (as we have on video cameras) is one good step for video producers. On the other side of the camera, we need to signal what’s okay and what’s not. Clothing does that to some degree. So do doors, and shades and shutters on windows. We need the equivalent in our shared networked space. The faster and better we do that, the better we’ll be able to make good TV.

In There Is No More Social Media — Just Advertising, Mike Proulx (@McProulx) begins,

Fifteen years ago, the provocative musings of Levine, Locke, Searls and Weinberger set the stage for a grand era of social media marketing with the publication of “The Cluetrain Manifesto” and their vigorous declaration of “the end of business as usual.”

For a while, it really felt like brands were beginning to embrace online communities as a way to directly connect with people as human beings. But over the years, that idealistic vision of genuine two-way exchange eroded. Brands got lazy by posting irrelevant content and social networks needed to make money.

Let’s call it what it is: Social media marketing is now advertising. It’s largely a media planning and buying exercise — emphasizing viewed impressions. Brands must pay if they really want their message to be seen. It’s the opposite of connecting or listening — it’s once again broadcasting.

Twitter’s Dick Costolo recently said that ads will “make up about one in 20 tweets.” It’s also no secret that Facebook’s organic reach is on life support, at best. And when Snapchat launched Discover, it was quick to point out that “This is not social media.”

The idealistic end to business as usual, as “The Cluetrain Manifesto” envisioned, never happened. We didn’t reach the finish line. We didn’t even come close. After a promising start — a glimmer of hope — we’re back to business as usual. Sure, there have been powerful advances in ad tech. Media is more automated, targeted, instant, shareable and optimized than ever before. But is there anything really social about it? Not below its superficial layer.

First, a big thanks to Mike and @AdAge for such a gracious hat tip toward @Cluetrain. It’s amazing and gratifying to see the old meme still going strong, sixteen years after the original manifesto went up on the Web. (And it’s still there, pretty much unchanged — since 24 March 1999.) If it weren’t for marketing and advertising’s embrace of #Cluetrain, it might have been forgotten by now. So a hat tip to those disciplines as well.

An irony is that Cluetrain wasn’t meant for marketing or advertising. It was meant for everybody, including marketing, advertising and the rest of business. (That’s why @DWeinberger and I recently appended #NewClues to the original.) Another irony is that Cluetrain gets some degree of credit for helping social media come along. Even if that were true, it wasn’t what we intended. What we were looking for was more independence and agency on the personal side — and for business to adapt.

When that didn’t happen fast enough to satisfy me, I started ProjectVRM in 2006, to help the future along. We are now many people and many development projects strong. (VRM stands for Vendor Relationship Management: the customer-side counterpart of Customer Relationship Management — a $20+ billion business on the sellers’ side.)

Business is starting to notice. To see how well, check out the @Capgemini videos I unpack here. Also see how some companies (e.g. @Mozilla) are hiring VRM folks to help customers and companies shake hands in more respectful and effective ways online.

Monday, at VRM Day (openings still available), Customer Commons (ProjectVRM’s nonprofit spinoff) will be vetting a VRM maturity framework that will help businesses and their advisors (e.g. @Gartner, @Forrester, @idc, @KuppingerCole and @Ctrl-Shift) tune in to the APIs (and other forms of signaling) of customers expressing their intentions through tools and services from VRM developers. (BTW, big thanks to KuppingerCole and Ctrl-Shift for their early and continuing support for VRM and allied work toward customer empowerment.)

The main purpose of VRM Day is prep toward discussions and coding that will follow over the next three days at the XXth Internet Identity Workshop, better known as IIW, organized by @Windley, @IdentityWoman and myself. IIW is an unconference: no panels, no keynotes, no show floor. It’s all breakouts, demos and productive conversation and hackery, with topics chosen by participants. There are tickets left for IIW too. Click here. Both VRM Day and IIW are at the amazing and wonderful Computer History Museum in downtown Silicon Valley.

Mike closes his piece by offering five smart things marketers can do to “make the most of this era of #NotReally social media marketing.” All good advice.

Here’s one more that leverages the competencies of agencies like Mike’s own (@HillHolliday): Double down on old-fashioned Madison Avenue-type brand advertising. It’s the kind of advertising that carries the strongest brand signal. It’s also the most creative, and the least corrupted by tracking and other jive that creeps people out. (That stuff doesn’t come from Madison Avenue, by the way. Its direct ancestor is direct marketing, better known as junk mail. I explain the difference here.) For more on why that’s good, dig what Don Marti has been saying.

(BTW & FWIW, I was also in the ad agency business, as a founder and partner in Hodskins Simone & Searls, which did kick-ass work from 1978 to 1998. More about that here.)

Bottom line: business as usual will end. Just not on any schedule.



IIW XX, the 20th Internet Identity Workshop, comes at a critical inflection point in the history of VRM: Vendor Relationship Management, the only business movement working toward giving you both

  1. independence from the silos and walled gardens of the world; and
  2. better means for engaging with every business in the world — your way, rather than theirs.

If you’re looking for a point of leverage on the future of customer liberation, independence and empowerment, IIW is it.

Wall Street-sized companies around the world are beginning to grok what Main Street ones have always known: customers aren’t just “targets” to be “acquired,” “managed,” “controlled” and “locked in.” In other words, Cluetrain was right when it said this, in 1999:

if you only have time for one clue this year, this is the one to get…

Now it is finally becoming clear that free customers are more valuable than captive ones: to themselves, to the companies they deal with, and to the marketplace.

But how, exactly? That’s what we’ll be working on at IIW, which runs from April 7 to 9 at the Computer History Museum, in the heart of Silicon Valley: the best venue ever created for a get-stuff-done unconference.

Focusing our work is a VRM maturity framework that gives every company, analyst and journalist a list of VRM competencies, and every VRM developer a context in which to show which of those competencies they provide and how far along the maturity path they are. This will start paving the paths along which individuals, tool and service providers and corporate systems (e.g. CRM) can finally begin to fit their pieces together. It will also help legitimize VRM as a category. If you have a VRM or related company, now is the time to jump in and participate in the conversation. Literally. Here are some of the VRM topics and technology categories that we’ll be talking about, and placing in context in the VRM maturity framework:

Note: Another version of this post appeared first on the ProjectVRM blog. I’m doing a rare cross-posting here because it’s that important.

