Nature


2005 Landslide at La Conchita

Most of California has just two seasons: rain and fire. Rain is another name for Winter, and it peaks in January. In most years, January in California isn't any wetter than, say, New York, Miami or Chicago. But every few years California gets monsoons. Big ones. This is one of those years.

The eighteen gallon storage tub in our yard is sixteen inches deep and serves as a rain gauge:

Yesterday morning it was less than half full. While it gathered rain, our devices blasted out alerts with instructions like this:

So we stayed home and watched the Web tell us how the drought was ending:

Wasn’t long ago that Lake Cachuna was at 7%.

So that’s good news. The bad news is about floods, ruined piers and wharfsdowned trees, power outages, levee breaches. The usual.

It should help to remember that the geology on both coasts is temporary and improvisational. The East Coast south of New England and Long Island (where coastal landforms were mostly dumped there or scraped bare by glaciers in the geologic yesterday) is a stretch of barrier islands that are essentially dunes shifted by storms. Same goes for the Gulf Coast. The West Coast looks more solid, with hills and mountains directly facing the sea. But Pacific storms in Winter routinely feature waves high as houses, pounding against the shores and sea cliffs.

Looking up the coast from Tijuana, within a few hundred years Coronado and Point Loma in San Diego, La Jolla, all the clifftop towns up the coast to Dana Point and Laguna, Palos Verdes Peninsula, Malibu and Point Dume, Carpinteria, the Santa Barbara Mesa and Hope Ranch, all of Isla Vista and UCSB, Pismo and Avila Beaches, all of Big Sur and the Pacific Coast Highway there, Carmel and the Monterey Peninsula, Aptos, Capitola and Santa Cruz, Davenport, Half Moon Bay, Pacifica, the headlands of San Francisco, Muir and Stinson Beaches and Bolinas in Marin, Fort Bragg in Mendocino County, and Crescent City in Del Norte County—all in California—will be eaten away partially or entirely by weather and waves. Earthquakes will also weigh in.

The photo up top is of La Conchita, a stupidly located town on the South Coast, west of Ventura, four days after a landslide in 2005 took out 13 homes and killed 10 people. All the land above town is a pile of former and future landslides, sure to slide again when the ground is saturated with water. Such as now or soon.

So that’s a long view. For one that spans the next week, visit windy.com and slide the elevation up to FL (flight level) 340 (34000 feet):

That yellow river of wind is a jet stream hauling serious ass straight across the Pacific and into California. Jet streams are why the headwind and tailwind numbers on seat-back flight-progress displays often read 100 mph or more. Look at Windy before you fly coast to coast or overseas, and you can guess what the flight path will be. You can also see why it may take as little as five hours to get from Dulles to Heathrow, or more than seven hours to come back by a route that touches the Arctic Circle. Your plane is riding, fighting or circumventing high altitude winds that have huge influences on the weather below.

To see how, drop Windy down to the surface:

Those eddies alongside the jet stream are low pressure centers full of the moisture and wind we call storms. They spin along the sides of the jet stream the way dust devils twist up along the sides of highways full of passing trucks. Those two storm centers are spinning toward California and will bring more wind and rain.

Besides the sure damage those will bring, there will be two benefits. One is that California will be as green as Ireland for a few months. The other is that wildflowers will bloom all over the place.

The Death Valley folks are hedging their bets, but I'd put money on a nice bloom this Spring. Watch for it.

Bonus link: There’s An Underground City Beneath Sacramento In Northern California That Most People Don’t Know About. Excerpt: “…Old Sacramento was built up during the time of the gold rush, but the frequent flooding of this area obliterated its first level time and time again, until finally, the city abandoned that level altogether. It’s both fascinating and creepy to tour the abandoned level…”


In the library of Earth's history, there are missing books, and within books there are missing chapters, written in rock that is now gone. John Wesley Powell recorded the greatest example of gone rock in 1869, on his expedition by boat through the Grand Canyon. Floating down the Colorado River, he saw the canyon's mile-thick layers of reddish sedimentary rock resting on a basement of gray non-sedimentary rock, the layers of which were cocked at an angle from the flatness of every layer above. Observing this, he correctly assumed that the upper layers did not continue from the bottom one, because time had clearly passed between when the basement rock was beveled flat, against its own grain, and when the floors of rock above it were successively laid down. He didn't know how much time had passed between basement and flooring, and could hardly guess.

The answer turned out to be more than a billion years. The walls of the Grand Canyon say nothing about what happened during that time. Geology calls that nothing an unconformity.

In the decades since Powell made his notes, the same gap has been found all over the world and is now called the Great Unconformity. Because of that unconformity, geology knows close to nothing about what happened in the world through stretches of time up to 1.6 billion years long.

All of those absent records end abruptly with the Cambrian Explosion, which began about 541 million years ago. That’s when the Cambrian period arrived and with it an amplitude of history, written in stone.

Many theories attempt to explain what erased such a large span of Earth’s history, but the prevailing guess is perhaps best expressed in “Neoproterozoic glacial origin of the Great Unconformity”, published on the last day of 2018 by nine geologists writing for the National Academy of Sciences. Put simply, they blame snow. Lots of it: enough to turn the planet into one giant snowball, informally called Snowball Earth. A more accurate name for this time would be Glacierball Earth, because glaciers, all formed from accumulated snow, apparently covered most or all of Earth’s land during the Great Unconformity—and most or all of the seas as well. Every continent was a Greenland or an Antarctica.

The relevant fact about glaciers is that they don't sit still. They push immensities of accumulated ice down on landscapes and then spread sideways, pulverizing and scraping against adjacent landscapes, bulldozing their way seaward through mountains and across hills and plains. In this manner, glaciers scraped a vastness of geological history off the Earth's continents and sideways into ocean basins, where plate tectonics could hide the evidence. (A fact little known outside of geology is that nearly all the world's ocean floors are young: born in spreading centers and killed by subduction under continents or piled up as debris on continental edges here and there. Example: the Bay Area of California is an ocean floor that wasn't subducted into a trench.) As a result, the stories of Earth's missing history are partly told by younger rock that remembers only that a layer of moving ice had erased pretty much everything other than a signature on its work.

I bring all this up because I see something analogous to Glacierball Earth happening right now, right here, across our new worldwide digital sphere. A snowstorm of bits is falling on the virtual surface of our virtual sphere, which itself is made of bits even more provisional and temporary than the glaciers that once covered the physical Earth. Nearly all of this digital storm, vivid and present at every moment, is doomed to vanish because it lacks even a glacier’s talent for accumulation.

The World Wide Web is also the World Wide Whiteboard.

Think about it: there is nothing about a bit that lends itself to persistence, other than the media it is written on. Form follows function; and most digital functions, even those we call “storage”, are temporary. The largest commercial facilities for storing digital goods are what we fittingly call “clouds”. By design, these are built to remember no more of what they once contained than does an empty closet. Stop paying for cloud storage, and away goes your stuff, leaving no fossil imprints. Old hard drives, CDs, and DVDs might persist in landfills, but people in the far future may look at a CD or a DVD the way a geologist today looks at Cambrian zircons: as hints of digital activities that may have happened during an interval about which nothing can ever be known. If those fossils speak of what’s happening now at all, it will be of a self-erasing Digital Earth that was born in the late 20th century.

This theory actually comes from my wife, who has long claimed that future historians will look at our digital age as an invisible one because it sucks so royally at archiving itself.

Credit where due: the Internet Archive is doing its best to make sure that some stuff will survive. But what will keep that archive alive, when all the media we have for recalling bits—from spinning platters to solid-state memory—are volatile by nature?

My own future unconformity is announced by the stack of books on my desk, propping up the laptop on which I am writing. Two of those books are self-published compilations of essays I wrote about technology in the mid-1980s, mostly for publications that are long gone. The originals are on floppy disks that can only be read by PCs and apps of that time, some of which are buried in lower strata of boxes in my garage. I just found a floppy with some of those essays. (It’s the one with a blue edge in the wood case near the right end of the photo above.) If those still retain readable files, I am sure there are ways to recover at least the raw ASCII text. But I’m still betting the paper copies of the books under this laptop will live a lot longer than will these floppies or my mothballed PCs, all of which are likely bricked by decades of un-use.
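
On the chance it helps someone else staring at old disks: once a floppy has been dumped to an image file (with a tool like dd, on a machine that can still read it), pulling runs of raw ASCII out of it is straightforward. Here's a minimal sketch in Python, in the spirit of the Unix strings utility; the filename and minimum run length are my assumptions, not anything from my actual setup:

```python
# A minimal sketch: pull runs of printable ASCII out of a raw floppy image.
# Assumes the disk has already been dumped to "floppy.img"; both the filename
# and the minimum run length below are illustrative.
import re

MIN_RUN = 8  # ignore runs shorter than this; tune to taste

with open("floppy.img", "rb") as f:
    raw = f.read()

# Printable ASCII plus tab/newline/carriage return, in runs of MIN_RUN or more.
pattern = re.compile(rb"[\x09\x0A\x0D\x20-\x7E]{%d,}" % MIN_RUN)

for match in pattern.finditer(raw):
    print(match.group().decode("ascii"))
```

That won't reconstruct a word processor's file structure, but it will surface the prose, which for essays is most of what matters.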

As for other media, the prospect isn’t any better.

At the base of my video collection is a stratum of VHS videotapes, atop which are strata of MiniDV and Hi8 tapes, and then one of digital stuff burned onto CDs and stored in hard drives, most of which have been disconnected for years. Some of those drives have interfaces and connections (e.g. FireWire) no longer supported by any computers being made today. Although I've saved machines to play all of them, none I've checked still work. One choked to death on a CD I stuck in it. That was a failure that stopped me from making Christmas presents of family memories recorded on old tapes and DVDs. I meant to renew the project sometime before the following Christmas, but that didn't happen. Next Christmas? The one after that? I still hope, but the odds are against it.

Then there are my parents’ 8mm and 16mm movies filmed between the 1930s and the 1960s. In 1989, my sister and I had all of those copied over to VHS tape. We then recorded our mother annotating the tapes onto companion cassette tapes while we all watched the show. I still have the original film in a box somewhere, but I haven’t found any of the tapes. Mom died in 2003 at age 90, and her whole generation is now gone.

The base stratum of my audio past is a few dozen open reel tapes recorded in the 1950s and 1960s. Above those are cassette and micro-cassette tapes, plus many Sony MiniDiscs recorded in ATRAC, a proprietary compression algorithm now used by nobody, including Sony. Although I do have ways to play some (but not all) of those, I'm cautious about converting any of them to digital formats (Ogg, MPEG, or whatever), because all digital storage media are likely to become obsolete, dead, or both—as will formats, algorithms, and codecs. Already I have dozens of dead external hard drives in boxes and drawers. And, since no commercial cloud service is committed to digital preservation in the absence of payment, my files saved in clouds are sure to be flushed after my heirs and I stop paying for their preservation. I assume my old open reel and cassette tapes are okay, but I can't tell right now because both my Sony TCWE-475 cassette deck (high end in its day) and my Akai 202D-SS open-reel deck (a quadrophonic model from the early '70s) are in need of work, since some of their rubber parts have rotted.

The same goes for my photographs. My printed photos—countless thousands of them dating from the late 1800s to 2004—are stored in boxes and albums, along with negatives and Kodak slide carousels. My digital photos are spread across a mess of duplicated backup drives totaling many terabytes, plus a handful of CDs. About 60,000 photos are exposed to the world on Flickr's cloud, where I maintain two Pro accounts (here and here) for $50/year apiece. More are in the Berkman Klein Center's pro account (here) and Linux Journal's (here). I doubt any of those will survive after those entities stop getting paid their yearly fees. SmugMug, which now owns Flickr, has said some encouraging things about photos such as mine, all of which are Creative Commons-licensed to encourage re-use. But, as Geoffrey West tells us, companies are mortal. All of them die.

As for my digital works as a whole (or anybody's), there is great promise in what the Internet Archive and Wikimedia Commons do, but there is no guarantee that either will last for decades more, much less for centuries or millennia. And neither is able to archive everything that matters (much as they might like to).

It should also be sobering to recognize that nobody truly “owns” a domain on the internet. All those “sites” with “domains” at “locations” and “addresses” are rented. We pay a sum to a registrar for the right to use a domain name for a finite period of time. There are no permanent domain names or IP addresses. In the digital world, finitude rules.
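
You can watch that clock run on any domain you "own." A small sketch of one way to do it, assuming a Unix-ish system with the standard whois command installed (the domain below is a stand-in, and registries vary in how they label the expiry field):

```python
# A sketch: ask whois when a rented domain's current term runs out.
# Assumes the ordinary `whois` CLI is present; output formats vary by registry,
# so we just fish for lines mentioning expiry.
import subprocess

def expiry_lines(domain: str) -> list[str]:
    """Return the lines of a whois record that mention expiration."""
    record = subprocess.run(
        ["whois", domain], capture_output=True, text=True, check=True
    ).stdout
    return [line.strip() for line in record.splitlines()
            if "expir" in line.lower()]

# "example.com" is a placeholder for whatever domain you pay rent on.
print(expiry_lines("example.com"))
```

Every record has a date in it somewhere. Miss the renewal and the name goes back on the market.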

So the historic progression I see, and try to illustrate in the photo at the top of this post, is from hard physical records through digital ones we hold for ourselves, and then up into clouds… that go away. Everything digital is snow falling and disappearing on the waters of time.

Will there ever be a way to save for the very long term what we ironically call our digital “assets?” Or is all of it doomed by its own nature to disappear, leaving little more evidence of its passage than a Great Digital Unconformity, when everything was forgotten?

I can’t think of any technical questions more serious than those two.


The original version of this post appeared in the March 2019 issue of Linux Journal.

I had a bunch of errands to run today, but also a lot of calls. And, when I finally got up from my desk around 4pm with plans to head out in the car, I found five inches of snow already on the apartment deck. Another five would come after that. So driving was clearly a bad idea.

When I stepped out on the street, I saw it was impossible. Cars were stuck, even on our side street.

So I decided to walk down to the nearest dollar store, a few blocks north on Broadway, which is also downhill in this part of town, to check out the 'hood and pick up some deck lights to replace the ones that had burned out a while back.

What I found on Broadway was total gridlock, because too many cars and trucks couldn’t move. Tires all over spun in place, saying “zzzZZZZzzzZZZ.” After I picked up a couple 5-foot lengths of holiday lights for $1 each at the dollar store, I walked back up past the same stuck length of cars and trucks I saw on the way down. A cop car and an ambulance would occasionally fire up their sirens, but it made no difference. Everything was halted.

When I got back, I put the lights on the deck and later shot the scene above. It’s 10pm now, and rains have turned the scene to slush.

I do hope kids got to sled in the snow anyway. Bonus links: Snow difference and Wintry mixing.


fruit thought

If personal data is actually a commodity, can you buy some from another person, as if that person were a fruit stand? Would you want to?

Not yet. Or maybe not really.

Either way, that’s the idea behind the urge by some lately to claim personal data as personal property, and then to make money (in cash, tokens or cryptocurrency) by selling or otherwise monetizing it. The idea in all these cases is to somehow participate in existing (entirely extractive) commodity markets for personal data.

ProjectVRM, which I direct, is chartered to “foster development of tools and services that make customers both independent and better able to engage,” and is a big tent. That’s why on the VRM Developments Work page of the ProjectVRM wiki is a heading called Markets for Personal Data. Listed there are:

So we respect that work. We are sure to learn from it. But we also need to respect the structural problems it faces.

PROBLEM #1 is that, economically speaking, data is a public good, meaning non-rivalrous and non-excludable. (Rivalrous means consumption or use by one party prevents the same by another, and excludable means you can prevent parties that don't pay from access to it.) Here's a table from a Linux Journal column I wrote a few years ago:

- Rivalrous and excludable: a private good (e.g., food, clothing, toys, cars, products subject to value-adds between first sources and final customers)
- Rivalrous but non-excludable: a common-pool resource (e.g., the sea, rivers, forests, their edible inhabitants and other useful contents)
- Non-rivalrous but excludable: a club good (e.g., bridges, cable TV, private golf courses, controlled access to copyrighted works)
- Non-rivalrous and non-excludable: a public good (e.g., data, information, law enforcement, national defense, fire fighting, public roads, street lighting)

PROBLEM #2 is that the nature of data as a public good also inconveniences claims that it ought to be property. Thomas Jefferson explained this in his 1813 letter to Isaac MacPherson:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation

Of course Jefferson never heard of data. But what he says about "the thinking power called an idea," and how ideas are like fire, is important for us to get our heads around amidst the rising chorus of voices insisting that data is a form of property.

PROBLEM #3 is that there are better legal frameworks than property law for protecting personal data. In Do we really want to “sell” ourselves? The risks of a property law paradigm for personal data ownership, Elizabeth Renieris and Dazza Greenwood write,

Who owns your data? It’s a popular question of late in the identity community, particularly in the wake of Cambridge Analytica, numerous high-profile Equifax-style data breaches, and the GDPR coming into full force and effect. In our view, it’s not only the wrong question to be asking but it’s flat out dangerous when it frames the entire conversation. While ownership implies a property law model of our data, we argue that the legal framework for our identity-related data must also consider constitutional or human rights laws rather than mere property law rules

Under common law, ownership in property is a bundle of five rights — the rights of possession, control, exclusion, enjoyment, and disposition. These rights can be separated and reassembled according to myriad permutations and exercised by one or more parties at the same time. Legal ownership or “title” of real property (akin to immovable property under civil law) requires evidence in the form of a deed. Similarly, legal ownership of personal property (i.e. movable property under civil law) in the form of commercial goods requires a bill of lading, receipt, or other document of title. This means that proving ownership or exerting these property rights requires backing from the state or sovereign, or other third party. In other words, property rights emanate from an external source and, in this way, can be said to be extrinsic rights. Moreover, property rights are alienable in the sense that they can be sold or transferred to another party.

Human rights — in stark contrast to property rights — are universal, indivisible, and inalienable. They attach to each of us individually as humans, cannot be divided into sticks in a bundle, and cannot be surrendered, transferred, or sold. Rather, human rights emanate from an internal source and require no evidence of their existence. In this way, they can be said to be intrinsic rights that are self-evident. While they may be codified or legally recognized by external sources when protected through constitutional or international laws, they exist independent of such legal documents. The property law paradigm for data ownership loses sight of these intrinsic rights that may attach to our data. Just because something is property-like, does not mean that it is — or that it should be — subject to property law.

In the physical realm, it is long settled that people and organs are not treated like property. Moreover, rights to freedom from unreasonable search and seizure, to associate and peaceably assemble with others, and the rights to practice religion and free speech are not property rights — rather, they are constitutional rights under U.S. law. Just as constitutional and international human rights laws protect our personhood, they also protect things that are property-like or exhibit property-like characteristics. The Fourth Amendment of the U.S. Constitution provides “the right of the people to be secure in their persons” but also their “houses, papers, and effects.” Similarly, the Universal Declaration of Human Rights and the European Convention on Human Rights protect the individual’s right to privacy and family life, but also her “home and correspondence”…

Obviously some personal data may exist in property-form just as letters and diaries in paper form may be purchased and sold in commerce. The key point is that sometimes these items are also defined as papers and effects and therefore subject to Fourth Amendment and other legal frameworks. In other words, there are some uses of (and interests in) our data that transform it from an interest in property to an interest in our personal privacy — that take it from the realm of property law to constitutional or human rights law. Location data, biological, social, communications and other behavioral data are examples of data that blend into personal identity itself and cross this threshold. Such data is highly revealing and the big-data, automated systems that collect, track and analyze this data make the need to establish proportional protections and safeguards even more important and more urgent. It is critical that we apply the correct legal framework.

PROBLEM #4 is that all of us as human beings are able to produce forms of value that far exceed that of our raw personal data. Specifically, treating data as if it were a rivalrous and excludable commodity—such as corn, oil or fruit—not only takes Jefferson’s “thinking power” off the table, but misdirects attention, investment and development work away from supporting the human outputs that are fully combustible, and might be expansible over all space, without lessening density. Ideas can do that. Oil can’t, combustible or not.

Put another way, why would you want to make almost nothing (the likely price) from selling personal data on a commodity basis when you can make a lot more by selling your work where markets for work exist, and where rights are fully understood and protected within existing legal frameworks?

What makes us fully powerful as human beings is our ability to generate and share ideas and other goods that are expansible over all space, and not just to slough off data like so much dandruff. Or to be valued only for the labors we contribute as parts of industrial machines.

Important note: I’m not knocking labor here. Most of us have to work for wages, either as parts of industrial machines, or as independent actors. There is full honor in that. Yet our nature as distinctive and valuable human beings is to be more and other than a source of labor alone, and there are ways to make money from that fact too.

Many years ago JP Rangaswami (@jobsworth) and I made a distinction between making money with something and because of something.

Example: I don't make money with this blog. But I do make money because of it—and probably a lot more money than I would if this blog carried advertising or if I did it for a wage. JP and I called this way of making money a because effect. The entire Internet, the World Wide Web and the totality of free and open source code all have vast because effects in money made with products and services that depend on those graces. Each is a rising free tide that lifts all commercial boats. Non-commercial ones too.

Which gets us to the idea behind declaring personal data as personal property, and creating marketplaces where people can sell their data.

The idea goes like this: there is a $trillion or more in business activity that trades or relies on personal data in many ways. Individual sources of that data should be able to get in on the action.

Alas, most of that $trillion is in what Shoshana Zuboff calls surveillance capitalism: a giant snake-ball of B2B activity wherein there is zero interest in buying what can be exploited for free.

Worse, surveillance capitalism’s business is making guesses about you, so it can sell you shit. On a per-message basis, this works about 0% of the time, even though massive amounts of money flow through that B2B snakeball (visualized as abstract rectangles here and here). Many reasons for that. Here are a few:

  1. Most of the time, such as right here and now, you’re not buying a damn thing, and not in a mood to be bothered by someone telling you what to buy.
  2. Companies paying other companies to push shit at you do not have your interests at heart—not even if their messages to you are, as they like to put it, “relevant” or “interest based.” (Which they almost always are not.)
  3. The entrails of surveillance capitalism are fully infected with fraud and malware.
  4. Surveillance capitalism is also quite satisfied to soak up as much as 97% of an advertising spend before an ad's publisher gets its 3% for pushing an ad at you.

Trying to get in on that business is an awful proposition.

Yes, I know it isn’t just surveillance capitalists who hunger for personal data. The health care business, for example, can benefit enormously from it, and is less of a snakeball, on the whole. But what will it pay you? And why should it pay you?

Won’t large quantities of anonymized personal data from iOS and Android devices, handed over freely, be more valuable to medicine and pharma than the few bits of data individuals might sell? (Apple has already ventured in that direction, very carefully, also while not paying for any personal data.)

And isn’t there something kinda suspect about personal data for sale? Such as motivating the unscrupulous to alter some of their data so it’s worth more?

What fully matters for people in the digital world is agency, not data. Agency is the power to act with full effect in the world. It’s what you have when you put your pants on, when you walk, or drive, or tell somebody something useful while they listen respectfully. It’s what you get when you make a deal with an equal.

It’s not what any of us get when we’re just “users” on a platform. Or when we click “agree” to one-sided terms the other party can change and we can’t. Both of those are norms in Web 2.0 and desperately need to be killed.

But it's still early. Web 2.0 is an archaic stage in the formation of the digital world. Surveillance capitalism has also been a bubble ready to pop for years. It's a matter of when, not if. The whole thing is too absurd, corrupt, complex and annoying to keep living forever.

So let’s give people ways to increase their agency, at scale, in the digital world. There’s no scale in selling one’s personal data. But there’s plenty in putting better human powers to work.

If we're going to obsess over personal data, let's look instead toward ways to regulate or control how our personal data might be used by others. There are lots of developers at work on this already. Here's one list at ProjectVRM.

Just before it started, the geology meeting at the Santa Barbara Central Library on Thursday looked like this from the front of the room (where I also tweeted the same pano):

Geologist Ed Keller

Our speakers were geology professor Ed Keller of UCSB and Engineering Geologist Larry Gurrola, who also works and studies with Ed. That’s Ed in the shot below.

As a geology freak, I know how easily terms like "debris flow," "fanglomerate" and "alluvial fan" can clear a room. But this gig was SRO. That's because around 3:15 in the morning of January 9th, debris flowed out of canyons and deposited fresh fanglomerate across the alluvial fan that comprises most of Montecito, destroying (by my count on the map below) 178 buildings, damaging more than twice that many, and killing 23 people. Two of those—a 2-year-old girl and a 17-year-old boy—are still interred in the fresh fanglomerate and sought by cadaver dogs.* The whole thing is beyond sad and awful.

The town was evacuated after the disaster so rescue and recovery work could proceed without interference, and infrastructure could be found and repaired: a job that required removing twenty thousand truckloads of mud and rocks. That work continues while evacuation orders are gradually lifted, allowing the town to repopulate itself to the very limited degree it can.

I talked today with a friend whose business is cleaning houses. Besides grieving the dead, some of whom were friends or customers, she reports that the cleaning work is some of the most difficult she has ever faced, even in homes that were spared the mud and rocks. Refrigerators and freezers, sitting closed and without electricity for weeks, reek of death and rot. Other customers won’t be back because their houses are gone.

Highway 101, one of just two freeways connecting Northern and Southern California, runs through town near the coast and more than two miles from the mountain front. Three debris flows converged on the highway and used it as a catch basin, filling its deep parts to the height of at least one bridge before spilling over its far side and continuing to the edge of the sea. It took two weeks of constant excavation and repair work before traffic could move again. Most exits remain closed. Coast Village Road, Montecito's Main Street, is open for employees of stores there, but little is open for customers yet, since infrastructural graces such as water are not fully restored. (I saw the Honor Bar operating with its own water tank, and a water truck nearby.) Opening Upper Village will take longer. Some landmark institutions, such as San Ysidro Ranch and La Casa Santa Maria, will take years to restore. From what I gather, San Ysidro Ranch, arguably the nicest hotel in the world, was nearly destroyed. Its website thanks firefighters for salvation from the Thomas Fire, but nothing could have saved it from the huge debris flow that wiped out nearly everything on the flanks of San Ysidro Creek. (All the top red dots along San Ysidro Creek in the map below mark lost buildings at the Ranch.)

Here is a map with final damage assessments. I’ve augmented it with labels for the canyons and creeks (with one exception: a parallel creek west of Toro Canyon Creek):

Click on the map for a closer view, or click here to view the original. On that one you can click on every dot and read details about it.

I should pause to note that Montecito is no ordinary town. Demographically, it's Beverly Hills draped over a prettier landscape and attractive to people who would rather not live in Beverly Hills. (In fact the number of notable persons Wikipedia lists for Montecito outnumbers those it lists for Beverly Hills by a score of 77 to 71.) Culturally, it's a village. Last Monday in The New Yorker, one of those notable villagers, T. Coraghessan Boyle, unpacked some other differences:

I moved here twenty-five years ago, attracted by the natural beauty and semirural ambience, the short walk to the beach and the Lower Village, and the enveloping views of the Santa Ynez Mountains, which rise abruptly from the coastal plain to hold the community in a stony embrace. We have no sidewalks here, if you except the business districts of the Upper and Lower Villages—if we want sidewalks, we can take the five-minute drive into Santa Barbara or, more ambitiously, fight traffic all the way down the coast to Los Angeles. But we don’t want sidewalks. We want nature, we want dirt, trees, flowers, the chaparral that did its best to green the slopes and declivities of the mountains until last month, when the biggest wildfire in California history reduced it all to ash.

Fire is a prerequisite for debris flows, our geologists explained. So is unusually heavy rain in a steep mountain watershed. There are five named canyons, each its own watershed, above Montecito, as we see on the map above. There are more to the east, above Summerland and Carpinteria, the next two towns down the coast. Those towns also took some damage, though less than Montecito.

Ed Keller put up this slide to explain conditions that trigger debris flows, and how they work:

Ed and Larry were emphatic about this: debris flows are not landslides, nor do many start that way (though one did in Rattlesnake Canyon 1100 years ago). They are also not mudslides, so we should stop calling them that. (Though we won’t.)

Debris flows require sloped soils left bare and hydrophobic—resistant to water—after a recent wildfire has burned off the chaparral that normally (as geologists say) "hairs over" the landscape. For a good look at what those soil surfaces are like, and how they are likely to respond to rain, look at the smooth slopes on the uphill side of 101 east of La Conchita. Notice how the surface is not only a smooth brown or gray, but has a crust on it. In a way, the soil surface has turned to glass. That's why water runs off of it so rapidly.

Wildfires are common, and chaparral is adapted to them, becoming fuel for the next fire as it regenerates and matures. But rainfalls as intense as this one are not common. In just five minutes, more than half an inch of rain fell in the steep and funnel-like watersheds above Montecito. This happens about once every few hundred years, or about as often as a tsunami.
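
Do the arithmetic and the intensity becomes vivid: half an inch in five minutes is a rate of six inches per hour, or roughly 150 millimeters per hour, if it were sustained.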

It’s hard to generalize about the combination of factors required, but Ed has worked hard to do that, and this slide of his is one way of illustrating how debris flows happen eventually in places like Montecito and Santa Barbara:

From bottom to top, here’s what it says:

  1. Fires happen almost regularly, spreading most widely where chaparral has matured to become abundant fuel, as the firefighters like to call it.
  2. Flood events are more random, given the relative rarity of rain and even more rare rains of “biblical” volume. But they do happen.
  3. Stream beds in the floors of canyons accumulate rocks and boulders that roll down the gradually eroding slopes over time. The depth of these is expressed as basin instability. Debris flows clear out the rocks and boulders when a big flood event comes right after a fire, and the basin becomes stable (relatively rock-free) again.
  4. The sediment yield in a flood (F) is maximum when a debris flow (DF) occurs.
  5. Debris flows tend to happen once every few hundred years. And you’re not going to get the big ones if you don’t have the canyon stream bed full of rocks and boulders.

About this set of debris flows in particular:

  1. Destruction down Oak Creek wasn’t as bad as on Montecito, San Ysidro, Buena Vista and Romero Creeks because the canyon feeding it is smaller.
  2. When debris flows hit an obstruction, such as a bridge, they seek out a new bed to flow on. This is one of the actions that creates an alluvial fan. From the map it appears something like that happened—
    1. Where the flow widened when it hit Olive Mill Road, fanning east of Olive Mill to destroy all three blocks between Olive Mill and Santa Elena Lane before taking the Olive Mill bridge across 101 and down to the Biltmore while also helping other flows fill 101 as well. (See Mac’s comment below, and his link to a top map.)
    2. In the area between Buena Vista Creek and its East Fork, which come off different watersheds.
    3. Where a debris flow forked south of Mountain Drive after destroying San Ysidro Ranch, continuing down both Randall and El Bosque Roads.

For those who caught (or are about to catch) Ellen's FaceTime with Oprah visiting neighbors, that happened among the red dots at the bottom end of the upper destruction area along San Ysidro Creek, just south of East Valley Road. Oprah's own place is in the green area beside it on the left, looking a bit like Versailles. (Credit where due, though: Oprah's was a good and compassionate report.)

Big question: did these debris flows clear out the canyon floors? We (meaning our geologists, sedimentologists, hydrologists and other specialists) won’t know until they trek back into the canyons to see how it all looks. Meanwhile, we do have clues. For example, here are after-and-before photos of Montecito, shot from space. And here is my close-up of the latter, shot one day after the event, when everything was still bare streambeds in the mountains and fresh muck in town:

See the white lines fanning back into the mountains through the canyons (Cold Spring, San Ysidro, Romero, Toro) above Montecito? Ed explained that these appear to be the washed out beds of creeks feeding into those canyons. Here is his slide showing Cold Spring Creek before and after the event:

Looking back at Ed’s basin threshold graphic above, one might say that there isn’t much sediment left for stream beds to yield, and that those in the floors of the canyons have returned to stability, meaning there’s little debris left to flow.

But that photo was of just one spot. There are many miles of creek beds to examine back in those canyons.

Still, one might hope that Montecito has now had its required 200-year event, and a couple more centuries will pass before we have another one.

Ed and Larry caution against such conclusions, emphasizing that most of Montecito’s and Santa Barbara’s inhabited parts gain their existence, beauty or both by grace of debris flows. If your property features boulders, Ed said, a debris flow put them there, and did that not long ago in geologic time.

For an example of boulders as landscape features, here are some we quarried out of our yard more than a decade ago, when we were building a house dug into a hillside:

This is deep in the heart of Santa Barbara.

The matrix mud we now call soil here is likely a mix of Juncal and Cozy Dell shale, Ed explained. Both are poorly lithified silt and erode easily. The boulders are a mix of Matilija and Coldwater sandstone, which comprise the hardest and most vertical parts of the Santa Ynez mountains. The two are so similar that only a trained eye can tell them apart.

All four of those geological formations were established long after dinosaurs vanished. All also accumulated originally as sediments, mostly on ocean floors, probably not far from the equator.

To illustrate one chapter in the story of how those rocks and sediments got here, UCSB has a terrific animation of how the transverse (east-west) Santa Ynez Mountains came to be where they are. Here are three frames in that movie:

What it shows is how, when the Pacific Plate was grinding its way northwest about eighteen million years ago, a hunk of that plate about a hundred miles long and the shape of a bread loaf broke off. At the top end was the future Malibu hills and at the bottom end was the future Point Conception, then situated south of what's now Tijuana. The future Santa Barbara was west of the future Newport Beach. Then, when the Malibu end of this loaf got jammed at the future Los Angeles, the bottom end of the loaf swept out, clockwise and intact. At the start it pointed at 5 o'clock; by now (the sweep isn't over) it points at 9 o'clock. This was, and remains, a sideshow off the main event: the continuing crash of the Pacific Plate and the North American one.

Here is an image that helps, from that same link:

Find more geology, with lots of links, in Making sense of what happened to Montecito. I put that post up on the 15th and have been updating it since then. It’s the most popular post in the history of this blog, which I started in 2007. There are also 58 comments, so far.

I’ll be adding more to this post after I visit as much as I can of Montecito (exclusion zones permitting). Meanwhile, I hope this proves useful. Again, corrections and improvements are invited.

30 January

6 April, 2020
*I was told later, by a rescue worker who was on the case, that it was possible that both victims’ bodies had washed all the way to the ocean, and thus will never be found.

In this Edhat story, Ed Keller visits a recently found prior debris flow. An excerpt:

The mud and boulders from a prehistoric debris flow, the second-to-last major flow in Montecito, have been discovered by a UCSB geologist at the Bonnymede condominiums and Hammond’s Meadow, just east of the Coral Casino.

The flow may have occurred between 1,000 and 2,000 years ago, said Ed Keller, a professor of earth science at the university. He’s calling it the “penultimate event.” It came down a channel of Montecito Creek and was likely larger on that creek than during the disaster of Jan. 9, 2018, Keller said. Of 23 people who perished on Jan. 9, 17 died along Montecito Creek.

The long interval between the two events means that the probability of another catastrophic debris flow occurring in Montecito in the next 1,000 years is very low, Keller said.

“It’s reassuring,” he said, “They’re still pretty rare events, if you consider you need a wildfire first and then an intense rainfall. But smaller debris flows could occur, and you could still get a big flash flood. If people are given a warning to evacuate, they should heed it.”

The term “fake news” was a casual phrase until it became clear to news media that a flood of it had been deployed during last year’s presidential election in the U.S. Starting in November 2016, fake news was the subject of strong and well-researched coverage by NPR (here and here), Buzzfeed, CBS (here and here), Wired, the BBC, Snopes, CNN (here and here), Rolling Stone and others. It thus became a thing…

… until Donald Trump started using it as an epithet for news media he didn’t like. He did that first during a press conference on February 16, and then the next day on Twitter:

And he hasn’t stopped. To Trump, any stick he can whup non-Fox mainstream media with is a good stick, and FAKE NEWS is the best.

So that pretty much took “fake news,” as a literal thing, off the table for everyone other than Trump and his amen chorus.

So, since we need a substitute, I suggest decoy news. Because that’s what we’re talking about: fabricated news meant to look like the real thing.

But the problem is bigger than news alone, because advertising-funded media have been in the decoy business since forever. (Example: sensationalism in tabloids.) The difference in today’s digital world is that it’s a lot easier to fabricate a decoy story than to research and produce a real one—and it pays just as well, or even better. Let’s face it: non-journalists and algorithms churning out and/or elevating the placements of heart- and eyeball-bait (e.g. “Pope Endorses Trump”) are both cheap to run and good at producing advertising income.

When you outsource editorial judgement to machines, and those machines are rigged to bait clicks and drive engagement, and that engagement pays the publishers, you've got a business that can't help prioritizing and improving decoy news. (Also one that depends on the practice of marking and tracking people. If doing that wasn't outright wrong, we wouldn't now have the GDPR as proof of it.)
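
To see the mechanics in miniature, here's a toy model of that logic. It's mine alone, not any platform's actual code: a feed that ranks purely by expected engagement revenue will always float a fabricated blockbuster above an accurate but dull story, because accuracy is nowhere in the score.

```python
# A toy sketch (my illustration, not any real platform's algorithm) of why
# engagement-optimized ranking favors decoy news: the score is expected
# engagement revenue, and accuracy never enters it.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    click_rate: float        # expected clicks per impression
    revenue_per_click: float
    accurate: bool           # visible to editors, invisible to the ranker

def engagement_score(story: Story) -> float:
    # Note what's missing: nothing about accuracy appears here.
    return story.click_rate * story.revenue_per_click

feed = [
    Story("Pope Endorses Trump", 0.12, 0.05, accurate=False),
    Story("City Council Passes Budget", 0.01, 0.05, accurate=True),
]

for story in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(story):.4f}  {story.headline}")
# The fabricated headline ranks first, because the only inputs
# are engagement and payout.
```

The numbers are made up, but the shape of the incentive is not.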

As a result, adtech (tracking-based advertising) has compromised and marginalized actual journalism to such an extreme degree that “editorial” (which journalism produced) has been devalued and displaced by “content production” (which cheap labor and machines can produce).

We can see one tragic result in a New York Times story titled In New Jersey, Only a Few Media Watchdogs Are Left, by David Chen (@davidwchen). In it he reports that “The Star-Ledger, which almost halved its newsroom eight years ago, has mutated into a digital media company requiring most reporters to reach an ever-increasing quota of page views as part of their compensation.”

This calls to mind how “Saturday Night Live” in 1977 introduced the Blues Brothers in a skit where Paul Shaffer, playing rock impresario Don Kirshner, proudly said the Brothers were “no longer an authentic blues act, but have managed to become a viable commercial product.”

To operate a viable commercial product in our Digital Age, news has become mostly a content production business, paid for by adtech, which is entirely driven by algorithms informed by surveillance-gathered personal data. The result looks like this:

To fully grok how we got here, it is essential to understand the difference between advertising and direct marketing, and how nearly all of online advertising is now the latter. I describe the shift from former to latter in Separating Advertising’s Wheat and Chaff:

Advertising used to be simple. You knew what it was, and where it came from.

Whether it was an ad you heard on the radio, saw in a magazine or spotted on a billboard, you knew it came straight from the advertiser through that medium. The only intermediary was an advertising agency, if the advertiser bothered with one.

Advertising also wasn’t personal. Two reasons for that.

First, it couldn’t be. A billboard was for everybody who drove past it. A TV ad was for everybody watching the show. Yes, there was targeting, but it was always to populations, not to individuals.

Second, the whole idea behind advertising was to send one message to lots of people, whether or not the people seeing or hearing the ad would ever use the product. The fact that lots of sports-watchers don’t drink beer or drive trucks was beside the point, which was making brands sponsoring a game familiar to everybody watching it.

In their landmark study, “The Waste in Advertising is the Part that Works” (Journal of Advertising Research, December, 2004, pp. 375–390), Tim Ambler and E. Ann Hollier say brand advertising does more than signal a product message; it also gives evidence that the parent company has worth and substance, because it can afford to spend the money. Thus branding is about sending a strong economic signal along with a strong creative one.

Plain old brand advertising also paid for the media we enjoyed. Still does, in fact. And much more. Without brand advertising, pro sports stars wouldn’t be getting eight and nine figure contracts.

But advertising today is also digital. That fact makes advertising much more data-driven, tracking-based and personal. Nearly all the buzz and science in advertising today flies around the data-driven, tracking-based stuff generally called adtech. This form of digital advertising has turned into a massive industry, driven by an assumption that the best advertising is also the most targeted, the most real-time, the most data-driven, the most personal — and that old-fashioned brand advertising is hopelessly retro.

In terms of actual value to the marketplace, however, the old-fashioned stuff is wheat and the new-fashioned stuff is chaff. In fact, the chaff was only grafted on recently.

See, adtech did not spring from the loins of Madison Avenue. Instead its direct ancestor is what’s called direct response marketing. Before that, it was called direct mail, or junk mail. In metrics, methods and manners, it is little different from its closest relative, spam.

Direct response marketing has always wanted to get personal, has always been data-driven, has never attracted the creative talent for which Madison Avenue has been rightly famous. Look up best ads of all time and you’ll find nothing but wheat. No direct response or adtech postings, mailings or ad placements on phones or websites.

Yes, brand advertising has always been data-driven too, but the data that mattered was how many people were exposed to an ad, not how many clicked on one — or whether you, personally, did anything.

And yes, a lot of brand advertising is annoying. But at least we know it pays for the TV programs we watch and the publications we read. Wheat-producing advertisers are called “sponsors” for a reason.

So how did direct response marketing get to be called advertising? By looking the same. Online it's hard to tell the difference between a wheat ad and a chaff one.

Remember the movie “Invasion of the Body Snatchers?” (Or the remake by the same name?) Same thing here. Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.

Thus today’s brain-snatched advertising business believes the best ad is the most targeted and personalized one. Worse, almost all the journalists covering the advertising business assume the same thing. And why wouldn’t they, given that this is how advertising is now done online, especially by the Facebook-Google duopoly.

And here is why those two platforms can't fix it: both have algorithmically-driven engagement amplifiers built to give millions of advertisers ways to target the well-studied eyeballs of billions of people, using countless characterizations of those engaged eyeballs. In fact, the only (and highly ironic) way they can police bad acting on their platforms is by hiring people who do nothing but look for that bad acting.

One fix is regulation. We now have that, hugely, with the General Data Protection Regulation (GDPR). It’s an EU law, but it protects the privacy of EU citizens everywhere—with potentially massive fines. In spirit, if not also in letter (which the whole ad biz is struggling mightily to weasel around), the GDPR outlaws tracking people like tagged animals online. I’ve called the GDPR an extinction event for adtech, and the main reason brands (including the media kind) need to fire it first.

The other main fixes begin on the personal side. Don Marti (@dmarti) tweets, “Build technologies to implement people’s norms on sharing their personal data, and you’ll get technologies to help high-reputation sites build ad-supported business models ABSOLUTELY FREE!” Those models are all advertising wheat, not adtech chaff.

Now here’s the key: what we need most are single and simple ways for each of us to manage all our dealings with other entities online. Having separate means, each provided by the dozens or hundreds of sites and services we each deal with, all with different UIs, login/password gauntlets, forms to fill out, meaningless privacy policies and for-lawyers-only terms of service, cannot work. All that shit may give those companies scale across many consumers, but every one of them only adds to those consumers’ relationship overhead. I explain how this will work in Giving Customers Scale, plus many other posts, columns and essays compiled in my People vs. Adtech series, which is on its way to becoming a book. I’d say more about all of it, but need to catch a plane. See you on the other coast.

Meanwhile, the least we can do is start talking about decoy news and the business model that pays for it.

[Later…] I'm on the other coast now, but preoccupied by the #ThomasFire threatening our home in Santa Barbara. Since this blog appears to be mostly down, I'm writing about it at doc.blog.