history


I just ran across some research I did in December 2008, while working on the 10th Anniversary edition of The Cluetrain Manifesto:

  • Google Book Search results for cluetrain — 666[1]
  • Google Book Search results for markets are conversations — 2610
  • Google Web Search results for cluetrain — 394,000
  • Results for Web searches for markets are conversations — 626,000

[1] Increasing at more than one per day. The numbers were 647 on November 15, 2008 and 666 on December 1, 2008. Results passed 600 on September 30, 2008.

Here are the results today:

I added two more, with quotes, because I’m not sure whether I used quotes for that phrase the first time. I would like to have the exact URLs for the earlier searches too, but I don’t. To get useful, non-crufty URLs this time, I needed to fire up a virgin browser and scrape out everything not tied to the search itself (e.g. the browser name, prior searches, and other context-related jive irrelevant to this post).
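For what it’s worth, that de-crufting chore can be scripted. Here’s a minimal sketch in Python, using only the standard library; the tracking parameters in the example URL are made up for illustration, since the actual cruft varies by browser and session:

    # Keep only the search terms in a search-result URL; drop the cruft.
    # A sketch, not a catalog of every tracking parameter Google uses.
    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    def clean_search_url(url, keep=("q",)):
        """Return the URL with every query parameter except the search terms removed."""
        parts = urlparse(url)
        params = [(k, v) for k, v in parse_qsl(parts.query) if k in keep]
        return urlunparse(parts._replace(query=urlencode(params)))

    # Hypothetical crufty URL of the kind a well-used browser produces:
    crufty = "https://www.google.com/search?q=cluetrain&client=firefox-a&ei=xyz123"
    print(clean_search_url(crufty))  # https://www.google.com/search?q=cluetrain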

Of course, Google guesses about most of this stuff. (Though possibly not for the first item: a word in a book.) Not sure how much any of it matters. I just thought it was interesting to share before I wander off and forget where I found these few bits of sort-of-historical data.

Bonus link (found via a Twitter search for #cluetrain): Jonathan Levi’s The Cluetrain Manifesto — Manifested Today.

I’ve been digging around for stuff I blogged (or wrote somewhere on the Web) way back when. After finding two items I thought might be lost, I decided to point to them here, which (if search engines still work the Old Way) might make them somewhat easier to find again later.

One is Rebuilding the software industry, one word at a time, in Kuro5hin. (And cool to see that Kuro5hin is still trucking along.) The other is Cluetrain requires conversation. Both are from early 2001, more than ten years ago. A sample from the former:

I went through my own head-scratching epiphany right after the Web got hot and I found my profession had changed from writer to “content provider.” What was that about? Were my words going to be shrink-wrapped, strapped on a skid and sold in bulk at Costco?

No, “content” was just a handy way to label anything you could “package” and “deliver” through the “vehicle” of this wonderful new “medium.” Marketers were salivating at the chance to “target,” “capture” and “penetrate” ever-more-narrow “audiences” with ever-more-narrow “messages.” Never mind that there was zero demand at the receiving end for any of it. (If you doubt the math, ask what you’d be willing to pay to see an ad on the Web. Or anywhere.)

Soon I began to wonder what had happened to markets, which for thousands of years were social places where people got together to buy and sell stuff, and to make civilization. By the end of the Industrial Age, every category you could name was a “market.” So was every region and every demographic wedge where there was money to be spent. Worse, these were all too often conceived as “arenas” and “battlefields,” even though no growing category could be fully described in the zero-sum terms of sports and war metaphors.

And from the latter:

Cluetrain talks far less about what markets need than about what they are. The first thesis says Markets are conversations. Not markets need to be conversations. Or people need the right message. In fact, we make the point that there is no market for messages. If you want to see how little people want messages, look at the MUTE button on your TV’s remote control. Sum up all marketing sentiment on the receiving end and you’ll find negative demand for it.

There’s nothing conversational about a message. I submit that if a message turns into a conversation, it isn’t a message at all. It’s a topic.

Not many people noticed (including me, until Jakob Nielsen pointed it out) that The Cluetrain Manifesto was written in first and second person plural voices, and was addressed not by marketers to markets, but by markets to marketers. It said —

if you only have time for one clue this year, this is the one to get…

Chris Locke wrote that in early 1999. Marketing still doesn’t get it. Maybe it can’t.

And, because marketing (and the rest of business) didn’t get it, I started ProjectVRM, and am now finishing a book about customer liberation and why free customers will prove more valuable than captive ones.

This stuff seems to be taking a while. But hey, it’s fun.

“When I’m Sixty-Four” is 44 years old. I was 20 when it came out, in the summer of 1967, one among thirteen perfect tracks on The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band album. For all the years since, I’ve thought the song began, “When I get older, losing my head…” But yesterday, on the eve of actually turning sixty-four, I watched this video animation of the song (by theClemmer) and found that Paul McCartney actually sang, “… losing my hair.”

Well, that’s true. I’m not bald yet, but the bare spot in the back and the thin zone in the front are advancing toward each other, while my face continues falling down below.

In July 2006, my old friend Tom Guild put Doc Searls explains driftwood of the land up on YouTube. It’s an improvisational comedy riff that Tom shot with his huge new shoulder-fire video camera at our friend Steve Tulsky’s house on a Marin County hillside in June, 1988. The shot on the left is a still from that video.

It was a reunion of sorts. Tom, Steve and I had all worked in radio together in North Carolina. I was forty in ’88, and looked about half that age. When my ten-year-old kid saw it, he said “Papa, you don’t look like that.” I replied, “No, I do look like that. I don’t look like this,” pointing to my face.

Today it would be nice if I still looked like I did five years ago. The shot in the banner at the top of my old (1999-2007) blog was taken in the summer of 1999 (here’s the original), when I was fifty-two and looked half that age. The one on the right was taken last summer (the shades on my forehead masking a scalp that now reflects light), when I was a few days short of sixty-three. By then I was finally looking my age.

A couple months back I gave a talk at the Personal Democracy Forum where I was warmly introduced as one of those elders we should all listen to. That was nice, but here’s the strange part: when it comes to what I do in the world, I’m still young. Most of the people I hang and work with are half my age or less, yet I rarely notice or think about that, because it’s irrelevant. My job is changing the world, and that’s a calling that tends to involve smart, young, energetic people. The difference for a few of us is that we’ve been young a lot longer.

But I don’t have illusions about the facts of life. It’s in one’s sixties that the croak rate starts to angle north on the Y axis as age ticks east on the X. Still, I’m in no less hurry to make things happen than I ever was. I’m just more patient. That’s because one of the things I’ve learned is that now is always earlier than it seems. None of the future has happened yet, and it’s always bigger than the past.

Last week we spent a lot of time here, in Venice:

Bancogiro, Rialto Mercado, Venice

The triangular marble plaza on the edge of the Grand Canal of Venice is known informally as Bancogiro, once one of Italy’s landmark banks, and now the name of an osteria there. The plaza is part of Rialto Mercado, the marketplace where Marco Polo was based and prospered when he wasn’t out opening trade routes to the east. It’s also where Shakespeare set The Merchant of Venice, and where Luca Pacioli studied double-entry bookkeeping, which he described in Summa de arithmetica, geometria, proportioni et proportionalità (Venice, 1494), one of the first textbooks written in the vernacular (rather than Latin), and an early success story of the printing press.

Here’s a photo set of the place.

Here’s a 360° view. (While it’s called “Fondamenta de la Preson,” that’s just the cockeyed white building in the map above — a former women’s prison — in the corner of the plaza.)

Note that Google Maps tells us little about the location, but plenty about the commercial establishments there. When I go for a less fancy view, the problem gets worse:

Bancogiro, Rialto Mercado, Venice

In that pull-down menu (where it says “Traffic”) I can turn on webcams, photos and other stuff from the Long Tail; but there’s no way to turn on labels for the Grand Canal, the Bancogiro plaza, the Rialto Mercado vaporetto (water bus) stop, the Rialto Mercado itself, the Fondamenta de la Preson (women’s prison, labeled, sort of, in the upper view but not the lower), or even the @#$% street names. The only non-commercial item on the map is the Arciconfraternita Di San Cristoforo E Della Misericordia, which is an organization more than a place.

(My wife just said “You know those hotel maps they give away, that only show hotels? It’s like that, only worse. The hotel maps at least give you some street names.”)

For example, try to find information about the Bancogiro: that is, about the original historic bank, rather than the osteria or the other commercial places with that name. (Here’s one lookup.) For a while I thought the best information I could find on the Web was text from the restaurant menu, which I posted here. That says the bank was founded in 1157. But this scholarly document says 1617. Another seems to agree. But both are buried under commercial links.

The problem here is that the Web has become commercialized at the cost of other uses. And Google itself is leading the way — to the point where it is beginning to fail in its mission to “organize the world’s information and make it universally accessible and useful.”

This is understandable, and easily rationalized. Google is a commercial enterprise. It makes money by selling advertising, and placing commercial information in settings like the ones above. This has been good in many ways, and funds many free services. But it has subordinated purely useful purposes, such as finding the name of a street, a canal, or a bus stop.

There are (at least) two central problems here for Google and other giants like it. One is that we’re not always buying something, or looking only for commercial information. The other is that advertising should not be the only business model for the likes of Google, and all who depend on it are at risk while it remains so.

One missing piece is a direct market for useful information. Toward that end I’ll put this out there: I am willing to pay for at least some of the information I want. I don’t expect all information to be free. I don’t think the fact that information is easily copied and re-used means information “wants” to be free. In other words, I think there is a market here. And I don’t think the lack of one is proof that one can’t be built.

What we need first isn’t better offerings from Google, but better signaling from the demand side of the marketplace. That’s what I’m trying to do right now, by signaling my willingness to pay something for information that nobody is currently selling at any price. We need to work on systems that make both signaling and paying possible — on the buyer’s terms, and not just the seller’s.

This is a big part of what VRM, or Vendor Relationship Management, is about. Development is going on here. EmanciPay, for example, should be of interest to anybody who would like to see less money left on the market’s table.
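To make that signaling idea concrete, here is a minimal sketch of what a demand-side signal might look like as structured data. Every field name is hypothetical, invented for illustration; none of this comes from the actual EmanciPay design:

    # A hypothetical demand-side signal: a buyer declaring willingness
    # to pay for information nobody currently sells. All field names
    # are invented for illustration; EmanciPay's real design may differ.
    import json

    intent = {
        "type": "willing-to-pay",
        "subject": "history of the Bancogiro, Rialto Mercado, Venice",
        "offer": {"currency": "EUR", "up_to": 10.00},
        "buyer_terms": ["no tracking", "no follow-up marketing"],
        "expires": "2011-12-31",
    }
    print(json.dumps(intent, indent=2))

The point is that the buyer, not the seller, sets the terms. Whatever form it finally takes, a signal like this gives sellers something concrete to respond to.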

Bonus link.


Ford River Rouge plant

Got my first good clear look at Detroit and Windsor from altitude on a recent trip back from somewhere. Here’s a series of shots. What impressed me most, amidst all that flat snow-dusted spread of city streets, a patch of grids on the flatland of Michigan and Ontario, flanking the Detroit River and its islands, was what looked like a dark smudge. Looking at it more closely, and matching it up with Reality, I discovered that this was Ford’s famous River Rouge Complex in the city of Dearborn.

Says Wikipedia,

The Rouge measures 1.5 miles (2.4 km) wide by 1 mile (1.6 km) long, including 93 buildings with nearly 16 million square feet (1.5 km²) of factory floor space. With its own docks in the dredged Rouge River, 100 miles (160 km) of interior railroad track, its own electricity plant, and ore processing, the titanic Rouge was able to turn raw materials into running vehicles within this single complex, a prime example of vertical-integration production. Over 100,000 workers were employed there in the 1930s.

As an inveterate infrastructure freak, I would love to see this thing sometime.

Royal pains

The Royal Wedding isn’t my cup of tedium, but olde blog buddies Eric and Dawn Olsen will be covering the show for The Morton Report, so I urge you to follow it there. I’ll do my best as well.

Not speaking of which, I am old enough to remember the last Royal Wedding, which happened on my birthday in 1981. What sticks most in my mind about that event is an exceptionally funny send-up of the whole thing: a book titled Not the Royal Wedding, by Sean Hardie and John Lloyd. My sister, who (I’ll let her explain) served “on the personal staff of the Commander-in-Chief, US Naval Forces Europe as the Protocol Officer, living in a mews flat in Chelsea, working on Grosvenor Square and having the best time of my life”, brought the book back to the States, and I laughed my rocks off reading it, even though I’m sure many of the jokes sailed past me. One item that stands out is a large spread on the royal silverware, including a “bitchfork.” The price on Amazon at that last link is also pretty good: “5 used from £0.01”, it says.

What started as plain old Web search has now been marginalized as “organic”. That’s because the plain old Web — the one Tim Berners-Lee created as a way to hyperlink documents — has become commercialized to such an extent that about the only “organic” result reliably rising to first-page status is Wikipedia.

Let’s say your interest in “granite” and “Vermont” is geological, rather than commercial. The first page of Google results won’t help much if your interest goes beyond visiting a headstone mine. Same goes for Bing. I notice this change because it’s becoming harder and harder for me to do casual research on geology (or most other topics that interest me) on the Web.

Yesterday Vivek Wadhwa tweeted a perfect line: “Google is paying content farms to pollute the web”. This is true, yet the problem is bigger than that. The Web is changing from a world wide library with some commercial content to a world wide mall with intellectually interesting publications buried under it, in virtual catacombs. Google’s mission of “organizing all the world’s information” is still satisfied. The problem is that most of that information — at least on the Web — is about selling something. The percentage of websites that are Web stores goes up and up. SEO only makes the problem worse.

The Berkman Center has a project that should encourage thinking about solving this problem, along with many others. Specifically,

The Berkman Center and Stanford Law School are pleased to announce a new initiative in which we invite the world to submit their ‘Ideas for a Better Internet.’ We are seeking out brief proposals from anyone with ideas as to how to improve the Internet. Students at Harvard and Stanford will work through early next year to implement the ideas selected. Interested parties should submit their ideas at http://bit.ly/i4bicfp by Friday, April 15. Please spread the word far and wide, and follow us on Twitter at http://twitter.com/Ideas4BetterNet.

So get your ideas in by Tax Day.

Nicholas Carr is ahead of his time again. In The Big Switch he nailed computing as a utility, long before “the cloud” came to mean pretty much the same thing. His latest book, The Shallows, explored the changes in our lives and minds caused by moving too much of both online — again before others began noticing how much the Net was starting to look like a handbasket.

Thus The Shallows comes to mind when I read Alice Gregory’s essay on Gary Shteyngart’s Super Sad True Love Story in n+1. An excerpt:

I have the sensation, as do my friends, that to function as a proficient human, you must both “keep up” with the internet and pursue more serious, analog interests. I blog about real life; I talk about the internet. It’s so exhausting to exist on both registers, especially while holding down a job. It feels like tedious work to be merely conversationally competent. I make myself schedules, breaking down my commute to its most elemental parts and assigning each leg of my journey something different to absorb: podcast, Instapaper article, real novel of real worth, real magazine of dubious worth. I’m pretty tired by the time I get to work at 9 AM.

In-person communication feels binary to me now: subjects are either private, confessional, and soulful or frantically current, determined mostly by critical mass, interesting only in their ephemeral status. Increasingly these modes of talk seem mutually exclusive. You can pull someone aside—away from the party, onto the fire escape—and confess to a foible or you can stay inside with the group and make a joke about something everyone’s read online. “Maybe you keep the wrong company,” my mother suggests. Maybe. But I like my friends! We can sympathize with each other and feel reassured that we’re not alone in our overeager consumption, denigrated self-control, and anxiety masked as ambition.

Here’s Nick:

On the Net, we face many information faucets, all going full blast. Our little thimble overflows as we rush from tap to tap. We transfer only a small jumble of drops from different faucets, not a continuous, coherent stream.

Psychologists refer to the information flowing into our working memory as our cognitive load. When the load exceeds our mind’s ability to process and store it, we’re unable to retain the information or to draw connections with other memories. We can’t translate the new material into conceptual knowledge. Our ability to learn suffers, and our understanding remains weak. That’s why the extensive brain activity that Small discovered in Web searchers may be more a cause for concern than for celebration. It points to cognitive overload.

The Internet is an interruption system. It seizes our attention only to scramble it. There’s the problem of hypertext and the many different kinds of media coming at us simultaneously. There’s also the fact that numerous studies—including one that tracked eye movement, one that surveyed people, and even one that examined the habits displayed by users of two academic databases—show that we start to read faster and less thoroughly as soon as we go online. Plus, the Internet has a hundred ways of distracting us from our onscreen reading. Most email applications check automatically for new messages every five or 10 minutes, and people routinely click the Check for New Mail button even more frequently. Office workers often glance at their inbox 30 to 40 times an hour. Since each glance breaks our concentration and burdens our working memory, the cognitive penalty can be severe.

The penalty is amplified by what brain scientists call switching costs. Every time we shift our attention, the brain has to reorient itself, further taxing our mental resources. Many studies have shown that switching between just two tasks can add substantially to our cognitive load, impeding our thinking and increasing the likelihood that we’ll overlook or misinterpret important information. On the Internet, where we generally juggle several tasks, the switching costs pile ever higher.

The Net’s ability to monitor events and send out messages and notifications automatically is, of course, one of its great strengths as a communication technology. We rely on that capability to personalize the workings of the system, to program the vast database to respond to our particular needs, interests, and desires. We want to be interrupted, because each interruption—email, tweet, instant message, RSS headline—brings us a valuable piece of information. To turn off these alerts is to risk feeling out of touch or even socially isolated. The stream of new information also plays to our natural tendency to overemphasize the immediate. We crave the new even when we know it’s trivial.

And so we ask the Internet to keep interrupting us in ever more varied ways. We willingly accept the loss of concentration and focus, the fragmentation of our attention, and the thinning of our thoughts in return for the wealth of compelling, or at least diverting, information we receive. We rarely stop to think that it might actually make more sense just to tune it all out.

Try writing about the Net and tuning it out at the same time. Clearly Nick can do that, because he’s written a bunch of books about the Net (and related matters) while the Net’s been an available distraction. Meanwhile I’ve spent most of the past year writing just one book, fighting and often losing against constant distraction. It’s very hard for me to put the blinders on and just write the thing. In the last few months what I’ve succeeded in doing, while wearing the blinders and getting most of my book writing done, is participating far less in many things that I help sustain, or that sustain me, including projects I’m working on, time with my wife, kids and grandkids, and this very blog. (Lotta white spaces on the calendar to the right there.)

On the whole I’ve been dismissive of theories (including Nick’s) about how the Net changes us for the worse, mostly because my own preoccupations, including my distractions, tend to be of the intellectually nutritive sort — or so I like to believe. That is, I’m curious about all kinds of stuff, and like enlarging the sum of what I know, and how well I know it. The Net rocks for that. Still, I see the problem. I can triangulate on that problem just from my own struggles plus Alice’s and Nick’s.

“Great minds discuss ideas, mediocre minds discuss events, and small minds discuss people,” the old saying goes. (Attributed, with some dispute, to Eleanor Roosevelt.) The Net feeds all three, but at the risk of dragging one’s mind from the great to the small. “What else are we doing on the internet if not asserting our rank?” Alice writes. (Would we ask the same about what we’re doing in a library?) Later she adds,

Sometimes I can almost visualize parts of myself, the ones I’m most proud of, atrophying. I wish I had an app to monitor it! I notice that my thoughts are homeopathic, that they mirror content I wish I weren’t reading. I catch myself performing hideous, futuristic gestures, like that “hilarious” moment three seconds into an intimate embrace in which I realize I’m literally rubbing my iPhone screen across his spine. Almost every day at 6 PM my Google Alert tells me that an “Alice Gregory” has died. It’s a pretty outdated name, and most of these obituaries, from family newsletters and local papers, are for octogenarians. I know I’m being tidy-minded even to feel a pang from this metaphor, but still . . .

It’s hard not to think “death drive” every time I go on the internet. Opening Safari is an actively destructive decision. I am asking that consciousness be taken away from me. Like the lost time between leaving a party drunk and materializing somehow at your front door, the internet robs you of a day you can visit recursively or even remember. You really want to know what it is about 20-somethings? It’s this: we live longer now. But we also live less. It sounds hyperbolic, it sounds morbid, it sounds dramatic, but in choosing the internet I am choosing not to be a certain sort of alive. Days seem over before they even begin, and I have nothing to show for myself other than the anxious feeling that I now know just enough to engage in conversations I don’t care about.

The internet’s most ruinous effect on literacy may not be the obliteration of long-format journalism or drops in hardcover sales; it may be the destruction of the belief that books can be talked and written about endlessly. There are fewer official reviews of novels lately, but there are infinitely more pithily captioned links on Facebook, reader-response posts on Tumblr, punny jokes on Twitter. How depressing, to have a book you just read and loved feel so suddenly passé, to feel—almost immediately—as though you no longer have any claim to your own ideas about it. I started writing this piece when the book came out at the end of July, and I started unwriting it almost immediately thereafter. Zeno’s Paradox 2.0: delete your sentences as you read their approximations elsewhere. How will future fiction work? Will details coalesce into aphorism? I wonder if instead of scribbling down in my notebook all the familiar aspects of girls I see on the street, as I used to, I’ll continue doing what I do now: snapping a picture and captioning it, in the words of Shteyngart, “so media.”

I’ll grant that we have problems here, but is literacy actually being ruined? Is long-format journalism actually obliterated? The New Yorker is as thick as ever with six-to-eight-thousand-word essays. Books still move through stores online and off. Our fourteen-year-old kid still reads piles of books, even as he spends more time online, watching funny YouTube videos and chatting with a friend three time zones away. Is he worse for that? Maybe, but I don’t think so. Not yet, anyway.

What I am sure about is this: Twitter, Facebook and Tumblr are temporary constructions on the Web, like World’s Fairs used to be, when we still had them. The Internet is a world where all four seasons happen at once. New sites and services are like plants that germinate, grow, bud, bloom and die, over and over. Even the big trees don’t grow to the sky. We need their fruit, their shade, their wood and the humus to which they return. Do we need the other crap that comes along with those stages? Maybe not, but we go for it anyway.

Last Tuesday Alice Marwick gave an excellent Berkman Lunch talk titled Status Update: Celebrity, Publicity and Self-Branding in Web 2.0. The summary:

In the mid-2000s, journalists and businesspeople heralded “Web 2.0” technologies such as YouTube, Twitter, and Facebook as signs of a new participatory era that would democratize journalism, entertainment, and politics. By the decade’s end, this idealism had been replaced by a gold-rush mentality focusing on status and promotion. While the rhetoric of Web 2.0 as democratic and revolutionary persists, I will contend that a primary use of social media is to boost user status and popularity, maintaining hierarchy rather than diminishing it. This talk focuses on three status-seeking techniques that emerged with social media: micro-celebrity, self-branding, and life-streaming. I examine interactions between social media and social life in the San Francisco “tech scene” to show that Web 2.0 has become a key aspect of social hierarchy in technologically mediated communities.

I’ve been in and out of that scene since 1985, and I know personally a large percentage of Alice’s sources. One of them, Tara Hunt, provided Alice with some terrific insights about how the status system works. Tara also punched out of that system not long ago, moving to Montreal and starting a company. She has also been very active in the development community, for which I am very grateful. She’s on a helluva ride.

Listening to the two Alices, Emily Dickinson comes to mind:

A Route of Evanescence,
With a revolving Wheel –
A Resonance of Emerald
A Rush of Cochineal –
And every Blossom on the Bush
Adjusts it’s tumbled Head –
The Mail from Tunis – probably,
An easy Morning’s Ride –

Speaking of which, here’s Bill Hicks on life’s ride:

The World is like a ride in an amusement park, and when you choose to go on it you think it’s real, because that’s how powerful our minds are. And the ride goes up and down and round and round, and it has thrills and chills and is very brightly colored, and it’s very loud. And it’s fun, for a while.

Some people have been on the ride for a long time, and they’ve begun to question, ‘Is this real, or is this just a ride?’, and other people have remembered, and they’ve come back to us and they say ‘Hey, don’t worry. Don’t be afraid, ever, because this is just a ride.’ and we KILL THOSE PEOPLE.

“Shut him up! We have a lot invested in this ride! SHUT HIM UP! Look at my furrows of worry. Look at my big bank account, and my family. This has to be real.”

It’s just a ride.

But we always kill those good guys who try and tell us that. You ever noticed that? And let the demons run amok. But it doesn’t matter, because … It’s just a ride.

And we can change it anytime we want. It’s only a choice. No effort, no work, no job, no savings of money. A choice, right now, between fear and love. The eyes of fear want you to put bigger locks on your door, buy guns, close yourself off. The eyes of love, instead, see all of us as one.

(Watch the video. It’s better.)

Social media, social networking — all of it — is just practice. It’s just scaffolding for the roller coaster we keep re-building, riding on, falling off, and re-building. That’s what we’ve been making and re-making of civilization, especially since Industry won the Industrial Revolution. (That’s why we needed world’s fairs, to show off how Industry was doing.)

You go back before that and, on the whole, life didn’t change much, anywhere. Most of our ancestors, for most of the Holocene, lived short, miserable lives that were little different than those of generations prior or hence.

Back in the ’70s I lived in a little community called Oxbow, north of Chapel Hill. My house was one off what’s now called Wild Primrose Lane, in this map here. In those days the bare area in the center of that map was a farm that was plowed fresh every spring. One day while we were walking there, I picked up a six-inch spear point (or hand-held scraper) that now resides at a museum one county over:

I brought it to friends in the anthropology department at UNC — associates of the great Joffre Coe — who told me it was a Guilford point, from the Middle Archaic period, which ran from 6000 to 3000 B.C. (The original color was gray, as you can see from the chipped parts. The surface color comes from what’s called patination.)

What fascinates me about this date range, which is similar to the range for other kinds of points everywhere in the world, is how little technology changed over such a long period of time. Generation after generation made the same kinds of stone tools, the same way, for thousands of years. Today we change everything we make, pretty much constantly. There was no Moore’s Law operating among the Guilford people, or anywhere, in 5000 B.C. Today Moore’s Law sometimes seems slow.

I don’t have a conclusion here, other than to say that maybe Nick and both Alices are right, and the Net is not so ideal as some of us (me especially) tend to think it is. But I also think the Net is something we make, and not just something that makes us.

Clearly, we could do a better job. We have the tools, and we can make many more.


Over on Facebook, Don and friends have been pondering the provenance of Invention is the mother of necessity. Writes Don, “… heard that once from Doc Searls – I think its an original. Seems more true everyday. (think facebook, smartphones, the internet, computers).” So I responded,

Back in the ’80s, I had a half-serious list of aphorisms I called “Searls’ Laws.” The first was “Logic and reason sit on the mental board of directors, but emotions cast the deciding votes.” The second was, “Invention is the mother of necessity.”

There were others, but I forget them right now. One, from my high school roommate (now the Episcopal Bishop of Bethlehem, PA — who blogs, in a fashion, here), was “Matter can be neither created nor destroyed. It can only be eaten.” He was sixteen when he said that.

Anyway, one day I laid my second law on the CEO of our ad agency’s top client at the time, a company called Racal-Vadic. The CEO was Kim Maxwell, who taught at Stanford before kicking ass in business (and has since moved on to other things). He replied, “Ah, yes. Thorstein Veblen.”

I thought, wtf? So I looked it up, and sure enough, Thorstein Veblen uttered “Invention is the mother of necessity” about a century before I made it one of my laws.

Anyway, my point in using it remained the same: Silicon Valley was built on inventions that mother necessity (from ICs to iPhones) at least as much as it was built on necessities that mother invention.

Just thought I’d share that out here in the un-silo’d non-F’book world.


[This piece was written for a publication in Raleigh, North Carolina, and published twenty-five years ago, on February 10, 1986. Since it might be worth re-visiting some of the points I made, as well as the event itself, I decided to dust off the piece and put it up here. Except for a few spelling corrections and added links, it’s unchanged. — Doc]

I can remember, when I first saw the movie Star Wars, how unbelievable it seemed that Han Solo and Luke Skywalker could fly their spacecraft so easily. They’d flick switches and glance knowingly at cryptic lights and gauges, and zoom their way through hostile traffic at speeds that would surely kill them if they ran into anything; and they’d do all this with a near-absolute disregard for the hazards involved.

That same night, after I left the movie theatre, I experienced one of the most revealing moments of my life. I got into my beat-up car, flicked some switches, glanced knowingly at some lights and gauges, and began to zoom my way through hostile traffic at speeds that would surely kill me if I ran into anything; and I did all this with a near-absolute disregard for the hazards involved. Suddenly, I felt like Han Solo at the helm of the Millennium Falcon. And in my exhilaration, I realized how ordinary it was to travel in a manner and style beyond the dreams of all but humanity’s most recent generations. I didn’t regret the likelihood that I would never fly in space like Han and Luke; rather I felt profoundly grateful that I was privileged to enjoy, as a routine, experiences for which many of my ancestors would gladly have paid limbs, or even lives.

Since then I have always been astonished at how quickly and completely we come to take our miraculous inventions for granted, and also at how easily we use those inventions to enlarge ourselves, our capabilities, and our experience in the world. “I just flew in from the Coast,” we say, as if we were all born with wings.

I think this “enlarging” capacity, even more than our brains and our powers of speech, is what makes us so special as creatures. As individuals, and as an entire species, we add constantly to our repertoire of capabilities. As one educator said, our capacity to learn is amplified by our ability to develop skills. Those skills give us the power to make things, and then to operate those things as if they were parts of ourselves. Through our inventions and skills, we acquire superhuman powers that transcend the weaknesses of our naked, fleshy selves.

One might say that everything we do is an enlargement on our naked beginnings. That’s why we are the only animals that not only wear clothes, but who also care about how they look. After all, if we were interested only in warmth, comfort and protection, we wouldn’t have invented push-up bras and neckties. Or other non-essentials, like jewelry and cosmetics. It seems we wear those things to express something that extends beyond the limits of our bodies: the notions of our minds, about who we are and what we do.

But clothes are just the beginning, the first and most visible layer in a series that grows to encompass all our tools and machines. When we ride a bicycle, for example, the bike becomes part of us. When we use a hammer to drive a nail, we ply that tool as if it were an extra length of arm. Joined by our skills to tools and machines, our combined powers all but shame the naked bodies that employ them.

I remember another movie: a short animated feature in which metallic creatures from Mars, looking through telescopes, observed that the Earth was populated by a race of automobiles. Martian scientists described how cars were hatched in factories, fed at filling stations, and entertained at drive-in movies.

And maybe they were right. Because, in a way, we become the automobiles we drive. Who can deny how differently we behave as cars than as people? It’s a black car that cuts us off at the light, not Mary Smith, the real estate agent. In traffic, we give vent to hostilities and aggressions we wouldn’t dare to release in face-to-face encounters.

Of course, we have now metamorphosed into entities far more advanced than automobiles. As pilots we have become airplanes. As passengers we have become creatures that fly great distances in flocks.

If those Martian scientists were to keep an eye on our planet, they would note that we have now begun to evolve beyond airplanes, into spaceships. In their terms we might note the Challenger tragedy as the metallic equivalent of a single failure in the amphibians’ first assault on land. Evolution, after all, is a matter of trial and error.

But as we contemplate the price of our assault on the shores of space, we need to ask ourselves some hard questions. For example: is the Challenger tragedy just a regrettable accident in the natural course of human progress, or evidence of boundaries we are only beginning to sense?

On January 28th, Challenger addressed that question to our whole species. We all felt the same throb of pain when we learned how, in one orange moment, seven of our noble fellows were blown to mist at the edge of the heavens they were launched to explore.

Most of us made it our business that day to visit the TV, to watch the Challenger bloom into fire, and to share the same helpless feeling as we saw the smoking fragments of countless dreams rain down in white tendrils, like the devil’s own confetti, to the ancestral sea below. The final image — a monstrous white Y in the sky — is permanently embossed in the memories of all who witnessed the event.

It was so unexpected because the shuttle had become exactly what NASA had planned: an ordinary form of transportation, a service elevator between Earth and Space. NASA’s plan to routinize space travel succeeded so convincingly that major networks weren’t even there to cover the Challenger liftoff. Instead they “pooled” for rights to images supplied by Ted Turner’s Cable News Network. Chuck Yeager, the highest priest in the Brotherhood of The Right Stuff, voiced the unofficial NASA line on the matter. “I see no difference between this accident and any accident involving a commercial or military aircraft,” he said.

Would that it were so.

“Fallen heroes” is not a term applied to plane crash victims. In fact, the technologies of space travel are still extremely young, and the risks involved are a lot higher than we like to think. “Since NASA made it look so easy, people thought it would never happen. Those of us close to the program thought it could happen a whole lot sooner. We’re glad it was postponed this long,” said Jack Lousma, a former astronaut and space shuttle pilot.

The fact that the shuttle program was so vulnerable, and we failed to recognize the fact, says unwelcome things about our faith in technology, and now is when we should listen to them. Because the time when flying through space becomes as easy as flying down the road, or even through the air, is still a long way off. In the meantime, it might be best to leave the exploring to guys like Lousma, who are blessed with the stuff it takes to push the risks out of the way for the rest of us.

And we’re talking about the kind of risks that were built into the shuttle from its start.

Consider for a moment that the shuttle program is, after all, the bastard offspring of a dozen competing designs, and constrained throughout its history by a budgetary process that subordinates human and scientific aspirations to a variety of military and commercial interests. And consider how, as with most publicly-funded technologies, most of the Shuttle’s components were produced by the lowest bidder. And consider the fact that many of the Shuttle’s technologies are, even by NASA’s admission, obsolete. If we had to start at Square One today, we’d probably design a very different program.

A new program, for example, would probably take better account of the Perrow Law of Unavoidable Accidents. A corollary of Murphy’s Law — “Anything that can go wrong, will go wrong” — the Perrow Law is modestly named after himself by Charles Perrow, Professor of Sociology and Organizational Analysis at Yale University. According to Perrow, the shuttle program has succeeded mostly in spite of itself. Its whole design is so detailed, so complex, so riddled with interdependent opportunities for failure, that we’re lucky one of the things didn’t blow up sooner, or worse, suffer a more agonizing death in space.

“The number of interconnections in these systems is so enormous,” he says, “that no designer can think of everything ahead of time. It may be that this was one major valve failing on one of the tanks, but I rather suspect that that’s not the case. NASA tests and is very concerned about those valves. They have back-ups for every major system. The problem is more likely to have been a number of small things that came together in a mysterious way — a way that we may never learn about.”

He continues, “The chances for an accident will be only marginally reduced if we find the cause of this, and harden something or increase the welds, and eliminate this one thing as a source of an accident. But right next to it will be a dozen other unique sources of accidents that we haven’t touched. But by touching the components next to it, we may increase the possibility of other accidents.”

Tom Wolfe, who wrote The Right Stuff, and invented the term, suggests that NASA may have snowed itself into believing that space travel is past the pioneering stage, and that, as a concept, the shuttle’s “coach & freight service — a people’s zero-G express” was premature. Of the martyred teacher, Christa McAuliffe, he says “Her flight was to be the crossover, at last, from a quarter of a century in which space had been a frontier open only to pioneers who lived and were willing to die by the code of ‘the right stuff’ — the Alan Shepards and Neil Armstrongs — to an era when space would belong to the entire citizenry, to Everyman. The last role in the world NASA had in mind for Christa McAuliffe and the rest of the Challenger crew was that of pioneer or hero.”

This was because NASA had labored long and hard to break the political grip of what Wolfe calls “Astropower,” the “original breed of fighter-pilot and test-pilot astronaut — the breed who had been willing, over and over again, to sit on top of enormous tubular bombs, some 36 stories high, gorged with several of the most explosive materials this side of nuclear fission, and let some NASA GS-18 engineer light the fuse.”

The fact was, Wolfe suggests, that McAuliffe and her companions “hurtled for 73 seconds out on the edge of a still-raw technology” before they perished. Which is why he asks “If space flight still involves odds unacceptable to Everyman, then should it be put back in the hands of those whose profession consists of hanging their hides, quite willingly, out over the yawning red maw?”

If the answer is yes, then what will need to happen before Everyman is really ready to fly the zero-G express?

In a word, simplification. Right now there is no way for a single pilot’s senses to stretch over the entire shuttle system, and operate it skillfully. A couple of years ago, the Director of Flight Operations for NASA said “this magnificent architecture makes it that much harder to learn to use the system.” According to Professor Perrow, “because the Shuttle system was designed in so many parts by a phalanx of designers, when it’s all put together to run, there is nobody, no one, who can know all about that system.”

Perrow says, “It requires simplification for a single person, a pilot, to know everything that’s happening in such a hostile environment as space. One of the great simplifications in aviation history was the substitution of the jet engine for the piston engine. That’s what we need to make space travel agreeably safe.”

It is ironic that on the day the Challenger blew up, a space industry consultant and former NASA administrator was about to mail the first draft of a commission report to the president on the future of the U.S. space effort. That report advanced two recommendations: 1) an unmanned cargo-launching program to deliver cargo to space at a fraction of current shuttle costs; and 2) an improved shuttle or a new-generation system like the “hypersonic transportation vehicle” the Air Force has wanted ever since NASA beat the rocket airplane into space. The hypersonic transport would simply be an airplane that can fly in space. By contrast, the shuttle is a spacecraft that can glide to earth. Hypersonic transport technology has already been around for years. Reports say the first “space plane” could be ready to fly in the 1990s. The thing would cruise along at anywhere from Mach 5 to Mach 25, which would mean, theoretically, that no two points on the earth would be more than three hours apart.
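The three-hour figure holds up as rough arithmetic. Assuming round numbers (Mach 1 at about 1,225 km/h, and half of Earth’s roughly 40,000 km circumference as the longest possible great-circle trip), a quick check:

    # Back-of-envelope check on the "no two points more than three
    # hours apart" claim. Round numbers assumed: Mach 1 ~ 1,225 km/h;
    # the farthest apart two points on Earth can be is half its
    # ~40,000 km circumference.
    MACH_1_KMH = 1225.0
    MAX_TRIP_KM = 40000.0 / 2

    for mach in (5, 25):
        hours = MAX_TRIP_KM / (mach * MACH_1_KMH)
        print("Mach %d: about %.1f hours" % (mach, hours))
    # Mach 5: about 3.3 hours; Mach 25: about 0.7 hours.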

But it will have to fight the inertia behind the shuttle program, which is substantial, and slowed only momentarily by the Challenger explosion.

I fear we can only pray that future missions will continue to dodge Murphy’s law.

Over time, however, our sciences will need to face Perrow’s Corollary more soberly. We need to recognize that there are limits to the complexities we can build into our technologies before accidents are likely to occur. Thanks to Fail Safe, Doctor Strangelove, and other dramatic treatments of the issue, we are already familiar with (and regrettably taking for granted) the risks of nuclear catastrophe to which we are exposed by our terribly complicated “defensive shields.”

And this hasn’t stopped us from committing to even more dangerous and complicated “defensive” projects, the most frightening of which is the euphemistically titled Strategic Defense Initiative, better known by its nickname: Star Wars. Professor Perrow says “Star Wars is the most frightening system I can think of.” In fact, Star Wars is by far the most complex technology ever contemplated by man. And possibly the most expensive.

There are cost projections for Star Wars that make NASA’s whole budget look like pocket change. Portentously, the first shuttle experiment with Star Wars technology failed when shuttle scientists pointed a little mirror the wrong way. We can only hope that the little mirrors on Soviet warheads are aligned more cooperatively.

Complexity is more than a passing issue. It is science’s most powerful and debilitating intoxicant. We teach it in our schools, confuse it with sophistication and sanction it with faith. In this High Tech Age, we have predictably become drunk on the stuff. And, as with alcohol and cocaine, we’ll probably discover its hazards through a series of painful accidents.

Meanwhile, there is another concern that ironically might have been illuminated by a teacher, or better yet a journalist, in space. Its advocates include a recently-created organization of space veterans whose non-political goal is to share their singular view of our planet. That view sees a fragile ball of blue, green and brown, undivided by the lines that mark the maps and disputes on the surface below. It is an objective view, and we need it badly.

The implications of that view are made more sober by recent discoveries suggesting limits to the viability of human life in the environments of space. Outside the protective shield of our atmosphere, travelers are bombarded constantly by cosmic radiation that produces cancer and other ailments.

Weightlessness also has its long-term costs. While there may be ways to reduce or eliminate the risks involved with space travel, we are still, at best, in the zygote stage of our development as space creatures. It might be millennia before we are finally ready to leave Earth’s womb and dodge asteroids in the manner of Han Solo.

Until then, it would be nice if we didn’t have to discover our limits the hard way.
