Ideas


I’m on a list where the subject of patents is being discussed. While thinking about how I might contribute to the conversation, I remembered that I once cared a lot about the subject and wrote some stuff about it. So I did some spelunking through the archives and found the following, now more than twelve years old. It was written during Esther Dyson’s PC Forum, and addressed via blog to those present there. So, rather than leave it languishing alone in the deep past, I decided to run it again here. I’m not sure if it contributes much to the patent debate, but it does surface a number of topics I’ve been gnawing on ever since.

— Doc


I think I could turn and live awhile with the animals…
Not one is demented with the mania of owning things.

Walt Whitman


PC Forum 2000,
Phoenix, AZ. March 15, 2000.

Source Coders

Six years ago, at PC Forum 94, John Gage of Sun Microsystems stood on stage between a twitchy Macintosh Duo and a huge projection screen, and pushed the reset button on our lives.

He showed us the Web.

It was like he took us on a tour of the Milky Way — a strange, immense and almost completely alien space. With calm authority and the deep, warm voice of a Nova narrator, he led us from the home page of a student in Massachusetts to a Winter Olympics report archive in Japan, then to a page that showed every useful piece of data about every broadcast satellite, compiled and published by a fanatic in North Carolina.

We all knew it was fabulous, but why? How could you make money in a world of ends where nobody owns the means? How could you make sense of a network that is nobody’s product and everybody’s service? And where the hell did it come from?

  • Not Compuserve, AOL, Prodigy or any of the other online services
  • Not Novell, 3Com, Cisco, or any of the infrastructure companies
  • Not AT&T, MCI, Nortel or any of the phone companies.
  • Not Microsoft, Apple, Sun or any of the other platform companies.

Sure, it ran on all of them; but it belonged to none of them. And since they couldn’t own it, they never would have made it. So who the hell did make it?

In a word, Hackers. Programmers. Guys who were real good at writing code. Lots of those guys worked for companies, including the companies we just listed. Lots more worked in the public sector, for schools and government organizations. What they shared was a love of information, and of putting it to work. They put both passions into building the Net, working cooperatively in what Eric Raymond calls a “gift culture,” like Amish farmers raising a barn.

Hackers didn’t build the Net for business. They built it for research. They wanted to make it easy for people to inform each other, no matter who or where they were.

Several days ago Tim O’Reilly and I were talking about information, which is a noun derived from the verb to form. We use information, literally, to form each other. So, if we are in the market for information, we are asking to be formed by other people. In other words, we are authors of each other. It follows that the best information is the kind that changes us most. If we want to know something — if we are in the market for knowledge — we demand to be changed.

That change is growth. Our identity persists, yet who-we-are becomes larger, because we know more. And the more we know, the more valuable we become. This value isn’t a “brand” (a nasty word that comes to us from the cattle industry). It’s reputation.

What these hackers made was an extraordinarily vast and efficient market for knowledge — a wide-open marketspace for information — where everybody gets to participate, to contribute, to grow, and to increase the value of their own reputations.

Utopia

It turns out that the Net is also good for business, even though it was not written for business. In fact, “good” is too weak a word. The Net is a Utopia for business. Think about it. This is a place where —

  • The threshold of enterprise is approximately zero.
  • All you need to get millions of dollars is an idea that looks like it could be worth billions more.
  • You can create those billions of dollars in value just by impressing people with your idea.
  • The value of your idea can grow from zero to billions in a matter of hours.
  • You see investment as income, because you’re obligated to burn it, and you don’t need to hock your house or your car to get it.
  • Promise of reward far out-motivates fear of punishment, because there is no punishment.
  • Failure informs and therefore qualifies you for more money to fund your next idea, because both your knowledge and your reputation have grown in the process.

To succeed in this world, your business only needs to be Utopia-compatible. That is, your people need to be in the market for information — or, in the parlance of The Cluetrain Manifesto, in the market for clues.

Yet many companies, especially traditional industrial ones, are not in the market for clues. They neither supply nor demand them. They put up a Web site, strictly as a pro forma measure. The corporate face is blank, the voice robotic. David Weinberger writes, “Companies that cannot speak in a human voice make sites that smell like death.”

The medium is the metaphor

Their problem is conceptual. They literally conceive markets — including the vast information market of the Net — in obsolete terms. They see them as real estate, as battlefields, as territories, as theaters, as animal forces. And none of those metaphors work for the Net.

Three years ago, at PC Forum 97, George Lakoff told us how metaphors work (a good source is Metaphors We Live By, the 1980 book he co-wrote with Mark Johnson). We were taught in school that metaphors were poetic constructions. In fact, metaphors scaffold our understanding of the world. Conceptual metaphors induce the vocabularies that describe every subject we know.

Take life. In a literal sense, life is a biological state. But that’s not how we know life. If we stop to look at the vocabulary we use to describe life, we find beneath it the conceptual metaphor life is a journey. We cannot talk about life without using the language of travel. Birth is arrival. Death is departure. Choices are crossroads. Troubles are potholes or speed bumps. Mistakes take us off the path or onto dead end streets.

Take time. Our primary conceptual metaphor for time is time is money. We save, spend, budget, waste, hoard and invest it.

Conceptual metaphors are equally ubiquitous and unconscious. They are the aquifers of meaning beneath the grounds of our consciousness. Think about how we turn what we mean into what we say. When we speak, we usually don’t know how we will finish the sentences we start, or how we started the sentences we finish. Think about how hard it is to remember exactly what somebody says, yet to know exactly what they mean. Conceptual metaphors are deeply involved in this paradox. They help us agree that we all understand a subject in the same metaphorical terms.

Now let’s look at markets. This morning Steve Ballmer told us that Microsoft’s first principle was “to compete very hard, do your best job, study ideas, move forward aggressively.” What is the conceptual metaphor here? Easy: markets are battlefields. There are two sets of overlapping vocabularies induced by this metaphor: war and sports. So you can talk about “blowing away” competition and “level playing fields” in the same sentence. (Microsoft’s problems derive from a confusion between the war and sports metaphors. “All’s fair” in war, but not sports.)

There are related metaphors. One is markets are real estate. By this metaphor, companies can own market territory, or lease rights to it. To a large extent, both the battle and playing field metaphors derive from the real estate metaphor.

There are unrelated metaphors. One is markets are beings. The investment community describes markets as bulls, bears, and invisible hands. They grow and shrink. They have moods. They get nervous, calm or upset. Another is markets are theaters. Companies perform there, for audiences, who they would like to enjoy a good experience. Another is markets are environments. In The Death of Competition, James Moore speaks of markets as ecosystems where companies and categories compete in a habitat for resources, like plants and animals, and evolve or become extinct.

So what the hell is a market, really? The answer isn’t complicated when we subtract out all the modern metaphors.

Markets are markets

The first markets were markets. They were real places where people gathered to talk about subjects that mattered to them, and to do business. Supply and demand, selling and buying, production and consumption, vendor and customer — all those reciprocal roles and processes that describe market relationships — were a handshake apart. Our ancestors’ surnames — Smith, Hunter, Shoemaker, Weaver, Tanner, Butcher — derived from roles they played in marketplaces. They were literally defined by their crafts.

Yet the balance of power favored the buy side: the customers, buyers and consumers who were one and the same. The noun “market” comes from the Latin mercari, which means to buy. That’s why we call malls “shopping centers.” Not “selling centers.”

The industrial revolution changed everything. Our ancestors left their farms and shops and got jobs in the offices and factories of industry. On the sell side, they became labor, and on the buy side they became consumers. As the Industrial Age advanced, the distance between production and consumption grew so wide that we came to understand business itself in terms of a new metaphor: business is shipping. Now we had content that we loaded into a distribution system or a channel, and addressed for delivery to an end user or a consumer. Eventually, industry came to treat market as a verb as well as a noun. Marketing became the job of moving products across the complex distribution deltas that grew between a few suppliers and vast “markets,” where demand was perceived categorically, rather than personally. Every categorical subject or population — consumer electronics, cosmetics, yachting, 18-34 year old men, drivers, surfers — was a “market.”

My work as a journalist flanks twenty-two years in marketing, advertising and public relations. These are professions which, in spite of the good advice of gurus from Theodore Levitt to Regis McKenna, conceived marketing as the military wing of industry’s shipping system. Marketing’s job was to develop “strategies” for “campaigns” to wage against “targets” with munitions called “messages” which would succeed by “impact” and “penetration.” Those targets were not customers, but “consumers,” “eyeballs” and “seats.” There was no demand by those people for messages, but that didn’t matter because those people were not paying for the messages we insisted on lobbing at them.

So, by the end of the Industrial Age, we had not only forgotten what a market really was, but we had developed new and often hostile meanings for both the noun and the verb. We also understood both in terms of conceptual metaphors that were far removed from markets as places and as activities that defined those places.

Around the turn of the 90s, I began to float a new metaphor: markets are conversations. I liked it for two reasons: 1) it worked as a synonym (try substituting conversation for market everywhere the latter appears and you’ll see what I mean); and 2) every other metaphor — with the notable exception of markets are environments — insulted the true nature of markets, especially in a networked world built by a gift economy, where product categories and their competing occupants all grow, often at nobody’s expense.

The idea didn’t catch on until it was put to work as Thesis #1 in The Cluetrain Manifesto. Now it’s all over the place. But it also has a long way to go. Conceptual metaphors such as markets are battlefields are huge reservoirs of bad meaning. Even highly clueful e-businesses make constant use of them.

Which brings us to patents, which operate on the conceptual metaphor inventions are property. This metaphor worked, more or less, through the entire Industrial Age; but it runs into trouble with the Net. While patents and properties may have been involved in the development of the Net, we don’t see them among the credits. As Larry Lessig puts it, the Net grew in the context of regulation, but regulation that broadened access to the very limits of plausibility, essentially by making cyberspace a form of public property — or, more accurately, nobody’s property.

But when we frame the argument over patents in terms of property, we must use the conceptual metaphor on which patents depend, and which also denies the nature of the Net. We will also argue in terms of market metaphors that employ property concepts: war, games, real estate, theater, and shipping. We will not talk in terms of knowledge, information and conversation.

The challenge

This is where we found ourselves today, when Larry Lessig spoke to us. He said,

“…In the context of patents, the passion to regulate rages. Some 40,000 software patents now float in the ether; a new industry of patent making was launched by a decision of the federal circuit in 1998 — the business method patent. Gaggles of lawyers, my students, now police the innovation process in Internet industry. 5 years ago, if you had a great idea, you coded it. Today, if you have a great idea, you call the lawyers to check its IP.

“This change is the product of regulation. And while in principle, I’m in favor of patents, we should not ignore the nature of the change that this creates. Unlike open access, the regulations of patent don’t decentralize the innovative process. They do the opposite. Unlike open access, the regulations of patent don’t increase the range of those who might compete; for the most part, they narrow it. Unlike open access, patents don’t broaden the architecture of innovation. They narrow it. They are part of an architecture — a legal architecture — that narrows innovation.” (You’ll find this and many other speeches at his site.)

A year ago I defected from marketing. I went over to the other side, joining markets in their fight against Business as Usual. That’s why I write for Linux Journal. It’s also why I co-wrote The Cluetrain Manifesto.

Linux is the Amish barn operating system. It was conceived and built on the same principles as the Net. Not surprisingly, much of what we see on the Net is served up by Linux and other software described as “open” and “free.”

Cluetrain insists that we start to understand the Net on its own terms. This means we have to go back to our founding hackers and look at the virtues embodied in the Utopia donated to business by the hackers’ gift culture.

I suggest we start with these three:

  • Nobody owns it
  • Everybody can use it
  • Anybody can improve it

Eric Raymond suggests many more. So do Bryan Pfaffenberger (who also writes for Linux Journal), Larry Lessig, Richard Stallman, Tim O’Reilly, James Gleick and Dave Winer, to name just a few.

Let’s start there.

If we start with the industrial world, we’ll stay there. And we can kiss Utopia good-bye.

Uninstalled is Michael O’Connor Clarke’s blog — a title that always creeped me out a bit, kind of the way Warren Zevon’s My Ride’s Here did, carrying more than a hint of prophecy. Though I think Michael meant something else with it. I forget, and now it doesn’t matter because he’s gone: uninstalled yesterday. Esophageal cancer. A bad end for a good man.

All that matters, of course, is his life. Michael was smart and funny and loving and wise far beyond his years. We bonded as blogging buddies back when most blogs were journals and not shingles of “content” built for carrying payloads of advertising. Start to finish, he was a terrific writer. Enviable, even. He always wrote for the good it did and not the money it brought. (Which, in his case, like mine and most other friends in the ‘sphere, was squat.) I’ll honor that, his memory and many good causes at once by sharing most of one of his last blog posts:

Leaky Algorithmic Marketing Efforts or Why Social Advertising Sucks

Posted on May 9, 2012

A couple of days ago, the estimable JP Rangaswami posted a piece in response to a rather weird ad he saw pop up on Facebook. You should go read the full post for the context, but here’s the really quick version.

JP had posted a quick Facebook comment about reading some very entertainingly snarky Amazon.com reviews for absurdly over-priced speaker cables.

Something lurking deep in the dark heart of the giant, steam-belching, Heath Robinson contraption that powers Facebook’s social advertising engine took a shine to JP’s drive-by comment, snarfled it up, and spat it back out again with an advert attached. A rather… odd choice of “ad inventory unit”, to say the least. Here’s how it showed up on one of JP’s friends’ Facebook news feeds:

I saw JP post about this on Facebook and commented. The more I thought about the weirdness of this, the longer my comment became – to the point where I figured it deserved to spill over into a full-blown blog rant. Strap in… you have been warned.

I’ve seen a lot of this kind of thing happening in the past several months. Recently I’ve been tweeting and Facebooking my frustration with social sharing apps that behave in similar ways. You know the kind of thing – those ridiculous cluewalls implemented by Yahoo!, SocialCam, Viddy, and several big newspapers. You see an interesting link posted by one of your friends, click to read the article, and next thing you know you’re expected to grant permission to some rotten app to start spamming all your friends every time you read something online. Ack.

The brilliant Matthew Inman, genius behind The Oatmeal, had a very smart, beautifully simple take on all this social reader stupidity.

It’s the spread of this kind of leaky algorithmic marketing that is starting to really discourage me from sharing or, sometimes, even consuming content. And I’m a sharer by nature – I’ve been willingly sharing and participating in all this social bollocks for a heck of a long time now.

But now… well, I’m really starting to worry about the path we seem to be headed down. Or should I say, the path we’re being led down.

Apps that want me to hand over the keys to my FB account before I can read the news or watch another dopey cat video just make me uncomfortable. If I inadvertently click through an interesting link only to find that SocialCam or Viddy or somesuch malarkey wants me to accept its one-sided Terms of Service, then I nope the hell out of there pretty darn fast.

How can this be good for the Web? It deprives content creators of traffic and views, and ensures that I *won’t* engage with their ideas, no matter how good they might be.

All these examples are bad cases of Leaky Algorithmic Marketing Efforts (or L.A.M.E. for short). It’s a case of developers trying to be smart in applying their algorithms to user-generated content – attempting to nail the sweet spot of personal recommendations by guessing what kind of ad inventory to attach to an individual comment, status update, or tweet.

It results in unsubtle, bloody-minded marketing leaking across into personal conversations. Kinda like the loud, drunken sales rep at the cocktail party, shoe-horning a pitch for education savings plans into a discussion about your choice of school for your kids.

Perhaps I wouldn’t mind so much if it wasn’t so awfully bloody cack-handed as a marketing tactic. I mean – take another look at the ad unit served up to run alongside JP’s status update. What the hell has an ad for motorbike holidays got to do with him linking to snarky reviews of fancyass (and possibly fictional) speaker cables? Where’s the contextual connection?

Mr. Marketer: your algorithm is bad, and you should feel bad.

As you see, Michael was one of those rare people who beat the shit out of marketing from the inside. Bless him for that. It’s not a welcome calling, and Lord knows marketing needs it, now more than ever.

Here are some memorial posts from other old friends. I’ll add to the list as I spot them.

And here is his Facebook page. Much to mull and say there too. Also at a new memorial page there.

It’s good, while it lasts, that our presences persist on Facebook after we’re gone. I still visit departed friends there: Gil Templeton, Ray Simone, R.L. “Bob” Morgan, Nick Givotovsky.

SupportMichaelOCC.ca is still up, and should stay up, to help provide support for his family.

His Twitter stream lives here. Last tweet: 26 September. Here’s that conversation.

Charge for them.

Let users be customers and not just consumers. Let demand engage supply the old fashioned way: by paying for goods and services, and making the sellers directly accountable to buyers in a truly competitive marketplace.

Here’s the thing. We, the customers of Apple and the consumers of both Apple’s and Google’s free map services, are getting screwed by value-subtracting games played by both companies.

Millions of us are highly dependent on our phones’ primary maps app. From the beginning on the iPhone that app has been Google’s — or at least seemed to be. By replacing it with a shamefully lame app by the same name, Apple screwed its customers, hard. Why? Because it wanted to screw Google. And why screw Google? Because Google had been screwing both Apple and iPhone/iPad customers for the duration.

Or so I assume. I really don’t know.

A few days ago I asked a question about Apple vs. Google maps. Noting that Google’s Maps app on iPhone lacked at least two features found on Android versions of the app — adaptive turn-by-turn directions and vocalization — I wondered out loud if Google was playing a passive-aggressive game with Apple by crippling the iOS version of the app. One commenter said it was Apple’s choice not to include those features; but in a New York Times column a few days ago, David Pogue confirmed my original suspicion:

After poking around, here’s what I’ve learned.

First, why Apple dropped the old version: Google, it says, was saving all the best features for phones that run its Android software. For example, the iPhone app never got spoken directions or vector maps (smooth lines, not tiles of pixels), long after those features had come to rival phones.

Hey, if that’s the case, and if I were Apple, I’d be pissed too — and I’d want to offer a better maps app than Google’s. As an iPhone and iPad user, I’ve been annoyed for years at Google for obviously crippling its iOS Maps app. (Datum: I’m also an Android user.) But now it bothers me a lot more that Google hardly seems to mind that Apple killed the Google-sourced Maps app for the entire iOS 6 user base. Why would Google be so blasé? One big reason is that Apple’s users pay nothing for the app. And, because users pay nothing, Google can ignore those users’ suffering while relishing the sight of Apple embarrassing itself.

To fully understand what’s going on here, it is essential to respect the difference between customers and users (aka consumers). Customers pay. By not paying, and functioning only as a user, you have little if any economic leverage. Worse, you’re the product being sold to the actual customers, which are advertisers.

This Google vs. Apple thing reminds me of my days in commercial broadcasting. There too consumers and customers were different populations. Consumers were listeners and viewers whose ears and eyeballs were sold to advertisers, who were the real customers. Listeners and viewers had no leverage when a station or a network got in the mood to kill a format, or a show. We’re in the same spot here, at least in respect to Google.

With Apple it’s different, because iPhone and iPad users are actual customers of Apple. Now chagrined, Apple is pressing that advantage, starting with Tim Cook’s open letter to customers. An excerpt:

We are extremely sorry for the frustration this has caused our customers and we are doing everything we can to make Maps better.

We launched Maps initially with the first version of iOS. As time progressed, we wanted to provide our customers with even better Maps including features such as turn-by-turn directions, voice integration, Flyover and vector-based maps. In order to do this, we had to create a new version of Maps from the ground up.

There are already more than 100 million iOS devices using the new Apple Maps, with more and more joining us every day. In just over a week, iOS users with the new Maps have already searched for nearly half a billion locations. The more our customers use our Maps the better it will get and we greatly appreciate all of the feedback we have received from you.

While we’re improving Maps, you can try alternatives by downloading map apps from the App Store like Bing, MapQuest and Waze, or use Google or Nokia maps by going to their websites and creating an icon on your home screen to their web app.

If you buy an iPhone you’re already paying for the Maps app. So this post is mostly for Google. While I think an apology is owed to iPhone and iPad users, for withholding features just to disadvantage those devices against Android (if in fact that’s what happened… I still don’t know for sure), I’d rather see Google offer Google Maps for sale, at a fair price, in the Apple App Store. And I’d like to see Apple approve that product for sale, pronto.

Trust me: plenty of customers will pay. Google will not only drive home the real value of its Maps app (and all the good work behind it), but get some long-overdue practice at doing real customer service. Google’s high dependence on a single source of revenue — advertising — is a vulnerability that can only be reduced by broadening the company’s businesses. The future of selling direct has been looming at Google for a long time. There is a great opportunity, right now, to do that in a big way with Google Maps.

Data wants to be free, but value wants to be paid for. Let us pay. We’re the damned market. Let us help you work out the kinks in your products. Develop real relationships with us, and provide real customer support that’s worth what we pay for it.

[Later…] Some tweets, sort of threaded:

@Owen Barder: @carlkalapesi @dsearls seems to be wrong to say that Google has until now had it’s app in IOS. It was an Apple app. [Link.]

@Kevin Marks: No, @dsearls, the old Maps app on iPhone was written by Apple, using Google APIs. Apple vetoed Google’s own app in ’09. [Link]

@Jamie Starke: @kevinmarks @dsearls citation needed [Link]

@Kevin Marks: @jamiestarke @dsearls http://wireless.fcc.gov/releases/9182009_Google_Filing_iPhone.pdf … Google Latitude was rejected because Apple believed it could replace the preloaded maps app (p3) [Link]

So are you (Owen and Kevin) saying David Pogue got bad info from Apple in the piece quoted above?

Either way, the question then is, Who crippled the old Maps app? Was it Google, Apple, or both? Also, Why?

I still stand by my recommendation that Google offer the map for sale on iOS. And on Android too, for the reasons I give above.

Meanwhile, somebody ought to put up a post, or a site, explaining the particulars of this case. Such as whose app Maps was, and is now. Most stories (seems to me) about the fracas say the old app was Google’s. If it wasn’t, and was instead an Apple map fed by the Google API, that needs to be made clear. I’m still fuzzed around the details here.

[Later (1 October)…] Christina Bonnington in Wired says it was Apple’s decision not to include turn-by-turn directions in the Maps app. She writes,

When iOS first launched in the iPhone in 2007, Apple embraced Google Maps as its mapping back-end. But over the years, rivalry between the tech giants increased to a fever pitch. So it’s likely that Apple decided some years ago to eventually abandon Google Maps, and create its own platform. And because Apple knew it was eventually going to drop Google as its back-end, there was no point in pushing further innovation or integration with the system doomed to a limited lifespan.

But do I believe her, just because she’s writing for Wired? Do I believe David Pogue, just because he’s writing for the NY Times? Obviously, they don’t agree. At least one is wrong about whether the Maps app was crippled by Google (says David) or Apple (says Christina). At this point I can’t believe either of them. For that I’ll need, at the very least, a quote from a source who knows. I mean, really knows.

Mother Jones‘ original slogan was, “You trust your mother. But you cut the cards.” So here’s my card-cutting: I want hard facts on exactly what happened here. Who made the decision not to include turn-by-turn and voice directions in the Maps app on iOS? It had to have been Apple, Google or some combination of both. Which was it? How? And why?

[Later (2 October)…] In Voice navigation killed Apple-Google maps talks, John Paczkowski of Fox News does the best job I’ve seen yet of pulling the covers back on what actually happened:

Google Chairman Eric Schmidt said Apple should have continued to use Google’s mapping application in iOS 6 instead of swapping it out for its poorly received home-brewed replacement, and given the sour reception Apple’s Maps app has been given, he may have been right.

But multiple sources familiar with Apple’s thinking say the company felt it had no choice but to replace Google Maps with its own, because of a disagreement over a key feature: Voice-guided turn-by-turn driving directions.

Spoken turn-by-turn navigation has been a free service offered through Google’s Android mobile OS for a few years now. But it was never part of the deal that brought Google’s Maps to iOS. And sources say Apple very much wanted it to be. Requiring iPhone users to look directly at handsets for directions and manually move through each step — while Android users enjoyed native voice-guided instructions — put Apple at a clear disadvantage in the mobile space…

Apple pushed Google hard to provide the data it needed to bring voice-guided navigation to iOS. But according to people familiar with Google’s thinking, the search giant, which had invested massive sums in creating that data and views it as a key feature of Android, wasn’t willing to simply hand it over to a competing platform.

And if there were terms under which it might have agreed to do so, Apple wasn’t offering them. Sources tell AllThingsD that Google, for example, wanted more say in the iOS maps feature set. It wasn’t happy simply providing back-end data. It asked for in-app branding. Apple declined. It suggested adding Google Latitude. Again, Apple declined. And these became major points of contention between the two companies, whose relationship was already deteriorating for a variety of other reasons, including Apple’s concern that Google was gathering too much user data from the app.

“There were a number of issues inflaming negotiations, but voice navigation was the biggest,” one source familiar with Apple and Google’s negotiations told AllThingsD. “Ultimately, it was a deal-breaker.”

There’s more from John Paczkowski in All Things D.

So maybe we’ll never know. “Sources” will, but the rest of us won’t.

 

 

Geologists have an informal name for the history of human influence on the Earth. They call it the Anthropocene. It makes sense. We have been raiding the earth for its contents, and polluting its atmosphere, land and oceans for as long as we’ve been here, and it shows. From any objective perspective other than our own, we are a pestilential species. We consume, waste and fail to replace everything we can, with little regard for consequences beyond our own immediate short-term needs and wants. Between excavation, erosion, dredgings, landfills and countless other alterations of the lithosphere, evidence of human agency in the cumulative effects studied by geology is both clear and non-trivial.

As for raiding resources, I could list a hundred things we’ll drill, mine or harvest out of the planet and never replace — as if it were in our power to do so — but instead I’ll point to just one small member of the periodic table: helium. Next to hydrogen, it’s the second lightest element, with just two electrons and two protons. Also, next to hydrogen, it is the second most abundant, comprising nearly a quarter of the universe’s elemental mass.  It is also one of the first elements to be created out of the big bang, and remains essential to growing and lighting up stars.

Helium is made in two places: burning stars and rotting rock. Humans can do lots of great stuff, but so far making helium isn’t one of them. Still, naturally, we’ve been using that up: extracting it away, like we do so much else. Eventually, we’ll run out.

Heavy elements are also in short supply. When a planet forms, the heaviest elements sink to the core. The main reason we have gold, nickel, platinum, tungsten, titanium and many other attractive and helpful elements laying around the surface or within mine-able distance below is that meteorites put them there, long ago. At our current rate of consumption, we’ll be mining the moon and asteroids for them. If we’re still around.

Meanwhile the planet’s climates are heating up. Whether or not one ascribes this to human influence matters less than the fact that it is happening. NASA has been doing a fine job of examining symptoms and causes. Among the symptoms are the melting of Greenland and the Arctic. Lots of bad things are bound to happen. Seas rising. Droughts and floods. Methane releases. Bill McKibben is another good source of data and worry. He’s the main dude behind 350.org, named after what many scientists believe is the safe upper limit for carbon dioxide in the atmosphere: 350 parts per million. We’re over that now, at about 392. (Bonus link.)

The main thing to expect, in the short term — the next few dozen or hundreds of years — is rising sea levels, which will move coastlines far inland for much of the world, change ecosystems pretty much everywhere, and alter the way the whole food web works.

Here in the U.S., neither major political party has paid much attention to this. On the whole the Republicans are skeptical about it. The Democrats care about it, but don’t want to make a big issue of it. The White House has nice things to say, but has to reconcile present economic growth imperatives with the need to save the planet from humans in the long run.

I’m not going to tell you how to vote, or how I’m going to vote, because I don’t want this to be about that. What I’m talking about here is evolution, not election. That’s the issue. Can we evolve to be symbiotic with the rest of the species on Earth? Or will we remain a plague?

Politics is for seasons. Evolution is inevitable. One way or another.

(The photo at the top is one among many I’ve shot flying over Greenland — a place that’s changing faster, perhaps, than any other large landform on Earth.)

[18 September…] I met and got some great hang time with Michael Schwartz (@Sustainism) of Sustainism fame, at PICNIC in Amsterdam, where we found ourselves of one, or at least overlapping, mind on many things. I don’t want to let the connection drop, so I’m putting a quick shout-out here, before moving on to the next, and much-belated, post.

Also, speaking of the anthropocene, dig The ‘Anthropocene’ as Environmental Meme and/or Geological Epoch, in Dot Earth, by Andrew Revkin, in The New York Times. I met him at an event several years ago and let the contact go slack. Now I’m reeling it in a bit. 🙂 Here’s why his work is especially germane to the topic of this here post: “Largely because of my early writing on humans as a geological force, I am a member of a working group on the Anthropocene established by the Subcommission on Quaternary Stratigraphy.” Keep up the good work, Andy.

Just discovered YouReputation while checking on what Drazen Pantic has been up to. (I met Drazen a decade ago while researching public Wi-Fi in New York for Linux Journal.) YouReputation is Drazen’s “viral search” engine. Here is the top result in a search for “John Hagel”:

Thu Aug 23 06:33:50 2012
Viral Probability: 0.7092
Sentiment: 31% POSITIVE, 0% NEGATIVE

Demographics prediction: 45-60
Pinterest / John Hagel’s followers
Jul 22, 2012 … John Hagel. I live and work on the edge – the views are breathtaking, the experiences deep and satisfying and the learning is limitless.
Viral Impact:  Sentiment: POSITIVE

Here are additional searches for Scoble, Robert Scoble, Jonathan Zittrain, IdentityWoman, Kaliya Hamlin, Stewart Brand, danah boyd, Drazen and myself. The one thing I love about this is that it says I fall in the same demographic as Scoble (18-30), and that both Scoble and I appear younger to Drazen’s algorithm than Robert Scoble (30-45). A few weeks back, on a Gillmor Gang, after getting some age-ist flack from Robert, I yelled back at him (like the juvenile I still am), “I’ve been young a lot longer than you have!” Stewart Brand, older than me in years, also comes in at 18-30.

Drazen is a mathematician as well as a hacker, which I’m sure is a big reason YouReputation exists. I just hope he doesn’t use these findings to tweak the results. Keep me young, okay?

I worked in retailing, wholesaling, journalism and radio when I was 18-24.

I co-founded an advertising agency when I was 25-34. Among the things I studied while working in that age bracket were Nielsen and Arbitron ratings for radio and TV. Everything those companies had to say was fractioned into age brackets.

The radio station I did most of that work for was WQDR in Raleigh, one of the world’s first album rock stations. Its target demographic was 18-34. It’s a country station now, aimed at 25-54.

Other “desirable” demographics for commercial media are 18-49 and 25-49.

The demographic I entered between the last sentence and this one, 65+, is the last in the usual demographic series and the least desirable to marketers, regardless of the size of the population in it, and the disposable wealth it is ready to spend.

Thus I have now fallen over the edge of a demographic cliff, at the bottom of which is little of major interest to marketers, unless they’re hawking the cushy human equivalent of parking lots. You know: cruises, golf, “lifestyle communities,” “erectile dysfunction,” adult diapers, geriatric drugs, sensible cars, dementia onset warnings…

For individuals, demographics are absurd. None of us are an age, much less a range of them. We’re animals who live and work and have fun and do stuff. Eventually we croak, but if we stay healthy we acquire wisdom and experience, and find ourselves more valuable over time.

Yet we become less employable as we climb the high end of the demographic ladder, but not because we can’t do the work. It’s mostly because we look old and our tolerance for bullshit is low. Even our own, which is another bonus.

Nearly 100% of the people I work with are younger than me, usually by a generation or two. I almost never feel old among them. Sometimes I joke about it, but I really don’t care. It helps to have been around. It helps to know how fast and well the mighty rise, and then fall. It helps to see what comes and stays, and to know why those things matter more than what comes and goes. It helps to know there are sand dunes older than any company born on the Internet.

For most of my life I’ve worked in the most amazing industry the world has ever hosted. Technology is a miracle business. Lots of good new things come and go, but three aren’t sand dunes. They’re staying for the duration. I knew they would when I saw each arrive and then fail to leave. They were things nobody owned, everybody could use and anybody could improve. For all three reasons they supported boundless economic growth and other benefits to society. They are:

  1. The personal computer
  2. The internet
  3. The smartphone.

All three were genies that granted wishes without end, and weren’t going back in their bottles.

Yeah, they all had problems and caused many more. They were like people that way. But these two graces — computing and worldwide communication ease — in your pocket or purse, are now as normal as wearing shoes. Nobody owns the design for those either. Also, everyone can use them and anyone can improve them. That’s pretty freaking cool, even though it’s hardly appreciated.

I could go on but I’ll let this interview with Dorie Clark suggest the rest. I’ve gotta sleep before we hit the road early in the morning to celebrate the beginning of the rest of my life. May yours be at least as long. And as good.

 


When I was a kid I had near-perfect vision. I remember being able to read street signs and license plates at a distance, and feeling good about that. But I don’t think that was exceptional. Unless we are damaged in some way, the eyes we are born with tend to be optically correct. Until… what?

In my case it was my junior year in college. That’s when I finally became a good student, spending long hours reading and writing in my carrel in the library basement, in bad fluorescent light, cramping my vision at a single distance the whole time. Then, when I’d walk out at the end of the day or the evening, I’d notice that things were a little blurry at a distance. After a few minutes, my distance vision would gradually clear up. By the end of the year, however, my vision had begun to clear up less and less. By the end of my senior year, I needed glasses for distance: I had become myopic. Nearsighted. I remember the prescription well: -0.75 dioptres for my left eye and -1.00 dioptres for my right.

I then began the life of a writer, with lots of sitting still, reading things and writing on a typewriter or (much later) a computer. Since I tended to wear glasses full-time, the blurred distance vision when work was done — and then the gradual recovery over the following minutes or hours — continued. And my myopia gradually increased. So, by the time I reached my forties, I was down to -3 dioptres of correction for both eyes.

A digression into optics… “Reading” glasses, for hyperopia, or farsightedness, are in positive dioptres: +1, +2, etc. As magnifiers, they tend toward the convex, thicker in the middle and thinner toward the edges, or frames. Corrections for myopia tend toward the concave, thicker on the edges. You can sort-of see the thick edges of my frames in the YouTube video above, shot in June, 1988, when I was a month away from turning 42 (and looked much younger, which I wish was still the case). My glasses were Bill Gates-style aviators.
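For the curious, here’s a rough sketch of the arithmetic behind those numbers (standard thin-lens optics, nothing specific to my own case):

  lens power in dioptres:  D = 1/f, where f is the focal length in meters
  far point of an uncorrected myopic eye:  about 1/|D| meters
  -0.75 D  →  1/0.75 ≈ 1.3 meters
  -3.00 D  →  1/3 ≈ 0.33 meters

So a -3 prescription means everything much past a third of a meter blurs without correction.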

I also began to conclude that myopia, at least in my case, was adaptive. It made sense to me that the most studious kids — the ones who read the most, and for the longest times each day — wore glasses, almost always for myopia.

So I decided to avoid wearing glasses as much as I could. I would wear none while writing and reading (when I didn’t need them), and only wear them for driving, or at other times when distance vision mattered, such as when watching movies or attending sports events. Over the years, my vision improved. By the time I was 55, I could pass the eye test at the DMV, and no longer required glasses for driving. In another few years my vision was 20/25 in one eye and 20/30 in the other. I still had distance glasses (mostly for driving), but rarely used them otherwise.

I’ve been told by my last two optometrists that most likely my changes were brought on by onset of cataracts (which I now have, though mostly in my right eye), and maybe that was a factor, but I know of at least two other cases like mine, in which myopia was reduced by avoiding correction for it. And no optometrist or ophthalmologist I visited in my forties or fifties noted cataracts during eye examinations. But all have doubted my self-diagnosis of adaptive myopia.

Now I read stories like, “Why Up to 90% of Asian Schoolchildren Are Nearsighted: Researchers say the culprit is academic ambition: spending too much time studying indoors and not enough hours in bright sunlight is ruining kids’ eyesight“… and the provisional conclusion of my one-case empirical study seems, possibly, validated.

It also seems to me that the prevalence of myopia, worldwide, is high enough to make one wonder if it’s a feature of civilization, like cutting hair and wearing shoes.

I also wonder whether Lasik is a good idea, especially when I look at the large number of old glasses,  all with different prescriptions, in my office drawer at home. What’s to stop one’s eyes from changing anyway, after Lasik? Maybe Lasik itself? I know many people who have had Lasik procedures, and none of them are unhappy with the results. Still, I gotta wonder.

 

My son remembers what I say better than I do. One example is this:

I uttered it in some context while wheezing my way up a slope somewhere in the Reservation.

Except it wasn’t there. Also I didn’t say that. Exactly. Or alone. He tells me it came up while we were walking across after getting some hang time after Mass at the . He just told me the preceding while looking over my shoulder at what I’m writing. He also explains that the above is compressed from dialog between the two of us, at the end of which he said it should be a bumper sticker, which he later designed, sent to me and you see above.

What I recall about the exchange, incompletely (as all recall is, thanks to the graces and curses of short term memory), is that I was thinking about the imperatives of invention, and why my nature is native to Silicon Valley, which exists everywhere ideas and ambition combine and catch fire.

Through my work over the years I have often been directed to the worlds of Elinor Ostrom, and toward speaking to her in person. Alas, the latter choice is now off the table. She died yesterday, at 78, of pancreatic cancer.

On Monday evening, in the Q&A during my talk, I was asked about the relevance of Ostrom’s work to mine around VRM and The Intention Economy. I answered, with regret, that my sourcing of Ostrom was limited to a bibliography entry, after I had to reduce the curb weight of the book from 120,000 words to 80,000. So here’s one section, recovered from the cutting room floor:

In Governing the Commons (1990), Elinor Ostrom says Hardin’s argument is not new:

Aristotle long ago observed that “what is common to the greatest number has the least care bestowed upon it. Everyone thinks chiefly of his own, hardly at all of the common interest” (Politics Book II, ch. 3). Hobbes’s parable of man in a state of nature is a prototype of the tragedy of the commons: Men see their own good and end up fighting one another…[1]

She goes on to cite a long list of other sources, the growing sum of which have long since snowballed into a single widely held conclusion: “Much of the world is dependent on resources that are subject to the possibility of a tragedy of the commons.”[2]

Yet Hardin’s model, she explains, is an argument of one very narrow kind: a prisoner’s dilemma, “conceptualized as a noncooperative game in which all players possess complete information … When both players choose their dominant strategy… they produce an equilibrium that is the third-best result for both.” The game is fascinating for scholars because “The paradox that individually rational strategies lead to collectively irrational outcomes seems to challenge the fundamental faith that rational beings can achieve rational results.” She adds, “The deep attraction of the dilemma is further illustrated by the number of articles written about it. At one count, 15 years ago, more than 2,000 papers had been devoted to the prisoner’s dilemma game (Grofman and Pool 1975).”[3]
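[An aside for readers who haven’t met the game: a minimal payoff matrix (the standard textbook numbers, not Ostrom’s) shows what “third-best” means here. Each entry is (A’s outcome, B’s outcome), in years of prison counted as negatives:

                   B cooperates    B defects
    A cooperates     -1, -1          -3,  0
    A defects         0, -3          -2, -2

Defecting is each player’s dominant strategy: 0 beats -1 if the other cooperates, and -2 beats -3 if the other defects. So both defect, and each lands on -2, the third-best of the four outcomes either player faces, behind 0 and -1. That is the collectively irrational outcome in miniature.]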

Ostrom, however, doesn’t challenge Hardin’s assumption that common pool resources and a commons are the same thing.[4] Lewis Hyde does. In Common as Air (2010), he makes a thoroughly argued case against both Hardin’s tragedy-prone commons and idealized models, such as what he calls John Locke’s “aboriginal first condition” and Lawrence Lessig’s “dreams of plenitude.” What Hyde argues for is something much more complex, subtle and—I believe—important to understand if we are to make the most of the Internet.

“I take a commons to be a kind of property,” Hyde writes, “and I take ‘property’ to be, by one old dictionary definition, a right of action,” noting “that ownership rarely consists of the entire set of possible actions.”


[1] Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (New York: Cambridge University Press, 1990), 2-3.
[2] Ibid., 3.
[3] Ibid., 4-5.

[4] In fairness, Hyde notes, “Garrett Hardin has indicated that his original essay should have been titled ‘The Tragedy of the Unmanaged Commons,’ though better still might be ‘The Tragedy of Unmanaged, Laissez-Faire, Common-Pool Resources with Easy Access for Noncommunicating, Self-Interested Individuals.’” (Common as Air, 44.) [Links added.]

The final version focuses entirely on Lewis Hyde’s work, which I believe encompasses Elinor Ostrom’s, at least for my purposes in the book. Still, leaving her out seems especially regrettable now.

And I encourage study of her work. Our common pool resources, which are many and of transcendent importance, are well served by her original thinking about them.

Bonus linkage:

I was interviewed for a story recently. (It’s still in the mill.) In the correspondence that followed, the reporter asked me to clarify a statement: “that the idea of selling your data is nuts.” I didn’t remember exactly what I said, so I responded,

I think what I meant was this:

1) The use value of personal data so far exceeds its sale value that it’s insane to compare the two.

Especially because …

2) There has never been a market in which individuals sell their personal data, and the fact that marketers are sneakily getting that data for free doesn’t mean there should be one.

Especially because …

3) The sums paid by marketers for personal data are actually tiny on a per-person basis.

4) Selling one’s personal data amounts to marketing exposure of one’s self. It’s like stripping, only less sexy. And for a lot less money.

And added a pointer to For personal data, use value beats sale value.
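To put rough numbers on point 3, here’s a back-of-envelope sketch in Python. Every figure in it is a placeholder I’ve made up for illustration, not a number from any marketer or data broker; the point is the shape of the comparison, not the dollars.

    # Back-of-envelope: sale value vs. use value of personal data.
    # All figures are hypothetical placeholders, chosen only to show scale.

    profile_price = 0.50     # suppose a full personal profile fetches 50 cents per sale
    resales_per_year = 20    # and gets resold 20 times a year
    sale_value = profile_price * resales_per_year    # $10.00 per person per year

    # Use value: what your own data is worth to you (contacts, records,
    # preferences, history). Pick any lowball figure for what you'd pay
    # rather than lose it all:
    use_value = 1000.0       # say, $1,000 per year

    print(f"sale value: ${sale_value:.2f}/year")           # sale value: $10.00/year
    print(f"use value:  ${use_value:.2f}/year")            # use value:  $1000.00/year
    print(f"ratio:      {use_value / sale_value:.0f}x")    # ratio:      100x

Swap in whatever placeholders you like; unless the sale price grows by orders of magnitude, point 1 still holds.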
