Past



While walking around Paris for the last month, I’ve become fascinated by the highly fossiliferous limestone that comprises so many of its iconic structures. At one point I thought, Hmm… The City of Light is built with materials of death. I had no idea how much farther that thought would take me.

Without abundant death we wouldn’t have asphalt, concrete, marble, travertine, chert, oil, coal or countless other graces of civilization. Still, there seemed to be an unusual abundance of limestone in use here, and I wondered where it came from. Naturally, from my 21st century perspective, I assumed that all the stone had been quarried in some other place: hills outside of town, perhaps. (Lutetian limestone, it’s called, and it’s a relatively new rock: only a few dozen million years old. Younger than dinosaurs. It’s also known as “Paris stone”, and has become quite the fashion item lately.) What I hadn’t figured was that nearly all of this building stone, for many centuries, was extracted from beneath Paris itself.

I didn’t learn that fact until we visited the Catacombes a couple days ago.

The Catacombes are bone banks called ossuaries. They occupy abandoned quarries beneath Paris and contain the remains of more than six million people. Many of the deceased are surely the same men (and women? probably) who carved out the quarries, mostly in the first several centuries of the last millennium. It must have been quite a project, since they withdrew enough rock to assemble Notre Dame, thousands of other churches large and small, bridges, city walls and homes — and left beneath the streets of Paris more than 300 kilometers (about 186 miles) of tunnels, including rooms and vaults that together comprise a vast man-made cave system. Top to bottom, a vertical cross-section of Paris looks like this:

  • Surface — streets, buildings, parks
  • Metro tunnels
  • Sewers
  • Quarries

Fossils are bones of stone, I explained to my kid. And limestones are stones of bone. Here in the Catacombes, down hallways that go on and on and on and on, the bones of dead Parisians are stacked into walls, with an artistry that makes one wonder what was going on in the heads of the masons. The walls facing the halls and passing visitors are built mostly with femurs and skulls. The femurs are stacked and interlocked, with the knee knuckles outward, course after course forming a pattern like stitches in a cloth. These are interrupted by horizontal lines of skulls, and usually topped with a final row: a crowning course of human heads. Here and there some arm bones might be used, but femurs and skulls were clearly the preferred building material. Behind these walls lie the rest of the bones: remains of remains.

The masons were priests. The bones were gathered from the city’s cemeteries, which had become rotten with an abundance of corpses as the end of the 18th century approached. That’s when it was decided to move the bones down into deeper graves. The quarries were empty, so the bones came down. The whole project went in stages, running from the late 1700s to the middle 1800s. The priests, whose jobs already required exceptional respect for the dead, were conscripted for the work.

The pictures in my collection (such as the one above) aren’t the best I’ve taken. Most of the light was provided by dim illumination in the Catacombes themselves, or by cell phones. If you wish to know more (and I recommend it), here’s a pile of fascinating links:

Since one walks through the tunnels in the company of others, it is less creepy than you might think. After a while, endless aisles of bones also tend to make the bones themselves ordinary. Yet one wonders: Is this skull Robespierre’s? Danton’s? Both lost their heads at the guillotine, but down here all heads are equally ordinary and anonymous, fully respected, but still just building material.

A lesson: different as we all are in life, we are remarkably identical in death. Skulls tend to all look the same. So do other bones. One can say, These were babies once. Then laughing children. They grew up, learned about life, and lived long enough to produce more babies and get work done. And what they’ve left is no different than what everybody else leaves.

What makes us animals is that we eat other living things. (We need their carbon.) We live on things that lived. And we build with them too. Death supplies us. In turn, we supply as well. And all our turns will come.

What makes us different is who and what we are, and what we do, when we’re alive. Life is for the living. And so, it turns out, is death.


We are what we do.

We are more than that, of course, but it helps to have answers to the questions “What do you do?” and “What have you done?”

Among the many other notable things Persephone did was survive breast cancer. It was a subject that came up often during the year we shared as fellows at the Berkman Center. It may not have been a defining thing, but it helped build her already strong character. Persephone also said she knew that her personal war with the disease might not be over. The risks for survivors are always there.

So it was not just by awful chance that Persephone showed up at a Berkman event this Spring wearing a turban. She was on chemo, she said, but optimistic. Thin and frail, she was still pressing on with work, carrying the same good humor, toughness, intelligence and determination.

The next time I saw her, in early June, she looked worse. Then, on June 24, Ethan Zuckerman sent an email to Berkman friends, letting us know that Persephone’s health was diminishing quickly, and that she “probably will not live through July.” He also said that she had moved to a hospice, but was doing well enough to read email and accept a few visitors — and that he had hoped to visit her on July 6. Just five days later, Ethan wrote to say that Persephone had died the night before. I had been working in slow motion on an email to her — thinking, I guess, that Ethan’s July 6 date was an appointment she would keep. This post began as that email.

Persephone is gone, but her work isn’t, and that’s what I want to talk about. It’s a subject I wanted to bring up with her, and one I’m sure all her friends care about. We all should.

What I want to talk about is not “carrying on” the work of the deceased in the usual way that eulogizers do. What I’m talking about is keeping Persephone’s public archives in a published, accessible and easily found state. I fear that if we don’t make an effort to do that — for everybody — we’ll lose them.

The Web went commercial in 1995, and has only become more so since. Today it is a boundless live public marketplace, searched mostly through one company’s engine, which continues to adapt accordingly. While Google’s original mission (“to organize the world’s information and make it universally accessible and useful”) persists, its commercial imperatives cannot help but subordinate its noncommercial ones.

In my own case I’m finding it harder and harder to use Google (or any search engine) to find my own archived work, even if there are links to it. The Live Web, which I first wrote about in 2005, has come to be known as the “real time” Web, which is associated with Twitter and Facebook as well as Google. What’s live, what’s real time, is now. Not then.

Today almost no time passes between the publishing of anything and its indexing by Google. This is good, but it is also aligned with commercial imperatives that emphasize the present and dismiss the past. No seller has an interest in publishing last week’s offerings, much less last year’s or last decade’s. What would be the point?

It would help if there were competition among search engines, or more specialized ones, but there’s not much hope for that. Bing’s business model is the same as Google’s. And the original Live Web search engines — Technorati, PubSub, Blogpulse, among others — are gone or have moved on to other missions. Perhaps ironically, Technorati maintained an archive of all blogging for half a decade. But I’ve been told that’s gone. Blogpulse is still there, but re-cast as a news engine. Only IceRocket persists as a straightforward Live Web engine, sustained, I suppose, by Mark Cuban‘s largesse. (For which I thank him. IceRocket is outstanding.)

For archives we have two things, it seems. One is search engines concerned mostly about the here and now, and the other is Archive.org. The latter does an amazing job, but finding stuff there is a chore if you don’t start with a domain name.

Meanwhile I have no idea how long tweets last, and no expectation that Twitter (or anybody other than a few individuals) will maintain them for the long term. Nor do I have a sense of how long anything will (or should) last inside Facebook, Linkedin or any other commercial walled garden.

To be fair, everything on the Web is rented, starting with domain names. I “own” my domain name only for as long as I keep paying a domain registrar for the rights to use it. Will it stay around after I’m gone? For how long? All of us rent our servers, even if we own them, simply because they use electricity, take up space and need to be maintained. Who will do that after their paid-for purposes expire? Why? And again, for how long?

Persephone worked for years at Internews.org. I assume her work there will last as long as the organization does. Here’s the Google cache of her Key Staff bio. Her tweets (her last was June 9th) will persist as long as Twitter doesn’t bother to get rid of them, I suppose. Here’s a Google search for her name. Here’s her Berkman alum page. Here’s her Linkedin. Here are her Delicious bookmarks. More to the point of this post, here’s her Media Re:public blog, with many links out to other sources, including her own. Here’s the Media Re:public report she led. And here’s an Internews search for Persephone, which has five pages of results.

All of this urges us toward a topic and cause that was close to Persephone’s mind and heart: journalism. If we’re serious about practicing journalism on the Web, we need to preserve it at least as well as we publish it.
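One place anyone can start is the Internet Archive. Here is a minimal sketch in Python, assuming the Wayback Machine’s public save endpoint (as I understand it, fetching web.archive.org/save/ followed by a URL asks for a capture); the page list is just a stand-in for links like the ones above.

    import requests  # third-party: pip install requests

    # Hypothetical stand-ins for the kinds of public pages worth preserving.
    pages = [
        "http://www.internews.org/",
        "http://cyber.law.harvard.edu/",
    ]

    for url in pages:
        # Fetching /save/<url> asks the Wayback Machine to capture the page.
        resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
        print(url, "->", resp.status_code)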


…in the mid-Atlantic and New England states between 1800 and 1830, turnpike companies accounted for 27 percent of all business incorporations.

That’s from Turnpikes and Toll Roads in Nineteenth-Century America, by Daniel B. Klein, Santa Clara University and John Majewski, University of California Santa Barbara. A long entry in EH.net, an encyclopedia of economic history, it unpacks a remarkable but mostly-forgotten stage of business growth in North America. In reading it I wonder what relevance it might have to the current problems regarding the financing, deployment and running of Internet infrastructure, especially for rural areas that are outside the geographic scope of telephone and cable company ambitions. Take this paragraph, for example:

For Americans looking for better connections to markets, the poor state of the road system was a major problem. In 1790, a viable steamboat had not yet been built, canal construction was hard to finance and limited in scope, and the first American railroad would not be completed for another forty years. Better transportation meant, above all, better highways. State and local governments, however, had small bureaucracies and limited budgets which prevented a substantial public sector response. Turnpikes, in essence, were organizational innovations borne out of necessity – “the states admitted that they were unequal to the task and enlisted the aid of private enterprise” (Durrenberger 1931, 37).

The operative phrase is in the first sentence: better connections to markets. These were markets of the literal sort: places where people gathered to carry out business and other social activities. Most of the civilized world was country then. The most common business was farming. As farming spread farther from cities, the largest of which were ports with harbors or astride navigable inland waters, means had to be found to finance the building and operating of roads. Another two paragraphs:

Throughout the nineteenth-century, the United States was notoriously “land-rich” and “capital poor.” The viability of turnpikes shows how Americans devised institutions – in this case, toll-collecting corporations – that allowed them to invest precious capital in important public projects. What’s more, turnpikes paid little in direct dividends and stock appreciation, yet still attracted investment. Investors, of course, cared for long-term economic development, but that does not account for how turnpike organizers overcame the important public goods problem of buying turnpike stock. Esteem, social pressure, and other non-economic motivations influenced local residents to make investments that they knew would be unprofitable (at least in a direct sense) but would nevertheless help the entire community. On the other hand, the turnpike companies enjoyed the organizational clarity of stock ownership and residual returns. All companies faced the possibility of pressure from investors, who might have wanted to salvage something of their investment. Residual claimancy may have enhanced the viability of many projects, including communitarian projects undertaken primarily for use and esteem.

The combining of these two ingredients – the appeal of use and esteem, and the incentives and proprietary clarity of residual returns – is today severely undermined by the modern legal bifurcation of private initiative into “not-for-profit” and “for-profit” concerns. Not-for-profit corporations can appeal to use and esteem but cannot organize themselves to earn residual returns. For-profit corporations organize themselves for residual returns but cannot very well appeal to use and esteem.

The Internet is to commerce today what the oceans were two centuries ago: essential means for connecting between distant places and moving goods. While physical goods must still move by physical means, digital goods and the digital communications surrounding them move via the Internet, which connects over a variety of wired (copper, fiber) and wireless paths. Every town and district with absent or small data transport capacities today faces the same kinds of choices that were met by the toll roads two hundred years ago.

In the long run toll roads proved a temporary solution to a permanent problem that could be solved other ways. But I wonder what we still might learn from them, at least for regions where the problems are similar, and the solutions likely to be just as temporary.

We’ve seen this movie: the one where a big company takes over a whole market ecosystem. There was IBM with mainframes, Microsoft with operating systems, Apple with pocket music players (and now apps for phones and tablets).

But there’s another movie too. That’s the one where the big company fails. IBM did that with PCs. (They started the ball rolling, but no longer even make the things.) Apple did it with PDAs, when the Newton flopped. And Microsoft, even in its glory days, failed at a lot of things.

One big one was directories. All but lost in the sands of time is Netscape’s lone victory over a Microsoft move to make everybody in the world use Active Directory. That story was told by Craig Burton in an interview I did for the late Websmith (later merged into Linux Journal) fourteen years ago this month.

Another was identity, and single sign-on. Microsoft tried that with Hailstorm, and flopped.

And now comes Facebook with social graphs, which Barrett Sheridan calls a Play to Take Over the Entire Internet, and Mark Zuckerberg (two links back) says is the “next version of Facebook Platform,” which he says “puts people at the center of the web.”

Right. Sez Mark,

We think that the future of the web will be filled with personalized experiences. We’ve worked with three pre-selected partners—Microsoft Docs, Yelp and Pandora—to give you a glimpse of this future, which you can access without having to login again or click to connect. For example, now if you’re logged into Facebook and go to Pandora for the first time, it can immediately start playing songs from bands you’ve liked across the web. And as you’re playing music, it can show you friends who also like the same songs as you, and then you can click to see other music they like.

We look forward to a future where all experiences are this easy and personalized, and we’re happy today to take the next important step to get there.

Of course, then we no longer have the Web. We have the Union of Soviet Social Graph Vendors.

This will fail, of course. Commercial containers for the Web (social or otherwise) are limited. They have rules. They are the Great Indoors, which can neither control nor compete with the Great Outdoors which is the Web itself.

But discovering this plain fact will take some time. Or, more to the point, waste it. The hard way.

As usual, Dave Winer nails the diagnostics, with Will this loop ever end? Sez Dave,

Facebook is hot now, but history has shown that being a hotbed doesn’t scale. That eventually these companies have to tap into the general talent pool and they end up achieving the same level of mediocrity as the previous dominant one. It happened to IBM, the minicomputer companies, IBM again, Microsoft, now it’s Google’s turn, and soon it will be Facebook’s.

Let’s go back to Microsoft and Hailstorm. It’s important to remember the hysteria surrounding that move. Many thought that this was The End. Here is what I wrote at the time on my blog. I just copied and pasted the html below (from Google’s cache, while the archive was offline…  somehow the bold-faced search terms give it a little extra punch, so I’m leaving them in)…

Trojan Storm

The storm has arrived, and the peerage is weighing in with its reactions.

When I first read about Hailstorm, it scared the shit out of me. (As it also did to Joel Spolsky, who gives us a fine tech-level explanation of exactly why.)

But at a deeper level — the social level where the Net connects us — I have complete faith in forces more powerful than any monopoly’s wet dream. And that’s the Net.

The Net is ours. Not Microsoft’s. Hailstorm is heavy weather, but the Net is geology. Our geology. It’s us, not just me (pun intended).

Computing isn’t personal any more. It’s social. Microsoft understands that, but it’s not where they come from. Where they come from is the desktop. Always have, always will. It’s not for nothing they’re called Microsoft.

With Hailstorm, Microsoft is doing a beautiful job of being itself. As always, they’re draping users in bountiful benefits, whether those users want them or not. That’s just what Microsoft does. They can’t help it. They come from the desktop, just like Apple comes from art and Nordstrom comes from shoes.

And they sound very convincing, because they’re busy advocating the user. You can’t go wrong there, can you?

O yeah. You always go wrong when you characterize competent human beings as weak and helpless — and then tell them your stuff is their only hope. That’s exactly what Microsoft does in the very first line of Building the User-centric Experience:

    Users are definitely not in control of the technology that surrounds them.  Asked to adapt to the differences between the way they interact with local programs and sites on the web, asked to cope with doing things completely differently on their cell phone, their PC, and any other device they have, users are generally frustrated and confused.

Like moths in a lampshade. How sad. And whose fault is that?

    If you want to enter a friend’s new phone number into your PC, you use a keyboard and a piece of software like Microsoft Outlook to do it using a particular sequence of keystrokes and mouse clicks.  But to enter that same information into your Palm Pilot, you need to learn a completely new interface – right down to relearning how to draw the letters of the alphabet!

Oh! It’s Palm’s fault! That OS is so hard to use. Not easy like Outlook, which is so encrusted with options that few users ever figure the damn thing out. (To say the least of it.) The insults continue:

    This environment, in which users are forced to adapt to technology instead of technology adapting to users, creates significant restrictions on how effective any application or Web site can be, and ultimately hinders the acceptance and adoption of not only the technologies themselves, but also the real-world products and services that might be best offered to a user in the context of the things they do online.

The environment we’re talking about here is called a market. Yes, it’s messy. Yes, it’s full of choices that don’t agree with each other. But it’s the natural habitat for business. It’s also networked to the gills. That network is where users live. Not just Windows. Not just .Net, whatever it becomes.

The Trojan Storm here isn’t Windows or even .Net. It’s Internet Explorer.

The Net is ours, indeed. But most of us interact with it through a Microsoft browser. That browser is about to get a lot fatter. That’s the only way to interpret this:

    HailStorm services are oriented around people, instead of around a specific device, application, service, or network.  They put the user in control of their own data and information, protecting personal information and making user consent the basis for who can access it, what they can do with it, and for how long they have that permission.

It’s time for us to stop acting like an audience and start acting like a market. For that we need to do three things:

  1. Work with the hackers to make Mozilla the best possible alternative to Internet Explorer — and fast.
  2. Start paying more attention and respect to other developers who are working together to make the Net something that works better for all of us (and that includes interested developers inside Microsoft — it’s a big company).
  3. Expose Hailstorm for what it is: yet another attempt by Microsoft to collapse the Net into its own service framework. And to say this won’t work because the Net’s context is bigger than any vendor, no matter how privileged they are with “critical mass.”

It’s important to remember that this is not just about Microsoft’s napoleonic corporate personality, which is equally real and beside the point, making it the biggest red herring in business history.

It’s about building out the Net’s infrastructure. .Net doesn’t do it. Hailstorm doesn’t do it. Java doesn’t do it. No “solution” controlled by one vendor will do it.

You can’t privatize what only works because it’s public. Microsoft hasn’t learned that lesson yet. Let’s help them.

And we did. Mozilla succeeded, and so have other browsers. Identity still isn’t a solved problem and may never be — at least not in the simple way one gets when the Eye of Sauron rules the world. But the very fact that good people are working on identity and related problems out in the open is endlessly encouraging.

Speaking of which, the 10th Internet Identity Workshop is happening in Mountain View next month. Microsoft is a sponsor, as are many other companies and organizations, some of which (Information Card Foundation, OpenID Foundation) grew directly or indirectly out of IIW conversations. In fact, Microsoft’s good identity work (started by Kim Cameron and colleagues there) would not have happened without Hailstorm’s failure.

If Facebook and Twitter are smart (and listen to their elders), they’ll skip the loop. Burn the movie. Get Net- and Web-compliant. Because that’s where nature will take us in the long run anyway. Let’s not keep making that run longer than it needs to be.

When Gizmodo reported on the next-generation iPhone that had come into its hands, I was as curious as the next geek about what they’d found. But I didn’t think the ends justified the means.

The story begins,

You are looking at Apple’s next iPhone. It was found lost in a bar in Redwood City, camouflaged to look like an iPhone 3GS. We got it. We disassembled it. It’s the real thing, and here are all the details.

“We got it,” they said. How?

There was much speculation about that, but obviously — if the phone was a real prototype — it must have been lost by an Apple employee. That’s why I tweeted, “Some employee is in very deep shit for letting this happen: http://bit.ly/bVN5Ma” But others wondered. Was it planted by Apple? That’s what, for example, Howard Stern guessed on his show yesterday morning. He thought it was a brilliant marketing move by Apple.

But Gizmodo set the record straight, through a much-updated piece titled How Apple lost the next iPhone. After telling the story, at length, of how Gray Powell, an Apple employee, had left it at a restaurant (“The Gourmet Haus Staudt. A nice place to enjoy good German lagers”), Gizmodo unpacks the means by which the phone came into their possession:

There it was, a shiny thing, completely different from everything that came before.

He reached for a phone and called a lot of Apple numbers and tried to find someone who was at least willing to transfer his call to the right person, but no luck. No one took him seriously and all he got for his troubles was a ticket number.

He thought that eventually the ticket would move up high enough and that he would receive a call back, but his phone never rang. What should he be expected to do then? Walk into an Apple store and give the shiny, new device to a 20-year-old who might just end up selling it on eBay?
The Aftermath
Weeks later, Gizmodo got it for $5,000 in cash. At the time, we didn’t know if it was the real thing or not. It didn’t even get past the Apple logo screen. Once we saw it inside and out, however, there was no doubt about it. It was the real thing, so we started to work on documenting it before returning it to Apple. We had the phone, but we didn’t know the owner. Later, we learnt about this story, but we didn’t know for sure it was Powell’s phone until today, when we contacted him via his phone.

The apparent purpose of the story is to save Gray Powell’s ass, and to cover some of Gizmodo’s as well. It concludes,

He sounded tired and broken. But at least he’s alive, and apparently may still be working at Apple—as he should be. After all, it’s just a stupid iPhone and mistakes can happen to everyone—Gray Powell, Phil Schiller, you, me, and Steve Jobs.

The only real mistake would be to fire Gray in the name of Apple’s legendary impenetrable security, breached by the power of German beer and one single human error.

Additional reporting by John Herrman; extra thanks to Kyle VanHemert, Matt Buchanan, and Arianna Reiche

Update 2: I have added the bit on the $5,000 (in italics) and how we acquired the iPhone, as Gawker has disclosed to every media outlet that asked.

Yesterday the New York Times ran iPhonegate: Lost, Stolen Or A Conspiracy?, by Nick Bilton. The gist:

One big question is how much Gizmodo paid for the phone, and whether keeping it was legal. Nick Denton, chief executive of Gawker Media, which owns Gizmodo, told The Times the site paid $5,000 for the phone. But still bloggers wondered if it had really paid $10,000.

On Monday, Charles Arthur, Technology blogger for The Guardian, said paying for the phone could mean that Gizmodo was knowingly receiving stolen goods; on Tuesday, citing the Economic Espionage Act of 1996, Mr. Arthur expanded on his theory.

This helped the debate move on to more serious matters: whether the phone was “lost,” or “stolen.” John Gruber, blogger for Daring Fireball, pointed out that in the eyes of California law, there isn’t a difference. The law states:

One who finds lost property under circumstances which give him knowledge of or means of inquiry as to the true owner, and who appropriates such property to his own use, or to the use of another person not entitled thereto, without first making reasonable and just efforts to find the owner and to restore the property to him, is guilty of theft.

The next big question — whether Gizmodo would turn over the phone to Apple — was answered after a long day of speculation on Monday over its authenticity. Gizmodo has reported that it received a letter from Apple’s legal counsel…

Gizmodo complied and returned the phone. Yesterday I tweeted, “Re: bit.ly/d0P4Vo If you found a next-gen iPhone, would you return it — or use it to pull the owner’s pants down?” Thus far, two responses:

Of course, what Gizmodo did was an example of investigative journalism at work. Mainstream journals and broadcasters sometimes pay for stories, leads, video and audio recordings, photographs. That’s not unusual. But, as Charles Arthur writes, “As a reporter – and make no doubt, Gizmodo is reporting here, actually doing journalism red in tooth and claw – you inevitably end up walking close to the edge of what’s legal every now and then. Whether it’s being in receipt of confidential information, publishing something that’s potentially defamatory, or standing closer to the front line of a protest than the police would like, you occasionally have to put yourself in some legally-risky positions.”

Many thousands of years ago on the time scale of both the Internet and journalistic practices, specifically in 1971, I wrote a story for a New Jersey newspaper about rural poverty, illustrated by a photo I took of somebody’s snow-covered yard filled with discarded appliances and half-disassembled old cars sitting on cinder blocks. I thought at the time that the photo was sufficiently generic to protect the anonymity of the home’s occupier. I was wrong. The owner called me up and let me have it. I was still a kid myself — just 22 years old — and it was a lesson that stuck with me.

A couple decades later that lesson was enlarged by “Notes Toward a Journalism of Consciousness,” by D. Patrick Miller, in The Sun, a magazine for which I had once been a regular contributor. (No links to the story, but its table of contents is here.) In it Miller recalled his work as an investigative reporter in the Bay Area, and how sometimes he had to cross a moral line. In his case it was gaining the confidence of sources he would later, in some ways, betray — for the Greater Good of the story’s own moral purposes.

Gizmodo poses the moral goodness of its own story against the backdrop of Apple’s fanatical secrecy:

And hidden in every corner, the Apple secret police, a team of people with a single mission: To make sure nobody speaks. And if there’s a leak, hunt down the traitor, and escort him out of the building. Using lockdowns and other fear tactics, these men in black are the last line of defense against any sneaky eyes. The Gran Jefe Steve trusts them to avoid Apple’s worst nightmare: The leak of a strategic product that could cost them millions of dollars in free marketing promotion. One that would make them lose control of the product news cycle.

But the fact is that there’s no perfect security. Not when humans are involved. Humans that can lose things. You know, like the next generation iPhone.

Thus the second wrong makes a write, but not a right.

Two years ago, in this post here, I wrote,

Still, I think distinctions matter. There is a difference in kind between writing to produce understanding and writing to produce money, even when they overlap. There are matters of purpose to consider, and how one drives (or even corrupts) the other.

Two additional points.

One is about chilling out. Blogging doesn’t need to be a race. Really.

The other is about scoops. They’re overrated. Winning in too many cases is a badge of self-satisfaction one pins on oneself. I submit that’s true even if Memeorandum or Digg pins it on you first. In the larger scheme of things, even if the larger scheme is making money, it doesn’t matter as much as it might seem at the time.

What really matters is … Well, you decide.

Gizmodo was acting in character here. That character is traditional journalism itself, which is no stranger to moral compromises.

I’m not saying that one must not sometimes make those compromises. We all often do, regardless of our professions. What makes journalism a special case is its own moral calling.

How high a calling is it to expose the innards of an iPhone prototype?

To help decide, I recommend the movie Absence of Malice.

Was malice absent in Gizmodo’s case? And, even if it was, is the story worth what it cost to everybody else involved — including whatever dollar amount Gizmodo paid to its source?

I submit that it wasn’t. But then, I’m not in Gizmodo’s business. I also don’t think that business is journalism of the sort we continue to idealize, even though journalism never has been as ideal as we veterans of the trade like to think it is.


And “social media” is a crock. Or perhaps an oxymoron.

Brands are boring because they’re not human. They’re companies. And, despite the recent Supreme Court decision to the contrary, companies are not human. They are abstractions that make business possible. Businesses are necessary to thriving economies and working civilizations. They are comprised of human beings and therefore have human qualities. But they are not themselves human.

The term “brand” was borrowed from the cattle industry, and came into popular use during the golden age of network radio, in the 1930s and ’40s, when large suppliers to grocery and department stores (especially detergent and tobacco companies) won space in “shelf wars” by putting one product in eight different packages and singing about the difference. Singing was a form of branding. You burned a song into consumers’ heads, so they had no choice but to recall it. “If you’ve got nothing to say, sing it,” the saying went.

Okay, hit it (in 3/4 time, and a Munich beer house spirit, flasks raised, singing loudly)…

Schaefer
Is the
One beer to have
When you’re having more than one.
Schaefer
Pleasure
Doesn’t fade
Even when your thirst is done.
The most rewarding flavor
In this man’s world
Is for people who are having fun.
Schaefer
Is the
One beer to have
When you’re having more than one.

I can’t help knowing that song because Schaefer burned it into the brains of baseball fans listening to Brooklyn Dodgers games. I know this one…

My beer is Rheingold the dry beer.
Think of Rheingold whenever you buy beer.
It’s not bitter, not sweet.
It’s the extra dry treat.
Won’t you buy extra dry Rheingold beer?

… because Rheingold advertised during Giants games.

Piels and Ballantine had less memorable jingles, though I do remember “Bert and Harry Piels,” who were actually Bob & Ray, the most dry and ironic radio comedians who ever walked the earth.

In those days it made sense to brand, because there were so few media, and — actually — so few companies. If you wanted to make beer you needed a big industrial brewery.  The Industrial Age was one in which Industry was All.

This is no longer the case.

As for social media, all media now need to be social. Mediation is between humans, some of whom are inside companies. Hence, “social media” as oxymoron. Sort of, anyway.

Meanwhile, lots of social media types are talking about brands and branding as if these were new and hip things. They’re not. They’re heavy and old. We need to move on, folks. Think of something human instead.

When a friend came back from SXSW recently, we talked about how, at the show, it was “social every fucking thing there is.” The term SEFTTI was thus coined.

We need to move past that too.


I just learned from Eric Martindale’s comment on my Borg’s Woods post in February that the March 13 storm knocked down many of the trees in the old growth urban forest that was our neighborhood playground when I was a kid. For more here’s a post in the NJUrbanForest blog, and here are some pictures as well.

Storms are as much a part of nature as old growth forests, even when the former reduces the latter. Sad to read, however, that mosquito abatement has involved the draining of the woods’ pond, where generations of kids learned to skate in a beautiful setting.

For perspective perhaps it is helpful to note that the boggy parts of Borg’s Woods are among the few vernal remnants of glacial Lake Hackensack, which pooled over most of the Hackensack River watershed when the last ice age began to end around 15,000 years ago. The lake lasted several millennia, then drained around 11,500 years ago, when the terminal moraine near Perth Amboy broke. Back then the sea was still far outside the current borders of New York and New Jersey. Only when the rest of the ice cap melted did the oceans reach their current level — which, as we know, is still rising.

Four years and one day ago, we took a trip aboard a sailboat captained by our friend John Pfarr (who a few days later would sail the same vessel to Hawaii, the South Seas and back — the dude is a serious sailor). Our modest destination was the string of oil platforms that rise above the coastal waters off Santa Barbara. These are now familiar landmarks, and are regarded with both loathing and affection, the latter especially by the sea (most obviously seal) life that abounds on the platforms’ pylons and girders, above and below the waterline.

As always, I took a lot of photos, one of which now also graces the poster for Oil + Water: The Case of Santa Barbara and Southern California, which will take place April 8 – 10, 2010 in the McCune Conference Room, 6020 HSSB, at UCSB. Specifically,

This conference will explore the ways in which oil and water have created and transformed the history and culture of Santa Barbara and Southern California. Topics will include the Santa Barbara oil spill; the impact of oil on Hollywood; agriculture and marine life; the Owens River Valley; the Salton Sea; cars and car culture; and environmental histories and their lessons.

Important stuff, and highly recommended.

Seems like all my favorite college hoops teams are playing in tournaments.

Harvard’s Crimson go up against Appalachian State tonight in the CIT.

UCSB’s Gauchos are the 15th seed in the NCAA Men’s Midwest bracket, a checkbox win for #2 seed Ohio State on Friday night.

The Quakers of my alma mater, Guilford College, are back in the Final Four of the NCAA’s Division III, after polishing off their quarterfinal opponent. They take on Williams Friday afternoon. Have a bunch of friends with Williams connections too.

My long-time fave Division I team, Duke, is the top seed in the NCAA South bracket. They play a team whose jerseys say ARPB, before facing the winner of the California-Louisville game. My daughter and a bunch of nieces and nephews are grads, so I’ll be rooting for them, should they survive.

I was a Knicks fan growing up, but I didn’t follow basketball much until I went to Guilford in 1965. North Carolina is basketball country in any case, and somehow I got into playing it there as well. Nothing serious, just pick-up intramural ball. My whole game was shooting long-range bombers, and I lacked all the other skills (dribbling, passing) one expects to go with that one. But at least I wasn’t taken last when teams were chosen, which for me was exceptionally positive feedback.

As it happened Guilford also had damn fine basketball teams the whole time I was there. They were often ranked #1 in the NAIA, and in ’68 (a year they lost in the finals to Oshkosh State) they graduated three players into the NBA. The best of those was Bob Kauffman, the #3 pick in the draft that year. Bob went on to become a 3-time All-Star, and then the head coach and general manager of the Detroit Pistons. He completed that career by making the mistake of giving Dick Vitale the head coaching job. In 1975 Guilford won the NAIA tournament with a team that included World B. Free and M.L. Carr.

My Division I sympathies were originally with Wake Forest (also in the NCAAs) since my entire coterie of North Carolina relatives were affiliated in one way or another with the school. When I moved to Chapel Hill after college, however, I became a Carolina fan. I still am. (Wake too.) But my overriding affection for Duke was born at the first pre-season game of the 1977-78 season. That was when freshmen Kenny Dennard and Gene Banks joined Jim Spanarkel, Mike Gminski and John Harrell to turn a has-been team into what would become the powerhouse it has been ever since.

But I didn’t know that then. I was working on the Duke campus in the Fall of ’77 at the time, and was invited to that game (against ) by David Hodskins, who would become my business partner for most of the following two decades. David was a Duke grad with season tickets to games at the very intense Cameron Indoor Stadium. I was his date for many of those games over many years, and couldn’t help getting into the team.

While Duke had good years during Vic Bubas’ tenure as coach back in the 1960s, it had been nowhere for most of the decade that followed. In those days, during the UCLA dynasty (the biggest ever, never to be repeated), NC State, Maryland and Carolina were the cream of the ACC. Duke joined that elite with what John Feinstein (another Duke grad) called Forever’s Team: the 1977-78 crew I saw play that pre-season game. Now people say, “How can you like an overdog like Duke?” Sorry, can’t help it. My experience as a Duke fan also prepped me for following Tommy Amaker, now the coach here at Harvard. (Tommy also played high school ball at Wilbert Tucker Woodson High School in Virginia, where one of his teammates was my cousin Andy Heck, a multi-sport athlete who went on to co-captain the Notre Dame football team that won the national championship in 1988, before going on to an eleven-year career as an NFL player. He’s now the offensive line coach for the Jacksonville Jaguars.)

Speaking of overdogs, I’m a Boston Celtics fan these days too, for roughly the same reason: I’m local here. And I like the team. Celtics coach Doc Rivers and I have a common friend in Buzz, who is a hard-core Duke fan too — as well as a former college hoops player. Buzz got into Duke when he went to law school there. (I still like the Knicks, though. And the Golden State Warriors. David Hodskins and I had season tickets to the Warriors back in the days of Run TMC.)

Wish I could say I expect Duke to win it all. Hope they do, but I just picked Kansas. Or maybe it was Kentucky. (The Kid just went downstairs to check.) Okay, it’s Kentucky. Whatever, it’ll be fun to follow. I see that CBS has the games on-demand over the Net. Count me in for that. We got nothing but Net here. (Hey, it’s the future of what used to be television. I just hope that single purpose — pumping “content” — doesn’t turn the Net into TV 2.0.)

Earlier this year the Pew Research Center’s Internet & American Life Project and Elon University conducted research toward The Future of the Internet IV, the latest in their survey series, which began with Future of the Internet I – 2004. This latest report includes guided input from subjects such as myself (a “thoughtful analyst,” they kindly said) on subjects pertaining to the Net’s future. We were asked to choose between alternative outcomes — “tension pairs” — and to explain our views. Here’s the whole list:

  1. Will Google make us stupid?
  2. Will we live in the cloud or the desktop?
  3. Will social relations get better?
  4. Will the state of reading and writing be improved?
  5. Will those in GenY share as much information about themselves as they age?
  6. Will our relationship to key institutions change?
  7. Will online anonymity still be prevalent?
  8. Will the Semantic Web have an impact?
  9. Are the next takeoff technologies evident now?
  10. Will the Internet still be dominated by the end-to-end principle?

The results were published here at Pew and Elon’s Imagining the Internet site. Here’s the .pdf.

My own views are more than well represented in the 2010 report. One of my responses (to the last question) was even published in full. Still, I thought it would be worth sharing my full responses to all the questions. That’s why I’m posting them here.

Each question is followed by two statements — the “tension pair” — and in some cases by additional instruction. I’ve italicized those.

[Note… Much text here has been changed to .html from .pdf and .doc forms, and extracting all the old formatting jive has been kind of arduous. Bear with me while I finish that job, later today. (And some .html conventions don’t work here in WordPress, so that’s a hassle too.)]


1. Will Google make us smart or stupid?

1 By 2020, people’s use of the Internet has enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid (http://www.theatlantic.com/doc/200807/google).

2 By 2020, people’s use of the Internet has not enhanced human intelligence and it could even be lowering the IQs of most people who use it a lot. Nicholas Carr was right: Google makes us stupid.

1a. Please explain your choice and share your view of the Internet’s influence on the future of human intelligence in 2020 – what is likely to stay the same and what will be different in the way human intellect evolves?


Though I like and respect Nick Carr a great deal, my answer to the title question in his famous essay in The Atlantic — “Is Google Making Us Stupid?” — is no. Nothing that informs us makes us stupid.

Nick says, “Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.” Besides finding that a little hard to believe (I know Nick to be a deep diver, still), there is nothing about Google, or the Net, to keep anyone from diving — and to depths that were not reachable before the Net came along. Also, compare using the Net to TV viewing. There is clearly a massive move to the former from the latter. And this move, at the very least, requires being less of a potato.

But that’s all a separate matter from Google itself. There is no guarantee that Google will be around, or in the same form, in the year 2020.

First, there are natural limits to any form of bigness, and Google is no exception to those. Trees do not grow to the sky.

Second, nearly all of Google’s income is from advertising. There are two problems with this. One is that improving a pain in the ass does not make it a kiss — and advertising is, on the whole, still a pain in the user’s ass. The other is that advertising is a system of guesswork, which by nature makes it both speculative and inefficient. Google has greatly reduced both those variables, and made advertising accountable for the first time: advertisers pay only for click-throughs. Still, for every click-through there are hundreds or thousands of “impressions” that waste server cycles, bandwidth, pixels, rods and cones. The cure for this inefficiency can’t come from the sell side. It must come from the demand side. When customers have means for advertising their wants and needs (e.g. “I need a stroller for twins in downtown Boston in the next two hours. Who’s coming through and how”) — and to do this securely and out in the open marketplace (meaning not just in the walled gardens of Amazons and eBays) — much of advertising’s speculation and guesswork will be obsoleted. Look at it this way: we need means for demand to drive supply at least as well as supply drives demand. By 2020 we’ll have that. (Especially if we succeed at work we’re doing through ProjectVRM at Harvard’s Berkman Center.) Google is well positioned to help with that shift. But it’s an open question whether or not they’ll get behind it.
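For what it’s worth, here is a hypothetical sketch, in Python, of what such a demand-side signal might look like. The field names and the broadcast function are invented for illustration; this is not an existing VRM protocol.

    import json
    import time

    # A customer advertising a want, instead of being guessed at by sellers.
    intent = {
        "want": "stroller for twins",
        "where": "downtown Boston",
        "expires": time.time() + 2 * 60 * 60,  # good for the next two hours
        "terms": "respond through this channel only; no tracking",
    }

    def broadcast(message):
        # Stand-in for publishing to an open marketplace, not a walled garden.
        print(json.dumps(message, indent=2))

    broadcast(intent)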

Third, search itself is at risk. For the last fifteen years we have needed search because the Web has lacked a directory other than DNS (which only deals with what comes between the // and the /.) Google has succeeded because it has proven especially good at helping users find needles in the Web’s vast haystack. But what happens if the Web ceases to be a haystack? What if the Web gets a real directory, like LANs had back in the 80s — or something like one? The UNIX file paths we call URLs (e.g. http://domain.org/folder/folder/file.htm…) presume a directory structure. This alone suggests that a solution to the haystack problem will eventually be found. When it is, search then will be more of a database lookup than the colossally complex thing it is today (requiring vast data centers that suck huge amounts of power off the grid, as Google constantly memorizes every damn thing it can find in the entire Web). Google is in the best position to lead the transition from the haystack Web to the directory-enabled one. But Google may remain married to the haystack model, just as the phone companies of today are still married to charging for minutes and cable companies are married to charging for channels — even though both concepts are fossils in an all-digital world.
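To illustrate the DNS point with Python’s standard library: resolution covers only the host between the // and the first /, while everything after it is the directory-style remainder the server must interpret.

    from urllib.parse import urlparse

    parts = urlparse("http://domain.org/folder/folder/file.htm")

    print(parts.netloc)  # 'domain.org', the only part DNS deals with
    print(parts.path)    # '/folder/folder/file.htm', the directory-style rest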


2. Will we live in the cloud or on the desktop?

1 By 2020, most people won’t do their work with software running on a general-purpose PC. Instead, they will work in Internet-based applications, like Google Docs, and in applications run from smartphones. Aspiring application developers will sign up to develop for smart-phone vendors and companies that provide Internet-based applications, because most innovative work will be done in that domain, instead of designing applications that run on a PC operating system.

2 By 2020, most people will still do their work with software running on a general-purpose PC. Internet-based applications like Google Docs and applications run from smartphones will have some functionality, but the most innovative and important applications will run on (and spring from) a PC operating system. Aspiring application designers will write mostly for PCs.

Please explain your choice and share your view about how major programs and applications will be designed, how they will function, and the role of cloud computing by 2020.

The answer is both.

Resources and functions will operate where they make the most sense. As bandwidth goes up, and barriers to usage (such as high “roaming” charges for data use outside a carrier’s home turf) go down, and Bob Frankston’s “ambient connectivity” establishes itself, our files and processing power will locate themselves where they work best — and where we, as individuals, have the most control over them.

Since we are mobile animals by nature, it makes sense for us to connect with the world primarily through hand-held devices, rather than the ones that sit on our desks and laps. But these larger devices will not go away. We need large screens for much of our work, and we need at least some local storage for when we go off-grid, or need fast connections to large numbers of big files, or wish to keep matters private through physical disconnection.

Clouds are to personal data what banks are to personal money. They provide secure storage, and are in the best positions to perform certain intermediary and back-end services, such as hosting applications and storing data. This latter use has an importance that will only become more critical as each of us accumulates personal data by the terabyte. If your home drives crash or get stolen, or your house burns down, your data can still be recovered if you’ve backed it up in the cloud.
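Here is a minimal sketch of that bank-like use of a cloud, assuming an rsync-reachable remote host; the hostname and paths are placeholders, and real services differ in detail but not in shape.

    import subprocess

    # Mirror a local archive to a remote host: -a preserves metadata,
    # -z compresses in transit, --partial lets interrupted copies resume.
    subprocess.run([
        "rsync", "-az", "--partial",
        "/home/me/photos/",               # local data worth keeping
        "backup@cloud.example:/photos/",  # placeholder cloud endpoint
    ], check=True)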

But most home users (at least in the U.S. and other under-developed countries) are still stuck at the far ends of asymmetrical connections with low upstream data rates, designed at a time when carriers thought the Net would mostly be a new system for distributing TV and other forms of “content.” Thus backing up terabytes of data online ranges from difficult to impossible.

This is why any serious consideration of cloud computing — especially over the long term — needs to take connectivity into account. Clouds are only as useful as connections permit. And right now the big cloud utilities (notably Google and Amazon) are way ahead of the carriers at imagining how connected computing needs to grow. For most carriers the Internet is still just the third act in a “triple play,” a tertiary service behind telephony and television. Worse, the mobile carriers show little evidence that they understand the need to morph from phone companies to data companies — even with Apple’s iPhone success screaming “this is the future” at them.

A core ideal for all Internet devices is what Jonathan Zittrain (in his book The Future of the Internet — and How to Stop It) calls generativity, which is maximized encouragement of innovation in both hardware and software. Today generativity in mobile devices varies a great deal. The iPhone, for example, is highly generative for software, but not for hardware (only Apple makes iPhones). And even the iPhone’s software market is sphinctered by Apple’s requirement that every app pass to market only through Apple’s “store,” which operates only through Apple’s iTunes, which runs only on Macs and PCs (no Linux or other OSes). On top of all that are Apple’s restrictive partnerships with AT&T (in the U.S.) and Rogers (in Canada). While AT&T allows unlimited data usage on the iPhone, Rogers still has a 6GB limit.

Bottom line: Handhelds will be no smarter than the systems built to contain them. The market will open widest — and devices will get smartest — when anybody can make a smartphone (or any other mobile device), and use it on any network they please, without worrying about data usage limits or getting hit with $1000+ bills because they forgot to turn off “push notifications” or “location services” when they roamed out of their primary carrier’s network footprint. In other words, the future will be brightest when mobile systems get Net-native.


3. Will social relations get better?

1 In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the Internet has mostly been a negative force on my social world. And this will only grow more true in the future.

2 In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the Internet has mostly been a positive force on my social world. And this will only grow more true in the future.

3a. Please explain your choice and share your view of the Internet’s influence on the future of human relationships in 2020 — what is likely to stay the same and what will be different in human and community relations?

Craig Burton describes the Net as a hollow sphere — a three-dimensional zero — comprised entirely of ends separated by an absence of distance in the middle. With a hollow sphere, every point is visible to every other point. Your screen and my keyboard have no distance between them. This is a vivid way to illustrate the Net’s “end-to-end” architecture and how we perceive it, even as we also respect the complex electronics and natural latencies involved in the movement of bits from point to point anywhere on the planet. It also helps make sense of the Net’s distance-free social space.

As the “live” or “real-time” aspects of the net evolve, opportunities to engage personally and socially are highly magnified beyond all the systems that came before. This cannot help but increase our abilities not only to connect with each other, but to understand each other. I don’t see how this hurts the world, and I can imagine countless ways it can make the world better.

Right now my own family is scattered between Boston, California, Baltimore and other places. Yet through email, voice, IM, SMS and other means we are in frequent touch, and able to help each other in many ways. The same goes for my connections with friends and co-workers.

We should also hope that the Net makes us more connected, more social, more engaged and involved with each other. The human diaspora, from one tribe in Africa to thousands of scattered tribes — and now countries — throughout the world, was driven to a high degree by misunderstandings and disagreements between groups. Hatred and distrust between groups have caused countless wars and suffering beyond measure. Anything that helps us bridge our differences and increase understanding is a good thing.

Clearly the Internet already does that.


4. Will the state of reading and writing be improved?

1 By 2020, it will be clear that the Internet has enhanced and improved reading, writing, and the rendering of knowledge.

2 By 2020, it will be clear that the Internet has diminished and endangered reading, writing, and the intelligent rendering of knowledge.

4a. Please explain your choice and share your view of the Internet’s influence on the future of knowledge-sharing in 2020, especially when it comes to reading and writing and other displays of information – what is likely to stay the same and what will be different? What do you think is the future of books?

It is already clear in 2010 that the Net has greatly enhanced reading, writing, and knowledge held — and shared — by human beings. More people are reading and writing, and in more ways, for more readers and other writers, than ever before. And the sum of all of it goes up every day.

I’m sixty-two years old, and have been a journalist since my teens. My byline has appeared in dozens of publications, and the sum of my writing runs — I can only guess — into millions of words. Today very little of what I wrote and published before 1995 is available outside of libraries, and a lot of it isn’t even there.

For example, in the Seventies and early Eighties I wrote regularly for an excellent little magazine called The Sun. (It’s still around, at http://thesunmagazine.org) But, not wanting to carry my huge collection of Suns from one house to another (I’ve lived in 9 places over the last ten years), I gave my entire collection (including rare early issues) to an otherwise excellent public library, and they lost or ditched it. Few items from those early issues are online. My own copies are buried in boxes in a garage, three thousand miles from where I live now. So are dozens of boxes of photos and photo albums. (I was also a newspaper photographer in the early days, and have never abandoned the practice.)

On the other hand, most of what I’ve written since the Web came along is still online. And most of that work — including 34,000 photographs on Flickr — is syndicated through RSS (Really Simple Syndication) or its derivatives. So is the work of millions of other people. If that work is interesting in some way, it tends to get inbound links, increasing its discoverability through search engines and its usefulness in general. The term syndication was once applied only to professional purposes. Now everybody can do it.
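
(For the technically curious: consuming a syndicated feed takes only a few lines of code. Here’s a minimal sketch of my own — not part of any standard — using Python’s feedparser library. The feed URL is a placeholder; substitute any real RSS or Atom feed.)

    import feedparser  # a widely used Python feed-reading library

    # Placeholder URL -- any real RSS or Atom feed would do.
    feed = feedparser.parse("https://example.com/feed.xml")

    print(feed.feed.title)            # the publication's title
    for entry in feed.entries[:5]:    # the five most recent items
        print(entry.title, "->", entry.link)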

Look up RSS on Google. Today it brings up more than three billion results. Is it possible that this has decreased the quality and sum of reading, writing and human knowledge? No way.


5. Will the willingness of Generation Y / Millennials to share information change as they age?

1 By 2020, members of Generation Y (today’s “digital natives”) will continue to be ambient broadcasters who disclose a great deal of personal information in order to stay connected and take advantage of social, economic, and political opportunities. Even as they mature, have families, and take on more significant responsibilities, their enthusiasm for widespread information sharing will carry forward.

2 By 2020, members of Generation Y (today’s “digital natives”) will have “grown out” of much of their use of social networks, multiplayer online games and other time-consuming, transparency-engendering online tools. As they age and find new interests and commitments, their enthusiasm for widespread information sharing will abate.

5a. Please explain your choice and share your view of the Internet’s influence on the future of human lifestyles in 2020 – what is likely to stay the same and what will be different? Will the values and practices that characterize today’s younger Internet users change over time?

Widespread information sharing is not a generational issue. It’s a technological one. Our means for controlling access to data, or its use — or even for asserting our “ownership” of it — are very primitive. (Logins and passwords alone are clunky as hell, extremely annoying, and will be seen a decade hence as a form of friction we were glad to eliminate.)

It’s still early. The Net and the Web as we know them have only been around for about fifteen years. Right now we’re still in the early stages of the Net’s Cambrian explosion. By that metaphor Google is a trilobite. We have much left to work out.

For example, take “terms of use.” Sellers have them. Users do not — at least not ones that they control. Wouldn’t it be good if you could tell Facebook or Twitter (or any other company using your data) that these are the terms on which they will do business with you, that these are the ways you will share data with them, that these are the ways this data can be used, and that this is what will happen if they break faith with you? Trust me: user-controlled terms of use are coming. (Work is going on right now on this very subject at Harvard’s Berkman Center, both at its Law Lab and ProjectVRM.)
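
Nothing like this exists as a standard yet, so what follows is pure hypothesis: a sketch, in Python, of how user-asserted terms might be expressed as data a service could be asked to accept. Every field name here is invented for illustration.

    # Hypothetical sketch only -- no such standard exists yet.
    # All field names are invented for illustration.
    my_terms = {
        "data_shared": ["email", "city"],       # what I am willing to share
        "allowed_uses": ["service_operation"],  # the only permitted uses
        "retention_days": 90,                   # delete my data after this
        "on_breach": "terminate_and_delete",    # remedy if terms are broken
    }

    def acceptable(site_practices):
        """True if a site's declared practices fit within my terms."""
        uses_ok = set(site_practices.get("uses", [])) <= set(my_terms["allowed_uses"])
        retention_ok = site_practices.get("retention_days", float("inf")) <= my_terms["retention_days"]
        return uses_ok and retention_ok

    # A site declaring modest practices would pass:
    print(acceptable({"uses": ["service_operation"], "retention_days": 30}))  # True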

Two current technical developments, “self-tracking” and “personal informatics,” are examples of ways that power is shifting from organizations to individuals — for the simple reason that individuals are the best points of integration for their own data, and the best points of origination for what gets done with that data.

Digital natives will eventually become fully empowered by themselves, not by the organizations to which they belong, or the services they use. When that happens, they’ll probably be more careful and responsible than earlier generations, for the simple reason that they will have the tools.


6. Will our relationship to institutions change?

1 By 2020, innovative forms of online cooperation will result in significantly more efficient and responsive governments, businesses, non-profits, and other mainstream institutions.

2 By 2020, governments, businesses, non-profits and other mainstream institutions will primarily retain familiar 20th century models for conduct of relationships with citizens and consumers online and offline.

6a. Please explain your choice and share your view of the Internet’s influence upon the future of institutional relationships with their patrons and customers between now and 2020. We are eager to hear what you think of how social, political, and commercial endeavors will form and the way people will cooperate in the future.

Online cooperation will only increase. The means are already there, and will only become more numerous and functional. Institutions that adapt to the Net’s cooperation-encouraging technologies and functions will succeed. Those that don’t will have a hard time.

Having it hardest right now are media institutions, for the simple reason that the Internet subsumes their functions, while also giving to everybody the ability to communicate with everybody else, at little cost, and often with little or no intermediating system other than the Net itself.

Bob Garfield, a columnist for AdAge and a host of NPR’s “On The Media,” says the media have entered what he calls (in his book by the same title) The Chaos Scenario. In his introduction Garfield says he should have called the book “Listenomics,” because listening is the first requirement of survival for every industry that lives on digital bits — a sum that rounds to approximately every industry, period.

So, even where the shapes of institutions persist, their internal functions must be ready to listen, and to participate in the market’s conversations, even when those take place outside the institution’s own frameworks.


7. Will online anonymity still be prevalent?

1 By 2020, the identification systems used online are tighter and more formal – fingerprints or DNA scans or retina scans. The use of these systems is the gateway to most of the Internet-enabled activity that users are able to perform, such as shopping, communicating, creating content, and browsing. Anonymous online activity is sharply curtailed.

2 By 2020, Internet users can do a lot of normal online activities anonymously even though the identification systems used on the Internet have been applied to a wider range of activities. It is still relatively easy for Internet users to create content, communicate, and browse without publicly disclosing who they are.

7a. Please explain your choice and share your view about the future of anonymous activity online by the year 2020.

In the offline world, anonymity is the baseline. Unless burdened by celebrity, we are essentially anonymous when we wander through stores, drive down the road, or sit in the audience of a theater. We become less anonymous when we enter into conversation or transact business. Even there, however, social protocols do not require that we become any more identifiable than the level of interaction demands. Our “identity” might be “the woman in the plaid skirt,” “the tall guy who was in here this morning,” or “one of our students.”

We still lack means by which an individual can selectively and gracefully shift from fully to partially anonymous, and from unidentified to identified — yet in ways that can be controlled and minimized (or maximized) as much as the individual (and others with whom he or she interacts) permit. In fact, we’re a long way off.

The main reason is that most of the “identity systems” we know put control on the side of sellers, governments, and other institutions, and not with the individual. In time systems that give users control will be developed. These will be native to users and not provided only by large organizations (such as Microsoft, Google or the government).

A number of development communities have been working on this challenge since early in the last decade, and eventually they will succeed. Hopefully this will be by 2020, but I figured we’d have it done by 2010, and it seems like we’ve barely started.


8. Will the Semantic Web have an impact?

1 By 2020, the Semantic Web envisioned by Tim Berners-Lee and his allies will have been achieved to a significant degree and have clearly made a difference to the average Internet user.

2 By 2020, the Semantic Web envisioned by Tim Berners-Lee will not be as fully effective as its creators hoped and average users will not have noticed much of a difference.

8a. Please explain your choice and share your view of the likelihood that the Semantic Web will have been implemented by 2020 and be a force for good for Internet users?

Tim’s World Wide Web was a very simple and usable idea that relied on very simple and usable new standards (e.g. HTML and HTTP), which were a big reason why the Web succeeded. The Semantic Web is a very complex idea, and one that requires a lot of things to go right before it works. Or so it seems.

Tim Berners-Lee introduced the Semantic Web Roadmap (http://www.w3.org/DesignIssues/Semantic.html) in September 1998. Since then more than eleven years have passed. Some Semantic Web technologies have taken root: RDFa, for example, and microformats. But the concept itself has energized a relatively small number of people, and there is no “killer” tech or use yet.

That doesn’t mean it won’t happen. Invention is the mother of necessity. The Semantic Web will take off when somebody invents something we all find we need. Maybe that something will be built out of some combination of code and protocols already lying around — either within the existing Semantic Web portfolio, or from some parallel effort such as XDI. Or maybe it will come out of the blue.

By whatever means, the ideals of the Semantic Web — a web based on meaning (semantics) rather than syntax (the Web’s current model) — will still drive development. And we’ll be a decade farther along in 2020 than we are in 2010.
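
The core model, at least, is simple to show. The Semantic Web expresses meaning as subject-predicate-object “triples” that machines can merge and query. Here’s a minimal sketch of mine using Python’s rdflib library; the URIs are made up.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF

    g = Graph()
    me = URIRef("http://example.com/people/doc")  # an illustrative URI

    # Two statements of meaning, each a (subject, predicate, object) triple:
    g.add((me, FOAF.name, Literal("Doc")))
    g.add((me, FOAF.knows, URIRef("http://example.com/people/david")))

    # Serialized as Turtle, the triples read almost like sentences.
    print(g.serialize(format="turtle"))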


9. Are the next takeoff technologies evident now?

1 The hot gadgets and applications that will capture the imagination of users in 2020 are pretty evident today and will not take many of today’s savviest innovators by surprise.

2 The hot gadgets and applications that will capture the imagination of users in 2020 will often come “out of the blue” and not have been anticipated by many of today’s savviest innovators.

9a. Please explain your choice and share your view of its implications for the future. What do you think will be the hot gadgets, applications, technology tools in 2020?

“The blue” is the environment out of which most future innovation will come. And that blue is the Net.

Nearly every digital invention today was created by collaboration over the Net, between people working in different parts of the world. The ability to collaborate over distances, often in real time (or close to it), using devices that improve constantly, over connections that only get fatter and faster, guarantees that the number and variety of inventions will only go up. More imaginations will be captured more ways, more often. Products will be improved, and replaced, more often than ever, and in more ways than ever.

The hottest gadgets in 2020 will certainly involve extending one’s senses and one’s body. In fact, this has been the case for all inventions since humans first made stone tools and painted the walls of caves. That’s because humans are characterized not only by their intelligence and their ability to speak, but by their capacity to extend their senses, and their abilities, through their tools and technologies. Michael Polanyi, a scientist and philosopher, called this indwelling. It is through indwelling that the carpenter’s tool becomes an extension of his arm, and he has the power to pound nails through wood. It is also through indwelling that an instrument becomes an extension of the musician’s mouth and hands.

There is a reason why a pilot refers to “my wings” and “my tail,” or a driver to “my wheels” and “my engine.” By indwelling, the pilot’s senses extend outside herself to the whole plane, and the driver’s to his whole car.

The computers and smart phones of today are to some degree extensions of ourselves, but not to the extent that a hammer extends a carpenter, a car enlarges a driver or a plane enlarges a pilot. Something other than a computer or a smart phone will do that. Hopefully this will happen by 2020. If not, it will eventually.


10. Will the Internet still be dominated by the end-to-end principle?

1 In the years between now and 2020, the Internet will mostly remain a technology based on the end-to-end principle that was envisioned by the Internet’s founders. Most disagreements over the way information flows online will be resolved in favor of a minimum number of restrictions over the information available online and the methods by which people access it.

2 In the years between now and 2020, the Internet will mostly become a technology where intermediary institutions that control the architecture and significant amounts of content will be successful in gaining the right to manage information and the method by which people access and share it.

10a. Please explain your choice, note organizations you expect to be most likely to influence the future of the Internet and share your view of the effects of this between now and 2020.

There will always be a struggle to reconcile the Net’s end-to-end principle with the need for companies and technologies operating between those ends to innovate and make money. This tension will produce more progress than either the principle by itself or the narrow interests of network operators and other entities working between the Net’s countless ends.

Today these interests are seen as opposed — mostly because incumbent network operators want to protect businesses they see threatened by the Net’s end-to-end nature, which cares not a bit about who makes money or how. But in the future they will be seen as symbiotic, because both the principle and the networks operating within it will be seen as essential infrastructure. So will what each of them does to help raise and renovate the Net’s vast barn.

The term infrastructure has traditionally been applied mostly to the public variety: roads, bridges, electrical systems, water systems, waste treatment and so on. But this tradition only goes back to the Seventies. Look up infrastructure in a dictionary from the 1960s or earlier and you won’t find it (except in the OED). Today there are still no institutes or academic departments devoted to infrastructure. It’s a subject in many fields, yet not a field in itself.

But we do generally understand what infrastructure is. It’s something solid and common we can build on. It’s geology humans make for themselves.

Digital technology, and the Internet in particular, provides an interesting challenge to our understanding of infrastructure, because we rely on it, yet it is not solid in any physical sense. It is like physical structures, but not itself physical. We go on the Net, as if it were a road or a plane. We build on it too. Yet it is not a thing.

Inspired by Craig Burton’s description of the Net as a hollow sphere — a three-dimensional zero comprised entirely of ends — David Weinberger and I wrote World of Ends in 2003 (http://worldofends.com). The purpose was to make the Net more understandable, especially to companies (such as phone and cable carriers) that had been misunderstanding it. Lots of people agreed with us, but none of those people ran the kinds of companies we addressed.

But, to be fair, most people still don’t understand the Net. Look up “The Internet is” on Google (with the quotes). After you get past the top entry (Wikipedia’s), here’s what they say:

  1. a Series of Tubes
  2. terrible
  3. really big
  4. for porn
  5. shit
  6. good
  7. wrong
  8. killing storytelling
  9. dead
  10. serious business
  11. for everyone
  12. underrated
  13. infected
  14. about to die
  15. broken
  16. Christmas all the time
  17. altering our brains
  18. changing health care
  19. laughing at NBC
  20. changing the way we watch TV
  21. changing the scientific method
  22. dead and boring
  23. not shit
  24. made of kittens
  25. alive and well
  26. blessed
  27. almost full
  28. distracting
  29. a brain
  30. cloudy

Do the same on Twitter, and you’ll get results just as confusing. At this moment (your search will vary; this is the Live Web here), the top results are:

  1. a weird, WEIRD place
  2. full of feel good lectures
  3. the Best Place to get best notebook computer deals
  4. Made of Cats
  5. Down
  6. For porn
  7. one of the best and worst things at the same time
  8. so small
  9. going slow
  10. not my friend at the moment
  11. blocked
  12. letting me down
  13. going off at 12
  14. not working
  15. magic
  16. still debatable
  17. like a jungle
  18. eleven years old
  19. worsening by the day
  20. extremely variable
  21. full of odd but exciting people
  22. becoming the Googlenet
  23. fixed
  24. forever
  25. a battlefield
  26. a great network for helping others around the world
  27. more than a global pornography network
  28. slow
  29. making you go nuts
  30. so much faster bc im like the only 1 on it

(I took out the duplicates. There were many involving cats and porn.)

Part of the problem is that we understand the Net in very different and conflicting ways. For example, when we say the Net consists of “sites,” with “domains” and “locations” that we “architect,” “design,” “build” and “visit,” we are saying the Internet is a place. It’s real estate. But if we say the Net is a “medium” for the “distribution” of “content” to “consumers” who “download” it, we’re saying the Net is a shipping system. These metaphors are very different. They yield different approaches to business and lawmaking, to name just two areas of conflict.

Bob Frankston, co-inventor (with Dan Bricklin) of spreadsheet software (VisiCalc) and one of the fathers of home networking, says the end-state of the Net’s current development is ambient connectivity, which “gives us access to the oceans of copper, fiber and radios that surround us.” Within those are what Frankston calls a “sea of bits” to which all of us contribute. To help clarify the anti-scarce nature of bits, he explains, “Bits aren’t really like kernels of corn. They are more like words. You may run out of red paint but you don’t run out of the color red.”

Much has been written about the “economics of abundance,” but we have barely begun to understand what that means or what can be done with it. The threats are much easier to perceive than the opportunities. Google is one notable exception to that. Asked at a Harvard meeting to explain the company’s strategy of moving into businesses where it expects to make no money directly for the services it offers, a Google executive explained that the company looked for “second and third order effects.”

JP Rangaswami, Chief Scientist for BT (disclosure: I consult for BT), describes these as “because effects.” You make money because of something rather than with it. Google makes money because of search, and because of Gmail. Not with them. Not directly.

Yet money can still be made with goods and services — even totally commodified ones. Amazon makes money with back-end Web services such as EC2 (computing) and S3 (data storage). Phone, cable and other carriers can make money with “dumb pipes” too. They are also in perfect positions to offer low-latency services directly to their many customers at homes and in businesses. All the carriers need to do is realize that there are benefits to incumbency other than charging monopoly rents.
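
To see how commodified those back-end services already are, consider what storing a file in S3 looks like from the developer’s side. This sketch of mine uses Amazon’s boto3 library for Python; the bucket and file names are placeholders. The service is a plain, metered utility — a dumb pipe that still makes money.

    import boto3  # Amazon's Python library for its Web services

    s3 = boto3.client("s3")  # credentials are read from the environment

    # Store a local file in a bucket; names here are placeholders.
    # Storage is billed as a utility: so much per gigabyte per month.
    s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")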

The biggest danger for the Net and its use comes not from carriers, but from copyright absolutists in what we have recently come to call the “content” industry. For example, in the U.S. the DMCA (Digital Millennium Copyright Act), passed in 1998, was built to protect the interests of copyright holders, and served as a model for similar lawmaking in other countries. It did little to protect the industries that lobbied for its passage, while hurting or preventing the growth of a variety of other industries. Most notable (at least for me) was the embryonic Internet radio industry, which was just starting to take off when the DMCA came along. The saga that followed is woefully complex, and the story is far from over, but the result in the meantime is a still-infant industry that suffers many more restrictions with respect to “content” than over-the-air radio stations face. Usage fees for music are much higher than those faced by broadcasters — so high that making serious money by webcasting music is nearly impossible. There are also tight restrictions on what music can be played, when, and how often. Music on podcasts is also essentially prohibited, because podcasters need to “clear rights” for every piece of copyrighted music they play. That’s why, except for “podsafe” music, podcasting today is almost all talk.

I’ll give the last words here to Cory Doctorow, who publishes them freely in his new book Content:

… there is an information economy. You don’t even need a computer to participate. My barber, an avowed technophobe who rebuilds antique motorcycles and doesn’t own a PC, benefited from the information economy when I found him by googling for barbershops in my neighborhood.

Teachers benefit from the information economy when they share lesson plans with their colleagues around the world by email. Doctors benefit from the information economy when they move their patient files to efficient digital formats. Insurance companies benefit from the information economy through better access to fresh data used in the preparation of actuarial tables. Marinas benefit from the information economy when office-slaves look up the weekend’s weather online and decide to skip out on Friday for a weekend’s sailing. Families of migrant workers benefit from the information economy when their sons and daughters wire cash home from a convenience store Western Union terminal.

This stuff generates wealth for those who practice it. It enriches the country and improves our lives.

And it can peacefully co-exist with movies, music and microcode, but not if Hollywood gets to call the shots. Where IT managers are expected to police their networks and systems for unauthorized copying — no matter what that does to productivity — they cannot co-exist. Where our operating systems are rendered inoperable by “copy protection,” they cannot co-exist. Where our educational institutions are turned into conscript enforcers for the record industry, they cannot co-exist.

The information economy is all around us. The countries that embrace it will emerge as global economic superpowers. The countries that stubbornly hold to the simplistic idea that the information economy is about selling information will end up at the bottom of the pile.


But all that is just me (and my sources, such as Cory). There are 894 others compiled by the project, and I invite you to visit them there.

I’ll also put in a plug for FutureWeb in Raleigh, April 28-30, where I look forward to seeing many old friends and relatives as well. (I lived in North Carolina for most of the 20 years from 1965 to 1985, and miss it still.) Hope to see some of y’all there.

