Technology


If you want to get the most out of your Verizon FiOS (fiber to the home) Internet connection, here are your top two tiers:

FiOS tiers

I have the one on the left, and that’s what I’m paying for it. The service is rock-solid and reliable. So is support, as rarely as I’ve needed it.

But when I go to work, my upstream speeds are higher — up to 100 Mbps. I get more done. And I’m not the only techie who appreciates high upstream speeds. Boston is the world’s biggest college town, and full of other industries (pharma, big science, finance) that are staffed by professionals who could use the speed too.

But Verizon does this weird thing with the next tier up: they cut back the upstream speed from 25 Mbps to 20 Mbps. At double the price. WTF is that all about? When I ordered the 25 Mbps tier several months ago, the guy on the phone told me the reason was “just marketing.” He also said “We could give you 100Mbps tomorrow and blow everybody else out of the water.”

So why not?

Oddly, all of FiOS’ “Triple Play” (Internet + TV + phone) bundles here have relatively low Internet speeds, compared to the two tiers above. If the Net is your main interest, you might be better off without the TV and the phone. (In fact, we had the other two “plays” when we got FiOS originally, and dumped them later, mostly because we hardly used them.) If you view more bundles, your best speeds are still just 25/25 Mbps.

My request (and advice — and companies do pay me for this stuff) to Verizon is to do two things:

  1. Come up with a sensible offering — one that doesn’t subtract upstream value at twice the price.
  2. Try localizing a bit. Boston isn’t Red Bank. (And no offense to that town or other FiOS service areas.) See what happens when you super-serve a region with an offering that makes sense for it.

Maybe Verizon is doing that, sort of, with its business offerings. But getting to the actual offerings requires many clicks and filling out forms. Where I finally arrived in my latest hunt was a page with this set of choices:

First, this is much better than what I remember about my last look at FiOS business deals.

Second, that 35/35 offering is attractive.

Third, once again, we have an upstream speed drop when you go to the highest tier.

Fourth, the “static” offering is poorly explained. What this means is a real IP address, rather than one dynamically assigned by the router. This is real Internet stuff, so the customer can, say, run a server. (The copy does say “host websites.”) But, unless I’m missing it, nowhere does it say how many IP addresses the customer gets. For customers who care about this stuff, that’s the first question that will come up.

Fifth, the examples are poor. Here are some of the things that serious professional customers might care about:

  1. Offsite storage or backup
  2. Virtual computing in the cloud, such as with Amazon’s EC2
  3. Running servers in a co-lo or some other heavy-lifting environment
  4. Remote rendering, such as RenderCore

Verizon (or any ISP) could offer any of those services locally themselves, taking advantage of low latencies. In fact, in some cases that can be a huge advantage, and therefore a selling point.

Again, the service I’ve had all along with FiOS (going on three years now) has been solid and good — so good, in fact, that I miss it a lot when I’m gone. (Such as with this example here.) I just want it to be better. Hope this helps.


There are two essential concepts of location for the World Wide Web. One is you: the individual, the reader, the writer, the customer, the singular entity. The other is the World.

I live and work mostly in the U.S. I also speak English. My French, German and Spanish are all too minimal to count unless I happen to be in a country that speaks one of those languages. When I’m in one of those places, as I am now in France, I do my best to learn as much of the language as I can. But I’m still basically an English speaker.

So, by default, when I’m on the Web my language is English. My location might be France, or Denmark or somewhere else, but when I’m searching for something the language I require most of the time is English. That’s my mental location.

So it drives me nuts that Google sends me to http://google.fr, even when I log into iGoogle and get my personalized Google index page. When I re-write the URL so it says http://google.us, Google re-writes it as http://google.fr, no matter what. On iGoogle I can’t find a way to set my preferred language, or my virtual location if it’s not where I am right now. I can’t do that even when I have Google translate, instantly, in my Google Chrome browser, the page text to English. (I’m sure there’s a hack, and I would appreciate it if somebody would tell me. But if there is, why should it be so hard?)
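For what it’s worth, the hack I keep hearing about has two parts: visiting google.com/ncr (“no country redirect”), which reportedly sets a cookie that stops the bounce to google.fr, and pinning the interface language with the hl query parameter. Here is a minimal sketch of the latter, assuming Google still honors the hl and gl parameters; treat them as assumptions rather than guarantees:

```python
# A minimal sketch: build a Google search URL that pins the interface
# language to English. Assumes Google still honors the "hl" (interface
# language) and "gl" (country) query parameters.
from urllib.parse import urlencode

def english_google_search(query: str, country: str = "us") -> str:
    params = {
        "q": query,     # the search terms
        "hl": "en",     # interface language: English
        "gl": country,  # bias results toward this country's index
    }
    return "https://www.google.com/search?" + urlencode(params)

print(english_google_search("late-night pharmacy near Rue Cler"))
```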

Bing comes up all-French too, but at the bottom of the page, in small white type, it says “Go to Bing in English”. Nice.

So now, here in Paris, I’m using Bing when I want to search in English, and Google when I want to search for local stuff. Which is a lot, actually. But I miss searching in English on Google. I could ask them to fix that, but I’d rather fix the fact that only they can fix that. Depending on suppliers to do all the work is a bug, not a feature.

What matters is context. I’m tired of having companies guess at what my context is. I know what my contexts are. I know how they change. I want my own ways of changing contexts, and of informing services of what those contexts are. In some cases I don’t mind their guessing. In a few I even appreciate it. But in too many cases their guesses only get in the way. The Google search case is just one of them.

Phil Windley (disclosure: I’ve done work for Phil) gives a talk in which he provides a brief history of e-commerce. It goes, “1995: Invention of the cookie. The End.” Thanks to the cookie, we have contexts — but only inside each company’s silo. We can’t provide our own contexts except to the degree that each company’s website allows it. And they’re all different. This too is a bug, not a feature. (Just like carrying around a pile of loyalty cards and key tabs is a bug. Hey, I know more about who and what I’m loyal to than any company does — and I’d like my own ways of expressing that.)

At this moment it is commonly believed that the contexts that matter most are “social”. This is defined as who my friends are, and where I happen to be right now. This information is held almost entirely by commercial services: Facebook, Twitter, Google, Foursquare, Groupon, Blippy and so on. Not by you or me. Not by individuals, and not independently of all those services. This too is a bug. Who your friends and other contacts are is indeed a context, but it should be one that you control, not some company. Your data, and how you organize it, should be the independent variable, and the data you share with these services should be the dependent variables.

Some of us in the community (including Phil and his company) are working on context provided by individuals. In the long run these contexts can work for any or all commercial and non-commercial institutions we deal with. I expect to see some of this work become manifest over the next year. Stay tuned.


Tomorrow we fly to Paris, where I’ll be based for the next five weeks. To help myself prep, here are a few of my notes from conversations with friends and my own inadequate research…

Mobile phone SIM recommendations are especially welcome. We plan to cripple our U.S. iPhones for the obvious reasons AT&T details here. Our other phones include…

  • Android Nexus One (right out of the box)
  • Nokia E72 (it’s a Symbian phone)
  • Nokia N900 (a computing device that does have a SIM slot and can be used as a phone)
  • Nokia 6820b (an old Nokia candybar-shaped GSM phone that hasn’t been used in years, but works)

Ideally we would like to go to a mobile phone store that can help us equip some combination of these things, for the time we’re there. The iPad too, once it arrives. It will be a 3G model.

Au revoir…

[Later…] We’re here, still jet-lagged and settling in. Here are some other items we could use some advice on:

  • “Free” wi-fi. This is confusing. There seem to be lots of open wi-fi access points in Paris, but all require logins and passwords. Our French is still weak at best, so that’s a bit of a problem too. One of the services is called Free, which also happens to be the company that provides TV/Internet/Phone service in the apartment. Should this also give us leverage with the Free wi-fi out there? Not sure. (Internet speed is 16.7 Mbps down and 0.78 Mbps up. It’s good enough, but not encouraging for posting photos; see the quick arithmetic after this list. I’m also worried about data usage caps. Guidance on that is welcome too.)
  • Our 200-watt heavy-duty 220/110 step-down power transformer crapped out within two hours after being plugged in. We want to get a new one that won’t fail. The dead one is a Tacima.
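To put that upstream number in perspective, here is the back-of-envelope arithmetic as a quick sketch; the photo size is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: time to upload one photo on a 0.78 Mbps upstream link.
# The 5 MB photo size is an illustrative assumption.
photo_megabytes = 5
upstream_megabits_per_second = 0.78

seconds = photo_megabytes * 8 / upstream_megabits_per_second
print(f"about {seconds:.0f} seconds per photo")  # roughly 51 seconds
```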

Again, thanks for all your help.


When Gizmodo reported on the next-generation iPhone that had come into its hands, I was as curious as the next geek about what they’d found. But I didn’t think the ends justified the means.

The story begins,

You are looking at Apple’s next iPhone. It was found lost in a bar in Redwood City, camouflaged to look like an iPhone 3GS. We got it. We disassembled it. It’s the real thing, and here are all the details.

“We got it,” they said. How?

There was much speculation about that, but obviously — if the phone was a real prototype — it must have been lost by an Apple employee. That’s why I tweeted, “Some employee is in very deep shit for letting this happen: http://bit.ly/bVN5Ma” But others wondered. Was it planted by Apple? That’s what, for example, Howard Stern guessed on his show yesterday morning. He thought it was a brilliant marketing move by Apple.

But Gizmodo set the record straight, through a much-updated piece titled How Apple lost the next iPhone. After telling the story, at length, of how Gray Powell, an Apple employee, had left it at a restaurant (“The Gourmet Haus Staudt. A nice place to enjoy good German lagers”), Gizmodo unpacks the means by which the phone came into their possession:

There it was, a shiny thing, completely different from everything that came before.

He reached for a phone and called a lot of Apple numbers and tried to find someone who was at least willing to transfer his call to the right person, but no luck. No one took him seriously and all he got for his troubles was a ticket number.

He thought that eventually the ticket would move up high enough and that he would receive a call back, but his phone never rang. What should he be expected to do then? Walk into an Apple store and give the shiny, new device to a 20-year-old who might just end up selling it on eBay?
The Aftermath
Weeks later, Gizmodo got it for $5,000 in cash. At the time, we didn’t know if it was the real thing or not. It didn’t even get past the Apple logo screen. Once we saw it inside and out, however, there was no doubt about it. It was the real thing, so we started to work on documenting it before returning it to Apple. We had the phone, but we didn’t know the owner. Later, we learnt about this story, but we didn’t know for sure it was Powell’s phone until today, when we contacted him via his phone.

The apparent purpose of the story is to save Gray Powell’s ass, and to cover some of Gizmodo’s own. It concludes,

He sounded tired and broken. But at least he’s alive, and apparently may still be working at Apple—as he should be. After all, it’s just a stupid iPhone and mistakes can happen to everyone—Gray Powell, Phil Schiller, you, me, and Steve Jobs.

The only real mistake would be to fire Gray in the name of Apple’s legendary impenetrable security, breached by the power of German beer and one single human error.

Additional reporting by John Herrman; extra thanks to Kyle VanHemert, Matt Buchanan, and Arianna Reiche

Update 2: I have added the bit on the $5,000 (in italics) and how we acquired the iPhone, as Gawker has disclosed to every media outlet that asked.

Yesterday the New York Times ran iPhonegate: Lost, Stolen Or A Conspiracy?, by Nick Bilton. The gist:

One big question is how much Gizmodo paid for the phone, and whether keeping it was legal. Nick Denton, chief executive of Gawker Media, which owns Gizmodo, told The Times the site paid $5,000 for the phone. But still bloggers wondered if it had really paid $10,000.

On Monday, Charles Arthur, Technology blogger for The Guardian, said paying for the phone could mean that Gizmodo was knowingly receiving stolen goods; on Tuesday, citing the Economic Espionage Act of 1996, Mr. Arthur expanded on his theory.

This helped the debate move on to more serious matters: whether the phone was “lost,” or “stolen.” John Gruber, blogger for Daring Fireball, pointed out that in the eyes of California law, there isn’t a difference. The law states:

One who finds lost property under circumstances which give him knowledge of or means of inquiry as to the true owner, and who appropriates such property to his own use, or to the use of another person not entitled thereto, without first making reasonable and just efforts to find the owner and to restore the property to him, is guilty of theft.

The next big question — whether Gizmodo would turn over the phone to Apple — was answered after a long day of speculation on Monday over its authenticity. Gizmodo has reported that it received a letter from Apple’s legal counsel…

Gizmodo complied and returned the phone. Yesterday I tweeted, “Re: bit.ly/d0P4Vo If you found a next-gen iPhone, would you return it — or use it to pull the owner’s pants down?” Thus far, two responses:

Of course, what Gizmodo did was an example of investigative journalism at work. Mainstream journals and broadcasters sometimes pay for stories, leads, video and audio recordings, photographs. That’s not unusual. But, as Charles Arthur writes, “As a reporter – and make no doubt, Gizmodo is reporting here, actually doing journalism red in tooth and claw – you inevitably end up walking close to the edge of what’s legal every now and then. Whether it’s being in receipt of confidential information, publishing something that’s potentially defamatory, or standing closer to the front line of a protest than the police would like, you occasionally have to put yourself in some legally-risky positions.”

Many thousands of years ago on the time scale of both the Internet and journalistic practices, specifically in 1971, I wrote a story for a New Jersey newspaper about rural poverty, illustrated by a photo I took of somebody’s snow-covered yard filled with discarded appliances and half-disassembled old cars sitting on cinder blocks. I thought at the time that the photo was sufficiently generic to protect the anonymity of the home’s occupier. I was wrong. The owner called me up and let me have it. I was still a kid myself — just 22 years old — and it was a lesson that stuck with me.

A couple decades later that lesson was enlarged by “Notes Toward a Journalism of Consciousness,” by D. Patrick Miller, in The Sun, a magazine for which I had once been a regular contributor. (No links to the story, but its table of contents is here.) In it Miller recalled his work as an investigative reporter in the Bay Area, and how sometimes he had to cross a moral line. In his case it was gaining the confidence of sources he would later, in some ways, betray — for the Greater Good of the story’s own moral purposes.

Gizmodo poses the moral goodness of its own story against the backdrop of Apple’s fanatical secrecy:

And hidden in every corner, the Apple secret police, a team of people with a single mission: To make sure nobody speaks. And if there’s a leak, hunt down the traitor, and escort him out of the building. Using lockdowns and other fear tactics, these men in black are the last line of defense against any sneaky eyes. The Gran Jefe Steve trusts them to avoid Apple’s worst nightmare: The leak of a strategic product that could cost them millions of dollars in free marketing promotion. One that would make them lose control of the product news cycle.

But the fact is that there’s no perfect security. Not when humans are involved. Humans that can lose things. You know, like the next generation iPhone.

Thus the second wrong makes a write, but not a right.

Two years ago, in this post here, I wrote,

Still, I think distinctions matter. There is a difference in kind between writing to produce understanding and writing to produce money, even when they overlap. There are matters of purpose to consider, and how one drives (or even corrupts) the other.

Two additional points.

One is about chilling out. Blogging doesn’t need to be a race. Really.

The other is about scoops. They’re overrated. Winning in too many cases is a badge of self-satisfaction one pins on oneself. I submit that’s true even if Memeorandum or Digg pins it on you first. In the larger scheme of things, even if the larger scheme is making money, it doesn’t matter as much as it might seem at the time.

What really matters is … Well, you decide.

Gizmodo was acting in character here. That character is traditional journalism itself, which is no stranger to moral compromises.

I’m not saying that one must not sometimes make those compromises. We all often do, regardless of our professions. What makes journalism a special case is its own moral calling.

How high a calling is it to expose the innards of an iPhone prototype?

To help decide, I recommend the movie Absence of Malice.

Was malice absent in Gizmodo’s case? And, even if it was, is the story worth what it cost to everybody else involved — including whatever dollar amount Gizmodo paid to its source?

I submit that it wasn’t. But then, I’m not in Gizmodo’s business. I also don’t think that business is journalism of the sort we continue to idealize, even though journalism never has been as ideal as we veterans of the trade like to think it is.


March Madness for me this year was a double treat. First, my team, the Duke Blue Devils, won the championship. (Though my heart went out to Butler, which came within inches of winning at the buzzer on a half-court shot.) Second, I got to follow the Devils, and North Carolina Basketball in general, on WDNC. I did this on my iPhone. I listened in my pocket as I cooked in the kitchen, rode on my bike, and walked to the bus and the train. I dug the morning shows, the PackMan in the afternoon, and hyper-local features such as the Duke Basketball show from the Washington Duke Inn, on Duke’s campus.

I loved hearing old familiars, like Duke play-by-play announcer Bob Harris, who started as a sales guy at WDNC in 1975, not long after I left that same job. In those days WDNC was a struggling Top 40 station, still owned by the Durham Herald-Sun newspapers, still with studios in the paper’s building, and still carrying CBS news (its lone connection to a glorious past). Since then WDNC has bounced through a number of formats, and currently thrives in the overlap of larger broadcasting empires. Its FM counterpart is WCMC/99.9, which didn’t exist when I left town in 1985. Currently known as “620 The Buzz” (the FM is “The Fan”), it was until recently The Bull. (In fact, if you go to http://wdnc.com, it re-directs to http://www.620thebull.com/, which is a blank page. Somebody needs to get a second re-direct going there.)

A confession. Not long after Bob Harris took over play-by-play for Duke games, he often had Mike Krzyzewski, then Duke’s rookie basketball coach, as a guest. I wasn’t a fan of Coach K. His predecessor, Bill Foster, was gregarious, emotional and easy for fans to love; Krzyzewski seemed cold and a bit nasty. He rarely smiled and had a coaching style that appeared to consist entirely of barking at officials. I once said of him, “There’s nothing about that guy that a blow-dry and a sense of humor wouldn’t cure.” While it wasn’t quite a nickname for Coach K, it stuck, and I heard it repeated often. Today, of course, Krzyzewski is an institution, and much loved by everybody who knows him, especially his players.

Anyway, the most interesting irony to me, as I listen to WDNC here in Cambridge, Mass, is that it has long been the custom in radio to obsess about signals and coverage — since you can’t listen to what you can’t get. Among souls who still do this I know few who are more devoted, even still, than I am. (The very best is Scott Fybush, by the way. I love his site visits.)

As a kid growing up in New Jersey I would ride my bike down to visit the transmitters of New York’s AM stations, whose towers bristled from swamps on the flanks of the Hackensack river: WABC, WINS, WMGM/WHN, WOV/WADO, WMCA, WNEW, WHOM…

I’d talk with the guys who manned the transmitters (they were always guys, and they were often old), logging readings and walking out to the towers to make sure all was well. I became a ham radio operator around that time, and continued to fancy myself something of an engineer, though technically I wasn’t. Still, I jumped at the opportunity to take shifts maintaining WDNC’s transmitter as a side job when I worked there. The whole plant was about the same age as me (at the time, 27), and spread across about ten acres at the end of a dirt road on the northwest side of town. It was 5000 watts by day and 1000 watts by night, with directional patterns produced by its three towers. The shot above is from Bing’s excellent “bird’s eye” view of the site. (Why doesn’t Microsoft make more of this? Google has nothing like it, and it totally rocks.) And it’s much nicer now than it was then. At that time the fields had turned to high brush, and I needed to ride a lawnmower out to the towers on a bumpy path, so I wouldn’t get ticks. (One could pick up — I’m not kidding, hundreds of ticks by walking out there.)

What fascinated me most about the facility was the engineering files, which included details on the transmission patterns and coverage maps showing how waves interacted with conductive ground to produce signal intensities that didn’t look as much like the signal pattern as one might expect. AM coverage depends on ground conductivity. In North Carolina (and the East in general) the ground conductivity is poor; but at the bottom end of the AM dial the waves are longer and travel farther along the ground in any case. WDNC was at 620, so its signal was many times the size of a signal at the top end of the dial with the same wattage.

Now I can go online and see WDNC’s daytime pattern and its nighttime pattern, along with the coverage they produce. Here’s a mash-up of patterns (left) and coverage (right):

Which is all well and cool. Playing with this stuff is catnip for me. But it’s also meaningless, once radio moves off AM and FM and onto the Net, where in the long run it makes much more sense.

What we’re dealing with, in the images I show here, is exceedingly antique stuff. The basics of AM broadcast engineering were set in the 1920s and 1930s. FM dates from the 1940s and 1950s. Recent improvements to both (through IBOC — In Band On Channel) are largely proprietary, and uptake on the receiving end borders on pathetic. None of the technologies employed are interactive, much less Net-native. They soak billions of watts off the world’s power grids. AM stations occupy large areas of real estate. FM and TV stations use frequencies that require high elevations, provided by tall towers, buildings or mountains, offering hazards to aviation and bird migration. Not to mention that lots of the biggest towers tend to fall down. In 1989 a pair of 2000-foot TV/FM towers near Raleigh (serving the same areas outlined above) collapsed in the same ice storm.

Three problems stand in the way of building out radio on the Net.

First is the mobile phone system that carries it. When I listen to WDNC on my iPhone, I don’t care how much data I use. AT&T has no data limit for the iPhone or the iPad. Other carriers need to have similar deals. To my knowledge they don’t — at least not in the U.S. (Sprint used to, and after my problems with Sprint last year I doubt I’ll use its system much for media again soon.) Still, even AT&T subordinates mobile data to mobile telephony. This gets more retro every day. In the long run, we’ll have a mobile data system that includes mobile telephony but is not defined by it (and its infuriating billing systems). These also need to be better integrated with wi-fi from all sources (and not just the carriers’ own). These days most wi-fi access points are “secure,” making them useless as part of a larger system. But that can change.

Second is revising the rules restricting music streamed and podcast over the Net. Copyright law, especially as established by the 1998 Digital Millennium Copyright Act, screwed the hell out of music broadcasting and podcasting. Today we have some of the former and little of the latter (except for “podsafe” music, which includes approximately nothing that’s been popular over the last 80 years). Fixing this won’t be easy, but it needs to be done.

Third is revising the means by which stations make money, and rules about where advertising can be carried. For the former we need a much better system for listeners to pay broadcasters on a voluntary basis, for both commercial and noncommercial stations. (This is why at ProjectVRM we are working on EmanciPay, for example.) For advertising, there are currently restrictions on much national advertising, which is why the majority of ads I hear on WDNC (and other commercial stations that do streaming) are public service announcements from the Ad Council. Listening to these, over and over and over and over, accelerates the listener’s own aging process.

Networks and stations also need to realize that more and more online listeners aren’t tuning in to Web pages. They’re tuning directly to streams using applications on mobile devices. The folks on WDNC do a good job of using Twitter, Facebook and other familiar “social media,” but they don’t seem to have a clue that it’s a heck of a lot easier to listen to mobile radio on something that’s actually like a radio — namely a smartphone — than on a computer. Search for “radio” in Apple’s app store and you’ll get hundreds of results. The Public Radio Player, there on the left, has had over 2.5 million downloads so far. Hopefully the iPad will help. Check out Pandora’s latest.

Anyway, a big thanks to the folks at WDNC/TheBuzz for a great season of Duke, Carolina and ACC basketball coverage — especially for a listener stuck here in New England, where pro sports dominate. (Not that I don’t love those too. I just need my college basketball fix.) Props to @TZarzour and @WRALsportsFan too.

I was just interviewed for a BBC television feature that will run around the same time the iPad is launched. I’ll be a talking head, basically. For what it’s worth, here’s what I provided as background for where I’d be coming from in the interview:

  1. The iPad will arrive in the market with an advantage no other completely new computing device for the mass market has ever enjoyed: the ability to run a 100,000-app portfolio that’s already developed, in this case for the iPhone. Unless the iPad is an outright lemon, this alone should assure its success.
  2. The iPad will launch a category within which it will be far from the only player. Apple’s feudal market-control methods (all developers and customers are trapped within its walled garden) will encourage competitors that lack the same limitations. We should expect other hardware companies to launch pads running on open source operating systems, especially Android and Symbian. (Disclosure: I consult Symbian.) These can support much larger markets than Apple’s closed and private platforms alone will allow.
  3. The first versions of unique hardware designs tend to be imperfect and get old fast. Such was the case with the first iPods and iPhones, and will surely be the case with the first iPads as well. The ones being introduced next week will seem antique one year from now.
  4. Warning to competitors: copying Apple is always a bad idea. The company is an example only of itself. There is only one Steve Jobs, and nobody else can do what he does. Fortunately, he only does what he can control. The rest of the market will be out of his control, and it will be a lot bigger than what fits inside Apple’s beautiful garden.

I covered some of that, and added a few things, which I’ll enlarge with a quick brain dump:

  1. The iPad brings to market a whole new form factor that has a number of major use advantages over smartphones, laptops and netbooks, the largest of which is this: it fits in a purse or any small bag — where it doesn’t act just like any of those other devices. (Aside from running all those iPhone apps.) It’s easy and welcoming to use — and its uses are not subordinated, by form, to computing or telephony. It’s an accessory to your own intentions. This is an advantage that gets lost amidst all the talk about how it’s little more than a new display system for “content.”
  2. My own fantasy for tablets is interactivity with the everyday world. Take retailing for example. Let’s say you syndicate your shopping list, but only to trusted retailers, perhaps through a fourth party (one that works to carry out your intentions, rather than sellers’ — though it can help you engage with them). You go into Target and it gives you a map of the store, where the goods you want are, and what’s in stock, what’s not, and how to get what’s missing, if they’re in a position to help you with that. You can turn their promotions on or off, and you can choose, using your own personal terms of service, what data to share with them, what data not to, and conditions of that data’s use (see the sketch after this list). Then you can go to Costco, the tire store, and the university library and do the same. I know it’s hard to imagine a world in which customers don’t have to belong to loyalty programs and submit to coercive and opaque terms of data use, but it will happen, and it has a much better chance of happening faster if customers are independent and have their own tools for engagement. Which are being built. Check out what Phil Windley says here about one approach.
  3. Apple works vertically. Android, Symbian, Linux and other open OSes, with the open hardware they support, work horizontally. There is a limit to how high Apple can build its walled garden, nice as it will surely be. There is no limit to how wide everybody else can make the rest of the marketplace. For help imagining this, see Dave Winer’s iPad as a Coral Reef.
  4. Content is not king, wrote Andrew Odlyzko in 2001. And he’s right. Naturally, big publishers (New York Times, Wall Street Journal, the New Yorker, Condé Nast, the Book People) think it is. Their fantasy is the iPad as a hand-held newsstand (where, as with real-world newsstands, you have to pay for the goods). Same goes for the TV and movie people, who see the iPad as a replacement for their old distribution systems (also for pay). No doubt these are Very Big Deals. But how the rest of us use iPads (and other tablets) is a much bigger deal. Have you thought about how you’ll blog, or whatever comes next, on an iPad? Or on any tablet? Does it only have to be in a browser? What about using a tablet as a production device, and not just an instrument of consumption? I don’t think Apple has put much thought into this, but others will, outside Apple’s walled garden. You should too. That’s because we’re at a juncture here. A fork in the road. Do we want the Internet to be broadcasting 2.0 — run by a few content companies and their allied distributors? Or do we want it to be the wide open marketplace it was meant to be in the first place, and is good for everybody? (This is where you should pause and read what Cory Doctorow and Dave Winer say about it.)
  5. We’re going to see a huge strain on the mobile data system as iPads and other tablets flood the world. Here too it will matter whether the mobile phone companies want to be a rising tide that lifts all boats, or just conduits for their broadcasting and content production partners. (Or worse, old fashioned phone companies, treating and billing data in the same awful ways they bill voice.) There’s more money in the former than the latter, but the latter are their easy pickings. It’ll be interesting to see where this goes.
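To make the retail fantasy in point 2 above a bit more concrete, here is a minimal sketch of what a syndicated shopping list with its own terms of data use might look like. Every field name, value, and the idea of a fourth-party relay are hypothetical illustrations, not any existing standard or service:

```python
# Hypothetical sketch of a shopping list an individual might syndicate to
# trusted retailers through a fourth party. All field names and values are
# invented for illustration; no real service or standard is implied.
import json

shopping_intent = {
    "items": ["double stroller", "AA batteries"],
    "location": "downtown Boston",
    "needed_by": "2010-04-30T18:00:00-04:00",
    "share_with": ["trusted-retailers"],   # routed by my fourth party
    "terms_of_service": {                  # my terms, not the seller's
        "data_use": "this transaction only",
        "retention": "30 days",
        "promotions": "off",               # I can flip this on per store
        "resale_of_data": False,
    },
}

print(json.dumps(shopping_intent, indent=2))
```

The point of the sketch is only that the customer, not the store, is the source of both the intent and the terms.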

I also deal with all this in a longer post that will go up elsewhere. I’ll point to it here when it comes up. Meanwhile, dig this post by Dave Winer and this one by Jeff Jarvis.


I submit to your interest two speeches that challenge acceptance of status quos by which our collective frogs are slowly boiling.

First is Freedom in the Cloud, by Eben Moglen, given at the Internet Society in New York on 5 February.

Second is Making Sense of Privacy and Publicity, by danah boyd, given on 13 March at SXSW.

A teaser quote from Eben:

…in effect, we lost the ability to use either legal regulation or anything about the physical architecture of the network to interfere with the process of falling away from innocence that was now inevitable in the stage I’m talking about, what we might call late Google stage 1.

It is here, of course, that Mr. Zuckerberg enters.

The human race has susceptibility to harm but Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age.

Because he harnessed Friday night. That is, everybody needs to get laid and he turned it into a structure for degenerating the integrity of human personality and he has to a remarkable extent succeeded with a very poor deal. Namely, “I will give you free web hosting and some PHP doodads and you get spying for free all the time”. And it works.

A teaser quote from Danah:

It’s easy to think that “public” and “private” are binaries. We certainly build a lot of technology with this assumption. At best, we break out of this with access-control lists where we list specific people who some piece of content should be available to. And at best, we expand our notion of “private” to include everything that is not “public.” But this binary logic isn’t good enough for understanding what people mean when they talk about privacy. What people experience when they talk about privacy is more complicated than what can be instantiated in a byte.

To get at this, let’s talk about how people experience public and private in unmediated situations. Because it’s not so binary there either.

First, think about a conversation that you may have with a close friend. You may think about that conversation as private, but there is nothing stopping your friend from telling someone else what was said, except for your trust in your friend. You actually learned to trust your friend, presumably through experience.

Learning who to trust is actually quite hard. Anyone who has middle school-aged kids knows that there’s inevitably a point in time when someone says something that they shouldn’t have and tears are shed. It’s hard to learn to really know for sure that someone will keep their word. But we don’t choose not to tell people things simply because they could spill the beans. We do our best to assess the situation and act accordingly.

We don’t just hold people accountable for helping us maintain privacy; we also hold the architecture around us accountable. We look around a specific place and decide whether or not we trust the space to allow us to speak freely to the people there.

They’re talking about different things, but they overlap. They both have to do with a loss of control, and both set out agendas for those who care. Curious to know what y’all think.

Earlier this year the Pew Research Center’s Internet & American Life Project and Elon University conducted research toward The Future of the Internet IV, the latest in their survey series, which began with Future of the Internet I – 2004. This latest report includes guided input from respondents such as myself (a “thoughtful analyst,” they kindly said) on subjects pertaining to the Net’s future. We were asked to choose between alternative outcomes — “tension pairs” — and to explain our views. Here’s the whole list:

  1. Will Google make us stupid?
  2. Will we live in the cloud or the desktop?
  3. Will social relations get better?
  4. Will the state of reading and writing be improved?
  5. Will those in GenY share as much information about themselves as they age?
  6. Will our relationship to key institutions change?
  7. Will online anonymity still be prevalent?
  8. Will the Semantic Web have an impact?
  9. Are the next takeoff technologies evident now?
  10. Will the Internet still be dominated by the end-to-end principle?

The results were published here at Pew and Elon’s Imagining the Internet site. Here’s the .pdf.

My own views are more than well represented in the 2010 report. One of my responses (to the last question) was even published in full. Still, I thought it would be worth sharing my full responses to all the questions. That’s why I’m posting them here.

Each question is followed by two statements — the “tension pair” — and in some cases by additional instruction. I’ve italicized those.

[Note… Much text here has been changed to .html from .pdf and .doc forms, and extracting all the old formatting jive has been kind of arduous. Bear with me while I finish that job, later today. (And some .html conventions don’t work here in WordPress, so that’s a hassle too.)]


1. Will Google make us smart or stupid?

1 By 2020, people’s use of the Internet has enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid (http://www.theatlantic.com/doc/200807/google).

2 By 2020, people’s use of the Internet has not enhanced human intelligence and it could even be lowering the IQs of most people who use it a lot. Nicholas Carr was right: Google makes us stupid.

1a. Please explain your choice and share your view of the Internet’s influence on the future of human intelligence in 2020 – what is likely to stay the same and what will be different in the way human intellect evolves?


Though I like and respect Nick Carr a great deal, my answer to the title question in his famous essay in The Atlantic — “Is Google Making Us Stupid?” — is no. Nothing that informs us makes us stupid.

Nick says, “Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.” Besides finding that a little hard to believe (I know Nick to be a deep diver, still), there is nothing about Google, or the Net, to keep anyone from diving — and to depths that were not reachable before the Net came along. Also, compare viewing the Net to using the Net. There is clearly a massive move to the latter from the former. And this move, at the very least, requires being less of a potato.

But that’s all a separate matter from Google itself. There is no guarantee that Google will be around, or in the same form, in the year 2020.

First, there are natural limits to any form of bigness, and Google is no exception to those. Trees do not grow to the sky.

Second, nearly all of Google’s income is from advertising. There are two problems with this. One is that improving a pain in the ass does not make it a kiss — and advertising is, on the whole, still a pain in the user’s ass. The other is that advertising is a system of guesswork, which by nature makes it both speculative and inefficient. Google has greatly reduced both those variables, and made advertising accountable for the first time: advertisers pay only for click-throughs. Still, for every click-through there are hundreds or thousands of “impressions” that waste server cycles, bandwidth, pixels, rods and cones. The cure for this inefficiency can’t come from the sell side. It must come from the demand side. When customers have means for advertising their wants and needs (e.g. “I need a stroller for twins in downtown Boston in the next two hours. Who’s coming through and how?”) — and to do this securely and out in the open marketplace (meaning not just in the walled gardens of Amazons and eBays) — much of advertising’s speculation and guesswork will be obsoleted. Look at it this way: we need means for demand to drive supply at least as well as supply drives demand. By 2020 we’ll have that. (Especially if we succeed at work we’re doing through ProjectVRM at Harvard’s Berkman Center.) Google is well positioned to help with that shift. But it’s an open question whether or not they’ll get behind it.

Third, search itself is at risk. For the last fifteen years we have needed search because the Web has grown without a directory other than DNS (which only deals with what comes between the // and the /.) Google has succeeded because it has proven especially good at helping users find needles in the Web’s vast haystack. But what happens if the Web ceases to be a haystack? What if the Web gets a real directory, like LANs had back in the 80s — or something like one? The UNIX file paths we call URLs (e.g. http://domain.org/folder/folder/file.htm…) presume a directory structure. This alone suggests that a solution to the haystack problem will eventually be found. When it is, search will then be more of a database lookup than the colossally complex thing it is today (requiring vast data centers that suck huge amounts of power off the grid, as Google constantly memorizes every damn thing it can find in the entire Web). Google is in the best position to lead the transition from the haystack Web to the directory-enabled one. But Google may remain married to the haystack model, just as the phone companies of today are still married to charging for minutes and cable companies are married to charging for channels — even though both concepts are fossils in an all-digital world.
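A small illustration of that point: DNS resolves only the host name, while everything after it is a file-system-like path that a real directory could answer directly instead of leaving to a search engine’s haystack. A sketch, using the example URL from the paragraph above:

```python
# DNS only resolves the host; the rest of the URL is a directory-like path.
from urllib.parse import urlparse

parts = urlparse("http://domain.org/folder/folder/file.htm")
print(parts.netloc)                       # domain.org  (what DNS handles)
print(parts.path.strip("/").split("/"))   # ['folder', 'folder', 'file.htm']
```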


2. Will we live in the cloud or on the desktop?

1 By 2020, most people won’t do their work with software running on a general-purpose PC. Instead, they will work in Internet-based applications, like Google Docs, and in applications run from smartphones. Aspiring application developers will sign up to develop for smart-phone vendors and companies that provide Internet-based applications, because most innovative work will be done in that domain, instead of designing applications that run on a PC operating system.

2 By 2020, most people will still do their work with software running on a general-purpose PC. Internet-based applications like Google Docs and applications run from smartphones will have some functionality, but the most innovative and important applications will run on (and spring from) a PC operating system. Aspiring application designers will write mostly for PCs.

Please explain your choice and share your view about how major programs and applications will be designed, how they will function, and the role of cloud computing by 2020.

The answer is both.

Resources and functions will operate where they make the most sense. As bandwidth goes up, and barriers to usage (such as high “roaming” charges for data use outside a carrier’s home turf) go down, and Bob Frankston’s “ambient connectivity” establishes itself, our files and processing power will locate themselves where they work best — and where we, as individuals, have the most control over them.

Since we are mobile animals by nature, it makes sense for us to connect with the world primarily through hand-held devices, rather than the ones that sit on our desks and laps. But these larger devices will not go away. We need large screens for much of our work, and we need at least some local storage for when we go off-grid, or need fast connections to large numbers of big files, or wish to keep matters private through physical disconnection.

Clouds are to personal data what banks are to personal money. They provide secure storage, and are in the best positions to perform certain intermediary and back-end services, such as hosting applications and storing data. This latter use has an importance that will only become more critical as each of us accumulates personal data by the terabyte. If your home drives crash or get stolen, or your house burns down, your data can still be recovered if you’ve backed it up in the cloud.

But most home users (at least in the U.S. and other under-developed countries) are still stuck at the far ends of asymmetrical connections with low upstream data rates, designed at a time when carriers thought the Net would mostly be a new system for distributing TV and other forms of “content.” Thus backing up terabytes of data online ranges from difficult to impossible.

This is why any serious consideration of cloud computing — especially over the long term — needs to take connectivity into account. Clouds are only as useful as connections permit. And right now the big cloud utilities (notably Google and Amazon) are way ahead of the carriers at imagining how connected computing needs to grow. For most carriers the Internet is still just the third act in a “triple play,” a tertiary service behind telephony and television. Worse, the mobile carriers show little evidence that they understand the need to morph from phone companies to data companies — even with Apple’s iPhone success screaming “this is the future” at them.

A core ideal for all Internet devices is what Jonathan Zittrain (in his book The Future of the Internet — and How to Stop It) calls generativity, which is maximized encouragement of innovation in both hardware and software. Today generativity in mobile devices varies a great deal. The iPhone, for example, is highly generative for software, but not for hardware (only Apple makes iPhones). And even the iPhone’s software market is sphinctered by Apple’s requirement that every app pass to market only through Apple’s “store,” which operates only through Apple’s iTunes, which runs only on Macs and PCs (no Linux or other OSes). On top of all that is Apple’s restrictive partnerships with AT&T (in the U.S.) and Rogers (in Canada). While AT&T allows unlimited data usage on the iPhone, Rogers still has a 6Gb limit.

Bottom line: Handhelds will be no smarter than the systems built to contain them. The market will open widest — and devices will get smartest — when anybody can make a smartphone (or any other mobile device), and use it on any network they please, without worrying about data usage limits or getting hit with $1000+ bills because they forgot to turn off “push notifications” or “location services” when they roamed out of their primary carrier’s network footprint. In other words, the future will be brightest when mobile systems get Net-native.


3. Will social relations get better?

1 In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the Internet has mostly been a negative force on my social world. And this will only grow more true in the future.

2 In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the Internet has mostly been a positive force on my social world. And this will only grow more true in the future.

3a. Please explain your choice and share your view of the Internet’s influence on the future of human relationships in 2020 — what is likely to stay the same and what will be different in human and community relations?

Craig Burton describes the Net as a hollow sphere — a three-dimensional zero — comprised entirely of ends separated by an absence of distance in the middle. With a hollow sphere, every point is visible to every other point. Your screen and my keyboard have no distance between them. This is a vivid way to illustrate the Net’s “end-to-end” architecture and how we perceive it, even as we also respect the complex electronics and natural latencies involved in the movement of bits from point to point anywhere on the planet. It also helps make sense of the Net’s distance-free social space.

As the “live” or “real-time” aspects of the net evolve, opportunities to engage personally and socially are highly magnified beyond all the systems that came before. This cannot help but increase our abilities not only to connect with each other, but to understand each other. I don’t see how this hurts the world, and I can imagine countless ways it can make the world better.

Right now my own family is scattered between Boston, California, Baltimore and other places. Yet through email, voice, IM, SMS and other means we are in frequent touch, and able to help each other in many ways. The same goes for my connections with friends and co-workers.

We should also hope that the Net makes us more connected, more social, more engaged and involved with each other. The human diaspora, from one tribe in Africa to thousands of scattered tribes — and now countries — throughout the world, was driven to a high degree by misunderstandings and disagreements between groups. Hatred and distrust between groups have caused countless wars and suffering beyond measure. Anything that helps us bridge our differences and increase understanding is a good thing.

Clearly the Internet already does that.


4. Will the state of reading and writing be improved?

1 By 2020, it will be clear that the Internet has enhanced and improved reading, writing, and the rendering of knowledge.

2 By 2020, it will be clear that the Internet has diminished and endangered reading, writing, and the intelligent rendering of knowledge.

4a. Please explain your choice and share your view of the Internet’s influence on the future of knowledge-sharing in 2020, especially when it comes to reading and writing and other displays of information – what is likely to stay the same and what will be different? What do you think is the future of books?

It is already clear in 2010 that the Net has greatly enhanced reading, writing, and knowledge held — and shared — by human beings. More people are reading and writing, and in more ways, for more readers and other writers, than ever before. And the sum of all of it goes up every day.

I’m sixty-two years old, and have been a journalist since my teens. My byline has appeared in dozens of publications, and the sum of my writing runs — I can only guess — into millions of words. Today very little of what I wrote and published before 1995 is available outside of libraries, and a lot of it isn’t even there.

For example, in the Seventies and early Eighties I wrote regularly for an excellent little magazine called The Sun. (It’s still around, at http://thesunmagazine.org) But, not wanting to carry my huge collection of Suns from one house to another (I’ve lived in 9 places over the last ten years), I gave my entire collection (including rare early issues) to an otherwise excellent public library, and they lost or ditched it. Few items from those early issues are online. My own copies are buried in boxes in a garage, three thousand miles from where I live now. So are dozens of boxes of photos and photo albums. (I was also a newspaper photographer in the early days, and have never abandoned the practice.)

On the other hand, most of what I’ve written since the Web came along is still online. And most of that work — including 34,000 photographs on Flickr — is syndicated through RSS (Really Simple Syndication) or its derivatives. So is the work of millions of other people. If that work is interesting in some way, it tends to get inbound links, increasing its discoverability through search engines and its usefulness in general. The term syndication was once applied only to professional purposes. Now everybody can do it.
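As a small illustration of how that syndication gets consumed, here is a sketch that pulls a feed and lists its newest entries, using the third-party feedparser library; the feed URL is a placeholder:

```python
# Minimal RSS consumption sketch using the third-party "feedparser" library
# (pip install feedparser). The feed URL below is a placeholder.
import feedparser

feed = feedparser.parse("https://example.com/feed.xml")
print(feed.feed.get("title", "untitled feed"))
for entry in feed.entries[:5]:
    print("-", entry.get("title", "untitled"), entry.get("link", ""))
```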

Look up RSS on Google. Today it brings in more than three billion results. Is it possible that this has decreased the quality and sum of reading, writing and human knowledge? No way.


5. Will the willingness of Generation Y / Millennials to share information change as they age?

1 By 2020, members of Generation Y (today’s “digital natives”) will continue to be ambient broadcasters who disclose a great deal of personal information in order to stay connected and take advantage of social, economic, and political opportunities. Even as they mature, have families, and take on more significant responsibilities, their enthusiasm for widespread information sharing will carry forward.

2 By 2020, members of Generation Y (today’s “digital natives”) will have “grown out” of much of their use of social networks, multiplayer online games and other time-consuming, transparency-engendering online tools. As they age and find new interests and commitments, their enthusiasm for widespread information sharing will abate.

5a. Please explain your choice and share your view of the Internet’s influence on the future of human lifestyles in 2020 – what is likely to stay the same and what will be different? Will the values and practices that characterize today’s younger Internet users change over time?

Widespread information sharing is not a generational issue. It’s a technological one. Our means for controlling access to data, or its use — or even for asserting our “ownership” of it — are very primitive. (Logins and passwords alone are clunky as hell, extremely annoying, and will be seen a decade hence as a form of friction we were glad to eliminate.)

It’s still early. The Net and the Web as we know them have only been around for about fifteen years. Right now we’re still in the early stages of the Net’s Cambrian explosion. By that metaphor Google is a trilobyte. We have much left to work out.

For example, take “terms of use.” Sellers have them. Users do not — at least not ones that they control. Wouldn’t it be good if you could tell Facebook or Twitter (or any other company using your data) that these are the terms on which they will do business with you, that these are the ways you will share data with them, that these are the ways this data can be used, and that this is what will happen if they break faith with you? Trust me: user-controlled terms of use are coming. (Work is going on right now on this very subject at Harvard’s Berkman Center, both at its Law Lab and ProjectVRM.)

Two current technical developments, “self-tracking” and “personal informatics,” are examples of ways that power is shifting from organizations to individuals — for the simple reason that individuals are the best points of integration for their own data, and the best points of origination for what gets done with that data.

Digital natives will eventually become fully empowered by themselves, not by the organizations to which they belong, or the services they use. When that happens, they’ll probably be more careful and responsible than earlier generations, for the simple reason that they will have the tools.


6. Will our relationship to institutions change?

1 By 2020, innovative forms of online cooperation will result in significantly more efficient and responsive governments, businesses, non-profits, and other mainstream institutions.

2 By 2020, governments, businesses, non-profits and other mainstream institutions will primarily retain familiar 20th century models for conduct of relationships with citizens and consumers online and offline.

6a. Please explain your choice and share your view of the Internet’s influence upon the future of institutional relationships with their patrons and customers between now and 2020. We are eager to hear what you think of how social, political, and commercial endeavors will form and the way people will cooperate in the future.

Online cooperation will only increase. The means are already there, and will only become more numerous and functional. Institutions that adapt to the Net’s cooperation-encouraging technologies and functions will succeed. Those that don’t will have a hard time.

Having it hardest right now are media institutions, for the simple reason that the Internet subsumes their functions, while also giving to everybody the ability to communicate with everybody else, at little cost, and often with little or no intermediating system other than the Net itself.

Bob Garfield, a columnist for AdAge and a host of NPR’s “On The Media,” says the media have entered what he calls (in his book by the same title) The Chaos Scenario. In his introduction Garfield says he should have called the book “Listenomics,” because listening is the first requirement of survival for every industry that lives on digital bits — a sum that rounds to approximately every industry, period.

So, even where the shapes of institutions persist, their internal functions must be ready to listen, and to participate in the market’s conversations, even when those take place outside the institution’s own frameworks.


7. Will online anonymity still be prevalent?

1 By 2020, the identification systems used online are tighter and more formal – fingerprints or DNA scans or retina scans. The use of these systems is the gateway to most of the Internet-enabled activity that users are able to perform, such as shopping, communicating, creating content, and browsing. Anonymous online activity is sharply curtailed.

2 By 2020, Internet users can do a lot of normal online activities anonymously even though the identification systems used on the Internet have been applied to a wider range of activities. It is still relatively easy for Internet users to create content, communicate, and browse without publicly disclosing who they are.

7a. Please explain your choice and share your view about the future of anonymous activity online by the year 2020.

In the offline world, anonymity is the baseline. Unless burdened by celebrity, we are essentially anonymous when we wander through stores, drive down the road, or sit in the audience of a theater. We become less anonymous when we enter into conversation or transact business. Even there, however, social protocols do not require that we become any more identifiable than required for the level of interaction. Our “identity” might be “the woman in the plaid skirt,” “the tall guy who was in here this morning,” or “one of our students.”

We still lack means by which an individual can selectively and gracefully shift from fully to partially anonymous, and from unidentified to identified — yet in ways that can be controlled and minimized (or maximized) as much as the individual (and others with whom he or she interacts) permits. In fact, we’re a long way off.

The main reason is that most of the “identity systems” we know put control on the side of sellers, governments, and other institutions, and not with the individual. In time, systems that give users control will be developed. These will be native to users and not provided only by large organizations (such as Microsoft, Google or the government).

A number of development communities have been working on this challenge since early in the last decade, and eventually they will succeed. Hopefully this will be by 2020, but I figured we’d have it done by 2010, and it seems like we’ve barely started.


8. Will the Semantic Web have an impact?

1 By 2020, the Semantic Web envisioned by Tim Berners-Lee and his allies will have been achieved to a significant degree and will have clearly made a difference for the average Internet user.

2 By 2020, the Semantic Web envisioned by Tim Berners-Lee will not be as fully effective as its creators hoped and average users will not have noticed much of a difference.

8a. Please explain your choice and share your view of the likelihood that the Semantic Web will have been implemented by 2020 and be a force for good in Internet users’ lives.

Tim Berners-Lee’s World Wide Web was a very simple and usable idea that relied on very simple and usable new standards (e.g. HTML and HTTP), which were big reasons why the Web succeeded. The Semantic Web is a very complex idea, and one that requires a lot of things to go right before it works. Or so it seems.

Tim introduced the Semantic Web Roadmap (http://www.w3.org/DesignIssues/Semantic.html) in September 1998. Since then, more than eleven years have passed. Some Semantic Web technologies have taken root: RDFa, for example, and microformats. But the concept itself has energized a relatively small number of people, and there is no “killer” tech or use yet.
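For readers who have not seen what “semantic” data looks like in practice, here is a minimal sketch using Python’s rdflib; the names and URIs are illustrative only. The underlying model is just subject-predicate-object triples, which is what technologies like RDFa let ordinary Web pages embed so machines can read the meaning and not just the markup.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/people/")  # illustrative namespace

g = Graph()
g.add((EX.alice, RDF.type, FOAF.Person))                         # Alice is a person
g.add((EX.alice, FOAF.name, Literal("Alice Example")))           # her name
g.add((EX.alice, FOAF.homepage, URIRef("http://example.org/")))  # her homepage

# Turtle is a human-readable serialization of the same triples.
print(g.serialize(format="turtle"))
```

Each triple is a small, machine-readable statement of meaning; the hard part the Semantic Web still faces is getting enough of the world to publish and link such statements that they add up to something average users notice.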

That doesn’t mean it won’t happen. Invention is the mother of necessity. The Semantic Web will take off when somebody invents something we all find we need. Maybe that something will be built out of some combination of code and protocols already lying around — either within the existing Semantic Web portfolio, or from some parallel effort such as XDI. Or maybe it will come out of the blue.

By whatever means, the ideals of the Semantic Web — a Web based on meaning (semantics) rather than syntax (the Web’s current model) — will still drive development. And we’ll be a decade farther along in 2020 than we are in 2010.


9. Are the next takeoff technologies evident now?

1 The hot gadgets and applications that will capture the imagination of users in 2020 are pretty evident today and will not take many of today’s savviest innovators by surprise.

2 The hot gadgets and applications that will capture the imagination of users in 2020 will often come “out of the blue” and not have been anticipated by many of today’s savviest innovators.

9a. Please explain your choice and share your view of its implications for the future. What do you think will be the hot gadgets, applications, technology tools in 2020?

“The blue” is the environment out of which most future innovation will come. And that blue is the Net.

Nearly every digital invention today was created by collaboration over the Net, between people working in different parts of the world. The ability to collaborate over distances, often in real time (or close to it), using devices that improve constantly, over connections that only get fatter and faster, guarantees that the number and variety of inventions will only go up. More imaginations will be captured more ways, more often. Products will be improved, and replaced, more often than ever, and in more ways than ever.

The hottest gadgets in 2020 will certainly involve extending one’s senses and one’s body. In fact, this has been the case for all inventions since humans first made stone tools and painted the walls of caves. That’s because humans are characterized not only by their intelligence and their ability to speak, but by their capacity to extend their senses, and their abilities, through their tools and technologies. Michael Polanyi, a scientist and philosopher, called this indwelling. It is through indwelling that the carpenter’s tool becomes an extension of his arm, and he has the power to pound nails through wood. It is also through indwelling that an instrument becomes an extension of the musician’s mouth and hands.

There is a reason why a pilot refers to “my wings” and “my tail,” or a driver to “my wheels” and “my engine.” By indwelling, the pilot’s senses extend outside herself to the whole plane, and the driver’s to his whole car.

The computers and smart phones of today are to some degree extensions of ourselves, but not to the extent that a hammer extends a carpenter, a car enlarges a driver or a plane enlarges a pilot. Something other than a computer or a smart phone will do that. Hopefully this will happen by 2020. If not, it will eventually.


10. Will the Internet still be dominated by the end-to-end principle?

1 In the years between now and 2020, the Internet will mostly remain a technology based on the end-to-end principle that was envisioned by the Internet’s founders. Most disagreements over the way information flows online will be resolved in favor of a minimum number of restrictions over the information available online and the methods by which people access it.

2 In the years between now and 2020, the Internet will mostly become a technology where intermediary institutions that control the architecture and significant amounts of content will be successful in gaining the right to manage information and the method by which people access and share it.

10a. Please explain your choice, note organizations you expect to be most likely to influence the future of the Internet and share your view of the effects of this between now and 2020.

There will always be a struggle to reconcile the Net’s end-to-end principle with the need for companies and technologies operating between those ends to innovate and make money. This tension will produce more progress than either the principle by itself or the narrow interests of network operators and other entities working between the Net’s countless ends.

Today these interests are seen as opposed — mostly because incumbent network operators want to protect businesses they see threatened by the Net’s end-to-end nature, which cares not a bit about who makes money or how. But in the future they will be seen as symbiotic, because both the principle and networks operating within it will be seen as essential infrastructure. So will what each does to help raise and renovate the Net’s vast barn.

The term infrastructure has traditionally been applied mostly to the public variety: roads, bridges, electrical systems, water systems, waste treatment and so on. But this tradition only goes back to the Seventies. Look up infrastructure in a dictionary from the 1960s or earlier and you won’t find it (except in the OED). There are still no institutes or academic departments devoted to infrastructure. It’s a subject in many fields, yet not a field in itself.

But we do generally understand what infrastructure is. It’s something solid and common we can build on. It’s geology humans make for themselves.

Digital technology, and the Internet in particular, provides an interesting challenge for understanding infrastructure, because we rely on it, yet it is not solid in any physical sense. It is like physical structures, but not itself physical. We go on the Net, as if it were a road or a plane. We build on it too. Yet it is not a thing.

Inspired by Craig Burton’s description of the Net as a hollow sphere — a three-dimensional zero comprised entirely of ends — David Weinberger and I wrote World of Ends in 2003 (http://worldofends.com). The purpose was to make the Net more understandable, especially to companies (such as phone and cable carriers) that had been misunderstanding it. Lots of people agreed with us, but none of those people ran the kinds of companies we addressed.

But, to be fair, most people still don’t understand the Net. Look up “The Internet is” on Google (with the quotes). After you get past the top entry (Wikipedia’s), here’s what they say:

  1. a Series of Tubes
  2. terrible
  3. really big
  4. for porn
  5. shit
  6. good
  7. wrong
  8. killing storytelling
  9. dead
  10. serious business
  11. for everyone
  12. underrated
  13. infected
  14. about to die
  15. broken
  16. Christmas all the time
  17. altering our brains
  18. changing health care
  19. laughing at NBC
  20. changing the way we watch TV
  21. changing the scientific method
  22. dead and boring
  23. not shit
  24. made of kittens
  25. alive and well
  26. blessed
  27. almost full
  28. distracting
  29. a brain
  30. cloudy

Do the same on Twitter, and you’ll get results just as confusing. At this moment (your search will vary; this is the Live Web here), the top results are:

  1. a weird, WEIRD place
  2. full of feel good lectures
  3. the Best Place to get best notebook computer deals
  4. Made of Cats
  5. Down
  6. For porn
  7. one of the best and worst things at the same time
  8. so small
  9. going slow
  10. not my friend at the moment
  11. blocked
  12. letting me down
  13. going off at 12
  14. not working
  15. magic
  16. still debatable
  17. like a jungle
  18. eleven years old
  19. worsening by the day
  20. extremely variable
  21. full of odd but exciting people
  22. becoming the Googlenet
  23. fixed
  24. forever
  25. a battlefield
  26. a great network for helping others around the world
  27. more than a global pornography network
  28. slow
  29. making you go nuts
  30. so much faster bc im like the only 1 on it

(I took out the duplicates. There were many involving cats and porn.)

Part of the problem is that we understand the Net in very different and conflicting ways. For example, when we say the Net consists of “sites,” with “domains” and “locations” that we “architect,” “design,” “build” and “visit,” we are saying the Internet is a place. It’s real estate. But if we say the Net is a “medium” for the “distribution” of “content” to “consumers” who “download” it, we’re saying the Net is a shipping system. These metaphors are very different. They yield different approaches to business and lawmaking, to name just two areas of conflict.

Bob Frankston, co-inventor (with Dan Bricklin) of spreadsheet software (VisiCalc) and one of the fathers of home networking, says the end-state of the Net’s current development is ambient connectivity, which “gives us access to the oceans of copper, fiber and radios that surround us.” Within those are what Frankston calls a “sea of bits” to which all of us contribute. To help clarify the anti-scarce nature of bits, he explains, “Bits aren’t really like kernels of corn. They are more like words. You may run out of red paint but you don’t run out of the color red.”

Much has been written about the “economics of abundance,” but we have barely begun to understand what that means or what can be done with it. The threats are much easier to perceive than the opportunities. Google is one notable exception to that. Asked at a Harvard meeting to explain the company’s strategy of moving into businesses where it expects to make no money directly for the services it offers, a Google executive explained that the company looked for “second and third order effects.”

JP Rangaswami, Chief Scientist for BT (disclosure: I consult for BT), describes these as “because effects.” You make money because of something rather than with it. Google makes money because of search, and because of Gmail. Not with them. Not directly.

Yet money can still be made with goods and services — even totally commodified ones. Amazon makes money with back-end Web services such as EC2 (computing) and S3 (data storage). Phone, cable, and other carriers can make money with “dumb pipes” too. They are also in perfect positions to offer low-latency services directly to their many customers at homes and in businesses. All the carriers need to do is realize that there are benefits to incumbency other than charging monopoly rents.
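As a hedged illustration of what selling a commodity service looks like from the customer’s side, here is roughly how a developer rents S3 storage by the request, using the boto3 library. The bucket and key names are placeholders, and the sketch assumes AWS credentials are already configured.

```python
import boto3  # AWS SDK for Python

# S3 is storage sold as a metered commodity: pay for the bytes you store and
# the requests you make, and consume it with a few plain API calls.
s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-backup-bucket",     # placeholder bucket name
    Key="2010-03-08/notes.txt",         # placeholder object key
    Body=b"Offsite copy of today's notes.",
)

obj = s3.get_object(Bucket="example-backup-bucket", Key="2010-03-08/notes.txt")
print(obj["Body"].read())
```

A carrier with fiber into homes and businesses could sell low-latency storage, backup or compute the same way, and incumbency would be a feature rather than a rent.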

The biggest danger for the Net and its use comes not from carriers, but from copyright absolutists in what we have recently come to call the “content” industry. For example, in the U.S. the DMCA (Digital Millennium Copyright Act), passed in 1998, was built to protect the interests of copyright holders and served as a model for similar lawmaking in other countries. It did little to protect the industries that lobbied for its passage, while hurting or holding back a variety of other industries. Most notable (at least for me) was the embryonic Internet radio industry, which was just starting to take off when the DMCA came along. The saga that followed is woefully complex, and the story is far from over, but the result in the meantime is a still-infant industry that suffers many more restrictions with respect to “content” than over-the-air radio stations do. Usage fees for music are much higher than those faced by broadcasters — so high that making serious money by webcasting music is nearly impossible. There are also tight restrictions on what music can be played, when, and how often. Music on podcasts is also essentially prohibited, because podcasters need to “clear rights” for every piece of copyrighted music they play. That’s why, except for “podsafe” music, podcasting today is almost all talk.

I’ll give the last words here to Cory Doctorow, who publishes them freely in his new book Content:

… there is an information economy. You don’t even need a computer to participate. My barber, an avowed technophobe who rebuilds antique motorcycles and doesn’t own a PC, benefited from the information economy when I found him by googling for barbershops in my neighborhood.

Teachers benefit from the information economy when they share lesson plans with their colleagues around the world by email. Doctors benefit from the information economy when they move their patient files to efficient digital formats. Insurance companies benefit from the information economy through better access to fresh data used in the preparation of actuarial tables. Marinas benefit from the information economy when office-slaves look up the weekend’s weather online and decide to skip out on Friday for a weekend’s sailing. Families of migrant workers benefit from the information economy when their sons and daughters wire cash home from a convenience store Western Union terminal.

This stuff generates wealth for those who practice it. It enriches the country and improves our lives.

And it can peacefully co-exist with movies, music and microcode, but not if Hollywood gets to call the shots. Where IT managers are expected to police their networks and systems for unauthorized copying — no matter what that does to productivity — they cannot co-exist. Where our operating systems are rendered inoperable by “copy protection,” they cannot co-exist. Where our educational institutions are turned into conscript enforcers for the record industry, they cannot co-exist.

The information economy is all around us. The countries that embrace it will emerge as global economic superpowers. The countries that stubbornly hold to the simplistic idea that the information economy is about selling information will end up at the bottom of the pile.


But all that is just me (and my sources, such as Cory). There are 894 other respondents compiled by the project, and I invite you to visit their answers there.

I’ll also put in a plug for FutureWeb in Raleigh, April 28-30, where I look forward to seeing many old friends and relatives as well. (I lived in North Carolina for most of the twenty years from 1965 to 1985, and miss it still.) Hope to see some of y’all there.

*[Later…] For a bit of context, see Evolution Going Great, Reports Trilobite, in The Onion.

Tags: , ,

Some encouraging words here about Verizon’s expected 4G data rates:

After testing in the Boston and Seattle areas, the provider estimates that a real connection on a populated network should average between 5Mbps to 12Mbps in download rates and between 2Mbps to 5Mbps for uploads. Actual, achievable peak speeds in these areas float between 40-50Mbps downstream and 20-25Mbps upstream. The speed is significantly less than the theoretical 100Mbps promised by Long Term Evolution (LTE), the chosen standard, but would still give Verizon one of the fastest cellular networks in North America.

No mention of metering or data caps, of course.

Remember, these are phone companies. They love to meter stuff. It’s what they know. They can hardly imagine anything else. They are billing machines with networks attached.

In addition to the metering problems Brett Glass details here, there is the simple question of whether carriers can meter data at all. Data ain’t minutes. And metering discourages both usage and countless businesses other than the phone companies’ own. I have long believed that phone and cable companies will see far more business for themselves if they open up their networks to possibilities other than those optimized for the relocation of television from air to pipes.

Data capping is problematic too. How can customers tell how close they are to a cap? How much does fear of overage charges discourage legitimate uses? And what about the accounting? My own problems with Sprint on this topic don’t give me any confidence that the carriers know how to impose data usage caps gracefully.
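For a sense of the arithmetic a capped customer is left to do on their own, here is a small sketch; the cap size and usage figures are hypothetical.

```python
def cap_report(cap_gb: float, used_gb: float, day_of_month: int, days_in_month: int = 30):
    """Rough self-accounting under a metered plan: how much of the cap is gone,
    and will the current pace blow through it before the month ends?"""
    pct_used = 100.0 * used_gb / cap_gb
    projected_gb = (used_gb / day_of_month) * days_in_month
    return pct_used, projected_gb

# Hypothetical month: a 5 GB cap, with 3.2 GB used by day 18.
pct, projected = cap_report(cap_gb=5.0, used_gb=3.2, day_of_month=18)
print(f"{pct:.0f}% of cap used; on pace for {projected:.1f} GB this month")
```

For scale, a single standard-definition video stream at roughly 2 Mbps works out to about 0.9 GB per hour, so a 5 GB monthly cap amounts to five or six hours of video. That is the kind of math the carrier, not the customer, should be surfacing.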

There’s a lot of wool in current advertising on these topics too. During the Academy Awards last night, Comcast had a great ad for Xfinity, its new high-speed service, promoted entirely as an entertainment pump. By which I mean that it was an impressive piece of promotion. But there was no mention of upstream speeds (downstream teaser: 100Mb/s). Or other limitations. Or how they might favor NBC (should they buy it) over other content sources. (Which, of course, they will.)

Sprint’s CEO was in another ad, promoting the company’s “unlimited text, unlimited Web and unlimited calling…” Right. It says right here, in a link-proof pop-up titled “Important 4G coverage and plan information,” that 4G is unlimited, but 3G (what most customers, including me, still have) is limited to “5GB/300MB off-network roaming per month.” They do list “select cities” where 4G is available. Here’s Raleigh. I didn’t find New York, Los Angeles, Chicago or Boston on the list. I recall Amarillo. Can’t find it now, and the navigation irritates me too much to look.

Anyway, I worry that what we’ll get is phone and cable company sausage in Internet casing. And that, on the political side, the carriers will succeed in their campaign to clothe themselves as the “free market” fighting “government takeovers” while working the old regulatory capture game, to keep everybody else from playing.

So five, ten years from now, all the rest of the independent ISPs and WISPs will be gone. So will backbone players other than carriers and Google.  We’ll be gaga about our ability to watch pay-per-view on our fourth-generation iPads with 3-d glasses. And we won’t miss the countless new and improved businesses that never happened because they were essentially outlawed by regulators and their captors.

Tags: , , , , , , , ,

(Photo: Titan ICBM, Tucson, 2005)

After visiting the Titan Missile Museum in Arizona, Matt Blaze wrote, “How did we keep from blowing ourselves up for all those years?”

Good question.

Take a listen the next time you hear somebody say “Good question.” It means they don’t have the answer. Maybe it also means the best questions are unanswerable.

And maybe we also need to keep asking them anyway, for exactly that reason. This was a lesson I got a long time ago, and reported in 2005, in this post here:

About ten years ago I took a few days off to chill in silence at the New Camaldoli Monastery in Big Sur. One of the values the White Monks of the monastery share with Quakers in Sunday meeting is confinement of speech to that which “improves on the silence”. (Or, in the case of the monks, fails to insult the contemplative virtues of silence.) It was there that I had an amazing conversation with Father John Powell, who told me that any strictly literalist interpretation of Christ’s teachings “insulted the mystery” toward which those teachings pointed — and which it was the purpose of contemplative living to explore. “Christ spoke in paradox”, he said. Also metaphor, which itself is thick with paradox. Jesus knew, Father Powell said, that we understand one thing best in terms of another which (paradoxically) is literally different yet meaningfully similar.

For example, George Lakoff explains that we understand time in terms of money (we “save”, “waste” and “spend” it) and life in terms of travel (we “arrive”, “depart”, “fall off the wagon” or “get stuck in a rut”). For what it’s worth, George is Jewish. Like Jesus.

The greatest mystery of life, Father Powell explained, isn’t death. It’s life. “Life is exceptional”, he said. For all the fecundity of nature, it is surrounded by death. Far as we can tell, everything we see when we look to the heavens is dead as a gravestone. Yet it inspires the living. “Life”, he said, sounding like an old rabbi, “is the mystery”.

I was a kid in the fifties, when the U.S. and the Soviet Union were busy not talking to each other while planting thousands of nuclear-tipped ICBMs in the ground, pointed at each other’s countries. They were also sending thousands of additional warheads to sea in nuclear submarines. Every warhead was ready to obliterate whole cities in enemy territory. Our house was five miles from Manhattan. We had frequent air raid drills, and learned how to “duck and cover” in the likely event of sudden incineration. Like many other kids in those days, I wished to enjoy as much of life as I could before World War III, which would last only a few hours, after which some other species would need to take over.

I was no math whiz, but I was an authority on adults and their failings. I could look at the number of missiles involved, guess at all the things that could go wrong, and make a pretty good bet that something, sooner or later, would. I wasn’t sure we would die, but I was sure the chances were close to even.

In his new book The Dead Hand, Washington Post reporter David E. Hoffman explains exactly how close we came:

At 12:15 A.M., Petrov was startled. Across the top of the room was a thin, silent panel. Most of the time no one even noticed it. But suddenly it lit up, in red letters: LAUNCH.

A siren wailed. On the big map with the North Pole, a light at one of the American missile bases was illuminated. Everyone was riveted to the map. The electronic panels showed a missile launch. The board said “high reliability.” This had never happened before. The operators at the consoles on the main floor jumped up, out of their chairs. They turned and looked up at Petrov, behind the glass. He was the commander on duty. He stood, too, so they could see him. He started to give orders. He wasn’t sure what was happening. He ordered them to sit down and start checking the system. He had to know whether this was real, or a glitch. The full check would take ten minutes, but if this was a real missile attack, they could not wait ten minutes to find out. Was the satellite holding steady? Was the computer functioning properly?…

The phone was still in his hand, the duty officer still on the line, when Petrov was jolted again, two minutes later.

The panel flashed: another missile launched! Then a third, a fourth and a fifth. Now, the system had gone into overdrive. The additional signals had triggered a new warning. The red letters on the panel began to flash MISSILE ATTACK, and an electronic blip was sent automatically to the higher levels of the military. Petrov was frightened. His legs felt paralyzed. He had to think fast…

Petrov made a decision. He knew the system had glitches in the past; there was no visual sighting of a missile through the telescope; the satellites were in the correct position. There was nothing from the radar stations to verify an incoming missile, although it was probably too early for the radars to see anything.

He told the duty officer again: this is a false alarm.

The message went up the chain.

How many other events were there like that? On both sides?

I think there lurks in human nature a death wish — for others, even more than for ourselves. We rationalize nothing better, or with more effect, than killing each other. Especially the other. Fill in the blank. The other tribe, the other country, the other culture, the other religion, whatever.  “I’ve seen the future,” Leonard Cohen sings. “It is murder.” (You can read the lyrics here, but I like the video version.)

Yet we also don’t. The answer to Matt’s question — How did we keep from blowing ourselves up for all those years? — is Lieutenant Colonel Stanislav Petrov, and others like him, unnamed. Petrov had the brains and the balls to prevent World War III by saying “Nyet” to doing the crazy thing that only looked sane because a big institution (in his case, the Soviet Union) was doing it.

We’re still crazy. (You and I may not be, but we are.)

War is a force that gives us meaning, Chris Hedges says. You can read his book by that title (required reading from a highly decorated and deeply insightful former war correspondent). You can also watch the lecture he gave on the topic at UCSB in 2004. The mystery will be diminished by his answer, but not solved.

Still, every dose of sanity helps.

Tags: , , , , , , , ,
