infrastructure


I submit for your interest two speeches that challenge our acceptance of status quos in which our collective frogs are slowly boiling.

First is Freedom in the Cloud, by Eben Moglen, given at the Internet Society in New York on 5 February.

Second is Making Sense of Privacy and Publicity, by danah boyd, given on 13 March at SXSW.

A teaser quote from Eben:

…in effect, we lost the ability to use either legal regulation or anything about the physical architecture of the network to interfere with the process of falling away from innocence that was now inevitable in the stage I’m talking about, what we might call late Google stage 1.

It is here, of course, that Mr. Zuckerberg enters.

The human race has susceptibility to harm but Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age.

Because he harnessed Friday night. That is, everybody needs to get laid and he turned it into a structure for degenerating the integrity of human personality and he has to a remarkable extent succeeded with a very poor deal. Namely, “I will give you free web hosting and some PHP doodads and you get spying for free all the time”. And it works.

A teaser quote from Danah:

It’s easy to think that “public” and “private” are binaries. We certainly build a lot of technology with this assumption. At best, we break out of this with access-control lists where we list specific people who some piece of content should be available to. And at best, we expand our notion of “private” to include everything that is not “public.” But this binary logic isn’t good enough for understanding what people mean when they talk about privacy. What people experience when they talk about privacy is more complicated than what can be instantiated in a byte.

To get at this, let’s talk about how people experience public and private in unmediated situations. Because it’s not so binary there either.

First, think about a conversation that you may have with a close friend. You may think about that conversation as private, but there is nothing stopping your friend from telling someone else what was said, except for your trust in your friend. You actually learned to trust your friend, presumably through experience.

Learning who to trust is actually quite hard. Anyone who has middle school-aged kids knows that there’s inevitably a point in time when someone says something that they shouldn’t have and tears are shed. It’s hard to learn to really know for sure that someone will keep their word. But we don’t choose not to tell people things simply because they could spill the beans. We do our best to assess the situation and act accordingly.

We don’t just hold people accountable for helping us maintain privacy; we also hold the architecture around us accountable. We look around a specific place and decide whether or not we trust the space to allow us to speak freely to the people there.

They’re talking about different things, but they overlap. Both have to do with a loss of control, and both set out agendas for those who care. Curious to know what y’all think.

We had a week of record rain here in Eastern Massachusetts. Lots of roads were closed as ponds and brooks overflowed their banks, and drainage systems backed up. At various places on Mass Ave north of Cambridge water was gushing up out of blown-off manhole covers. Traffic was backed up all over the place. Yesterday, the first day after the rains stopped, many residents were pumping out basements through fat hoses that snaked out into streets. Much of this water only pooled somewhere else, since many drainage systems were filled too.

In some ways I’ve never stopped being the newspaper photographer I was forty years ago, in my first newspaper job (I didn’t have that many, total, but that was more fun than the rest of them). So I went out and looked for some actual floods worth shooting and found Magnolia Field in Arlington, near the Alewife T stop at the end of the Red Line. It’s a big soccer and lacrosse field, with the Minuteman Bikeway at one end and a playground for kiddies at the other. It normally looks flat, but inundation by water proved otherwise. I could tell by the high debris mark that most of the field had been covered with water when the flood was at its maximum depth, but there was plenty left when I showed up and shot the photo above and the rest here.

Here’s the field on a sunny spring day, shot by Bing and revealed, too many clicks down, in its “bird’s eye view” under “aerial”. Here’s a link to the actual Bing view. And here’s Bing’s contribution to an iPhone app called MyWeather, which does a great job of showing a combination of precipitation and clouds, with looping animation:

I took that screen shot at the end of the storm on Monday night. You can see how it was our corner of a huge cyclonic weather system, rotating around an eye of sorts, out in the Atlantic off the Virginia coast. This winter we’ve had a series of these. As I wrote in an earlier post, it was one of these, spinning like a disk with a spindle in New York City, that brought rain to New England and snow pretty much everywhere else in February.

Now it’s Spring, almost literally. The sky was blue and clear as can be, winds calm, temperature hitting 66°. I know it won’t last, but it’s nice to get a break.


Earlier this year the Pew Research Center’s Internet & American Life Project and Elon University conducted research toward The Future of the Internet IV, the latest in their survey series, which began with Future of the Internet I – 2004. This latest report includes guided input from respondents such as myself (a “thoughtful analyst,” they kindly said) on subjects pertaining to the Net’s future. We were asked to choose between alternative outcomes — “tension pairs” — and to explain our views. Here’s the whole list:

  1. Will Google make us stupid?
  2. Will we live in the cloud or the desktop?
  3. Will social relations get better?
  4. Will the state of reading and writing be improved?
  5. Will those in GenY share as much information about themselves as they age?
  6. Will our relationship to key institutions change?
  7. Will online anonymity still be prevalent?
  8. Will the Semantic Web have an impact?
  9. Are the next takeoff technologies evident now?
  10. Will the Internet still be dominated by the end-to-end principle?

The results were published here at Pew and Elon’s Imagining the Internet site. Here’s the .pdf.

My own views are more than well represented in the 2010 report. One of my responses (to the last question) was even published in full. Still, I thought it would be worth sharing my full responses to all the questions. That’s why I’m posting them here.

Each question is followed by two statements — the “tension pair” — and in some cases by additional instruction. I’ve italicized those.

[Note… Much text here has been changed to .html from .pdf and .doc forms, and extracting all the old formatting jive has been kind of arduous. Bear with me while I finish that job, later today. (And some .html conventions don’t work here in WordPress, so that’s a hassle too.)]


1. Will Google make us smart or stupid?

1 By 2020, people’s use of the Internet has enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid (http://www.theatlantic.com/doc/200807/google).

2 By 2020, people’s use of the Internet has not enhanced human intelligence and it could even be lowering the IQs of most people who use it a lot. Nicholas Carr was right: Google makes us stupid.

1a. Please explain your choice and share your view of the Internet’s influence on the future of human intelligence in 2020 – what is likely to stay the same and what will be different in the way human intellect evolves?


Though I like and respect Nick Carr a great deal, my answer to the title question in his famous essay in The Atlantic — “Is Google Making Us Stupid?” — is no. Nothing that informs us makes us stupid.

Nick says, “Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.” Besides finding that a little hard to believe (I know Nick to be a deep diver, still), there is nothing about Google, or the Net, to keep anyone from diving — and to depths that were not reachable before the Net came along. Also, compare viewing the Net with using the Net. There is clearly a massive move from the former to the latter. And this move, at the very least, requires being less of a potato.

But that’s all a separate matter from Google itself. There is no guarantee that Google will be around, or in the same form, in the year 2020.

First, there are natural limits to any form of bigness, and Google is no exception to those. Trees do not grow to the sky.

Second, nearly all of Google’s income is from advertising. There are two problems with this. One is that improving a pain in the ass does not make it a kiss — and advertising is, on the whole, still a pain in the user’s ass. The other is that advertising is a system of guesswork, which by nature makes it both speculative and inefficient. Google has greatly reduced both those variables, and made advertising accountable for the first time: advertisers pay only for click-throughs. Still, for every click-through there are hundreds or thousands of “impressions” that waste server cycles, bandwidth, pixels, rods and cones. The cure for this inefficiency can’t come from the sell side. It must come from the demand side. When customers have means for advertising their wants and needs (e.g. “I need a stroller for twins in downtown Boston in the next two hours. Who’s coming through and how?”) — and to do this securely and out in the open marketplace (meaning not just in the walled gardens of Amazons and eBays) — much of advertising’s speculation and guesswork will be obsoleted. Look at it this way: we need means for demand to drive supply at least as well as supply drives demand. By 2020 we’ll have that. (Especially if we succeed at work we’re doing through ProjectVRM at Harvard’s Berkman Center.) Google is well positioned to help with that shift. But it’s an open question whether or not they’ll get behind it.
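To make that stroller example concrete, here is a purely hypothetical sketch, in Python, of what such a demand-side signal (what ProjectVRM circles would call an intentcast) might look like as data. None of the field names or structure below come from ProjectVRM or any real system; they are illustration only.

    # Hypothetical sketch of a demand-side "intentcast": the customer
    # advertises a want and willing sellers respond -- the reverse of
    # today's guesswork-based advertising. Field names are invented.
    import json
    from datetime import datetime, timedelta, timezone

    intent = {
        "want": "stroller for twins",
        "where": "downtown Boston",
        "needed_by": (datetime.now(timezone.utc) + timedelta(hours=2)).isoformat(),
        "terms": {"reveal_identity": "only to the seller I accept"},  # buyer keeps control
    }

    # In an open marketplace this would be published for any listening seller,
    # not locked inside one company's walled garden.
    print(json.dumps(intent, indent=2))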

Third, search itself is at risk. For the last fifteen years we have needed search because the Web has grown without any directory other than DNS (which only deals with what comes between the // and the first /). Google has succeeded because it has proven especially good at helping users find needles in the Web’s vast haystack. But what happens if the Web ceases to be a haystack? What if the Web gets a real directory, like LANs had back in the 80s — or something like one? The UNIX file paths we call URLs (e.g. http://domain.org/folder/folder/file.htm…) presume a directory structure. This alone suggests that a solution to the haystack problem will eventually be found. When it is, search will then be more of a database lookup than the colossally complex thing it is today (requiring vast data centers that suck huge amounts of power off the grid, as Google constantly memorizes every damn thing it can find in the entire Web). Google is in the best position to lead the transition from the haystack Web to the directory-enabled one. But Google may remain married to the haystack model, just as the phone companies of today are still married to charging for minutes and cable companies are married to charging for channels — even though both concepts are fossils in an all-digital world.
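To make the point about URLs presuming a directory structure concrete, here is a minimal sketch (my own illustration, not anything Google or the W3C actually does). A URL path already splits into a hierarchy that could, in principle, be looked up directly, the way a file path is, rather than rediscovered by crawling and indexing:

    # A URL is, structurally, a UNIX-style file path: a host plus a
    # hierarchy of path segments. A real directory would let us look
    # entries up directly instead of searching a crawled index.
    from urllib.parse import urlparse

    parsed = urlparse("http://domain.org/folder/folder/file.htm")
    print(parsed.netloc)                      # the part DNS handles: 'domain.org'
    print(parsed.path.strip("/").split("/"))  # the hierarchy: ['folder', 'folder', 'file.htm']

    # A directory-enabled Web would make finding a page more like this lookup...
    toy_directory = {("domain.org", "folder/folder/file.htm"): "the page"}
    print(toy_directory.get((parsed.netloc, parsed.path.strip("/"))))
    # ...and less like scanning everything an index has memorized.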


2. Will we live in the cloud or on the desktop?

1 By 2020, most people won’t do their work with software running on a general-purpose PC. Instead, they will work in Internet-based applications, like Google Docs, and in applications run from smartphones. Aspiring application developers will sign up to develop for smart-phone vendors and companies that provide Internet-based applications, because most innovative work will be done in that domain, instead of designing applications that run on a PC operating system.

2 By 2020, most people will still do their work with software running on a general-purpose PC. Internet-based applications like Google Docs and applications run from smartphones will have some functionality, but the most innovative and important applications will run on (and spring from) a PC operating system. Aspiring application designers will write mostly for PCs.

2a. Please explain your choice and share your view about how major programs and applications will be designed, how they will function, and the role of cloud computing by 2020.

The answer is both.

Resources and functions will operate where they make the most sense. As bandwidth goes up, and barriers to usage (such as high “roaming” charges for data use outside a carrier’s home turf) go down, and Bob Frankston’s “ambient connectivity” establishes itself, our files and processing power will locate themselves where they work best — and where we, as individuals, have the most control over them.

Since we are mobile animals by nature, it makes sense for us to connect with the world primarily through hand-held devices, rather than the ones that sit on our desks and laps. But these larger devices will not go away. We need large screens for much of our work, and we need at least some local storage for when we go off-grid, or need fast connections to large numbers of big files, or wish to keep matters private through physical disconnection.

Clouds are to personal data what banks are to personal money. They provide secure storage, and are in the best positions to perform certain intermediary and back-end services, such as hosting applications and storing data. This latter use has an importance that will only become more critical as each of us accumulates personal data by the terabyte. If your home drives crash or get stolen, or your house burns down, your data can still be recovered if you’ve backed it up in the cloud.

But most home users (at least in the U.S. and other under-developed countries) are still stuck at the far ends of asymmetrical connections with low upstream data rates, designed at a time when carriers thought the Net would mostly be a new system for distributing TV and other forms of “content.” Thus backing up terabytes of data online ranges from difficult to impossible.
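Some back-of-envelope arithmetic (mine, with an assumed upstream rate) shows the scale of the problem:

    # How long does it take to push 1 TB upstream over an asymmetrical
    # home connection? The 1 Mbps upstream rate is an assumption for
    # illustration; many home connections of the time were in that range.
    terabyte_bits = 1e12 * 8        # 1 TB expressed in bits
    upstream_bps = 1_000_000        # assumed 1 Mbps upstream
    seconds = terabyte_bits / upstream_bps
    print(f"{seconds / 86_400:.0f} days of continuous uploading")  # roughly 93 days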

This is why any serious consideration of cloud computing — especially over the long term — needs to take connectivity into account. Clouds are only as useful as connections permit. And right now the big cloud utilities (notably Google and Amazon) are way ahead of the carriers at imagining how connected computing needs to grow. For most carriers the Internet is still just the third act in a “triple play,” a tertiary service behind telephony and television. Worse, the mobile carriers show little evidence that they understand the need to morph from phone companies to data companies — even with Apple’s iPhone success screaming “this is the future” at them.

A core ideal for all Internet devices is what Jonathan Zittrain (in his book The Future of the Internet — and How to Stop It) calls generativity, which is maximized encouragement of innovation in both hardware and software. Today generativity in mobile devices varies a great deal. The iPhone, for example, is highly generative for software, but not for hardware (only Apple makes iPhones). And even the iPhone’s software market is sphinctered by Apple’s requirement that every app pass to market only through Apple’s “store,” which operates only through Apple’s iTunes, which runs only on Macs and PCs (no Linux or other OSes). On top of all that are Apple’s restrictive partnerships with AT&T (in the U.S.) and Rogers (in Canada). While AT&T allows unlimited data usage on the iPhone, Rogers still has a 6GB limit.

Bottom line: Handhelds will be no smarter than the systems built to contain them. The market will open widest — and devices will get smartest — when anybody can make a smartphone (or any other mobile device), and use it on any network they please, without worrying about data usage limits or getting hit with $1000+ bills because they forgot to turn off “push notifications” or “location services” when they roamed out of their primary carrier’s network footprint. In other words, the future will be brightest when mobile systems get Net-native.


3. Will social relations get better?

1 In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the Internet has mostly been a negative force on my social world. And this will only grow more true in the future.

2 In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the Internet has mostly been a positive force on my social world. And this will only grow more true in the future.

3a. Please explain your choice and share your view of the Internet’s influence on the future of human relationships in 2020 — what is likely to stay the same and what will be different in human and community relations?

Craig Burton describes the Net as a hollow sphere — a three-dimensional zero — comprised entirely of ends separated by an absence of distance in the middle. With a hollow sphere, every point is visible to every other point. Your screen and my keyboard have no distance between them. This is a vivid way to illustrate the Net’s “end-to-end” architecture and how we perceive it, even as we also respect the complex electronics and natural latencies involved in the movement of bits from point to point anywhere on the planet. It also helps make sense of the Net’s distance-free social space.

As the “live” or “real-time” aspects of the net evolve, opportunities to engage personally and socially are highly magnified beyond all the systems that came before. This cannot help but increase our abilities not only to connect with each other, but to understand each other. I don’t see how this hurts the world, and I can imagine countless ways it can make the world better.

Right now my own family is scattered between Boston, California, Baltimore and other places. Yet through email, voice, IM, SMS and other means we are in frequent touch, and able to help each other in many ways. The same goes for my connections with friends and co-workers.

We should also hope that the Net makes us more connected, more social, more engaged and involved with each other. The human diaspora, from one tribe in Africa to thousands of scattered tribes — and now countries — throughout the world, was driven to a high degree by misunderstandings and disagreements between groups. Hatred and distrust between groups have caused countless wars and suffering beyond measure. Anything that helps us bridge our differences and increase understanding is a good thing.

Clearly the Internet already does that.


4. Will the state of reading and writing be improved?

1 By 2020, it will be clear that the Internet has enhanced and improved reading, writing, and the rendering of knowledge.

2 By 2020, it will be clear that the Internet has diminished and endangered reading, writing, and the intelligent rendering of knowledge.

4a. Please explain your choice and share your view of the Internet’s influence on the future of knowledge-sharing in 2020, especially when it comes to reading and writing and other displays of information – what is likely to stay the same and what will be different? What do you think is the future of books?

It is already clear in 2010 that the Net has greatly enhanced reading, writing, and knowledge held — and shared — by human beings. More people are reading and writing, and in more ways, for more readers and other writers, than ever before. And the sum of all of it goes up every day.

I’m sixty-two years old, and have been a journalist since my teens. My byline has appeared in dozens of publications, and the sum of my writing runs — I can only guess — into millions of words. Today very little of what I wrote and published before 1995 is available outside of libraries, and a lot of it isn’t even there.

For example, in the Seventies and early Eighties I wrote regularly for an excellent little magazine called The Sun. (It’s still around, at http://thesunmagazine.org) But, not wanting to carry my huge collection of Suns from one house to another (I’ve lived in 9 places over the last ten years), I gave my entire collection (including rare early issues) to an otherwise excellent public library, and they lost or ditched it. Few items from those early issues are online. My own copies are buried in boxes in a garage, three thousand miles from where I live now. So are dozens of boxes of photos and photo albums. (I was also a newspaper photographer in the early days, and have never abandoned the practice.)

On the other hand, most of what I’ve written since the Web came along is still online. And most of that work — including 34,000 photographs on Flickr — is syndicated through RSS (Really Simple Syndication) or its derivatives. So is the work of millions of other people. If that work is interesting in some way, it tends to get inbound links, increasing its discoverability through search engines and its usefulness in general. The term syndication was once applied only to professional purposes. Now everybody can do it.
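For anyone who hasn’t watched syndication work from the consuming side, here is a minimal sketch. It assumes the third-party feedparser package is installed, and the feed URL is just an example:

    # Minimal sketch of reading a syndicated feed (RSS or Atom).
    # Requires the third-party "feedparser" package: pip install feedparser
    import feedparser

    feed = feedparser.parse("http://blogs.harvard.edu/doc/feed/")  # example feed URL
    print(feed.feed.get("title", "untitled feed"))
    for entry in feed.entries[:5]:
        # Every entry is linkable and discoverable by anyone, which is
        # what keeps post-1995 writing findable in ways older work is not.
        print(entry.title, "->", entry.link)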

Look up RSS on Google. Today it brings in more than three billion results. Is it possible that this has decreased the quality and sum of reading, writing and human knowledge? No way.


5. Will the willingness of Generation Y / Millennials to share information change as they age?

1 By 2020, members of Generation Y (today’s “digital natives”) will continue to be ambient broadcasters who disclose a great deal of personal information in order to stay connected and take advantage of social, economic, and political opportunities. Even as they mature, have families, and take on more significant responsibilities, their enthusiasm for widespread information sharing will carry forward.

2 By 2020, members of Generation Y (today’s “digital natives”) will have “grown out” of much of their use of social networks, multiplayer online games and other time-consuming, transparency-engendering online tools. As they age and find new interests and commitments, their enthusiasm for widespread information sharing will abate.

5a. Please explain your choice and share your view of the Internet’s influence on the future of human lifestyles in 2020 – what is likely to stay the same and what will be different? Will the values and practices that characterize today’s younger Internet users change over time?

Widespread information sharing is not a generational issue. It’s a technological one. Our means for controlling access to data, or its use — or even for asserting our “ownership” of it — are very primitive. (Logins and passwords alone are clunky as hell, extremely annoying, and will be seen a decade hence as a form of friction we were glad to eliminate.)

It’s still early. The Net and the Web as we know them have only been around for about fifteen years. Right now we’re still in the early stages of the Net’s Cambrian explosion. By that metaphor Google is a trilobyte.* We have much left to work out.

For example, take “terms of use.” Sellers have them. Users do not — at least not ones that they control. Wouldn’t it be good if you could tell Facebook or Twitter (or any other company using your data) that these are the terms on which they will do business with you, that these are the ways you will share data with them, that these are the ways this data can be used, and that this is what will happen if they break faith with you? Trust me: user-controlled terms of use are coming. (Work is going on right now on this very subject at Harvard’s Berkman Center, both at its Law Lab and ProjectVRM.)

Two current technical developments, “self-tracking” and “personal informatics,” are examples of ways that power is shifting from organizations to individuals — for the simple reason that individuals are the best points of integration for their own data, and the best points of origination for what gets done with that data.

Digital natives will eventually become fully empowered by themselves, not by the organizations to which they belong, or the services they use. When that happens, they’ll probably be more careful and responsible than earlier generations, for the simple reason that they will have the tools.


6. Will our relationship to institutions change?

1 By 2020, innovative forms of online cooperation will result in significantly more efficient and responsive governments, businesses, non-profits, and other mainstream institutions.

2 By 2020, governments, businesses, non-profits and other mainstream institutions will primarily retain familiar 20th century models for conduct of relationships with citizens and consumers online and offline.

6a. Please explain your choice and share your view of the Internet’s influence upon the future of institutional relationships with their patrons and customers between now and 2020. We are eager to hear what you think of how social, political, and commercial endeavors will form and the way people will cooperate in the future.

Online cooperation will only increase. The means are already there, and will only become more numerous and functional. Institutions that adapt to the Net’s cooperation-encouraging technologies and functions will succeed. Those that don’t will have a hard time.

Having it hardest right now are media institutions, for the simple reason that the Internet subsumes their functions, while also giving to everybody the ability to communicate with everybody else, at little cost, and often with little or no intermediating system other than the Net itself.

Bob Garfield, a columnist for AdAge and a host of NPR’s “On The Media,” says the media have entered what he calls (in his book by the same title) The Chaos Scenario. In his introduction Garfield says he should have called the book “Listenomics,” because listening is the first requirement of survival for every industry that lives on digital bits — a sum that rounds to approximately every industry, period.

So, even where the shapes of institutions persist, their internal functions must be ready to listen, and to participate in the market’s conversations, even when those take place outside the institution’s own frameworks.


7. Will online anonymity still be prevalent?

1 By 2020, the identification (ID) systems used online are tighter and more formal – fingerprints or DNA-scans or retina scans. The use of these systems is the gateway to most of the Internet-enabled activity that users are able to perform such as shopping, communicating, creating content, and browsing. Anonymous online activity is sharply curtailed.

2 By 2020, Internet users can do a lot of normal online activities anonymously even though the identification systems used on the Internet have been applied to a wider range of activities. It is still relatively easy for Internet users to create content, communicate, and browse without publicly disclosing who they are.

7a. Please explain your choice and share your view about the future of anonymous activity online by the year 2020.

In the offline world, anonymity is the baseline. Unless burdened by celebrity, we are essentially anonymous when we wander through stores, drive down the road, or sit in the audience of a theater. We become less anonymous when we enter into conversation or transact business. Even there, however, social protocols do not require that we become any more identifiable than required for the level of interaction. Our “identity” might be “the woman in the plaid skirt,” “the tall guy who was in here this morning,” or “one of our students.”

We still lack means by which an individual can selectively and gracefully shift from fully to partially anonymous, and from unidentified to identified — yet in ways that can be controlled and minimized (or maximized) as much as the individual (and others with whom he or she interacts) permit. In fact, we’re a long way off.

The main reason is that most of the “identity systems” we know put control on the side of sellers, governments, and other institutions, and not with the individual. In time, systems that give users control will be developed. These will be native to users and not provided only by large organizations (such as Microsoft, Google or the government).

A number of development communities have been working on this challenge since early in the last decade, and eventually they will succeed. Hopefully this will be by 2020, but I figured we’d have it done by 2010, and it seems like we’ve barely started.


8. Will the Semantic Web have an impact?

1 By 2020, the Semantic Web envisioned by Tim Berners-Lee and his allies will have been achieved to a significant degree and have clearly made a difference for average Internet users.

2 By 2020, the Semantic Web envisioned by Tim Berners-Lee will not be as fully effective as its creators hoped and average users will not have noticed much of a difference.

8a. Please explain your choice and share your view of the likelihood that the Semantic Web will have been implemented by 2020 and be a force for good for Internet users.

Tim Berners-Lee’s World Wide Web was a very simple and usable idea that relied on very simple and usable new standards (e.g. HTML and HTTP), which were big reasons why the Web succeeded. The Semantic Web is a very complex idea, and one that requires a lot of things to go right before it works. Or so it seems.

Tim introduced the Semantic Web Roadmap (http://www.w3.org/DesignIssues/Semantic.html) in September 1998. Since then, more than eleven years have passed. Some Semantic Web technologies have taken root: RDFa, for example, and microformats. But the concept itself has energized a relatively small number of people, and there is no “killer” tech or use yet.

That doesn’t mean it won’t happen. Invention is the mother of necessity. The Semantic Web will take off when somebody invents something we all find we need. Maybe that something will be built out of some combination of code and protocols already lying around — either within the existing Semantic Web portfolio, or from some parallel effort such as XDI. Or maybe it will come out of the blue.

By whatever means, the ideals of the Semantic Web — a Web based on meaning (semantics) rather than syntax (the Web’s current model) — will still drive development. And we’ll be a decade farther along in 2020 than we are in 2010.
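As a toy illustration of “meaning rather than syntax” — this is just the subject-predicate-object triple idea at the heart of RDF, not any real W3C library — semantic data lets software answer questions by matching meaning instead of keywords:

    # Toy illustration of the Semantic Web's core idea: facts stored as
    # subject-predicate-object triples, queried by meaning rather than
    # by keyword. Not a real RDF library; illustration only.
    triples = [
        ("Tim Berners-Lee", "proposed", "Semantic Web Roadmap"),
        ("Semantic Web Roadmap", "published_in", "1998"),
        ("RDFa", "is_a", "Semantic Web technology"),
        ("microformats", "is_a", "Semantic Web technology"),
    ]

    def query(subject=None, predicate=None, obj=None):
        """Return every triple matching the given fields (None = wildcard)."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    # "Which things are Semantic Web technologies?"
    print(query(predicate="is_a", obj="Semantic Web technology"))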


9. Are the next takeoff technologies evident now?

1 The hot gadgets and applications that will capture the imagination of users in 2020 are pretty evident today and will not take many of today’s savviest innovators by surprise.

2 The hot gadgets and applications that will capture the imagination of users in 2020 will often come “out of the blue” and not have been anticipated by many of today’s savviest innovators.

9a. Please explain your choice and share your view of its implications for the future. What do you think will be the hot gadgets, applications, technology tools in 2020?

“The blue” is the environment out of which most future innovation will come. And that blue is the Net.

Nearly every digital invention today was created by collaboration over the Net, between people working in different parts of the world. The ability to collaborate over distances, often in real time (or close to it), using devices that improve constantly, over connections that only get fatter and faster, guarantees that the number and variety of inventions will only go up. More imaginations will be captured more ways, more often. Products will be improved, and replaced, more often than ever, and in more ways than ever.

The hottest gadgets in 2020 will certainly involve extending one’s senses and one’s body. In fact, this has been the case for all inventions since humans first made stone tools and painted the walls of caves. That’s because humans are characterized not only by their intelligence and their ability to speak, but by their capacity to extend their senses, and their abilities, through their tools and technologies. Michael Polanyi, a scientist and philosopher, called this indwelling. It is through indwelling that the carpenter’s tool becomes an extension of his arm, and he has the power to pound nails through wood. It is also through indwelling that an instrument becomes an extension of the musician’s mouth and hands.

There is a reason why a pilot refers to “my wings” and “my tail,” or a driver to “my wheels” and “my engine.” By indwelling, the pilot’s senses extend outside herself to the whole plane, and the driver’s to his whole car.

The computers and smart phones of today are to some degree extensions of ourselves, but not to the extent that a hammer extends a carpenter, a car enlarges a driver or a plane enlarges a pilot. Something other than a computer or a smart phone will do that. Hopefully this will happen by 2020. If not, it will eventually.


10. Will the Internet still be dominated by the end-to-end principle?

1 In the years between now and 2020, the Internet will mostly remain a technology based on the end-to-end principle that was envisioned by the Internet’s founders. Most disagreements over the way information flows online will be resolved in favor of a minimum number of restrictions over the information available online and the methods by which people access it.

2 In the years between now and 2020, the Internet will mostly become a technology where intermediary institutions that control the architecture and significant amounts of content will be successful in gaining the right to manage information and the method by which people access and share it.

10a. Please explain your choice, note organizations you expect to be most likely to influence the future of the Internet and share your view of the effects of this between now and 2020.

There will always be a struggle to reconcile the Net’s end-to-end principle with the need for companies and technologies operating between those ends to innovate and make money. This tension will produce more progress than either the principle by itself or the narrow interests of network operators and other entities working between the Net’s countless ends.

Today these interests are seen as opposed — mostly because incumbent network operators want to protect businesses they see threatened by the Net’s end-to-end nature, which cares not a bit about who makes money or how. But in the future they will be seen as symbiotic, because both the principle and networks operating within it will be seen as essential infrastructure. So will what each does to help raise and renovate the Net’s vast barn.

The term infrastructure has traditionally been applied mostly to the public variety: roads, bridges, electrical systems, water systems, waste treatment and so on. But this tradition only goes back to the Seventies. Look up infrastructure in a dictionary from the 1960s or earlier and you won’t find it (except in the OED). Today there are still no institutes or academic departments devoted to infrastructure. It’s a subject in many fields, yet not a field in itself.

But we do generally understand what infrastructure is. It’s something solid and common we can build on. It’s geology humans make for themselves.

Digital technology, and the Internet in particular, provide an interesting challenge for understanding infrastructure, because we rely on it, yet it is not solid in any physical sense. It is like physical structures, but not itself physical. We go on the Net, as if it were a road or a plane. We build on it too. Yet it is not a thing.

Inspired by Craig Burton’s description of the Net as a hollow sphere — a three-dimensional zero comprised entirely of ends — David Weinberger and I wrote World of Ends in 2003 (http://worldofends.com). The purpose was to make the Net more understandable, especially to companies (such as phone and cable carriers) that had been misunderstanding it. Lots of people agreed with us, but none of those people ran the kinds of companies we addressed.

But, to be fair, most people still don’t understand the Net. Look up “The Internet is” on Google (with the quotes). After you get past the top entry (Wikipedia’s), here’s what they say:

  1. a Series of Tubes
  2. terrible
  3. really big
  4. for porn
  5. shit
  6. good
  7. wrong
  8. killing storytelling
  9. dead
  10. serious business
  11. for everyone
  12. underrated
  13. infected
  14. about to die
  15. broken
  16. Christmas all the time
  17. altering our brains
  18. changing health care
  19. laughing at NBC
  20. changing the way we watch TV
  21. changing the scientific method
  22. dead and boring
  23. not shit
  24. made of kittens
  25. alive and well
  26. blessed
  27. almost full
  28. distracting
  29. a brain
  30. cloudy

Do the same on Twitter, and you’ll get results just as confusing. At this moment (your search will vary; this is the Live Web here), the top results are:

  1. a weird, WEIRD place
  2. full of feel good lectures
  3. the Best Place to get best notebook computer deals
  4. Made of Cats
  5. Down
  6. For porn
  7. one of the best and worst things at the same time
  8. so small
  9. going slow
  10. not my friend at the moment
  11. blocked
  12. letting me down
  13. going off at 12
  14. not working
  15. magic
  16. still debatable
  17. like a jungle
  18. eleven years old
  19. worsening by the day
  20. extremely variable
  21. full of odd but exciting people
  22. becoming the Googlenet
  23. fixed
  24. forever
  25. a battlefield
  26. a great network for helping others around the world
  27. more than a global pornography network
  28. slow
  29. making you go nuts
  30. so much faster bc im like the only 1 on it

(I took out the duplicates. There were many involving cats and porn.)

Part of the problem is that we understand the Net in very different and conflicting ways. For example, when we say the Net consists of “sites,” with “domains” and “locations” that we “architect,” “design,” “build” and “visit,” we are saying the Internet is a place. It’s real estate. But if we say the Net is a “medium” for the “distribution” of “content” to “consumers” who “download” it, we’re saying the Net is a shipping system. These metaphors are very different. They yield different approaches to business and lawmaking, to name just two areas of conflict.

Bob Frankston, co-inventor (with Dan Bricklin) of spreadsheet software (Visicalc) and one of the fathers of home networking, says the end-state of the Net’s current development is ambient connectivity, which “gives us access to the oceans of copper, fiber and radios that surround us.” Within those are what Frankston calls a “sea of bits” to which all of us contribute. To help clarify the anti-scarce nature of bits, he explains, “Bits aren’t really like kernels of corn. They are more like words. You may run out of red paint but you don’t run out of the color red.”

Much has been written about the “economics of abundance,” but we have barely begun to understand what that means or what can be done with it. The threats are much easier to perceive than the opportunities. Google is one notable exception to that. Asked at a Harvard meeting to explain the company’s strategy of moving into businesses where it expects to make no money directly for the services it offers, a Google executive explained that the company looked for “second and third order effects.”

JP Rangaswami, Chief Scientist for BT (disclosure: I consult for BT), describes these as “because effects.” You make money because of something rather than with it. Google makes money because of search, and because of Gmail. Not with them. Not directly.

Yet money can still be made with goods and services — even totally commodified ones. Amazon makes money with back-end Web services such as EC2 (computing) and S3 (data storage). Phone, cable, and other carriers can make money with “dumb pipes” too. They are also in perfect positions to offer low-latency services directly to their many customers at homes and in businesses. All the carriers need to do is realize that there are benefits to incumbency other than charging monopoly rents.

The biggest danger for the Net and its use comes not from carriers, but from copyright absolutists in what we have recently come to call the “content” industry. For example, in the U.S. the DMCA (Digital Millennium Copyright Act), passed in 1998, was built to protect the interests of copyright holders and served as a model for similar lawmaking in other countries. It did little to protect the industries that lobbied for its passage, while hurting or preventing a variety of other industries. Most notable (at least for me) was the embryonic Internet radio industry, which was just starting to take off when the DMCA came along. The saga that followed is woefully complex, and the story is far from over, but the result in the meantime is a still-infant industry that suffers many more restrictions with respect to “content” than over-the-air radio stations face. Usage fees for music are much higher than those faced by broadcasters — so high that making serious money by webcasting music is nearly impossible. There are also tight restrictions on what music can be played, when, and how often. Music on podcasts is also essentially prohibited, because podcasters need to “clear rights” for every piece of copyrighted music they play. That’s why, except for “podsafe” music, podcasting today is almost all talk.

I’ll give the last words here to Cory Doctorow, who publishes them freely in his new book Content:

… there is an information economy. You don’t even need a computer to participate. My barber, an avowed technophobe who rebuilds antique motorcycles and doesn’t own a PC, benefited from the information economy when I found him by googling for barbershops in my neighborhood.

Teachers benefit from the information economy when they share lesson plans with their colleagues around the world by email. Doctors benefit from the information economy when they move their patient files to efficient digital formats. Insurance companies benefit from the information economy through better access to fresh data used in the preparation of actuarial tables. Marinas benefit from the information economy when office-slaves look up the weekend’s weather online and decide to skip out on Friday for a weekend’s sailing. Families of migrant workers benefit from the information economy when their sons and daughters wire cash home from a convenience store Western Union terminal.

This stuff generates wealth for those who practice it. It enriches the country and improves our lives.

And it can peacefully co-exist with movies, music and microcode, but not if Hollywood gets to call the shots. Where IT managers are expected to police their networks and systems for unauthorized copying — no matter what that does to productivity — they cannot co-exist. Where our operating systems are rendered inoperable by “copy protection,” they cannot co-exist. Where our educational institutions are turned into conscript enforcers for the record industry, they cannot co-exist.

The information economy is all around us. The countries that embrace it will emerge as global economic superpowers. The countries that stubbornly hold to the simplistic idea that the information economy is about selling information will end up at the bottom of the pile.


But all that is just me (and my sources, such as Cory). There are 894 others compiled by the project, and I invite you to read those there.

I’ll also put in a plug for FutureWeb in Raleigh, April 28-30, where I look forward to seeing many old friends and relatives as well. (I lived in North Carolina for most of the 20 years from 1965-1985, and miss it still.) Hope to see some of y’all there.

*[Later…] For a bit of context, see Evolution Going Great, Reports Trilobite, in The Onion.


Some encouraging words here about Verizon’s expected 4G data rates:

After testing in the Boston and Seattle areas, the provider estimates that a real connection on a populated network should average between 5Mbps to 12Mbps in download rates and between 2Mbps to 5Mbps for uploads. Actual, achievable peak speeds in these areas float between 40-50Mbps downstream and 20-25Mbps upstream. The speed is significantly less than the theoretical 100Mbps promised by Long Term Evolution (LTE), the chosen standard, but would still give Verizon one of the fastest cellular networks in North America.

No mention of metering or data caps, of course.

Remember, these are phone companies. They love to meter stuff. It’s what they know. They can hardly imagine anything else. They are billing machines with networks attached.

In addition to the metering problems Brett Glass details here, there is the simple question of whether carriers can meter data at all. Data ain’t minutes. And metering discourages both usage and countless businesses other than the phone companies’ own. I have long believed that phone and cable companies will see far more business for themselves if they open up their networks to possibilities other than those optimized for the relocation of television from air to pipes.

Data capping is problematic too. How can the customer tell how close they are to a cap? And how much does fearing overage discourage legitimate uses? And what about the accounting? My own problems with Sprint on this topic don’t give me any confidence that the carriers know how to impose data usage caps gracefully.

There’s a lot of wool in current advertising on these topics too. During the Academy Awards last night, Comcast had a great ad for Xfinity, its new high-speed service, promoted entirely as an entertainment pump. By which I mean that it was an impressive piece of promotion. But there was no mention of upstream speeds (downstream teaser: 100Mb/s). Or other limitations. Or how they might favor NBC (should they buy it) over other content sources. (Which, of course, they will.)

Sprint’s CEO was in another ad, promoting the company’s “unlimited text, unlimited Web and unlimited calling…” Right. It says right here in a link-proof pop-up titled “Important 4G coverage and plan information” that 4G is unlimited, but 3G (what most customers, including me, still have) is limited to “5GB/300MB off-network roaming per month.” They do list “select cities” where 4G is available. Here’s Raleigh. I didn’t find New York, Los Angeles, Chicago or Boston on the list. I recall Amarillo. Can’t find it now, and the navigation irritates me too much to look.
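For a sense of scale, here is some quick arithmetic of my own, using a 10Mbps rate (mid-range of the LTE download speeds quoted above) against a 5GB cap like the one Sprint describes:

    # Rough arithmetic: how long does full-speed use take to burn through
    # a 5 GB monthly cap? The 10 Mbps rate is an assumption, mid-range of
    # the LTE download speeds quoted earlier in this post.
    cap_bits = 5e9 * 8            # 5 GB in bits
    rate_bps = 10_000_000         # assumed 10 Mbps sustained download
    seconds = cap_bits / rate_bps
    print(f"{seconds / 3600:.1f} hours")  # a little over an hour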

Anyway, I worry that what we’ll get is phone and cable company sausage in Internet casing. And that, on the political side, the carriers will succeed in their campaign to clothe themselves as the “free market” fighting “government takeovers” while working the old regulatory capture game, to keep everybody else from playing.

So five, ten years from now, all the rest of the independent ISPs and WISPs will be gone. So will backbone players other than carriers and Google.  We’ll be gaga about our ability to watch pay-per-view on our fourth-generation iPads with 3-d glasses. And we won’t miss the countless new and improved businesses that never happened because they were essentially outlawed by regulators and their captors.


The greatest — to me the best radio host ever (he was real and honest and funny and groundbreaking and smart long before Howard Stern was the same, and I am a serious Howard fan too) — once explained his radio philosophy to me in two words:

It’s personal.

From the beginning we have regarded broadcasting as a one-to-many matter, even though the best broadcasters know they are only talking to single pairs of ears, and usually act the same way. Yet stations, programmers and producers put great store in numbers, also known as ratings. Stations, even public ones, lived and died by “The Book” — Arbitron’s regional compilations of results.

At this point something like 2.5 million Public Radio Players — radios for the iPhone — have been downloaded. To the degree that the PRP folks keep track of how much each station and program gets listened to, the results are far different than what Arbitron says. See here for the results, and see here for one big reason why.

At this point Public Radio Player (with which I have some involvement) and other ‘tuners’ for the iPhone (such as the excellent WunderRadio) are my primary radios. I use them when I’m walking, driving, or making coffee in the kitchen at home. I listen to KCLU from Thousand Oaks/Santa Barbara here in Boston, and I listen to WBUR, WUMB, WERS, WEEI (Celtics basketball) and other Boston stations when I’m in California. My list of “favorites” (such as the list above, on WunderRadio) runs into the dozens, and includes programs as well as stations. Distinctions between live, podcast, on-demand (podcasts served by stations, live) and other modes are blurring.

Three things are clear to me at this point. First is that it’s very early in this next stage of what broadcasting will become. Second is that it’s more personal than ever. Third is that the time will come when we’ll shut down many (if not most or all) terrestrial transmitters.

On this last topic, a number of landmark AM stations that I grew up listening to — CBL/740 from Toronto, and CKVL/850, CBF/690 and CFCF/940 from Montreal — are all gone. The last two of those went off in January. Those were “clear channel” powerhouses, with signals you could get across the continent at night. I could even get CKVL in the daytime in New Jersey. Now: not there. But the descendants of all those stations are available on the Net, which means they’re available on smartphones with applications that play streams. While it’s still not easy to serve streams to thousands (much less millions) at a time, it’s also cheaper than running transmitters that suck 100,000 watts and more off the grid and take up large amounts of real estate (including open land for AM and the tops of mountains and buildings for FM). Not to mention that broadcast towers (which run up to 2000 feet in height) are hazards to aviation, bird migration and surrounding areas when they collapse, which happens often.

Anyway, I’ve always thought the ratings were good for the mass-appeal stuff, but way off for stations and programs that appealed to many — but not to enough people to satisfy the advertising business. Personal listening is much more idiosyncratic, but also much more interested and involved, than group listening, which actually doesn’t happen.

Therefore I expect radio, or its next evolutionary stage, to be more personal than ever — and therefore better than ever.

Bonus link: JP Rangaswami’s Death of the Download. His closing lines:

And what if the customers have given up and moved on, from the download to the stream?

It was never about owning content. It was always about listening to music.

It was never about product. It was always about service.

The customer is the scarcity. We would do well to remember that. And to keep remembering that.


Pew Internet‘s latest report, Future of the Internet IV (that’s the Roman numeral IV — four — not the abbreviation for intravenous, which is how my bleary eyes read it at half past midnight, after a long day of travel), is out. Sez the Overview,

A survey of nearly 900 Internet stakeholders reveals fascinating new perspectives on the way the Internet is affecting human intelligence and the ways that information is being shared and rendered.

The web-based survey gathered opinions from prominent scientists, business leaders, consultants, writers and technology developers. It is the fourth in a series of Internet expert studies conducted by the Imagining the Internet Center at Elon University and the Pew Research Center’s Internet & American Life Project. In this report, we cover experts’ thoughts on the following issues:

I’m one of the sources quoted, in each of the sections. The longest quote is two links up, in the end-to-end question.

Sometime later I’ll put up my complete responses to all the questions. Meanwhile, enjoy a job well done by Janna Anderson, Lee Rainie and the crew at Elon University and Pew Internet. There’s much more from (and to, if you wish to contribute) both at Imagining the Internet.

The Cinternet is Donnie Hao Dong’s name for the Chinese Internet. Donnie studies and teaches law in China and is also a fellow here at Harvard’s Berkman Center. As Donnie sees (and draws) it, the Cinternet is an increasingly restricted subset of the real thing:

[Donnie’s “map of encirclement” drawing]

He calls this drawing a “map of encirclement.” That last noun has a special meaning, which he explains this way:

“The Wars of (anti-)Encirclement Campaign” were a series of battles between the Chinese Communist Party and the KMT‘s Nanjing Government in the 1930s. At the time the CCP established a government in south-central China (mostly in Jiangxi Province). The KMT’s army tried five times to attack and encircle the territory of the CCP’s regime. The CCP’s Red Army was almost defeated in the Fifth Encirclement War in 1934. The Long March followed the war and rescued the CCP and its army.

Encirclement is more than censorship. It’s a war strategy, and China has been at war with the Internet from the start.

But while China’s war is conscious, efforts by other countries to encircle the Net are not. To see what I mean by that, read Rebecca MacKinnon‘s Are China’s demands for Internet ‘self-discipline’ spreading to the West? Her short answer is yes. Her long answer is covered in these paragraphs:

To operate in China, Google’s local search engine, Google.cn, had to meet these “self-discipline” requirements. When users typed words or phrases for sensitive subjects into the box and clicked “search,” Google.cn was responsible for making sure that the results didn’t include forbidden content.

It’s much easier to force intermediary communications and Internet companies such as Google to police themselves and their users than the alternatives: sending cops after everybody who attempts a risque or politically sensitive search, getting parents and teachers to do their jobs, or chasing down the origin of every offending link. Or re-considering the logic and purpose of your entire system.

Intermediary liability enables the Chinese authorities to minimize the number of people they need to put in jail in order to stay in power and to maximize their control over what the Chinese people know and don’t know.

In its bombshell announcement on Jan. 12, Google cited massive cyber attacks against the Gmail accounts of human rights activists as the most urgent reason for re-evaluating its presence in China. However, the Chinese government’s demands for ever-increasing levels of censorship contributed to a toxic and unsustainable business environment.

Remember that phrase: intermediary liability. It’s a form of encirclement. Rebecca again:

Meanwhile in the Western democratic world, the idea of strengthening intermediary liability is becoming increasingly popular in government agencies and parliaments. From France to Italy to the United Kingdom, the idea of holding carriers and services liable for what their customers do is seen as the cheapest and easiest solution to the law enforcement and social problems that have gotten tougher in the digital age — from child porn to copyright protection to cyber-bullying and libel.

I’m not equating Western democracy with Chinese authoritarianism — that would be ludicrous. However, I am concerned about the direction we’re taking without considering the full global context of free expression and censorship.

The Obama administration is negotiating a trade agreement with 34 other countries — the text of which it refuses to make public, citing national security concerns — that according to leaked reports would include increased liability for content hosting companies and service providers. The goal is to combat the global piracy of movies and music.

I’m not saying that we shouldn’t fight crime or enforce the law. Of course we should, assuming that the laws reflect the consent of the governed. But let’s make sure that we don’t throw the baby of democracy and free speech out with the bathwater, as we do the necessary work of adjusting legal systems and economies to the Internet age.

Next, What Big Content wants from net neutrality (hint: protection), by Nate Anderson in Ars Technica. According to Nate, more than ten thousand comments were filed on the subject of net neutrality with the FCC, and among these were some from the RIAA and the MPAA. These, he said, “argued that the FCC should encourage ISPs to adopt ‘graduated response’ rules aimed at reducing online copyright infringement”, and that they “also reveal a content-centric view of the world in which Americans will not ‘obtain the true benefits that broadband can provide’ unless ‘copyrighted content [is] protected against theft and unauthorized online distribution'”. He continues,

What could graduated response possibly have to do with network neutrality? The movie and music businesses have seized on language in the FCC’s Notice of Proposed Rulemaking that refuses to extend “neutrality” to “unlawful content.” The gist of the MPAA and RIAA briefs is that network neutrality’s final rules must allow for—and in fact should encourage—ISPs to take an active anti-infringement role as part of “reasonable network management.”

Not that the word “infringement” is much in evidence here; both briefs prefer “theft.” The RIAA’s document calls copyright infringement “digital piracy—or better, digital theft,” and then notes that US Supreme Court Justice Breyer said in the Grokster case that online copyright infringement was “garden variety theft.”

To stop that theft, the MPAA and RIAA want to make sure that any new FCC rules allow ISPs to act on their behalf. Copyright owners can certainly act without voluntary ISP assistance, as the RIAA’s lengthy lawsuit campaign against file-swappers showed, but both groups seem to admit that this approach has now been hauled out behind the barn and shot.

According to the RIAA, “Without ISP participation, it is extremely difficult to develop an effective prevention approach.” MPAA says that it can’t tackle the problem alone and it needs “broadband Internet access service providers to cooperate in combating theft.”

“No industry can, or should be expected to, compete against free-by-theft distribution of its own products,” the brief adds.

“We thus urge the Commission to adopt rules that not only allow ISPs to address online theft, but actively encourage their efforts to do so,” says the RIAA.

And that’s how we get the American Cinternet. Don’t encircle it yourself. Get the feds to make ISPs into liable intermediaries forced to practice “self-discipline” the Chinese way: a “graduated response” that encircles the Net, reducing it to something less, a spigot of filtered “content” that Hollywood approves. Television 2.0, coming up.

Maybe somebody can draw us the Content-o-net.


Last month The Kid and I went to the top of the Empire State Building on the kind of day pilots describe as “severe clear.” I put some of the shots up here, and just added a bunch more here, to share with fellow broadcast engineering and infrastructure obsessives, some of whom might like to help identify some of the stuff I shot.

Most of these shots were made looking upward from the 86th floor deck, or outward from the 102nd floor. Most visitors only go to the 86th floor, where you can walk outside, and where the view is good enough. It costs an extra $15 per person to go up to the 102nd floor, which is small, but much less crowded. From there you can see but one item of broadcast interest, and it’s so close you could touch it if the windows opened. This is the old Alford master FM antenna system: 32 fat T-shaped things, sixteen above the windows and sixteen below, all angled at 45°.

From the 1960s to the 1980s (and maybe later, I’m not sure yet), these objects radiated the signals of nearly every FM station in New York. They’re still active, as backup antennas for quite a few stations. The new master antennas (there are three of them) occupy space in the tower above, which was vacated by VHF-TV antennas (channels 2-13) when TV stations gradually moved to the World Trade Center after it was completed in 1975.

When the twin towers went down on 9/11/2001, only Channel 2 (WCBS-TV) still had an auxiliary antenna on the Empire State Building. The top antenna on the ESB’s mast appears to be a Channel 2 antenna, still. In any case, it is no longer in use, or usable, since the FCC evicted VHF TV stations from their old frequencies as part of last year’s transition to digital transmission. Most of those stations now radiate on UHF channels. (All the stations continue to use their old channel numbers, even though few of them actually operate on those channels.) Two of those stations — WABC-TV and WPIX-TV — have construction permits to move back to their old channels (7 and 11, respectively).

That transition has resulted in a lot of new stuff coming onto the Empire State Building, a lot of old stuff going away, and a lot of relics still up there, waiting to come down or just left there because it’s too much trouble to bother right now. Or so I assume.

For some perspective, here is an archival photo of WQXR’s original transmitting antenna, atop the Chanin Building, with the Empire State Building in the background. The old antenna, not used in many years, is still up there. Meanwhile the Empire State Building’s crown has morphed from a clean knob to a spire bristling with antennae.

Calling the Fat Tail

I think I’ve figured out a lot of what’s up there, and have made notes on some of the photos. But I might be wrong about some, or many. In any case, a lot of mysteries remain. That’s why I’m appealing to what I call the “fat tail” for help.

The “fat tail” is the part of the long tail that likes to write and edit Wikipedia entries. These are dedicated obsessives of the sort who, for example, compile lists of the tallest structures in the world, plus the many other lists and sub-lists linked to from that last item.

Tower freaks, I’m talking about. I’m one of them, but just a small potato compared to the great Scott Fybush, who reports on a different tower site every week. Among the many sites he has visited, the Empire State Building has been featured twice: January 2001 and November 2003. Maybe this volunteer effort will help Scott and his readers keep up with progress at the ESB.

This Flickr set, by the way, is not at my home pile, but rather at a new one created for a group of folks studying infrastructure at Harvard’s Berkman Center, where I’m a fellow. I should add that I am also studying the same topic (specifically the overlap between Internet and infrastructure) as a fellow with the Center for Information Technology and Society at UCSB.

Infrastructure is more of a subject than a field. I unpack that distinction a bit here. My old pal and fellow student of the topic, , visits the topic here.

Getting back to the Empire State Building, what’s most interesting to me about the infrastructure of broadcasting, at least here in the U.S., is that it is being gradually absorbed into the mobile data system, which is still captive to the mobile phone system, but won’t be forever. For New York’s FM stations, the old-fashioned way to get range is to put antennas in the highest possible places and radiate signals that suck thousands of watts off the grid. The new-fashioned way is to put a stream on the Net. Right now I can’t get any of these stations in Boston on an FM radio. In fact, it’s a struggle even to get them anywhere beyond the visible horizons of the pictures I took on the Empire State Building. But they come in just fine on my phone and my computer.
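
To make the contrast concrete, here is a minimal sketch of the new-fashioned way, in Python, using nothing but the standard library. The stream URL is a made-up placeholder, not any real station’s address; the point is that distance from the antenna never enters into it.

    # Minimal sketch: fetch the first chunk of a webcast over HTTP.
    # STREAM_URL is hypothetical; substitute any station's real stream address.
    from urllib.request import urlopen

    STREAM_URL = "http://streams.example.com/some-ny-station.mp3"

    with urlopen(STREAM_URL, timeout=10) as stream:
        chunk = stream.read(64 * 1024)  # the first 64 KB of audio
        print(f"got {len(chunk)} bytes; the height of the antenna doesn't matter here")

The same few lines work whether the listener is across the street from the transmitter or across an ocean from it.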

What “wins” in the long run? And what will we do with all these antennas atop the Empire State Building when it’s over? Turn the top into what King Kong climbed? Or what it was designed to be in the first place?

Infrastructure is plastic. It changes. It’s solid, yet replaceable. It needs to learn, to adapt. (Those are just a few of the lessons we’re picking up.)


I posted a lot today, but nothing matters more — or has been more on the front of my mind — than Haiti. What hell, that such an already troubled country should be hit by an earthquake so severe, and so close to its densest population centers.

So, as I try to get my head around the situation, here’s a list of links, in the order that I visit them:

I’ll add more as time goes on.

Also please read the comments below. The three (so far) from Andrew Leyden are excellent.

I just posted this essay to IdeaScale at OpenInternet.gov, in advance of the Open Internet Workshop at MIT this afternoon. (You can vote it up or down there, along with other essays.)  I thought I’d put it here too. — Doc


The Internet is free and open infrastructure that provides almost unlimited support for free speech, free enterprise and free assembly. Nothing in human history, with the possible exception of movable type, has done more to encourage all those freedoms. We need to be very careful about how we regulate it, especially since it bears only superficial resemblances to the many well-regulated forms of infrastructure it alters or subsumes.

Take radio and TV, for example. Spectrum — the original “bandwidth” — is scarce. You need a license to broadcast, and can only do so over limited distances. There are also restrictions on what you can say. Title 18 of the United States Code, Section 1464, prohibits “any obscene, indecent or profane language by means of radio communication.” Courts have upheld the prohibition.

Yet, as broadcasters and the “content industry” embrace the Net as a “medium,” there is a natural temptation by Congress and the FCC to regulate it as one. In fact, this has been going on since the dawn of the browser. The Digital Performance Right in Sound Recordings Act (DPRSA) came along in 1995. The No Electronic Theft Act followed in 1997. And — most importantly — there was (and still is) the Digital Millennium Copyright Act of 1998.

Thanks to the DMCA, Internet radio got off to a long and very slow start, and is still severely restricted. Online stations face payment requirements to music copyright holders that are much higher than those for broadcasters — so high that making serious money by webcasting music is nearly impossible. There are also tight restrictions on what music can be played, when, and how often. Music on podcasts is essentially prohibited, because podcasters need to “clear rights” for every piece of copyrighted music they play. That’s why, except for “podsafe” music, podcasting today is almost all talk.

There is also a risk that we will regulate the Net as a form of telephony or television, because most of us are sold Internet service as gravy on top of our telephone or cable TV service — as the third act in a “triple play.” Needless to say, phone and cable companies would like to press whatever advantages they have with Congress, the FCC and other regulatory bodies.

It doesn’t help that most of us barely know what the Internet actually is. Look up “The Internet is” on Google and see what happens: http://www.google.com/search?hl=en&q… There is little consensus to be found. Worse, there are huge conflicts between different ways of conceiving the Net, and talking about it.

For example, when we say the Net consists of “sites,” with “domains” and “locations” that we “architect,” “design,” “build” and “visit,” we are saying the Internet is a place. (Where, presumably, you can have free speech, enterprise and assembly.)

But if we say the Net is a “medium” for the “distribution” of “content” to “consumers,” we’re talking about something more like broadcasting or the shipping industry, where those kinds of freedoms are more restricted.

These two ways of seeing the Net are both true, both real, and both commonly used, to the degree that we mix their metaphors constantly. They also suggest two very different regulatory approaches.

Right now most of us think about regulation in terms of the latter. That is, we want to regulate the Net as a shipping system for content. This makes sense because most of us still go on the Net through connections supplied by phone or cable companies. We also do lots of “downloading” and “uploading” — and both are shipping terms.

Yet voice and video are just two among countless applications that can run on the Net — and there are no limits on the number and variety of those applications. Nor should there be.

So, what’s the right approach?

We need to start by recognizing that the Net is infrastructure, in the sense that it is a real thing that we can build on, and depend on. It is also public in the sense that nobody owns it and everybody can use it. We need to recognize that the Net is defined mostly by a collection of protocols for moving data — and most of those protocols are open to improvement by anybody. These protocols may be limited in some ways by the wired or wireless connections over which they run, but they are not reducible to those connections. You can run Internet protocols over barbed wire if you like.
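
For the technically inclined, here is a minimal sketch of what “not reducible to those connections” means in practice, assuming only a reachable host (example.com stands in for any server). The program speaks TCP and HTTP; nothing in it names, or needs to name, the wire underneath.

    # Minimal sketch: an application speaking Internet protocols, blind to the link layer.
    import socket

    # Open a TCP connection. Whether the packets ride fiber, DSL, Wi-Fi,
    # or barbed wire is the link layer's business, not this program's.
    with socket.create_connection(("example.com", 80), timeout=10) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = conn.recv(1024)

    # Show just the status line of the reply.
    print(reply.decode("ascii", errors="replace").splitlines()[0])

Swap the physical connection underneath and the code does not change. That independence is much of what makes the protocols infrastructure rather than plumbing.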

This is a very different kind of infrastructure than anything civilization has ever seen before, or attempted to regulate. It’s not “hard” infrastructure, like we have with roads, bridges, water and waste treatment plants. Yet it’s solid. We can build on it.

In thinking about regulation, we need to maximize ways that the Net can be improved and minimize ways it can be throttled or shut down. This means we need to respect the good stuff every player brings to the table, and to keep narrow but powerful interests from controlling our common agenda. That agenda is to keep the Net free, open and supportive of everybody.

Specifically, we need to thank the cable and phone companies for doing the good work they’ve already done, and to encourage them to keep increasing data speeds while also not favoring their own “content” subsidiaries and partners. We also need to encourage them to stop working to shut down alternatives to their duopolies (which they have a long history of doing at both the state and federal levels).

We also need to thank and support the small operators — the ISPs and Wireless ISPs (WISPs) — who should be able to keep building out connections and offering services without needing to hire lawyers so they can fight monopolists (or duopolists) as well as state and federal regulators.

And we need to be able to build out our own Internet connections, in our homes and neighborhoods — especially if our local Internet service providers don’t provide what we need.

We can only do all this if we start by recognizing the Net as a place rather than just another medium — a place that nobody owns, everybody can use and anybody can improve.

Doc Searls
Fellow, Berkman Center for Internet & Society
Harvard University

[Later…] A bonus link from Tristan Louis, on how to file a comment with the FCC.

