
Web superservices solution stack, plus integrated superservices design and management tools: “large pieces loosely joined”


Note: I have revised this post, made some small edits, and put “Cut Two” ahead of “Cut One”; somehow the whole thing reads better to me now.  🙂

Cut Two:

Remember David Weinberger’s “small pieces loosely joined“?  Well, the new world features “large pieces loosely joined.” Or rather, “pieces large and small loosely joined.”

What is happening before our eyes is that certain web sites are becoming “web superservices” and are irreversibly changing the landscape of the web.  A new layer of innovation is here.

Web superservices provide essential functions for solving problems (such as search, storage/archive, security, pooling of information, notification of changes, identification of relationships, analysis of memes); are available on the web as public or near-public global resources with enormous economies of scale and scope; have very simple open APIs; and can be integrated (scripted together and/or customized) by users or near-users to provide custom solutions to important problems.*
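To make the “scripted together” point concrete, here is a minimal sketch (not from the original post) of how a user might join two such services. It assumes each service exposes its search results as an RSS or Atom feed, and it uses the Python feedparser library; the query URLs are hypothetical placeholders, not real endpoints.

```python
# Sketch: joining two hypothetical feed-based "superservices" into one custom result set.
import feedparser

QUERY = "web superservices"

# Each service is assumed to expose search results as an RSS/Atom feed.
# These URLs are made up for illustration.
FEEDS = [
    "http://search.example-service-a.com/rss?q=" + QUERY.replace(" ", "+"),
    "http://search.example-service-b.com/atom?q=" + QUERY.replace(" ", "+"),
]

seen = set()
items = []
for url in FEEDS:
    feed = feedparser.parse(url)          # feedparser handles RSS 2.0 and Atom alike
    for entry in feed.entries:
        link = entry.get("link", "")
        if link and link not in seen:     # de-duplicate results across services
            seen.add(link)
            items.append((entry.get("title", "(untitled)"), link))

# The "custom solution": a merged, de-duplicated result list built from public services.
for title, link in items:
    print(f"{title}\n  {link}")
```

A few dozen lines like these, written by a user rather than a vendor, are the kind of ad hoc integration the rest of this post has in mind.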

This general observation regarding changes in the landscape is being noted by more and more observers of the technology scene.  For example, Jeff Jarvis has two important current posts, Feedthink and Made for the distributed world; both are must-reads.  He quotes Fred Wilson in turn on new business models enabled as Microsoft promotes RSS, and a classic piece by Kevin Hale on The Importance of RSS.

Doc Searls, in an astute and popular column published yesterday, asks for objective evaluation of search engines for the living web. He was inspired mostly by comments after “Robert Scoble posted this and this,” as well as by Adam Penenberg’s piece on Technorati and the London bombings in Wired online, which calls Technorati “a public utility on a global scale.”

The more general version of what Doc is calling for is objective evaluation of a variety of web services, compared to each other within classes (e.g. search, filter, transport, ping, publish, etc.).  We need a J.D. Power for web superservices.  (BTW, if anyone is passionate about this and thinks they are qualified, send me a business plan.)

An intelligent evaluation service is especially necessary in the new
world of web superservices because these new services emerge in an ad
hoc, creative and unpredictable manner.  By contrast, traditional
web services exist in a fixed framework (such as .net) and are much
more easily evaluated.

The new web superservices not only enter the landscape from many
directions, they routinely redefine the category in which they
nominally compete.  Is Technorati like Google?  No.  Is
PubSub like Technorati?  No.  Thus an evaluation service,
even of the seemingly simple class called search engines, will need to
evolve as fast as the services themselves. 

The objective criteria will need to be independent and trustworthy,
while being constantly adapted to keep up with how the web
superservices evolve within themselves and co-evolve with others.

Cut One:

But we can take this whole discussion up one G

An investor looks at the RSS ecosystem, Atom 1.0 and RSS 2.0

ø

Tim Bray has written several important pieces on Atom in the past few days, including one announcing that Atom 1.0 is cooked and ready to serve, and a second comparing Atom 1.0 and RSS 2.0.  There are a number of fine features in Atom 1.0; a great deal of engineering work has been accomplished, and the team deserves congratulations and thanks from the RSS community.  I think it worthwhile to consider three issues in regard to this event, from the standpoint of an investor:

Deployment, stability and simplicity are virtues

In Bray’s comparison of RSS 2.0 and Atom 1.0 he starts with the following:

Major/Qualitative Differences

Deployment

2005/07/13: RSS 2 is widely deployed and Atom 1.0 not at all.

Specifications

The RSS 2.0 specification is copyrighted by Harvard
University and is frozen. No significant changes can be made and it is
intended that future work be done under a different name; Atom is one
example of such work.

The Atom 1.0 specification (in the course of becoming an IETF standards-track RFC) represents the consensus of the Atompub Working Group within the IETF, as reviewed and approved by the IETF community and the Internet Engineering Steering Group.
The specification is structured in such a way that the IETF could
conceivably issue further versions or revisions of this specification
without breaking existing deployments, although there is no commitment,
nor currently expressed interest, in doing so.

Bray thus grants RSS 2.0’s advantage in deployment and, after acknowledging but also criticizing its stability, goes on to enumerate a number of technical advantages implemented in the Atom 1.0 specification.

Unfortunately for Atom 1.0, from an investor’s standpoint, deployment and stability are often what matter most.  This is because a business plan that depends on a standard that is stable and already widely used in the market carries only the risk of its own business model failing.  On the other hand, a business plan that depends on a standard that has not been deployed, and thus that is not really a standard at all, carries the additional risk that the standard will not be adopted.  This is a risk well outside of the control of any given team of entrepreneurs, and outside of the control of an investor; it adds greatly, therefore, to the risk of the whole business plan.  Thus, given the choice between a plan that depends only on RSS 2.0 (or that is neutral between RSS 2.0 and Atom 1.0) and one that depends on Atom 1.0, most investors would be wise to go with the former.

In some cases investors are willing to bet on a nascent business ecosystem as a whole. Even in these cases deployment, stability, and simplicity are often more important than technical features, as long as the simple standards that are deployed are workable.  Why is this? Because they enable a business ecosystem to bring in the largest number of participants, and it is the participants that make the business go.

This is what happened in the establishment of the current RSS
ecosystem.  What enabled the RSS ecosystem to take off in the
first place was a shared
reliance on three minimalist, relatively stable, and ultimately
ubiquitous standards:  URLs and URIs, SOAP, and RSS 2.0. Because
these standards were simple, they were accessible to
entrepreneurs in small companies as well as large. The direct
cost of standards adoption by large companies–such as the New York
Times–was so
small as to be trivial, which encouraged rapid deployment and
experimentation. Because the standards were simple and the
tools were cheap, individuals experimented and some of their seeds took
root and spread. Podcasting started with Dave Winer
helping Chris Lydon record interviews with noted bloggers. Others
jumped in to help, and the podcasting movement took off on top of the
RSS movement.
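As a rough illustration of just how low that adoption cost is (this sketch is not from the original post), the following Python snippet emits a minimal, valid RSS 2.0 feed using only the standard library; the channel and item data are invented for the example.

```python
# Sketch: the adoption cost described above is roughly this small. A publisher can
# emit a minimal, valid RSS 2.0 feed with nothing beyond the standard library.
from xml.sax.saxutils import escape

def rss_item(title, link, description):
    return (
        "    <item>\n"
        f"      <title>{escape(title)}</title>\n"
        f"      <link>{escape(link)}</link>\n"
        f"      <description>{escape(description)}</description>\n"
        "    </item>\n"
    )

def rss_feed(title, link, description, items):
    body = "".join(rss_item(*i) for i in items)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<rss version="2.0">\n'
        "  <channel>\n"
        f"    <title>{escape(title)}</title>\n"
        f"    <link>{escape(link)}</link>\n"
        f"    <description>{escape(description)}</description>\n"
        f"{body}"
        "  </channel>\n"
        "</rss>\n"
    )

# Made-up channel and item data, purely for illustration.
print(rss_feed(
    "Example Weblog",
    "http://example.com/",
    "A made-up feed to illustrate the point.",
    [("First post", "http://example.com/1", "Hello, RSS ecosystem.")],
))
```

Nothing about this requires a vendor framework or a large engineering budget, which is why both the New York Times and a lone blogger could join the same ecosystem.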

Across the landscape small and large RSS experiments fed on each other, stimulating the most rapid technological and business co-evolution I’ve ever experienced. David Weinberger’s image of “small pieces loosely joined” became the basis for a whole new approach to applications.  Ad hoc web services became the norm, connected by users manipulating dead simple standards, and solving problems sometimes literally overnight.

Innovation has shifted to a new level of the RSS ecosystem

But there was a surprise in store for the RSS community.  Some of the ad hoc web services grew and revealed powerful economies of scale and scope.  Small pieces with simple interfaces grew up to become service giants when they were accessible to the whole web and operationally scalable: Technorati, Bloglines, Feedster and Blogger became web superservices.  What now integrates the RSS community is a new approach to information technology solutions that involves scripting together ad hoc global web superservices to create powerful, flexible, focused solutions to problems.  Entrepreneurs are continuing to discover opportunities to innovate, inventing more and more such superservices and joining them together in new and imaginative ways.

Now that the RSS ecosystem has taken off, the specifics of the standards tying together the superservices may not matter as much as before. Ironically, this may open an opportunity for Atom 1.0, because most of the superservices can read and write multiple RSS variants in any case.  On the other hand, it makes the introduction of Atom 1.0 something of a non-event for an investor and a company strategist: Atom 1.0 becomes another standard that perhaps needs to be accommodated, but one that is unlikely to presage much of a change in the landscape.
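To illustrate why the choice of variant is nearly invisible to a consuming service, here is a sketch under the assumption that the consumer uses a normalizing parser such as the Python feedparser library; the two tiny feeds below are invented for the example, and the same code reads both.

```python
# Sketch: why the RSS 2.0 vs. Atom 1.0 choice barely matters to a consumer.
# feedparser normalizes both formats into the same entry structure.
import feedparser

RSS2 = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example (RSS 2.0)</title><link>http://example.com/</link>
  <description>demo</description>
  <item><title>Hello</title><link>http://example.com/1</link></item>
</channel></rss>"""

ATOM1 = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example (Atom 1.0)</title>
  <id>http://example.com/</id>
  <updated>2005-07-18T00:00:00Z</updated>
  <entry>
    <title>Hello</title>
    <id>http://example.com/1</id>
    <updated>2005-07-18T00:00:00Z</updated>
    <link href="http://example.com/1"/>
  </entry>
</feed>"""

for raw in (RSS2, ATOM1):
    parsed = feedparser.parse(raw)
    # The same access pattern works regardless of which syndication format was used.
    print(parsed.feed.title, "->", [(e.title, e.link) for e in parsed.entries])
```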

In business ecosystems, the collective innovation of the community is the most powerful force

I believe there will be important applications for Atom 1.0.  Some of its specific functions will undoubtedly be useful for particular applications.  So I welcome it to the RSS ecosystem.  But I am investing in the RSS ecosystem as a whole, and in the RSS approach to information technology services, not in a specific standard.  The RSS ecosystem now has millions of participants and is growing at an almost alarming rate.  The scale, scope, and network effects within the ecosystem are formidable. The large information technology and media companies are playing, including Google, Yahoo and Microsoft as well as essentially all of the mainstream media companies. RSS is an ecology of ideas and technologies, entrepreneurs and companies, services, applications and communities that is based on the RSS ideal: Really Simple Syndication.

The most important thing now is not any particular idea or technology
or even company–the most important thing now is the worldwide
community of innovators that has been mobilized, capitalized, and has
momentum.  The most powerful thing about a business ecosystem is
millions of people accomplishing things together.
