The Commercial Internet

This week I read “The Long Tail” by Chris Anderson, “To do with the price of fish” in The Economist, “Shared, Collaborative and On Demand: The New Digital Economy” by Aaron Smith, “$1 Billion for Dollar Shave Club” by Steven Davidoff Solomon and “Robopocalypse Not” by James Surowiecki.

These readings were all about the economic effects of the internet, both on novel online industries and on real-world business. In the case of the Economist article, wireless networking was used to coordinate fish sales in India. Cell phones allowed fishermen to call ahead and decide where to sell the day’s catch for the most profit. Originally, prices differed a lot between markets: if one boat came back with a good catch of a certain fish, others would too, and the increased supply often led to price drops at one market while supply was too low at another. Cell phones, and the internet generally, can help coordinate pricing and make the larger economy more efficient. Ironically, this is what the Soviets were trying to do with OGAS, as I wrote about earlier, while in the US it was illegal to use the internet for commercial purposes until it was privatized circa 1990 (Erik Fair).

Online businesses have also had a huge impact on the retail industry. Chris Anderson discusses this in “The Long Tail,” where he shows that low-volume products are profitable on the internet even when they wouldn’t be worth the shelf space in a physical-world store. These products are the “long tail.” The three main industries he talks about are music, books and film. Anderson argues that the new ability for record labels and the like to keep selling low-volume items long past their typical shelf lives is a great opportunity to make more money, even as peer-to-peer file sharing networks distribute more and more music less legally. His argument is that as long as the cost is low enough, people prefer buying to downloading illegally, because getting paid content for free on the internet has costs of its own (like getting caught, and the general hassle). Therefore, if record labels offer older and smaller-market songs for a low price (49 cents), people will prefer this to the alternatives. Since the labels couldn’t sell these songs at all before online sales, this is pure extra revenue. This argument goes against the oft-touted the-internet-is-killing-our-industry complaint from record labels (although he does predict that the retail (what’s a CD?) music industry won’t last long).
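
As a back-of-the-envelope illustration of the long tail argument (a sketch with made-up numbers and an assumed Zipf-like sales curve, not Anderson’s data), the aggregate revenue from the unshelvable tail can be the same order of magnitude as the hits:

```python
# Toy long-tail arithmetic: assume sales of the item at rank r fall off as 1/r.
# Catalog of 1,000,000 songs at 49 cents each; compare the revenue of the
# top 1,000 "hits" with everything ranked below them.
catalog, hits, price = 1_000_000, 1_000, 0.49
sales = [1_000_000 / rank for rank in range(1, catalog + 1)]
head_revenue = sum(sales[:hits]) * price
tail_revenue = sum(sales[hits:]) * price
print(f"hits: ${head_revenue:,.0f}   tail: ${tail_revenue:,.0f}")
# Under these (made-up) assumptions the tail earns nearly as much as the
# head, and none of it would be worth shelf space in a physical store.
```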

What struck me most is that Anderson wrote this article in 2004 and his analysis still makes sense and his predictions are quite accurate. He predicts that flat-fee unlimited streaming services will take over in the future. What he doesn’t predict, however, is the rise of crowdsourced and peer-produced content. His article predates YouTube by a couple of months, so this omission isn’t a big surprise, but sites like YouTube have been incredibly disruptive because the audience and the content creators are the same people. YouTube doesn’t pay people to upload videos, and especially for the first few years of its existence, the site was dominated by amateur videos produced by people who were guaranteed no compensation for their efforts. YouTube is a video entertainment company that crowdsources all its content production and only shares advertising profits after the fact: video producers don’t sell their content to YouTube, they only get a share of the revenue it creates. And YouTube had already become successful before it started sharing profits at all, through its partner program in 2007.

Twelve years after Anderson, Steven Davidoff Solomon wrote about Unilever’s $1 billion purchase of Dollar Shave Club. Solomon writes that the Dollar Shave Club business strategy was innovative in that the company was only a marketing platform that contracted out all the real-world work. It managed to compete against Gillette’s market dominance through a mix of branding, customer loyalty, and convenience. It also undercut Gillette by forgoing relationships with retail stores (and physical retail entirely). Solomon notes that anyone could have bought Dollar Shave Club razors without the branding directly from their supplier’s website, and saved money. They didn’t invent cheaper razors; they just came up with a better way to sell them.

These online “platforms” are very elaborate and often brilliant schemes to profit off of other people’s work. Just like Dollar Shave Club, which connected customers with a cheaper razor manufacturer through an extremely convenient interface, many online companies are just specialized retailers that connect customers with products. Some, like YouTube, even manage to turn their users into both their product (advertising targets) and the producers of their content. “Platforms” also have the benefit of not having to deal with labor relations and can masquerade as software companies. Of course, none of this is to say that I think it’s all bad: YouTube especially has played an important role in lowering the barrier to entry for content production, and online music libraries have allowed far more musicians than just the big hits to profit off of their work and connect with fans.

The tubes are in parallel

This is the third time I’ve read End-to-end Arguments in System Design (Saltzer, Reed & Clark) and the paper has gotten better every time. The main takeaway for me is that whatever feature you might want the network to offer, it’s usually much better to implement it at the end-point application level than at the network level. Ensuring FIFO delivery or ensuring all packets arrive is nice, but such features are trouble when all you care about is getting most of the packets there quickly (as in voice calls). The end-points can take care of packet ordering and data integrity by using checksums or hashes of the data, and if the application requires both perfect accuracy and timeliness, network-level “solutions” won’t work either, because some things simply can’t be done (networks can get faster and more accurate, but as long as there are errors and a speed of light, there will always be a trade-off between speed and reliability). Further, because end-point applications have to assume some error rate, they will always have to check for errors anyway. The conclusion is that having the network do more will get in the way of applications that don’t require those features, because features inevitably include trade-offs.
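
To make the end-to-end point concrete, here is a minimal sketch (my own illustration in Python, not from the paper; the function names and SHA-256 choice are arbitrary) of an application verifying integrity itself instead of trusting the network to do it:

```python
import hashlib

def send(data: bytes) -> tuple[bytes, str]:
    # The sender computes a digest over the payload before handing it
    # to the (assumed unreliable) network.
    return data, hashlib.sha256(data).hexdigest()

def receive(data: bytes, expected: str) -> bytes:
    # The receiver re-computes the digest at the end-point; corruption
    # introduced anywhere along the path is caught here, regardless of
    # what guarantees the network did or didn't offer.
    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError("corrupted in transit; request retransmission")
    return data

payload, digest = send(b"hello, network")
assert receive(payload, digest) == b"hello, network"
```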

I think BitTorrent is a good example of doing things the right way. BitTorrent divides files into “pieces” and checks for errors on each individual “piece”; it doesn’t re-download the whole file because one bit was dropped. It does this in part because all the pieces come from different places, so it’s important to know that you are getting the right data and not something completely different (when downloading from a single source you may not get what you wanted either, but the only fixable issues are errors, because downloading the wrong file entirely is more a question about life choices that has nothing to do with networks). I think BitTorrent is a particularly clever protocol because it embraces the weaknesses of the internet and provides an end-point solution. By weaknesses I mean the bottlenecks that happen when a lot of traffic is going to one server (e.g. for downloads), both at the server itself and on its connection to the internet, as well as the inevitable reliability issues. There are no bottlenecks when the tubes are all in parallel.
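
BitTorrent really does keep a SHA-1 hash for each piece in the .torrent metainfo; the rest of this sketch (the function names, the refetch callback) is my own simplification of the per-piece check:

```python
import hashlib

def verify_piece(piece: bytes, expected_sha1: bytes) -> bool:
    # Each piece is hashed independently against the hash from the
    # .torrent file, so corruption is localized to a single piece.
    return hashlib.sha1(piece).digest() == expected_sha1

def assemble(pieces, piece_hashes, refetch):
    # refetch(i) asks some peer (possibly a different one) for piece i
    # again; only the bad piece is re-downloaded, never the whole file.
    result = []
    for i, (piece, expected) in enumerate(zip(pieces, piece_hashes)):
        while not verify_piece(piece, expected):
            piece = refetch(i)
        result.append(piece)
    return b"".join(result)
```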

Instead of relying on someone else to fix the system, BitTorrent fixes the problems itself. Organizations that advocate treating packets differently based on what they are might gain from taking this sort of approach. Of course, the subversive nature of BitTorrent might convince them otherwise. Oh well.

ARPANET to Internet and why we might end up with OGAS

I’ve finished Where Wizards Stay Up Late by Hafner and Lyon and read “How the Soviets invented the internet and why it didn’t work” by Benjamin Peters.

“Standards should be discovered, not decreed.” -Unnamed TCP/IP advocate.

After finishing Where Wizards Stay Up Late, I began to wonder how on earth any of this could have happened, especially with so many companies trying to profit off of it at every step while simultaneously stopping others from doing so. BBN tried to keep their IMP software secret, but gave in and let people have the source code (for a fee), and ARPA nearly sold the whole ARPANET to AT&T (AT&T still thought packet switching wasn’t the future — close call). But in the end the major decisions were made by engineers who yelled at each other a bunch and implemented things in ways that worked, even when some governments supported another way (like TCP/IP vs OSI). Email was hacked together using ARPANET’s file transfer protocol and soon began being used for all sorts of not-allowed things like socializing, or anything else that wasn’t government business (email didn’t care, but ARPA technically did).

This all happened somehow, with a lot of government funding and companies that funded long-term internal projects with no immediate value that often weren’t commercially viable (notably, BBN had financial troubles because they failed to make enough of their products profitable — but at the same time they did quite a bit for the Internet). We only need to look at the infamous business model of AT&T/Bell and their telephone network to see a sad example, not of failure, but of an inability to allow for real innovation. The ARPANET was in some senses like a monopoly, albeit government controlled (socialists?!) but predominantly benevolent and open/free. Once TCP/IP took over, more and more networks started being connected to the ARPANET (hence the Inter-net). This is precisely what AT&T feared most regarding the telephone network; it simply wasn’t in their interests to let other companies profit using their network. The openness of innovation in the ARPANET and early Internet isn’t at all characteristic of the free market: it was driven by discovery and true innovation rather than money and profit, done by people and companies who did not necessarily get rich off of it, even when their ideas caught on.

This train of thought led me to wonder what the Soviets were up to at the time. A short search later I found that the Russians did indeed have something, kinda, and that it was a complete failure. OGAS (All-State Automated System) was meant to facilitate economic planning and pricing decisions across the Soviet Union. It was an idealist’s dream for the communist (cyber)state. In typical Soviet fashion, the ideal meant nothing in practice, and conflicting rational interests kept OGAS from becoming anything. At least, this is how Benjamin Peters describes it. According to Peters, “The first global computer network emerged thanks to capitalists behaving like cooperative socialists, not socialists behaving like competitive capitalists.” I don’t know enough about Soviet politics to know whether warring government ministries could ever be described as capitalist competition, but it does seem plausible all the same.

Peters continues by referencing Bruno Latour’s argument that technology is society made durable. In other words (says Peters), this means that “social values are embedded in technology.” He quickly ties this into modern mass surveillance projects like Facebook, Microsoft Cloud and the NSA, saying that these may continue the “20th-century tradition of general secretariats committed to privatizing personal and public information for their institutional gain.” As society moves toward accepting centralized control on massive scales (governments that see all and a Facebook that knows all), our technology will begin to reflect this centralized philosophy; exactly the opposite of the “values” of openness, innovation and discovery that were the initial drivers of the Internet.

In other news, nano has a built-in spell check!

Distributed Networks and Grad Students — Week 1 (ish)

A post: I’ve read the first six chapters of Where Wizards Stay Up Late by Hafner and Lyon as well as a dozen or so Wikipedia pages. This post concerns my thoughts on these.

What struck me most about the early development of the ARPANET were the parallel discoveries of packet switching (or distributed adaptive message block switching) by Donald Davies and Paul Baran. The idea was that instead of using circuit switching like the telephone network, where an uninterrupted circuit between two participants was held open for the entire length of the call, the next-generation network (ARPANET, internet, etc.) should divide data into discrete packets and send them along their way, bouncing from node to node until they arrived at their destination, where they would be reassembled. This method called for a distributed network, as Baran showed in 1964.
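
As a toy illustration of the reassembly step (my own sketch, not any real protocol’s format): each packet carries a sequence number so the destination can put the message back together even when packets arrive out of order via different routes:

```python
def packetize(message: bytes, size: int = 8):
    # Split the message into numbered packets of `size` bytes each.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Packets may arrive in any order; sort by sequence number and
    # stitch the payloads back together at the destination.
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"a message, divided into discrete packets")
packets.reverse()  # simulate out-of-order arrival over different routes
assert reassemble(packets) == b"a message, divided into discrete packets"
```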

Baran came up with this idea because a distributed network would be more resistant to nuclear attacks than a centralized one. I think this is somewhat silly, because having a network that doesn’t fail after one node fails is nice even if it’s not the Russians breaking it. The distributed model is necessary for a functioning network that doesn’t need constant maintenance at central hubs. This was Davies’s motivation: he wanted an efficient solution to the networking problem.

To me, the fact that both came up with the same idea separately indicates that the distributed net wasn’t just an engineer’s solution to a given problem, but a scientific discovery of an optimal model. Just like the wheel or the arch, the distributed net is a concept beyond its physical applications (let’s not get any further into this, as the reader may have majored in philosophy and actually have read Plato). The inventor of the arch may have wanted to add extra features, but an arch remains just an arch.

Of course, the above argument fails to hold after Baran and Davies, since they were the last to independently discover the internet (a distributed network based on packet switching). Next came the actual, real internet, a.k.a. the ARPANET.

ARPA (as in DARPA, before they put “defense” into everything) contracted out the work of designing and building their net to several different companies. BBN (Bolt, Beranek and Newman) was, at least in Hafner and Lyon’s account, the central designer of the internet’s physical layer. The wires themselves were from AT&T, but those weren’t very special; the special parts were the IMPs (Interface Message Processors) put together by BBN. The IMPs were the computers that divided the data up into packets and sent them along to the next IMP until they arrived at the destination IMP, which would hand the data over to the computer(s) it was attached to. BBN made several important design decisions concerning the IMPs and how they worked, which I list in order of interest to me:

  1. Their threat model was curious grad students who wanted to play around with the new computer in the office.
  2. The IMPs were designed to be as invisible as a wall outlet. (toddlers with forks vs grad students with screwdrivers?)
  3. The IMPs were controlled centrally; BBN could update their software over the air (wires) and nobody (grad students) was supposed to tinker with their own versions of IMP software.

The first line of defense against 1. was a military-specification case for their Honeywell 516 minicomputers. (This also meant that all IMPs had identical software and hardware; talk about a large attack surface. Not that it mattered back then.) The second line of defense was 2.; who tries to reprogram wall outlets anyway? By keeping the network working, BBN kept the grad students occupied and out of the precious IMPs. Of course this had far, far greater implications, as it allowed the internet to scale (see Where’s Waldo’s last post) and eventually allowed for a nation of internet users without a damn clue as to how the thing works or what it’s made of (it’s probably a series of tubes, but don’t quote me on that one).

Lastly: the extremely centralized control of the early ARPANET. BBN’s model of control is the complete opposite of the philosophy of a distributed, or even a decentralized, network. Yes, the nodes are distributed and packets flow through them whichever way is quickest, but the puppet masters at BBN still controlled everything (except for ARPA, which controlled them). But what if ARPA had ceded control to BBN, and AT&T had bought BBN and inevitably ruined everything? If the internet is going to be controlled centrally, then it must be treated as a public good, or we risk some telecom coming in and adding “features” (charging per byte, etc.). The other solution is to make governance so confusing, opaque and, most importantly, boring-sounding that only a few good(ish) souls bother to take part. As far as I can remember, that’s what ended up happening, along with RFCs.

I think I will write about RFCs next week and maybe by then I’ll have figured out exactly what happened after ARPA. My intuition from seeing Wikipedia’s uses of RFCs is that they are key to more distributed governance.
