
The Right to be Forgotten


This post is mostly a response to this article by Jeffrey Toobin.

The “right to be forgotten” has always struck me as utterly bizarre. The European Court of Justice ruled that the right does not apply to newspaper articles, but does apply to search engines. The former are protected by the freedom of the press, which ranks higher than the right to be forgotten, yet Google must remove certain search results pointing to completely legal news articles. Google is also responsible for deciding which requests to accept, and these requests remain secret. The problem is that there is no way to check what Google has removed and appeal the decision if the censorship harms you. Google does inform webmasters when pages on their sites are hidden, but it is not required to do so, and some argue that Google shouldn’t even be allowed to do that. This right to be forgotten is so complete that even the law has no record of what’s forgotten.

The more I think about it, the right to be forgotten itself doesn’t seem completely unreasonable, even if I really don’t like the EU implementation. If someone were wrongly accused of a crime and then exonerated, news stories about the accusation appearing in search results would be unjustly damaging. Part of the issue is that people actually believe what they see on Google, but I don’t think there’s any way to change that. We can’t expect newspapers to update old articles when the eventual outcome of a situation turns out very differently from what an article suggests; would news articles about the O.J. Simpson trial have to carry little disclaimers saying that he was not found guilty? Does O.J. Simpson qualify for the right to be forgotten? Sure, he’s famous, and Google applies higher standards before forgetting well-known people, but O.J. never wanted to be well known for a crime he “didn’t” commit. Perhaps he forfeited this right when he wrote If I Did It, or did he regain it when he lost the rights to the book?

Of course there is already the well-documented Streisand effect, in which a prominent figure or group attempts to suppress certain information, including through legal means, and the cover-up has the opposite effect. This is perhaps the reason why takedowns are not listed, although the stated purpose of a takedown is not to make the information disappear, but simply to make it less prominent in search results. If Google had a searchable directory of takedowns (not covered by the standard Google search), then the process would be transparent while still preventing casual discovery of the forgotten secret. Google is not expected to know exactly when a user comes from a jurisdiction in which something has been “forgotten,” and as far as I know it has not been forced to take down results in areas outside the jurisdiction of a given law (at least not regarding claims of the right to be forgotten). How would European courts respond if google.com published a list of takedown requests visible only to users who appear to come from outside the EU? Perhaps these requests have legally binding confidentiality clauses that Google is forced to comply with, but that seems quite odd, as the requests are entirely between Google and the person wishing to be forgotten. They are not sealed court records. They are not court records at all. It seems to me that Google is doing its best to comply without accepting every single request, playing its part as an enlightened monopoly.

This is the last point I want to make in this post: none of this would work if Google weren’t an effective monopoly. I’ve noticed in the past that the best way to avoid any legally required Google censorship is to use any other search engine. It works really well. Nobody thinks about erasing search results from forgotten search engines. If 90% of searches were shared relatively equally among a handful of companies instead of just Google*, then anyone who wants to be forgotten would have to request takedowns from five different companies using five different platforms. Presumably these companies wouldn’t all honor exactly the same requests, and so a result might be gone from Google but still up on DuckDuckGo and Yahoo!. For the semi-forgotten that’s probably still better than nothing, but it’s not really the same effect as in a Google-dominated world. It also places a lot more onus on the requester, and I wonder whether in such a world the ruling would have expected the requester and the companies to do this. Perhaps the courts or some government agency would be responsible for centralizing the requests and sending them off to all the search engines. I think it would be very unreasonable to force every search engine to employ the lawyers needed to deal with requests to be forgotten. Google probably likes this, because it makes it far more difficult for a new competitor to enter the scene.

In other news, the BBC published a list of articles Google has removed from certain search results.

 

*A Google search indicated that this is its current market share.

Responsible Cyberwar


What counts as war and what doesn’t? This seems to be the main question in political science regarding cyberconflict. From the perspective of governments, the question is more like “how much can we get away with?” From what I know, espionage has typically not been treated as an act of war, although the unlucky spies didn’t fare too well regardless. Espionage often involves infiltrating the enemy (and friends!), and if and when those spies get caught, the target country gets to decide what to do with them. In this way there’s accountability. In areas like traditional signals intelligence (e.g. terrestrial radio), it would be quite odd to view listening in as an aggressive act. The hacking of computer systems to gain intelligence seems different from all of these examples; there’s infiltration but no human culprit. As for destructive hacking, I see no significant difference between it and dropping bombs or destroying paper-based information. Just because you use a computer to do it doesn’t change the fact that you made something happen in the physical world. Cyberattacks can be a lot more specific in what they destroy; in the case of Stuxnet and the Iranian centrifuges, the bug literally broke only the machines. The traditional variant of the attack would probably have involved several thousand pounds of bunker-busting bombs and left a giant smoking hole in the ground. It would also have killed people, which would probably have been seen as a far bigger affront. However, cyberattacks are not inherently narrow in their scope of destruction; they can miss their intended targets, and cyberarms can fall into the wrong hands.
The WannaCry attacks are an excellent example of this: NSA tools were leaked and used by criminals to write malware that shut down hospitals all over the world.* Thinking about it now, this seems very analogous to the Obama-era “Fast and Furious” operation that resulted in the arming of Mexican drug cartels (of course that wasn’t the point; everything just went terribly wrong). The big problem with cyberarms is that if they are leaked, they can be copied and adjusted to attack entirely different targets. Once in the wild, it’s nearly impossible to stop their spread.

What I’m trying to get at here is that cyberattacks aren’t about war; they’re about creating weapons that will inevitably get into the wild, where they can be distributed without limitation. Anyone who thinks they can control cyberweapons and simultaneously use them is delusional. Lucas Kello notes that the costs of cyberattacks include losing precious zero-day vulnerabilities, because those are patched once an attack reveals their use. But the cost is higher than that; until patched, these zero-days can be used by other countries, criminal gangs, or bored teenagers. That’s a really big problem. A responsible government would surely not want such a thing to happen. Nobody would drop re-usable bombs, because that would be really stupid.

A responsible government has an alternative option: instead of setting malware-weapons loose, it could notify developers of the same vulnerabilities. Any country with an active “cyber-attack” squad already has most of this in place. They already know how to find bugs; the only remaining step is to tell someone who can do something about it.

But what about irresponsible governments? They will certainly use cyberweapons if they have them and so won’t those bugs get out into the wild anyways?

Not if they’re already patched.


*In case I’m wrong and there really was no direct link between leaked NSA tools and WannaCry, such a scenario could still happen, just as easily as a gun-running sting operation could end up arming the cartels without catching their leaders.

The Middle Mile


This week my thoughts have returned again to monopolies. I still dislike them. Susan Crawford’s article covers the “middle mile” section of the telecom network, effectively the part of the net that connects local ISPs to the internet’s “backbone.” A comprehensive study by the FCC shows that 95% of locations are served by at most two middle-mile providers. According to the Consumer Federation of America, this has cost the American consumer $150 billion since 2010. For me, this is the worst kind of monopoly, because consumers don’t even see it. ISPs pay middle-mile companies, and whereas everyone and their mother hates their ISP, most people don’t know which middle-mile provider their data goes through. Because people don’t know that this company is ripping them off (via their ISP), it’s far easier for those companies to lobby government to deregulate them. Regulation is difficult to maintain when the public is ignorant of the issues at stake.

These debates are just a part of the larger net neutrality debate about what internet delivery companies can and can’t do. Something I’ve learned this semester is that the internet is neutral by design (it treats all data equally). The protocols define the internet and how it works. End users, servers and so on all speak this same language, and if they don’t, they won’t work. The Internet Engineering Task Force (IETF) takes care of this, and it seems to have done a wonderful job at having such a confusing structure that no single interest or company has been able to capture it. The IETF also has no power; it just makes voluntary standards that help maintain compatibility. However, as long as this compatibility is maintained, ISPs and other internet-providing companies can do what they want, unless government or “the market” regulates them. I know that in the past ISPs have banned users from attaching certain types of devices to their networks; interestingly, one such example was WiFi routers (a modern variant is disallowing third-party routers). This has nothing to do with treating data equally, just with how much control the ISPs have over how users use their services. Thus ISPs can still provide a connection to the internet without providing the same freedoms usually associated with it.

In the summer of 2016 I sat in on a meeting of the Cambridge broadband task force, which had been set up by city hall to explore different ways to make broadband connections more attainable and affordable for all city residents. One option they were considering was to provide broadband like any other public utility. I don’t remember if anything came of it, but I think it’s important to look into these sorts of solutions. If there’s anything that the past 100-plus years of telecom history tells us, it’s that the industry tends strongly towards monopolies unless regulated heavily. I worry even more about the wireless industry, because a few key players (Qualcomm) have rather predatory patent licensing practices that literally all 3G and 4G phone manufacturers have to pay for. These patent companies are very similar to the middle-mile providers because they are invisible to the consumer yet control the network and, I assume, cost the consumer millions each year. Qualcomm even has ads saying, “You don’t know us but you enjoy us every 9.8 seconds” — as if.

Information Monopolies


Fake news is not new. I think the recent clamor about it is just the result of a previous political establishment having been usurped by a new coalition of news media and politicians, and of the information monopolies that profit off their rise. Advertising-based, pseudo-curated content platforms like Facebook and Google have made it possible to target far more individuals far more accurately than ever before. Those in politics who have best taken advantage of this are the ones winning the elections. I’m not saying that Breitbart and co. are scheming geniuses who’ve cracked social media, but rather that social media selected them. Far-right conspiracy blogs and news sites aren’t new. Drudge Report started in 1995 as an email-based news blog. Breitbart launched over ten years ago. They just happened to be ready when Facebook and Google pivoted from personal profiles and hyperlink-based search results to platforms that can show us everything we “want,” even before we know we “want” it. Google Search and Google News display news based on proprietary magic: whatever Google thinks you want to read, or thinks you should read, or whatever gives them the most advertising revenue. Facebook has also aggressively shifted towards showing us the news, with more and more “sponsored” content mixed in with the things our friends actually post. As Ethan Zuckerman writes, Facebook denies that it is a publisher, yet it clearly is curating content and not giving users any real choices regarding what they see (users can’t modify what type of content the algorithms select for; they can only block specific pages/profiles).

A novel feature of today’s fake news is that different people believe different lies. In the good old days, vast majorities of Americans were caught up in the fake news web. Before the invasion of Iraq, 70% of Americans believed that Saddam Hussein played a direct role in the 9/11 attacks. There were no such links, nor reason to think so. 40% of Americans still think that the US discovered active weapons-of-mass-destruction programs in Iraq; it hadn’t. Heck, in 1876 Western Union, the telegraph monopoly, and the Associated Press, the only wire service allowed to use Western Union’s lines, coordinated to suppress negative news and promote positive news about their chosen candidate for president, Rutherford B. Hayes [Tim Wu]. Comparatively, only 10% of Americans believe that there is strong evidence that Obama was born outside the US.

But why Trump? Hayes was Western Union’s candidate, but I doubt Trump was Google’s and Facebook’s. Rather, I think that Facebook and Google care primarily about advertising sales, preventing federal regulation and tax evasion; i.e. profits. In response to the election and the spread of fake news, both companies have promised to stop it by improving their proprietary magic algorithms to filter out fake news. The benevolent monopolists in Mountain View and Menlo Park feel that they have a civic obligation (or a financial need to appear civic-minded…) to stop the spread of lies. This is like Western Union and the AP promising never to manipulate another election, without giving up their monopolies. The problem is that big tech’s ability to manipulate individuals (voters) has grown so great that it is inherently a threat to democracy. Elections are won by the side that figures out how to best rig the system (news media, not necessarily ballot boxes) and manipulate the most voters. Truth be damned.

Obama had big data, JFK had makeup, Roosevelt had the radio, Hayes had the telegraph and Luther had the printing press. Some of these examples are more spectacular than others, but all show a similar trend: new communication technologies reward those best able to use them and disrupt the previous elite. These upsets aren’t necessarily the work of single politicians; the Daily Mail has been publishing misinformation since at least the start of the 20th century without being connected to any single prime minister. More tabloids followed, supporting both Labour and the Conservatives in sensational fashion. Cable news networks like Fox have unarguably had major impacts on US politics.

The key difference between historical fake news and the modern version is that it has become decentralized. There’s no one person to blame and no one clear motive. I think this is the result of two changes: the generally decentralizing effect of the internet and a loss of trust in large institutions. That the internet has allowed more people to create content that can reach arbitrarily large audiences far more easily than before is well documented. Anyone who can use the internet can write a blog or start an online tabloid complete with all-caps headlines and retouched photos. The loss of trust in large institutions is based on empirical measurements that Ethan Zuckerman points out in his election postmortem. If people don’t trust traditional news sources, companies or governments, then it’s hardly surprising that alternative sources will gain more attention. Perhaps I’m just an actor in this general trend, but I don’t think any of these institutions deserve anyone’s trust, at least not trust that they are working in our best interests. I’ve never understood why anyone would trust a president’s administration or a newspaper except out of willful ignorance (life’s easier when you just trust ’em). Zuckerman also notes that trust in institutions does not correlate with interpersonal trust; Sweden is an example of a country with high levels of interpersonal trust and low trust in institutions. Sweden generally seems like a pretty healthy democracy. I can’t prove that not trusting institutions strengthens democracy, but trusting them seems awfully ignorant. Every administration has told its lies, and every newspaper has at some point lost touch with its mission (remember that time the New York Times waited until after the election, and then some, to say that Bush was tapping our phones without warrants?).
Wikileaks, the Panama Papers, Paradise Papers and Snowden have shown how governments, companies, business people and leaders all over have skirted their duties, hidden their crimes and lied under oath.

These may not be the reasons why the average person distrusts institutions, but I bet everyone has their own reasons, and just like mine, they will be largely correct if somewhat opportunistic and politically motivated. I have not suggested a single solution or remedy. I’m not sure what could be done and what would work. The one thing I’m fairly certain about is that the big tech monopolies must be broken up or regulated as common carriers. Zuckerman says Facebook should let users control the filters that dictate the content they see. Facebook will not want to do this, because too many users would filter out the ads (although they could still appear on the side). The wonderful thing about government is that it can force companies to comply. Strict regulation may be the only option.

 

Digital Citizenship


Estonia:

“Digital citizenship” in the Estonian case seems to be mostly about attracting business. Their pitch is simple: register as an Estonian e-citizen by filling out a couple of forms, get your ID and start your company based in Estonia (which has access to the European Single Market). The German word for shell corporation is “Briefkastenfirma” — literally “post box company.” Perhaps here the term will become “email company.”

The Estonian case is more than a Baltic pitch at Paradise Papers-style tax evasion, because it also includes the concept of proving who you are online and thus being able to digitally sign contracts that are enforceable by law. We can already use credit cards to pay for things online, which is a sort of contract, but they are very limited. Credit card fraud is a big problem, and while the general setup and verification process of Estonian e-citizenship may be better than credit cards, humans will inevitably mess up, leading to computationally inexpensive ways to steal e-identities. This may lead to some issues.
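The mechanism that makes a digital signature legally meaningful is that any change to the signed document makes verification fail. Real e-ID systems use public-key signatures on a smart-card chip; the sketch below is only a toy stand-in using Python’s standard-library HMAC (a shared-secret scheme, not a true public-key signature), chosen just to show the sign-then-verify step. The key and contract text are made up.

```python
import hmac
import hashlib

# Hypothetical signing key and contract text (illustration only; a real
# e-ID uses a public/private key pair, not a shared secret).
secret = b"key held by the signer"
contract = b"I agree to pay 100 EUR by 2018-01-01."

# "Sign" the contract by computing a keyed digest over its bytes.
signature = hmac.new(secret, contract, hashlib.sha256).hexdigest()

def verify(document: bytes, sig: str) -> bool:
    """Recompute the digest and compare in constant time."""
    expected = hmac.new(secret, document, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(verify(contract, signature))                      # True: document intact
print(verify(contract + b" plus interest", signature))  # False: any edit breaks it
```

The point of the sketch is the second call: tampering with even one byte of the signed document invalidates the signature, which is what lets a court treat the signature as binding evidence.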

 

Open source government:

This can mean several very different things. 1) greater transparency, 2) online collaborative lawmaking and 3) open source development of government tools.

1) Greater transparency. Government would be open source in the sense that you could read the source code, which would include not just constitutions and laws but also government interpretations of laws (so that they are precisely defined) and the government’s agenda. Open source might also imply that anyone can submit bug reports or patches, which leads us to

2) Collaborative lawmaking. I think that using an approach similar to open source projects for lawmaking is quite promising. It is a format that would allow many to contribute in small or big ways and wouldn’t require any expertise, kind of like Wikipedia. Of course, we would still need some sort of framework to enact the laws, which again is possible but would require a fundamental shift in politics. There are examples of projects with large similarities to such a framework, like the Icelandic constitutional reform in 2010–13, which included the publishing of drafts and public feedback but did not follow the typical format of an open source project. Lastly, for lawmaking to work it presumably must be seen as legitimate by the people (or enforced autocratically, but that seems antithetical to the point of open source government), and that is difficult when so many can contribute and the Russian troll army has infiltrated — although now that I think of it, maybe this is also a feature of more traditional government…

3) Open sourcing government software. The city of Boston has a GitHub page, and other cities have done similar things. The US federal government has code.gov. These projects are great, but I doubt that making some GitHub pages will lead to much, because open source projects depend on dedicated developers and a base of users who care. I think this is the sort of issue that Aaron Swartz would have been really good at tackling.

Superhuman Intelligence


Ramblings of a college student who thinks he’s smarter than he is; mostly in response to this piece by Vernor Vinge.

I’m not in a position to criticize a whole field, or even a small subset of a field, but I find the general approach to finding “superhuman intelligence” quite unconvincing. The limitations seem to be taken as largely computational, as if the human brain were somehow better than modern hardware. I’m not sure something that runs on less than 100 watts is computationally more powerful than a cluster of computers consuming several kilowatts. From what I understand, the strength of our brains seems to come from some innate structure or features that are dictated genetically. This allows any given human infant to learn any given human language, even with the so-called “poverty of stimulus.”[1] More and more we are finding unexpected characteristics in our “AI” programs, notably those that exhibit human bias in the form of racial prejudice. Surely if these algorithms could learn and “think” just as well as humans, then given enough time even the racist ones would conclude that racism has no biological basis. I think, rather, that any intelligent being can only learn if it has been “pre-programmed” to be “narrow-minded” in some sense. Just as humans may have some sort of innate understanding of grammar, so too must machines. Even if we build our AI using methods that simulate evolutionary processes, we would have to constrain them initially, as the space of all possible genetic combinations (or the space of all possible algorithms) of a reasonable length is far, far larger than what can reasonably be explored (even by evolution in the “natural” world, even given billions of years… with or without accelerating returns). Assuming a static fitness landscape, where exactly we begin will determine which local maxima (in terms of fitness, i.e. intelligence) are plausibly achieved. There is no guarantee of getting anywhere unless we constrain, and if we do, we are not guaranteed anything close to a global maximum.
Also, if we constrain, then we will be the creators and principal designers of super-intelligence. Somehow this sounds much less exciting — assuming we are pretty dumb, human super-intelligence doesn’t sound nearly as exciting as “superhuman intelligence.”
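The local-maximum point is easy to make concrete with a toy hill-climbing search. The fitness function below is an invented one-dimensional landscape with three peaks of different heights; greedy search from different starting points ends up on different peaks, and nothing guarantees the global one.

```python
# Toy rugged fitness landscape: triangular peaks near x = 2, 5 and 8
# with heights 1, 3 and 2. The shape and numbers are made up purely
# to illustrate local vs. global maxima.
def fitness(x: float) -> float:
    return (max(0.0, 1 - abs(x - 2)) * 1
            + max(0.0, 1 - abs(x - 5)) * 3
            + max(0.0, 1 - abs(x - 8)) * 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy local search: move to a neighbor only if it is fitter."""
    for _ in range(iters):
        for nxt in (x - step, x + step):
            if fitness(nxt) > fitness(x):
                x = nxt
    return x

# Where we start determines which peak we reach.
print(round(hill_climb(1.5), 1))  # climbs the small peak near x = 2
print(round(hill_climb(7.6), 1))  # climbs the peak near x = 8, missing the
                                  # global maximum at x = 5 entirely
```

The second run gets stuck on a peak of height 2 even though a peak of height 3 exists, which is exactly the "no guarantee of anything close to a global maximum" problem: the constraint (here, the starting point and step size) decides what is reachable.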

Another potential way to attain “superhuman intelligence” is to augment human intelligence with machines. Vinge discusses a mild form of transhumanism in which humans interact with computers to collaborate on tasks, rather than uploading our brains to silicon chips etc. His argument is that humans augmented with machines can reach a higher intelligence by moving beyond what just a human can do. High school algebra is certainly easier with a calculator, but I think that many of the advances in human–machine interaction and collaboration have not created more intelligent beings. Cellphone distraction causes accidents, both for motorists and pedestrians. Studies show that their mere presence, even when turned completely off, distracts us and makes us perform worse on human-only tasks. Yes, we can communicate far more quickly and with far fewer constraints, and yes, we have the world at our fingertips, but most of us just end up opening and closing the same programs in cycles, scrolling through Facebook news feeds consuming adverts (Russian or otherwise) and arguably gaining nothing but the comfort of dull complacency. All while risking death by distracted street-crossing.[2] Moving on to stronger transhumanism (real physical integration), I expect Facebook, Apple and co. will make this just as distracting (even if it increases productivity, as cellphones certainly can), and again it won’t actually be a significant step towards “superhuman intelligence.” At any rate, I think we should first figure out what makes us intelligent (or not), and what intelligence even is (it’s not chess anymore), before claiming the singularity is near.


[1] Some argue that the poverty of stimulus and universal grammar are not correct theories, but when in doubt I’ll side with Chomsky.

[2] I haven’t found any real evidence that this has happened, but sadly I find it more “inevitable” than super-intelligence.

The Botnet of Things


Whenever I read articles about the Internet of Things (IoT), they all seem to be saying the same things: “smart” is good, “smart” things will solve all our problems and, most importantly, “smart” things are the future. My favorite part is when the article starts proclaiming which problems all these networked doorbells will solve but somehow forgets to define the problem or outline how they will fix anything. It’s always nice to say that there’s this great new technology that will usher in a new era in human development, but I question whether this new thing is really the future (or just a fad) and whether it will actually represent a significant change in our lives or the way we interact with technology.

Of course, viewing the IoT as just a part of modern decadence isn’t reasonable either; one can argue that automated window shades that lower when the sun is about to hit your computer screen will just keep you from getting up, looking out the window and noticing you’d much rather be out there enjoying the day than wasting your time on Buzzfeed. I’m actually very partial to this argument, but I somehow don’t think that indoor plumbing is a disaster because it keeps you from going outside every morning to the outhouse, so there must be limits here. Further, some IoT devices are truly useful, for instance smart thermostats that lower energy consumption by optimizing heating and cooling according to your habits. But back to my earlier point: does any of this make smart thermostats a game changer? Are they more effective in lowering carbon emissions and energy bills than simply turning down the heat and being conscious about energy use? I’m not so convinced.

Lastly, much has been written about security concerns regarding the IoT. Recent DDoS attacks using botnets of routers, home security cameras and other IoT devices have crippled websites and DNS servers. This all happens without the actual owners of the physical devices knowing that they are part of an international cyberattack, which makes the issue difficult to tackle. Neither IoT vendors nor their customers are incentivized to secure these devices. Not all IoT devices are equally prone to hijacking, but security still comes at a premium. Worryingly, embedded smart devices may never be disconnected from the internet, and many will never receive security updates (or be running software that is remotely secure). Neither the producers nor the consumers of these devices will pay their true price; rather, it will be distributed among the victims. I find this highly troubling and indicative of an out-of-control tech industry that “innovates” first and asks questions later. The greatest irony is that some IoT devices are even marketed as increasing “security” by providing smart-home security in the form of locks, alarms and cameras. These devices aren’t just a menace to general internet security; they are often an excellent way for the criminally minded to get access to people’s homes. Imagine how much easier it is to rob a house if you have access to the security cameras ahead of time and can slide past the secure smart lock using a months-old Bluetooth vulnerability (not that normal locks are secure either, but still). I’ll just end by linking to this lovely website I came across where you can scroll through unsecured webcam feeds ordered by location, manufacturer or “popularity.”


The Commercial Internet


This week I read “The Long Tail” by Chris Anderson, “To do with the price of fish” in The Economist, “Shared, Collaborative and On Demand: The New Digital Economy” by Aaron Smith, “$1 Billion for Dollar Shave Club” by Steven Davidoff Solomon and “Robopocalypse Not” by James Surowiecki.

These readings were all about the economic effects of the internet, on novel online industries as well as on real-world business. In the case of the Economist article, wireless networking was used to coordinate fish sales in India. Cell phones allowed fishermen to call ahead and decide where to go to sell the day’s catch for the most profit. Originally, prices differed a lot between markets, because if one boat came back with a good catch of a certain fish, others would too, and the increased supply often led to price drops at one market while supply was too low at another. Cell phones, and the internet generally, can help coordinate pricing and make the larger economy more efficient. Ironically, this is what the Soviets were trying to do with their OGAS, as I wrote about earlier, while in the US it was illegal to use the internet for commercial purposes until it was privatized circa 1990 (Erik Fair).

Online businesses have also had a huge impact on the retail industry. Chris Anderson discusses this in “The Long Tail,” where he shows that low-sales-volume products are profitable on the internet even when they wouldn’t be worth the shelf space in a physical-world store. These products are the “long tail.” The three main industries he talks about are music, books and film. Anderson argues that the new ability for record labels and the like to keep selling low-volume items past their typical shelf life is a great opportunity to make more money, even as peer-to-peer file-sharing networks distribute more and more music less legally. His argument is that as long as the cost is low enough, people prefer buying to downloading illegally, because there are costs associated with getting not-free things for free on the internet (like getting caught, and the general hassle). Therefore, if record labels provide older and smaller-market songs for a low price (49 cents), then people will prefer this to the alternatives. As the record labels weren’t able to sell any of these songs before online sales, this would just be extra cash. This argument goes against the oft-touted the-internet-is-killing-our-industry complaint from record labels (although he does predict that the retail (what’s a CD?) music industry won’t last long).
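Anderson’s arithmetic is easy to sketch. Assuming a Zipf-like demand curve (the item ranked k sells in proportion to 1/k) and made-up catalog and shelf sizes, the revenue share of everything a physical store wouldn’t stock comes out surprisingly large:

```python
# Back-of-envelope "long tail" calculation. The demand curve (1/k) and
# the catalog/shelf sizes are illustrative assumptions, not Anderson's data.
catalog_size = 100_000  # titles an online store can offer
shelf_size = 1_000      # titles a physical store can afford to stock

# Zipf-like sales: the k-th most popular item sells in proportion to 1/k.
sales = [1 / k for k in range(1, catalog_size + 1)]
total = sum(sales)
tail = sum(sales[shelf_size:])  # everything beyond the physical shelf

print(f"tail share of revenue: {tail / total:.0%}")  # → tail share of revenue: 38%
```

Under these assumptions, the items no physical store would carry account for roughly a third of all revenue, which is exactly why selling the tail at 49 cents a song is "just extra cash" for a label.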

What struck me most is that Anderson wrote this article in 2004, yet his analysis still makes sense and his predictions are quite accurate. He predicts that flat-fee unlimited streaming services will take over in the future. What he doesn’t predict, however, is the rise of crowdsourced and peer-produced content. His article predates YouTube by a couple of months, so this omission isn’t a big surprise, but sites like YouTube have been incredibly disruptive because the audience and the content creators are the same people. YouTube doesn’t pay people to upload videos, and especially in the first few years of its existence, the site was dominated by amateur videos produced by people who were guaranteed no compensation for their efforts. YouTube is a video entertainment company that crowdsources all its content production and only shares advertising profits after the fact — video producers don’t sell their content to YouTube; they only get a share of the revenue it creates. YouTube also became successful before it started sharing profits at all through its partner program in 2007.

Twelve years after Anderson, Steven Davidoff Solomon wrote about Unilever’s $1 billion purchase of the Dollar Shave Club. Solomon writes that the Dollar Shave Club’s business strategy was innovative in that the company was only a marketing platform that contracted out all the real-world work. It managed to compete against Gillette’s market dominance through a mix of branding, customer loyalty, and convenience. It also undercut Gillette by forgoing relationships with retail stores (and physical retail entirely). Solomon notes that anyone could have bought Dollar Shave Club razors without the branding directly from their supplier’s website — and saved money. They didn’t invent a cheaper razor; they just came up with a better way to sell one.

These online “platforms” are very elaborate and often brilliant schemes to profit off of other people’s work. Just like the Dollar Shave Club, which connected customers with a cheaper razor manufacturer through an extremely convenient interface, many online companies are just specialized retailers that connect customers with products. Some, like YouTube, even manage to turn their users into both their product (advertising targets) and the producers of their content. “Platforms” also have the benefit of not having to deal with labor relations and can masquerade as software companies. Of course, none of this is to say that I think it’s all bad; YouTube especially has played an important role in lowering the barrier to entry for content production, and online music libraries have allowed far more musicians than just the big hits to profit off of their work and connect with fans.

The tubes are in parallel


This is the third time I’ve read “End-to-End Arguments in System Design” (Saltzer, Reed & Clark), and the paper has gotten better every time. The main takeaway for me is that whatever feature you might want the network to offer, it’s usually much better to implement it at the end-point application level than at the network level. Guaranteeing FIFO delivery or that all packets arrive is nice, but such features are trouble when all you care about is getting most of the packets there quickly (as in voice calls). The end-points can take care of packet ordering and data integrity themselves by using checksums or hashes of the data, and if the application requires both perfect accuracy and timeliness, then network-level “solutions” won’t work either, because some things simply can’t be done (networks can get faster and more accurate, but as long as there are errors and a speed of light, there will always be a trade-off between speed and reliability). Further, because end-point applications have to assume some error rate, they will always have to check for errors anyway. The conclusion is that having the network do more will get in the way of applications that don’t require those features, because features inevitably come with trade-offs.
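The end-to-end check is simple enough to sketch. Here’s a minimal, hypothetical example (the function names are mine, not from the paper): the sender attaches a hash of the payload, and the receiver recomputes it, so corruption anywhere along the path is caught at the endpoint regardless of what guarantees the network does or doesn’t make.

```python
import hashlib

def make_message(payload: bytes) -> tuple[bytes, str]:
    """Sender side: attach a digest so integrity can be verified end-to-end."""
    return payload, hashlib.sha256(payload).hexdigest()

def verify_message(payload: bytes, digest: str) -> bool:
    """Receiver side: recompute the digest; any corruption introduced
    anywhere along the path shows up here, at the endpoint."""
    return hashlib.sha256(payload).hexdigest() == digest

payload, digest = make_message(b"hello, tubes")
assert verify_message(payload, digest)
assert not verify_message(b"hello, tubez", digest)
```

The point is that this check lives entirely in the application; the network underneath can be as dumb and lossy as it likes.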

I think BitTorrent is a good example of doing things the right way. BitTorrent divides files into “pieces” and checks for errors on each individual piece; it doesn’t re-download the whole file because one bit was dropped. It does this partly because the pieces all come from different places, so it’s important to know that you are getting the right data and not something completely different (when downloading from a single source you may not get what you wanted either, but the only fixable issues are transmission errors; downloading the wrong file entirely is more a question about life choices that has nothing to do with networks). I think BitTorrent is a particularly clever protocol because it embraces the weaknesses of the internet and provides an end-point solution. By weaknesses I mean the bottlenecks that happen when a lot of traffic is going to one server (e.g. for downloads), both at the server itself and at its connection to the internet, as well as the inevitable reliability issues. There are no bottlenecks when the tubes are all in parallel.
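The per-piece check can be sketched in a few lines. This is a toy illustration, not the real protocol (a .torrent file stores the expected SHA-1 hashes in bencoded form, and real piece sizes are in the 16 KiB–4 MiB range, not 4 bytes), but it shows why one flipped bit costs you one piece rather than the whole file:

```python
import hashlib

PIECE_SIZE = 4  # tiny for illustration; real torrents use much larger pieces

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> list[bytes]:
    """SHA-1 hash of each fixed-size piece, as stored in a .torrent file."""
    return [hashlib.sha1(data[i:i + piece_size]).digest()
            for i in range(0, len(data), piece_size)]

def bad_pieces(received: bytes, expected: list[bytes],
               piece_size: int = PIECE_SIZE) -> list[int]:
    """Indices of pieces that fail verification; only these need
    re-downloading, and each can come from a different peer."""
    return [i for i, h in enumerate(piece_hashes(received, piece_size))
            if h != expected[i]]

expected = piece_hashes(b"the quick brown fox")
assert bad_pieces(b"the quick brown fox", expected) == []
assert bad_pieces(b"the quiXk brown fox", expected) == [1]  # only piece 1 is bad
```

All of the verification happens at the endpoint, which is exactly the end-to-end argument in action.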

Instead of relying on someone else to fix the system, BitTorrent fixes the problems itself. Organizations that advocate treating packets differently based on what they are might gain from taking this sort of approach. Of course, the subversive nature of BitTorrent might convince them otherwise. Oh well.

ARPANET to Internet and why we might end up with OGAS


I’ve finished Where Wizards Stay Up Late by Hafner and Lyon and read “How the Soviets invented the internet and why it didn’t work” by Benjamin Peters.

“Standards should be discovered, not decreed.” -Unnamed TCP/IP advocate.

After finishing Where Wizards Stay Up Late, I began to wonder how on earth any of this could have happened, especially with so many companies trying to profit off of it at every step while simultaneously stopping others from doing so. BBN tried to keep their IMP software secret, but gave in and let people have the source code (for a fee), and ARPA nearly sold the whole ARPANET to AT&T (AT&T still thought packet switching wasn’t the future — close call). But in the end, the major decisions were made by engineers who yelled at each other a bunch and implemented things in ways that worked, even when some governments supported another way (like TCP/IP vs. OSI). Email was hacked together using ARPANET’s file transfer protocol and soon began being used for all sorts of not-allowed things, like socializing or anything that wasn’t government business (email didn’t care, but ARPA technically did). This all happened somehow, with a lot of government funding and companies that funded long-term internal projects with no immediate value and often no commercial viability (notably, BBN had financial troubles because they failed to make enough of their products profitable — but at the same time they did quite a bit for the Internet). We only need to look at the infamous business model of AT&T/Bell and their telephone network to see a sad example, not of failure, but of an inability to allow for real innovation. The ARPANET was in some senses like a monopoly, albeit government-controlled (socialists?!) but predominantly benevolent and open/free. Once TCP/IP took over, more and more networks started being connected to the ARPANET (hence the Inter-net). This is precisely what AT&T feared most regarding the telephone network; it simply wasn’t in their interest to let other companies profit using their network.

The openness of innovation in the ARPANET and early Internet isn’t at all characteristic of the free market, where money and profit, rather than discovery, drive development. The real innovation was done by people and companies who did not necessarily get rich off of it, even when their ideas caught on.

This train of thought led me to wonder what the Soviets were up to at the time. A short search later, I found that the Russians did indeed have something, kind of, and that it was a complete failure. OGAS (the All-State Automated System) was meant to facilitate economic planning and pricing decisions across the Soviet Union. It was an idealist’s dream for the communist (cyber)state. In typical Soviet fashion, that dream meant nothing, and conflicting rational interests kept OGAS from ever becoming anything. At least, this is how Benjamin Peters describes it. According to Peters, “The first global computer network emerged thanks to capitalists behaving like cooperative socialists, not socialists behaving like competitive capitalists.” I don’t know enough about Soviet politics to know whether warring government ministries could ever be described as capitalist competition, but it does seem plausible all the same.

Peters continues by referencing Bruno Latour’s argument that technology is society made durable. In other words (says Peters), this means that “social values are embedded in technology.” He quickly ties this into modern mass-surveillance projects like Facebook, Microsoft Cloud, and the NSA, saying that these may continue the “20th-century tradition of general secretariats committed to privatizing personal and public information for their institutional gain.” As society moves towards accepting centralized control on massive scales (governments that see all and a Facebook that knows all), our technology will begin to reflect this centralized philosophy; exactly the opposite of the “values” of openness, innovation, and discovery that were the initial drivers of the Internet.


In other news, nano has a built-in spell check!
