Standardization

As we come to the end of our look at the beginnings of the Internet, I think it’s valuable to consider the role of standardization, specifically its impact on the way the Internet, and things in general, develop. On the one hand, there is the very obvious fact that, when producing something at a large scale, there has to be some agreement between the involved parties. To give a simple real-life example, there would be no cooperation between people if we didn’t have standard ways to communicate. If each person spoke a different language, we certainly wouldn’t get anywhere. The analogue with regard to the Internet is, of course, the various protocols that define, at least to some extent, how users of the network ought to act. From TCP/IP, which has weathered the test of time, to HTTP, there are numerous standards that allow the Internet to run.
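The language analogy carries over quite literally: a protocol like HTTP works only because sender and receiver agree on the exact same message format. As a minimal sketch (a toy builder and parser for illustration, not a real HTTP client or server), the function and field names below are my own:

```python
# Toy illustration of why standardization matters: both "sides" here agree on
# the HTTP/1.1 request format, so a message one side builds, the other can read.

def build_request(method: str, path: str, host: str) -> str:
    """Compose a minimal HTTP/1.1 request in the standardized wire format."""
    return f"{method} {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

def parse_request(raw: str) -> dict:
    """Recover the request's parts, relying on the same agreed-upon format."""
    request_line, *header_lines = raw.strip().split("\r\n")
    method, path, version = request_line.split(" ")
    headers = dict(line.split(": ", 1) for line in header_lines)
    return {"method": method, "path": path, "version": version, "headers": headers}

req = build_request("GET", "/index.html", "example.com")
parsed = parse_request(req)
# Because both functions follow the same standard, the message round-trips:
assert parsed["method"] == "GET"
assert parsed["headers"]["Host"] == "example.com"
```

If the two sides disagreed on even a small detail, say, the line separator, the parser would fail, which is exactly the "different languages" problem at machine scale.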

In my opinion, however, there is equal merit to individuality, or at least to competing standards. The first idea is almost never the best one, or even the second best. Either we build upon our original ideas and greatly refine them, or we throw them out entirely and substitute a superior concept. A “free market” of ideas, where people are able to propose their own thoughts on something, can be incredibly instrumental to its ultimate success. Through this open system of evaluation, people are able to test things out for themselves, in the best case perpetuating a process of iterative refinement and, at the very least, providing several options from which to choose. Looking back at the Internet, had OSI never existed, we never would have known how good TCP/IP was. And perhaps, if more people had been willing to challenge the status quo and develop their own protocols, we might have ended up with an even more efficient system.

Of course, it is pretty much never too late to change and improve a system. Changes are constantly being made to the Internet, despite its massive scale today. And, as a corollary, there are definitely plenty of aspects of the Internet that aren’t standardized. An incredible number of competing technologies and philosophies exist and continue to arise; after all, when’s the last time someone developed something with Flash? So, I guess, as with just about everything else, we are forced to conclude that standardization is beneficial in moderation. It’s a good starting point to set a few ground rules, but ideally, design should be flexible and subject to constant re-evaluation and improvement.

The Instantaneous Nature of the Net

The beginnings of the Internet are pretty amazing, not just because the ideas were so revolutionary and probably outlandish for the time, but more so because of how far we have progressed since then. The reliability testing that had to be done for FTP, isolating each piece of the network to see which part or parts were failing, is akin to tearing through the walls of your house to see if a rat has chewed through an electric cable when a light isn’t working. It is very much analog, tangible. Today, we would never think twice about whether a file sent over the Internet reached its destination. We drag the file to the browser, click “send,” and can 99.9999% of the time safely assume that the file will reach its intended recipient.

Much more interesting to me, however, is the idea of instantaneousness. Whenever we use the Internet, unless we’re stuck on a pesky 3G connection (where’s 5G at already?), there is a certain expectation that everything will load immediately. In communications, especially, this is important. Whether using iMessage, Facebook Messenger, or even e-mail, the message is received pretty much right after it’s sent. This is a far cry from the days of the ARPANET, when e-mail was bundled and sent over FTP once a day. Truly, as technology progresses, we become more and more reliant on its capabilities. If e-mail were as slow or as unreliable today as it was back then, our society would function very differently.

We use e-mail for work, school, news, and advertising. If, as back in the day, we had to call up each person individually to deliver a message immediately, it would take hours out of the day, not to mention the very likely chance that at least a few people would be away from their (very stationary) phones. In the modern age, technology makes us much more productive. Some might say that we’re becoming lazy or losing “real human interaction” by spending all of our time staring at a screen in lieu of a face-to-face conversation. But used effectively, these technologies can really drive us forward in our everyday lives. With specific regard to communication, given the efficiency of today’s systems and products, we need not spend much time on the Internet and with our devices to get things done, certainly not nearly as much as we would have had to spend trying to do the same things at the same scale back in the days of the ARPANET. For example, reaching a wide audience today is as simple as posting a tweet or sending out a mass e-mail, whereas 40 years ago, that might have entailed calling individuals or sending out many, many pieces of mail.

It will be interesting to watch where future improvements will take place and how they will change the ways in which, and the frequency with which, people use the Internet, especially since we seem to have reached almost real-time in our Internet communications.

Hello world!

Recently, the issue of privacy has resurfaced in my musings. As I was reading some articles for my Expos class, Privacy and Surveillance, I was reminded of our discussion about the joint AI venture between Microsoft and Amazon. When I first read the article, I was quite surprised, since Amazon has always seemed to have a “what you do, we can do better ourselves” attitude. But, after some consideration, my surprise has started to turn into a bit of apprehension. After all, we are in an age where pretty much everything we do can be accessed through the Internet. Our first layer of personal security is the fact that people and companies on the Internet don’t and can’t know everything about us. The “walled garden” has actually, to a large extent, probably protected our security. It has always been in a company’s best interest to keep our data to itself, often to target ads and services toward specific demographics. There hasn’t really been an incentive for companies to pool or share users’ personal data, unless, of course, money is involved.

This joint venture and similar collaborations, however, necessitate the sharing of data. As it is, these AI assistants collect an incredible amount of information. It has been reported time and again that, even though these companies claim their assistants only listen when called, Alexa and Cortana are always listening. It’s bad enough that Amazon has been listening in on family conversations in the living room. It’s bad enough that Microsoft has been tracking everything we do on our computers, from work to play. Now, however, Amazon and Microsoft have access to each other’s data pools. Alexa knows everything about a user that once only Cortana knew, and vice versa. Both companies have a much fuller picture of who is using their devices. From a purely AI perspective, this all sounds great. Responses will be more accurate and tailored toward individuals. But what is the cost to privacy?

It has been said that in the digital era, there is no such thing as privacy. In a world moving toward big data, toward ever more collection and mining of personal information, I am increasingly inclined to agree. But it isn’t just the corporations themselves that are a concern. Piles of personal data are quite attractive to the hordes of black-hat hackers out there. I’d be interested to see a) what else these companies are doing with users’ data and b) what exactly they’re doing on the security front.