Is Cyber Real?

In class this week with Dr. Michael Sulmeyer, we had a particularly policy-fueled discussion. For a considerable portion of our time together, we landed on the thought-provoking issue of Russian interference in this past election, examining their actions and how the U.S. ought to have responded. I found the comparison between physical attacks and cyber attacks to raise an interesting series of questions. In this day and age, when the Internet holds a great deal of power over our lives, does “illegal” activity online warrant the same level of response as a crime committed in person?

Personally, I have a hard time justifying a physical response to a cyber-provocation. In the case of Russia, for example, unless their actions had caused some sort of physical repercussions, e.g. violence in which people died, I don’t think a military response would have been necessary. There are plenty of cases in which online activity has incited tangible action; take the online posts that organize violent protests or terrorist attacks, for example. In those instances, I think it’s much easier to argue for a strong, physical response.

On the election interference issue, I would much rather have seen an equivalent response from the U.S. Sanctions are pretty weak and clearly haven’t stopped Russia from doing what it wants. Rather, I lean toward sneaky, anonymized action on the Russian Internet. Something like the political response Dr. Sulmeyer suggested seems reasonable. Just as they undermined our democratic process, it seems best to have undermined their government, i.e. Putin, with action that produces the same type of propaganda.

At the root of this debate, moreover, lies a much more general discussion: whether or not the Internet has really become so ingrained in our lives that we treat everything on it as “real.” For me, there is so much volatility online, so much unchecked, free space, that it is hard to take everything on the net at face value. For the time being, and I could easily see this changing in the near future, I still see a distinction between real life and life on the Web, particularly when physical intervention carries such dire consequences. Most importantly, war that starts as cyberwarfare should stay cyber.

Internet Governance

In this week’s discussion, we returned to exploring the history of the Internet. Our guest, Professor Zittrain, talked at length about the topic of Internet governance. Even today, control over the Internet remains an important issue. In the early days, there were a host of organizations – IANA/Jon Postel, ICANN, etc. – which claimed to regulate the Internet in some way. In a sense, this was fine back then. Everyone who used the Internet probably knew each other, at least tangentially. Jon Postel knew the people to whom he was assigning domain names, and there was not really any competition at play.

Today, however, we live in an era where the Internet is largely decentralized – in theory, anyway. With so many users and so many sites serving up content, there can’t really be one entity which decides who gets what. In different countries, for example, the Internet looks vastly different, especially when comparing the loose regulation of the U.S. to, say, the tight constraints of China. Yet, on the other hand, we also have large companies like Facebook and Google, in particular, in a sense curating many Internet users’ experiences. To navigate to a website, most people search it up on Google. And, a lot of people will rely heavily on the Facebook feed for updates.

Similarly, when it comes to domain names, we have companies like GoDaddy and Amazon making profits. Here especially, one must ask: what qualifies these companies to make money off a system that is intended to be open and decentralized? It seems wrong for someone to be making a profit off of something that should be in the public domain (no pun intended). Yet, the other solution is to have some agency control aspects of the Internet. Certainly, we wouldn’t want the government to poke its head into this realm, and there isn’t an organization that would handle it for free or without bias.

Perhaps then, as with the economy, the best way to make something as free as possible is to turn it into a competitive marketplace. Of course, even in our open market, we still have protections in place, to prevent monopolies, for example. The question remains then, can the Internet really remain decentralized forever, and if not, who should take the responsibility for its regulation, and to what extent?


The New Generation of Internet

The Internet was designed by humans for humans…or so we thought. To the extent that we have used the Internet in our daily lives, there has always been the expectation that what we saw was human-generated. That is, the information, the articles, the comments, etc. were all written by real people for other people to consume. Yet, we are moving to an age where computer-generated content is becoming a larger and larger percentage of what people see online.

Already, there are fake social media accounts which artificially inflate the popularity of some users on these sites. On Instagram, for example, people buy these bots to increase their follower count and the number of likes they receive on their posts. Similarly, we see computer-generated likes on Reddit and YouTube. It is this latter case which can be worrying. To a large degree, it undermines the legitimacy of the approval and disapproval system, since there is no longer the concept of one user, one vote. At least in my experience, there is an inherent trust that comments with hundreds or thousands of likes and up-votes are actually popular. Rarely would I stop to consider that those numbers were fake.

Yet, it would appear that, especially with the rapid development of machine learning and AI, a lot more content on the Internet will be computer-generated. Indeed, some companies actually seem to be embracing this trend. Already, simple news stories are written by programs fed with facts and set to write based on some template. At a recent Quora tech talk which I had the chance to attend, CEO Adam D’Angelo was also receptive to the idea of machine-generated content. He noted that while the technology is still immature, he could easily envision questions on his site being written by AI. He was more hesitant about the idea of computer programs answering questions, since in his opinion that requires human experience. But it still indicates that the online community is positive about the role of AI.
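
To make that template idea concrete, here is a minimal sketch of the kind of fill-in-the-blanks story generation described above. The facts and template are invented purely for illustration, not drawn from any real news system.

```python
# A toy version of template-based news writing: a program is "fed with facts"
# and slots them into a pre-written sentence structure.
# The data below is made up purely for illustration.
facts = {
    "team_a": "Harvard",
    "team_b": "Yale",
    "score_a": 21,
    "score_b": 14,
    "venue": "Harvard Stadium",
}

template = (
    "{team_a} defeated {team_b} {score_a}-{score_b} on Saturday at {venue}. "
    "The win keeps {team_a}'s season on track."
)

print(template.format(**facts))
```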

I, however, remain skeptical about the potential dilution of the Internet resulting from a plethora of computers generating and posting content. It’s already hard enough to sift through the pile of content posted by people on social media. Imagine trying to decipher the truth and/or the relevant information when you don’t even know whether it was written by a human…

Open Government

Although I didn’t have the chance to stay for the entirety of this week’s seminar, it was very interesting to hear David Eaves’s perspective on how the Internet is affecting, and will affect, government. In particular, I find the idea of open government quite interesting, especially as we move deeper into an age of Big Data. There is no doubt that the government holds, and will continue to amass, vast quantities of data. A significant portion of our Internet dealings, from the emails we send to the purchases we make on Amazon, ends up in the databanks of the NSA. We’re far removed from the era of targeted wiretaps. Data collection is easier and broader than ever before.

With that much data comes a great amount of power. The government could, compiling data from a variety of sources, generate very detailed, accurate profiles on pretty much anyone in the U.S. — maybe even the world. It very likely knows more about us than our closest friends and family. Imagine if it used that knowledge to blackmail citizens….

To an extent, the transparency characteristic of an open government should serve to mitigate that power dynamic. If the dealings of the government are available for all to see (theoretically with greater and greater frequency, as documents and the like shift from paper to digital formats), there is an inherent check on what it does. However, it is important to note that the government still requires some degree of secrecy to function. If all our strategies were out on the Internet, the U.S. would be vulnerable to attacks of all kinds from foreign agents. Furthermore, if everyone knew what the government was looking at, it would be easier to skirt around the law.

Already, there are some initiatives to establish a bit of openness in the government (cf. data.gov), but as we know from events such as the Snowden leaks, there is still a vast amount of secrecy. It will be interesting to see how involved the populace will be in pushing the government toward a more open approach, and, by extension, to what extent the government will actually reciprocate and be straightforward about its dealings.


The Future of AI

As we move into consideration of where technology will take us, it becomes more and more a game of speculation. This week’s discussion of AI and the ideas behind the singularity felt like it belonged in the realm of sci-fi writers and directors. It’s amazing to think that in this day and age — where, to be frank, Siri can barely understand basic queries — we are already starting to worry about the creation of a real-life Skynet.

I’m skeptical of the idea that something which stems from the human mind could possibly surpass our own conception of intelligence. Almost anything created by humans, especially something on the scale that would kick off the singularity, tends to be flawed in some way. And I highly doubt that the program would be smart enough to fix itself. So, even if we somehow got to the cycle of self-iteration characteristic of the singularity, wouldn’t there always be some backdoor in the superintelligence, giving humans the chance to shut it down? I’m not sure that, within the limitations of the physical world, there could ever even be an all-powerful AI.

Perhaps more attainable is the idea of an AI that can mimic a human. Here too, I am hung up on the notion that there is inherently something special about the neurochemical reactions that drive human life. It’s really difficult to believe, though it may be true, that something as complex and powerful as the human mind could be replicated by a (very complex) circuit, provided the model were taught well. Given the limitations of current computing, for example the constraints on the number of bits in memory, it seems especially far-fetched. Indeed, most everything in tech that attempts to reproduce real life is a mere approximation — e.g. images are represented by pixels in a limited colorspace.
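
As a toy illustration of that approximation point (the numbers are made up and not a claim about any particular system), here is what happens when a continuous quantity gets squeezed into a fixed number of bits:

```python
# Digital representation forces a continuous quantity onto a finite grid of
# levels; the fewer the bits, the coarser the approximation.
def quantize(value: float, bits: int) -> int:
    """Map a brightness in [0.0, 1.0] to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round(value * levels)

brightness = 0.123456789  # an arbitrary "real-world" value
for bits in (1, 4, 8):
    level = quantize(brightness, bits)
    recovered = level / (2 ** bits - 1)
    print(f"{bits}-bit: stored as level {level}, recovered as {recovered:.6f}")

# A standard 24-bit RGB pixel offers 2**24 (about 16.7 million) colors --
# a lot, but still a finite palette laid over a continuous spectrum.
print(2 ** 24)
```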

Then again, maybe it’s possible, just beyond the limitations of our current technologies. This gets back to the idea I discussed in my last blog post, that it is difficult for humans to recognize the potential of what’s in front of them, even just in terms of “smart” consumer items. There really is no telling where we are heading in the future with tech, let alone something as complicated and exciting as AI. Right now, we’re left speculating.


Skeptics and Visionaries

The “Internet of Things.” Never in my (relatively short) time here on Earth did I imagine that my fridge would talk to my phone, or Netflix, or just about anything on the Internet. And, actually, the one I have doesn’t. For now, I just put my food in there and take it out when I’m hungry, hoping it’s not spoiled by the time I get to it. But I honestly wouldn’t mind having my fridge tell me when I’m running out of milk or let me play Angry Birds while I wait for the microwave to finish. In fact, it might be kind of nice. Sure, I really don’t need this kind of functionality from a refrigerator. Yet, if it is capable of making my life easier, even by a tad, is a smart fridge really that bad?

A lot of people are skeptical of the trend towards interconnectedness. To an extent, I understand the worry; it is a little weird to think that Google could hold all my thermostat data by way of Nest. And, I certainly get the fear that a hacker could screw with my car while I’m driving down the highway, putting me in a perilous situation. Privacy and, especially, security are serious concerns that always arise with technological advancements — often for good reason. In fact, a little skepticism here and there might be a good thing. It could force companies to put a little more care into their encryption algorithms and general security practices, if consumers show some hesitation or hold out entirely on buying a product.

I think too often, though, we as consumers — especially those of us who tend toward skepticism — struggle, and often fail, to see the potential of some technologies and products. After all, it is more or less a cost-benefit analysis that causes people to either embrace or steer clear of an advancement. A fridge that costs more, collects data about your food choices and, quite frankly, probably everything it hears, doesn’t sound so good if all you’re getting in return is essentially a big tablet embedded in your fridge. Might as well just tape an iPad to your fridge door. But the idea of a smart fridge is so much more. Envision a fridge that knows your grocery list and orders things for you when you’re running low on a particular item. Envision a fridge that can more finely and more efficiently control its temperature, saving energy and allowing food to last longer.

Sometimes when products are put out in their infancy, they get crushed in the market. Consumers don’t really see products for their potential and, to be fair, shouldn’t necessarily have to, when they are spending so much money on them. So shout out to the visionaries who can see far enough into the future to know where their (seemingly minute) technological advancements of today will lead them in 5, 10, or however many years. And props, too, to the people who share the same foresight, buying not into the tangible product of today, but the very intangible yet realizable ideas of tomorrow (a cliché example, but seriously, cf. Elon Musk and all the people who bought the original Model S and Roadster).

The Human Element

There is something special about human interaction. It’s certainly much more pleasant to call customer service and hear a human voice on the other end than the robotic din of an automated system; not to mention, the human (at least some of the time) will actually help you get what you need. Walking into a store, big or small, and reaping the benefits of an employee’s knowledge and experience can be much nicer than sifting through online reviews and trying to find an item in the back aisle. And, for whatever inexplicable reason, something becomes unique when we know that a human was in some way involved with it. In particular, following our discussion in class, I feel this pertains to the manufacturing industries.

For a long time, Rolls-Royce has been considered the cream of the crop when it comes to automobiles. Not only are its cars notoriously comfortable, but truly their allure lies in the exquisite craftsmanship and attention to detail put into each vehicle that leaves the production line — I hesitate to even use that term, because of how much “soul,” for lack of a better word, is involved in the process. From a purely utilitarian standpoint, a car made by machines would probably be more reliable, and definitely much cheaper, than a Rolls. The handcrafted engine will probably break down, and will definitely be a pain to service. But there continues to be a market for these and other similar luxury items. In a sense, people view these hand-made items almost as forms of art.

Humans are not always rational creatures. Humans are definitely not perfect. When a machine does something, as far as we are concerned, the outcome is more or less perfect. A hand-carved sculpture is flawed. Yet, we admire without end the works of Bernini and Michelangelo. We don’t show the same appreciation for 3D-printed models (unless, of course, we design them ourselves). Character comes not from being perfect, but from the story, the passion, the thought behind the product. We humans appreciate the impractical, exactly because there is something special to be appreciated.

I don’t think robots are going to completely take over manufacturing. Sure, mass-produced products like phones and t-shirts can be made by automated processes. Even then, though, there will always be humans at the inception. The design, the engineering, the testing — not everything can be done by a computer. As long as humans don’t themselves become robots, that is, don’t completely lose personality, there will always be a place for a human somewhere along the line. Especially at the higher end of the market, I would be shocked to see a move away from hand-crafted, labor-intensive processes anytime soon.


Standardization

As we come to the end of our look at the beginnings of the Internet, I think it’s valuable to consider the role of standardization — specifically its impact on the way the Internet, and things in general, develop. On the one hand, there is the very obvious fact that, when producing something at a large scale, there has to be some agreement between the involved parties. To give a simple real-life example, there would be no cooperation between people if we didn’t have standard ways to communicate. If each person spoke a different language, we certainly wouldn’t get anywhere. The analogue with regard to the Internet is, of course, the various protocols that define, at least to some extent, how users of the network ought to act. From TCP/IP, which has weathered the test of time, to HTTP, there are numerous standards that allow the Internet to run.
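
To make the point about shared protocols a bit more tangible, here is a minimal sketch of a raw HTTP/1.1 request. The host is just a placeholder; the point is simply that any web server, whoever built it, understands this exact format because everyone agreed on the standard.

```python
import socket

# Because HTTP is standardized, this hand-written request works against
# essentially any web server; the host below is only a placeholder.
host = "example.com"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Print just the status line and headers.
print(response.decode("utf-8", errors="replace").split("\r\n\r\n")[0])
```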

In my opinion, however, there is equal merit to individuality, or at least to competing standards. Almost always, the first idea is not the best one, or even the second best. Either we build upon our original ideas and greatly refine them, or we throw them out entirely and substitute a superior concept. A “free market” of ideas, where people are able to propose their own thoughts on something, can be incredibly instrumental to its ultimate success. Through this open system of evaluation, people are able to test things out for themselves, in the best case perpetuating a process of iterative refinement and, at the very least, providing several options from which to choose the best. Looking back at the Internet, had OSI never existed, we never would have known how good TCP/IP was. And, perhaps, if more people had been willing to challenge the status quo and develop their own protocols, we might have had an even more efficient system.

Of course, it is pretty much never too late to change and improve a system. There are constantly changes being made to the Internet, despite its massive scale today. And, as a corollary, there are definitely plenty of aspects of the Internet that aren’t standardized. An incredible number of competing technologies and philosophies exist and continue to arise — e.g. when’s the last time someone developed something with Flash? So, I guess, as with just about everything else, we are forced to conclude that standardization is beneficial in moderation. It’s a good starting point to set a few ground rules, but ideally, design should be flexible and subject to constant re-evaluation and improvement.

The Instantaneous Nature of the Net

The beginnings of the Internet are pretty amazing — not just because the ideas were so revolutionary and probably outlandish for the time, but more so because of how far we have progressed since then. The reliability testing that had to be done for FTP, isolating each piece of the network to see which part or parts were failing, is akin to tearing through the walls of your house to see if a rat has chewed through one of your electric cables when a light isn’t working. It is very much analog, tangible. Today, we would never think twice about whether a file sent over the Internet reached its destination. We drag the file to the browser, click “send,” and can, 99.9999% of the time, safely assume that the file will go to its intended recipient.

Much more interesting to me, however, is the idea of instantaneousness. Whenever we use the Internet, unless we’re stuck on a pesky 3G connection — where’s 5G at already? – there is a certain expectation that everything will load immediately. In communications, especially, this is important. Whether using iMessage, Facebook Messenger, or even e-mail, the message is received pretty much right after it’s sent. This is a far cry from when, back in the days of the ARPANET, e-mail was bundled and sent over FTP once a day. Truly, as technology progresses, we become more and more reliant on its capabilities. If e-mail were as slow or as unreliable today as it was back in those days, our society would function a whole lot differently.

We use e-mail for work, school, news, advertising. If, as back in the day, we had to call up each person individually to get a message across immediately, it would take hours out of the day, not to mention the very likely chance that at least a few people would be away from their (very stationary) phones. In the modern age, technology makes us much more productive. Some might say that we’re becoming lazy or losing “real human interaction” by spending all of our time staring at a screen in lieu of a face-to-face conversation. But used effectively, these technologies can really drive us forward in our everyday lives. With specific regard to communication, given the efficiency of today’s systems and products, we need not spend much time on the Internet and with our devices to get things done — certainly not nearly as much as we would have had to spend doing the same things at the same scale back in the days of the ARPANET. To reach a wide audience today, one simply posts a tweet or sends out a mass e-mail, whereas 40 years ago, that might have entailed calling individuals or sending out many, many pieces of mail.

It will be interesting to watch, especially since we seem to have reached almost real-time Internet communication, where future improvements will take place and how they will change the way in which, and the frequency with which, people use the Internet.

Hello world!

Recently, the issue of privacy has resurfaced in my musings. As I was reading some articles for my Expos class, Privacy and Surveillance, I was reminded of our discussion about the joint AI venture between Microsoft and Amazon. When I first read the article, I was quite surprised, since Amazon has always seemed to have a “what you do, we can do better ourselves” attitude. But after some consideration, my surprise has started to turn into a bit of apprehension. After all, we are in an age where pretty much everything we do can be accessed through the Internet. Our first layer of personal security is the fact that people and companies on the Internet don’t and can’t know everything about us. The “walled garden” has actually, to a large extent, probably protected our security. It has always been in a company’s best interest to keep our data for itself, often to target ads and services toward specific demographics. There hasn’t really been incentive for companies to pool or share users’ personal data, unless, of course, money is involved.

This joint venture and similar collaborations, however, necessitate the sharing of data. As it is, these AI assistants collect an incredible amount of information. It has been proven time and again that, even though these companies claim their assistants only listen when called, Alexa and Cortana are always listening. It’s bad enough that Amazon has been listening in on family conversations in the living room. It’s bad enough that Microsoft has been tracking everything we do on our computers, from work to play. Now, however, Amazon and Microsoft have access to each other’s data pools. Alexa knows everything about a user that once only Cortana knew, and vice versa. Both companies have a much fuller picture of who is using their devices. From a purely AI perspective, this all sounds great. Responses will be more accurate and tailored toward individuals. But what is the cost to privacy?

It has been said that, in the digital era, there is no such thing as privacy. In a world moving toward data collection, mining, etc. — big data — I am increasingly inclined to agree. But it isn’t just the corporations themselves that are a concern. Piles of personal data are quite attractive to the hordes of black-hat hackers out there. I’d be interested to see a) what else these companies are doing with users’ data and b) what exactly they’re doing on the security front.