“The Effects of the Internet on the Economy, from Working to Shopping to Finding Information”

This past week, the Equifax breach has risen in the public consciousness as the latest example of corporations failing to properly secure consumer data. Leaks like this are becoming normalized in some sense. Target had one. Yahoo had one. Heck, even the Office of Personnel Management within our federal government suffered a massive data breach. Yet the consequences of losing control over our data, and the identities tied to it, are becoming increasingly severe as we move more parts of our lives into the digital realm. It’s clear that we need a new paradigm for building security into critical services, but that’s easier said than done. The Atlantic published a poignant piece today, titled “The Coming Software Apocalypse,” on how critical systems have failed over the years and what’s being done to build systems that live up to a higher standard.

Another trend we’re seeing with recent events is the close intermingling of technology with civic life and politics. Facebook’s sale of ads to Russian bot accounts demonstrates the threat targeted advertising and social media pose to our democratic systems. While Facebook has promised more significant efforts to block political manipulation on its platform, the exact role of a giant social media platform in today’s age remains to be defined. Sure, it’s a corporation designed to extract maximum profit through an advertising revenue model, but at what point does it become so big as to warrant further scrutiny and regulation? When roughly two-sevenths of the world uses a platform, it no longer behaves like a simple product; it becomes something like a new public space for the world to gather in, albeit one that is neither truly free nor public. But what responsibilities does this place on a corporation that acts as a content gatekeeper? Should the government intervene, or should it let the free market run its course? The concern with the latter approach is that the technology sector is increasingly becoming an oligopoly, with Facebook, Apple, Google, Amazon, and a few other firms controlling the major platforms and therefore serving as gatekeepers for any upstart entrants.

On another note, I’m fascinated with the future of work in a technological, AI-first society. The increasing automation of occupations poses fundamental questions about what it means to be human. Do we work to live or live to work? Say it is possible for most jobs to be automated away, and somehow we manage to reap the benefits of this increased productivity equitably. Some may still argue that life would lose meaning in this scenario, with massive hordes left with no sense of purpose in their lives, relegated to beings who merely sleep and eat. Of course, this is a far-off scenario, but imagining it helps in analyzing proposals meant to address technological unemployment, like universal basic income. I worked on a year-long research project last year analyzing the implications of technological unemployment for public policy, looking specifically at case studies like the trucking industry. A cursory reading of opinions on the topic reveals a key difference at the root: will artificial intelligence be a substitute for human labor or a complement to it? Throughout history, fearmongering has accompanied the introduction of new technology, only for the fears to prove unfounded as people find new, more interesting things to work on with the extra time. Will it be any different this time? Our relationship to work has already begun to shift with the rise of the gig economy, for better or worse, which is another topic that could be analyzed to no end.

The long tail model has shifted the way we consume goods and services, and the way companies take a data-centric, targeted approach to finding new sales. The shift in consumer patterns has been felt most strongly in retail. While 50 years ago it was the big-box stores obsoleting mom-and-pop outfits, now the same big-box retailers are feeling increasingly threatened by online giants like Amazon. Well, mostly Amazon. The ease of scaling infrastructure in the cloud, as compared to on the ground, is what enables companies like Amazon to pursue a long tail approach. In fact, Amazon turned out to be so good at scaling that it launched Amazon Web Services as a standalone offering, creating a new revenue stream beyond retail. With the rise of machine learning and predictive analytics, retailers can target us better than ever using carefully assembled profiles. I think the future of e-commerce is one in which the long tail is taken to its logical extreme, where instead of presenting personalized recommendations, a computer simply orders what we need before we can be bothered to think about it. Amazon is already working toward this vision, with Dash buttons mounted in relevant places, subscriptions to products, and ties into smart devices with various sensors. Peak retail is when shopping becomes effectively effortless.
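To make that anticipatory-ordering idea concrete, here is a toy sketch (with made-up purchase data, certainly not Amazon’s actual system) in which a naive predictor simply extrapolates from past purchase intervals:

```python
from datetime import date, timedelta

# Made-up purchase history for a single household staple.
purchases = [date(2017, 6, 1), date(2017, 7, 2), date(2017, 8, 1), date(2017, 9, 2)]

# Naive prediction: average the gaps between purchases and project forward.
gaps = [(later - earlier).days for earlier, later in zip(purchases, purchases[1:])]
avg_gap = sum(gaps) / len(gaps)
next_order = purchases[-1] + timedelta(days=round(avg_gap))

print(f"average reorder interval: {avg_gap:.1f} days")
print(f"auto-order scheduled for: {next_order}")
```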


“The Technology of the Internet”

We started off our discussion with some food for thought regarding the current turbulent state of the tech industry. It seems as though Uber has been put through scandal after scandal over this past year, ranging from accusations of stealing Waymo’s intellectual property, to Susan Fowler’s recounting of systemic sexual harassment, to Travis Kalanick’s hostile encounter with his Uber driver. Ultimately, it was probably for the best that Kalanick stepped down, both for himself and for the company’s return to normalcy. At the same time, this could prove to be a critical test for Uber, demonstrating whether the company can successfully move past its “bully” phase to become a mature, compliant enterprise without sacrificing the intense innovation that got it to market dominance. It remains to be seen how Dara Khosrowshahi will take the reins at Uber, and in what direction he takes the company.

I enjoyed our discussion of the past week’s turbulence within the world of cryptocurrency and blockchain. Crypto seems to have finally hit the mass market, perhaps inevitably, with the skyrocketing price of Bitcoin putting it in mainstream news outlets constantly. I believe the technology underlying blockchain has the potential to upend just about any industry where transparent, auditable, and distributed data is important. It’s often said that today we have the internet of information, but blockchain finally gives us access to an internet of value, perhaps fundamentally transforming our economic systems. I hope we continue to discuss cryptocurrencies and blockchain technology as we move into our discussion of the Internet economy next week.
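To make the “auditable and distributed” claim a bit more concrete, here is a minimal sketch (not any real cryptocurrency’s implementation) of how hash-linking blocks makes tampering with history detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Recompute every link; editing any earlier block breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
append_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))                 # True
chain[0]["data"]["amount"] = 500     # tamper with history
print(verify(chain))                 # False
```

Anyone holding a copy of the chain can rerun the same verification, which is where the transparency and auditability come from.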

Returning to the readings, the idea of the Internet seems obvious in hindsight. There are many different networks with unique features but much of the same functionality, so why not link them together? Putting this in words is easy, but implementing the technology required far more thought and technical effort. The genius of the Internet is that it doesn’t require much of a central body to set rules. Instead, each network can implement different features, and a “gateway” can facilitate cross-network communication simply by acting as a host on multiple networks. The early creators of the Internet had remarkable foresight, thinking about scaling toward a future they could scarcely imagine. The continued use of TCP/IP stands as a testament to the thought they put into scaling. I was amazed by how well thought out the “End-to-End Arguments in System Design” paper was. By removing complexity within the network and reducing it to dumb pipes, the designers enabled far more adaptive innovation on the host side, without requiring complicated network upgrades.
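As a toy illustration of the end-to-end argument (a sketch with invented function names, not real networking code), the integrity check lives entirely at the hosts, while the network remains a dumb pipe that may corrupt bytes:

```python
import hashlib
import random

def unreliable_network(packet: bytes) -> bytes:
    """A 'dumb pipe': it just moves bytes, occasionally flipping one."""
    if random.random() < 0.2:
        return packet[:-1] + bytes([packet[-1] ^ 0xFF])
    return packet

def send(data: bytes) -> bytes:
    """The sending host appends a checksum; the network knows nothing about it."""
    return data + hashlib.sha256(data).digest()

def receive(packet: bytes):
    """The receiving host verifies end to end, returning None on corruption."""
    data, checksum = packet[:-32], packet[-32:]
    return data if hashlib.sha256(data).digest() == checksum else None

message = b"reliability is enforced at the hosts, not in the network"
result = receive(unreliable_network(send(message)))
while result is None:                      # host-level retransmit loop
    result = receive(unreliable_network(send(message)))
print(result)
```

Because the check and the retransmit loop live at the endpoints, the pipe in the middle can be swapped out or upgraded without the hosts ever noticing.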

Finally, I thought the “Web Turns 28” letter accurately captured many of the challenges facing the Internet, and our relationship to it as a society, today. In many ways, the three issues raised are all interconnected. As governments and corporations control increasingly large portions of our online platforms, they gain access to incredible amounts of personal information. Whether this is a “fair tradeoff” for the free services we obtain is difficult to say, but with rapid advancements in big data and machine learning, a few pieces of information can be used to assemble a much larger predictive model, allowing for precise targeting of advertising, news (whether fake or true), and other information. Essentially, organizations get an inside view of our thoughts and actions in a way that would not have been possible a decade ago. In some ways, this loss of privacy seems inevitable as technology marches forward, but will consumers take a stand? I’m interested in where governments decide to go in regulating privacy and data collection on the internet, and in what they decide to collect themselves. One thing’s for sure: there’s no going back, so we might as well adapt our current frameworks.

“The Evolution of the Internet”

Last week, we collectively groaned as we learned about the struggles of creating a system of hardware and software that was seamlessly interoperable. Unfortunately, that story doesn’t change much as we move forward into the software implementation of host-to-host communication on the ARPANET. To be fair, standards and interoperability are something we still struggle with to this day. Why can’t you plug your iPhone’s USB-A charging cable into your MacBook’s USB-C port? (Props to Apple for adopting the Qi wireless charging standard for the iPhone X announced today. As we mentioned in our discussion, once a standard reaches a certain level of market penetration, it’s difficult for a company to resist adopting it.) If one company can’t maintain full interoperability within its own suite of devices, it’s a miracle we managed to settle on fundamental protocols like IP at all.

An interesting point from the reading was how much autonomy was granted to the select group of grad students who developed these communications protocols. Much of this was due to a lack of interest from the orthodoxy, but it turned out to be a blessing in disguise, allowing for bountiful experimentation and innovation. This seems to be a recurring pattern throughout history, where a lack of intervention can sometimes lead to the best outcomes. For example, the urban areas most heavily targeted for renewal and removal of “blight” ironically became the most damaged by well-intentioned government efforts, while neighborhoods like Boston’s North End, left untouched, flourished organically. In the case of ARPANET, the lack of close supervision allowed for rapid evolution of the network in ways unforeseen by the general establishment.

As we touched on in our discussion, the freedom afforded to the developers of ARPANET also enabled the creation of a new set of cultural norms. Today, we still find ourselves pioneering cultural norms on different platforms. How are “green texts” perceived compared to “blue texts” (iMessage’s subtle classification might well be intentional)? What belongs on your Instagram, and what goes on your “finsta”? How does anonymity color our interactions with our peers, and does this anonymity give license to be offensive? Technology develops in tandem with society, with new use cases emerging organically at first and eventually being integrated into the products we use. When a critical mass of users began downloading emoji fonts for use in messages, for example, companies were quick to integrate emoji more deeply into the OS, further implanting them in our cultural vocabulary.

For those who do not adapt, however, demise seems to be a fairly inevitable conclusion. We mentioned DEC in our discussion, a company I would not recognize were it not for my reading of post-mortems. I was recently reading a 1984 Economist article on the company in Lamont. The article described IBM’s embrace of personal computing, contrasted with DEC’s reluctance to leave behind its familiar world of profitable minicomputers. While this may have been a smart short-term business decision by whoever analyzed the potential costs and revenues, it’s clear that choices like this led to the company’s eventual downfall. When it comes to the internet and personal technology, those who do not move fast and break things tend to be broken themselves in no time.

“From Idea to Reality”

I found our discussion of the Alexa-Cortana integration to be a fitting introduction to the challenges of digital standardization, without which the Internet could not exist today. I’ll start with a discussion of the personal assistant sector before extrapolating to the broader Internet. AI-driven digital assistants that live on our phones, smart speakers, and just about anywhere else imaginable have become an integral part of today’s digital landscape, creating a consumer-facing focal point for advances in natural language processing and artificial intelligence. For consumers, however, it’s still a bit of a Wild West. Sure, Alexa can talk to my lightbulb, but what about Siri? Who do I call to turn up the thermostat? The lack of a common protocol hinders usability and the broader adoption of these assistants beyond early adopters.
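To illustrate what a common protocol might buy us, here is a hypothetical sketch (the interface and device names are invented, not any vendor’s actual API) of a shared device-control abstraction that any assistant could target:

```python
from abc import ABC, abstractmethod

class SmartDevice(ABC):
    """A hypothetical shared interface that any assistant could target."""

    @abstractmethod
    def set_state(self, **kwargs) -> None:
        ...

class Thermostat(SmartDevice):
    def __init__(self) -> None:
        self.temperature_f = 68

    def set_state(self, **kwargs) -> None:
        self.temperature_f = kwargs.get("temperature_f", self.temperature_f)

class Lightbulb(SmartDevice):
    def __init__(self) -> None:
        self.on = False

    def set_state(self, **kwargs) -> None:
        self.on = kwargs.get("on", self.on)

# With a shared abstraction, any assistant (Alexa, Siri, Cortana, ...)
# could issue the same call against any device in the home.
devices = {"thermostat": Thermostat(), "living_room_light": Lightbulb()}
devices["thermostat"].set_state(temperature_f=72)
devices["living_room_light"].set_state(on=True)
print(devices["thermostat"].temperature_f, devices["living_room_light"].on)
```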

In this context, it makes perfect sense that Amazon and Microsoft would want to tie their assistants closer together. By doing so, they create a virtual environment from which users can access all the services they want. They sacrifice a walled-garden approach to make a more usable product for consumers. Looking at the two companies’ prior actions, this isn’t too far out of the ordinary. Microsoft has shown itself willing to get its Office software onto as many platforms as possible, including iOS and Android, in an effort to be wherever its users are. Amazon has opened up Alexa to third-party software and hardware developers through the Alexa Skills Kit and the Alexa Voice Service, respectively. It all fits into an ideology of moving fast and breaking things, of being unafraid to cannibalize your own product lest technology leave you behind.

As we read in Where Wizards Stay Up Late, excessive competition and a lack of coordination hindered US defense capabilities until Eisenhower established ARPA to unify military R&D. Much as competition from Google and Apple pushed Amazon and Microsoft toward collaboration, Soviet competition forced the US to revamp its military operations. I heard about the rivalry between military branches firsthand while working at the Naval Research Lab this past summer. My mentors always kept in mind that competition for resources is intense, and part of my work there involved developing systems to allocate those resources dynamically.

What’s fascinating about the book’s introduction to ARPANET is the collaboration between the military, industry, and academia. ARPA had the resources and governmental authorization to set up such a complex network, BBN (Bolt Beranek and Newman) provided instrumental contracting in building the infrastructure, and universities comprised the major nodes in the network. Collaboration like this is what allowed the US to become the birthplace of the Internet. Likewise, the idea of IMPs sharing a common protocol for sending packets is ultimately a precursor to today’s Internet routers and the layered protocols, from IP all the way up to HTTP, that run over them. Somehow, the Internet managed to establish base protocols that enabled boundless innovation on top of them in the years to come.
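As a rough illustration of the store-and-forward idea behind the IMPs (a toy sketch with an invented routing table, not the actual 1822 protocol), each node only needs to know the next hop toward a destination, with no central coordinator in the loop:

```python
# Toy store-and-forward routing over an invented four-node topology.
# Each node holds only a local next-hop table keyed by destination.
NEXT_HOP = {
    "UCLA": {"SRI": "SRI", "UTAH": "SRI", "UCSB": "UCSB"},
    "SRI":  {"UCLA": "UCLA", "UTAH": "UTAH", "UCSB": "UCSB"},
    "UCSB": {"UCLA": "UCLA", "SRI": "SRI", "UTAH": "SRI"},
    "UTAH": {"UCLA": "SRI", "SRI": "SRI", "UCSB": "SRI"},
}

def forward(packet: dict, node: str) -> None:
    """Pass the packet hop by hop; no node needs a view of the whole network."""
    while node != packet["dst"]:
        node = NEXT_HOP[node][packet["dst"]]
        print(f"forwarded to {node}")
    print(f"delivered at {node}: {packet['payload']!r}")

forward({"dst": "UTAH", "payload": "LOGIN"}, node="UCLA")
```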

(Obligatory xkcd on proliferating standards: https://xkcd.com/927/)

Nonetheless, we still see many issues today that stem from the early days of the network. We mentioned in our discussion that security was not an integral consideration when the Internet was originally designed. Since then, the scope of the Internet has expanded exponentially. How do we keep billions of users and their data safe? How can we create a secure online voting system? Much of the work involves adapting to use the network in clever new ways, but often we also need to revamp fundamental parts of the Internet without breaking existing protocols. Moving from IPv4 to IPv6 has been one such transition. One thing’s for sure: somehow we agreed on this set of standards, and it’s here to stay, touching everything in the process.
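To get a feel for why the IPv6 transition matters, and why it is so painful, a quick sketch with Python’s standard ipaddress module shows the gulf in address space between the two versions:

```python
import ipaddress

# IPv4: 32-bit addresses, about 4.3 billion in total.
v4 = ipaddress.ip_network("0.0.0.0/0")
print(v4.num_addresses)    # 4294967296

# IPv6: 128-bit addresses, roughly 3.4e38 of them.
v6 = ipaddress.ip_network("::/0")
print(v6.num_addresses)    # 340282366920938463463374607431768211456

# The two formats aren't wire-compatible, which is why hosts and routers
# have had to run dual stacks (or tunnel) throughout the long transition.
print(ipaddress.ip_address("192.0.2.1").version)      # 4
print(ipaddress.ip_address("2001:db8::1").version)    # 6
```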