“Cyber War, Cyber Conflict, and Cyber Crime”

This past week, a few hundred million dollars’ worth of Ethereum was rendered inaccessible by a poorly written smart contract, highlighting a lax attitude toward security in the blockchain world. Parity, a popular Ethereum wallet, offered a type of “multi-signature wallet” controlled by a smart contract. These contracts were themselves reliant on library code contained in another master contract, which was responsible for the flaw. Seven days ago, a GitHub user going by “ghost” casually posted an issue on Parity’s repository claiming that he had killed the library code, clarifying that he was new to the Ethereum ecosystem and had simply been playing around. How could this be? Surely some random person can’t walk in, grab the keys to a safe, and throw them away? Yet that’s exactly what happened. He first made himself the owner of the library contract (warning bell: anyone could do this!), and then sent a kill signal, causing the library contract to self-destruct. Without this library, nobody can access the funds in their multisig Parity wallets, affecting major Ethereum stakeholders such as ICOs that had raised millions. First, the wallet library probably should not have been implemented as just a regular smart contract. Second, the contract lacked proper access control. Third, there existed a self-destruct function without any means of recovery.
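To make the failure mode concrete, here is a minimal Python sketch that models the flawed pattern described above: an initializer anyone can call, plus an unguarded, unrecoverable kill switch. This is an illustration of the logic only, not the actual Solidity code, and all names are hypothetical.

```python
class LibraryContract:
    """Toy model of a shared wallet library with the flaws described above:
    an initializer anyone can call, and an unrecoverable kill switch."""

    def __init__(self):
        self.owner = None        # ownership is never locked down at deployment
        self.destroyed = False

    def init_wallet(self, caller):
        # Flaw: no check that initialization already happened,
        # so any caller can claim ownership of the shared library.
        self.owner = caller

    def kill(self, caller):
        # Flaw: a self-destruct with no recovery path.
        if caller != self.owner:
            raise PermissionError("only the owner may kill the library")
        self.destroyed = True

    def withdraw(self, wallet, amount):
        # Every multisig wallet delegates here; once the library is
        # destroyed, all of those wallets are bricked.
        if self.destroyed:
            raise RuntimeError("library destroyed; funds are frozen")
        print(f"{wallet} withdraws {amount} ETH")


library = LibraryContract()
library.init_wallet("random_github_user")   # anyone could become the owner...
library.kill("random_github_user")          # ...and then kill the library
try:
    library.withdraw("ico_multisig", 1000)
except RuntimeError as err:
    print("withdraw failed:", err)           # funds are now frozen for everyone
```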

In the world of traditional finance, properly audited, secure code is a paramount concern, since vast amounts of customer funds are entrusted to the firms in question. The crypto world, meanwhile, is still a Wild West, partly because there’s no real government regulation in place and the technology is still maturing and finding its best practices. If adoption is to grow and innovative applications are to be found, we need a new approach to developing smart contracts, with the precise eye of a lawyer and a proper audit system in place. Furthermore, perhaps we should work on protecting people from themselves when they’re developing contracts by altering the programming language itself. Ethereum smart contracts are typically written in Solidity, which offers much room for error. One could draw an analogy to C code: versatile and powerful, but offering many opportunities for an inexperienced programmer to cause a major bug. A higher-level language could offer more protection. Maybe one could even use formal, proof-based mathematical verification to ensure a contract behaves as intended.

I mention this as an example of decentralized, powerful technology leaving gaping holes for adversaries to exploit, much like the internet in general. Imagine if we do one day operate our voting systems and other critical infrastructure on the blockchain. Indeed, much of our critical infrastructure today is internet-connected, creating many potential vectors for adversaries to find flaws, conduct espionage, and bring down our systems. While we like to talk about government strengthening our cyber defenses, much of this burden is ultimately shouldered by the variety of private and decentralized products we use every day, by the nature of the tech industry. Every last citizen and product can be vulnerable, and will be targeted, making the challenge of cyber defense uniquely difficult. All it takes is one government agent clicking on that all-too-tempting phishing email from “àpple.com”, or a zero-day exploit in Windows 10, for critical national security apparatus to be exposed. Or perhaps the target will not be a government agent but an important corporate figure, with impacts on the broader US economy. Corporate espionage between the US and China is a continuing topic of concern.

Lockscreen bypasses on iOS seem to spring up every year, and Face ID was just shown to be fooled by a specially crafted mask. Facebook and Twitter have been used as media for spreading propaganda. It quickly becomes clear that government must coordinate its efforts with private industry in order to stand a chance against foreign adversaries on the cyber front. Meanwhile, corporations must strike a balance between maintaining independence from government and accepting its help in taking defensive measures to preserve security.

“Who Runs the Internet: Jurisdiction and Governance”

At first glance, the EFF manifesto looked like an overly optimistic techno-libertarian view of the world. After all, from today’s perspective, many of the dreams espoused in the letter seem moot. Is the internet really a tool for individual liberty, or one enabling manipulation and control on a scale we can’t imagine? The reality is that today, governments, including the US, do indeed try to regulate the internet. Moreover, even if we ignore government interference, corporations are more dominant than ever in setting the rules of the internet. Twenty-five years ago, the internet may have seemed like a playground of personal webpages and trivia out there for you to discover. Today, much of that information is funneled through a small set of internet giants, like Google and Facebook. In exchange for that convenience, we allow these companies to determine what’s interesting and what’s not, what’s acceptable and what’s not. Remember the shock when we found out Facebook was running internal experiments toying with user emotions by altering their news feed content? And content does not flow as seamlessly across geopolitical borders as hoped. Yes, we have VPNs and various piracy methods, but content is often locked to one region or another. Nonetheless, the internet on the whole remains decentralized, at least in theory, with any restrictions on the free flow of information circumventable with enough effort.

When it comes to regulating the internet, concentration of market power in one place creates an easy point for the government to target. Interestingly enough, we seem to have come full circle with walled gardens, as we’ve noted throughout our seminars. We went from the AT&T telephone monopoly, easily controlled by the government, to the online walled gardens created by services like CompuServe, to a more decentralized system of web pages on the internet. Today, once again, companies like Google and Facebook are increasingly acting as gatekeepers to the broader internet. These platforms, knowing our preferences, are able to surface the content we want better than any other means. With Google’s AMP pages and Facebook’s Instant Articles, even outside content is hosted within these walled gardens, on the premise of a better user experience. When platforms start to play more of a gatekeeper role, they expose themselves to scrutiny when questionable content spreads on their networks, as occurred during the 2016 presidential campaign. And with power concentrated in a few platforms, the government is able to exert its influence more efficiently, as we saw with the recent Congressional grilling of tech companies over Russian interference. With the internet playing an increasingly dominant role in civic society and the economy, government is going to keep trying to figure out its place in setting the rules, for better or for worse.

“Voting, Polling, and Politics in the Connected World”

This was quite a fitting time to discuss the reality of politics moving increasingly to the digital sphere, for better or worse. Fears of Russian interference in the 2016 election reached a fever pitch this past week, with Congress grilling tech executives about their role in allowing misinformation and bot spam to spread on their platforms. As the citizens of a nation conduct their lives online, it’s only natural that politics evolves to meet them where they are, ruthlessly taking advantage of social media and digitized information to mobilize voters toward an intended effect.

The internet has created a new set of industries around marketing, like SEO (search engine optimization), narrowly tailored social media marketing, and the phenomenon of “influencers” whose professional and personal lives blend seamlessly. Barack Obama was perhaps the first presidential candidate to begin taking advantage of the social media revolution, targeting millennials especially on platforms like Facebook and Twitter in 2008 and 2012. Back then, social media was seen as a force for good in politics, keeping to the mission of transparency and increased engagement with relevant constituents. Little did we foresee that by bringing political campaigning online, we’d be opening up our democratic infrastructure to attack on an unprecedented scale. Before the internet, attacks would generally have to go through the traditional media gatekeepers, or involve physically altering voting booths. Now, all it takes is a connection to the internet to allow anyone, including non-citizens, to spread political information. And due to the viral nature of online echo chambers, that information can have drastic effects on the outcome of an election.

Even domestically, campaigns are adapting to a digital reality in which information is the most important commodity. By analyzing the online behavior of potential voters, campaigns can build deeply personalized models and target voters in ways that make them likely to respond positively. Subtle additions to the news feeds that citizens rely on for information can alter behavior, something both private industry and political campaigns use to their advantage. Since targeted advertising relies heavily on psychological tricks, it brings up important ethical questions of how these advertisements should be labeled. Should an Instagram influencer be allowed to show off a company’s product without disclosing the funding they’re receiving in exchange? These questions become even more critical when dealing with political campaigning. Without appropriate labeling, money is essentially able to buy votes, since whoever controls the money is able to purchase the most advertising, and platforms are all too happy to take that money.

Along with targeted advertising, the second disconcerting aspect of the 2016 campaign was the spread of fake news. Some of this may have been associated with large-scale political disruption campaigns, but much of it has been attributed to entrepreneurial individuals trying to drive clicks to their websites for ad revenue. The question is whether platforms like Facebook and Twitter are willing to do anything about this. After all, more users and engagement on their platforms is traditionally a good thing they can pitch to advertisers, right? Sure, they could take steps to combat misinformation, but corporations don’t just do things out of the goodness of their hearts. However, with increasing Congressional scrutiny, companies would rather take some voluntary steps to quell the tide than risk burdensome regulations being imposed on them. It’s under this calculus that Mark Zuckerberg announced during Facebook’s earnings call that the company would take greater steps to combat fake news and bot accounts, even at the risk of sacrificing profits. If Facebook does indeed take a greater role in such efforts, we run another risk: a corporation determining who and what is real, a perhaps even more dystopian scenario given that we live out our lives on these platforms. For now, I think the best approach would be to algorithmically flag questionable content and present a warning to users, informing them with the help of artificial intelligence while avoiding the pitfalls of outright censorship.
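As a rough sketch of what “flag and warn, don’t delete” could look like, here is a toy Python filter that labels posts linking to a hypothetical blocklist of disputed domains. A real system would rely on trained classifiers and human review; everything here, including the domain list, is made up for illustration.

```python
# Toy "flag, don't delete" filter: label suspect posts instead of removing them.
SUSPECT_DOMAINS = {"totally-real-news.example", "daily-truth-blast.example"}  # hypothetical

def label_post(post: dict) -> dict:
    """Attach a warning to posts linking to suspect domains; never remove the post."""
    flagged = any(domain in post["url"] for domain in SUSPECT_DOMAINS)
    post["warning"] = ("Disputed source: independent fact-checkers question "
                       "this site's accuracy.") if flagged else None
    return post

feed = [
    {"url": "https://totally-real-news.example/shock-story"},
    {"url": "https://www.example.org/ordinary-article"},
]
for post in map(label_post, feed):
    print(post["url"], "->", post["warning"] or "no warning")
```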

Cambridge Analytica

“Digital citizenship”

The internet has fundamentally transformed the environment in which governments operate, from a generally static one with low information velocity to a global, incredibly dynamic one with incredibly fast information exchange. With this comes a great increase in globalization; after all, the internet is not bounded by country (generally, barring the Great Firewall, etc.), so commerce and governance disseminate across the world. From our discussion, the themes that emerged suggested two possible end-game outcomes of increased digitization: a very centralized, all-powerful government with full access to our information, and thereby our lives, or a completely decentralized world where nation-states have very little power and even the concept of a country has little meaning. What’s crucial to understand is that the internet is at its core agenda-agnostic, happy to enable either scenario through a little protocol called TCP/IP.

While discussing the concept of government by API, a possibility allowing for streamlined development, innovation, and lower costs, an interesting issue that stood out to me was how we would deal with identities and authentication in such a paradigm. If access to government services occurs mainly through the internet, the ability to uniquely and securely identify citizens becomes a core part of the national infrastructure; failing to do so ultimately erodes sovereignty and opens up pathways to fraud, while leaving the legitimately needy stranded. The US has made very little progress on this front, with our means of identification typically limited to Social Security numbers or driver’s licenses. Both of these mechanisms have shaky foundations. Social Security numbers were never designed as identity numbers, and recent breaches like Equifax’s have essentially made SSNs semi-public information. So SSNs work as a unique identifier but not as an authenticator. Meanwhile, driver’s licenses, used as photo IDs at places like polling stations, come with their own challenges, namely that requiring them tends to exclude lower-income and minority voters.

While the US has stalled on that front, other countries, like India with its Aadhaar program, have made broad-based efforts to create a digital identity system for the 21st century. I’ve actually received an Aadhaar ID myself, when visiting for a couple of months. The process is painless, involving simply showing up with a couple of documents and getting a biometric scan. Privacy is of course a concern in such a system, but by being very careful with what information is actually stored (the relevant hashes), the government is able to open the doors for innovation atop the platform while minimizing losses should leakage occur. In the US, such broad governmental collection of data would likely be politically infeasible, so we talked about private solutions possibly emerging. For example, Google and Facebook accounts already authenticate us to a variety of services and comprise a trove of information that essentially represents our digital lives. At a certain inflection point, our Facebook IDs could become more relevant than our SSNs. The idea of private companies performing “public” services is increasingly common around the world. Take Sesame Credit, under the Alibaba family, which has become the de facto platform for credit ratings in China. This trend is not necessarily good or bad, but we must carefully consider the incentives that arise once a firm becomes a natural monopoly, and ensure that a private solution truly is the optimal one.
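As a small illustration of the “store only hashes” idea mentioned above, here is a Python sketch in which the verifier keeps a salted hash of an identity number rather than the raw value, so a database leak reveals little while verification still works. This is a simplification, not a description of how Aadhaar actually works, and the ID numbers are invented.

```python
import hashlib, hmac, os

def enroll(id_number: str):
    """Store a salted hash of the identifier instead of the identifier itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", id_number.encode(), salt, 100_000)
    return salt, digest              # only these two values go into the database

def verify(id_number: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", id_number.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)   # constant-time comparison

salt, stored = enroll("1234-5678-9012")          # invented ID number
print(verify("1234-5678-9012", salt, stored))    # True
print(verify("0000-0000-0000", salt, stored))    # False
```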

One thing’s for sure: in a post-Arab Spring, post-“fake news” world, the internet has upended all prior assumptions about the relationship between the people and the state.

“AI, the Internet, and the Intelligence Singularity– Will the Machines Need Us?”

This past week was another crazy roller-coaster ride for Bitcoin, which hit an all-time high of $5,000 before almost immediately passing the next milestones of $5,100, $5,200, … moon.

I was looking at possible models of Bitcoin’s valuation growth proposed about a year ago, and we’ve surpassed their wildest predictions based on prior performance. In a sense, any such talk of exponential growth reminds me of much of the literature around the singularity, like what we just read. (Sidenote: the Bitcoin analogue for the AI singularity, that is, a point of no return, is “hyperbitcoinization”, if and when fiat currencies lose all value.) We might plausibly accept that we’ll hit the singularity sometime in the next 20-50 years, but humans find it difficult to grasp just how fast AI will advance after the singularity. A WaitButWhy post points out that at the singularity, AI will have reached the intelligence of the smartest person on the planet, but just a few minutes later, it could be orders of magnitude smarter than that person. Software can iterate on itself much faster than genetic evolution can. One could think of this concept through the lens of compound interest: once our principal has produced returns, that new, higher value generates even more returns, creating the lucrative exponential growth we yearn for.
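To put rough numbers on the compound-interest analogy, here is a tiny sketch in which each iteration’s gains are reinvested and themselves grow; the 10% rate is purely illustrative.

```python
# Compound growth: each iteration's gains are reinvested and themselves grow.
principal, rate = 100.0, 0.10     # 10% improvement per iteration (illustrative)
value = principal
for step in range(1, 31):
    value *= 1 + rate
    if step % 10 == 0:
        print(f"after {step} iterations: {value:,.0f}")
# after 10 iterations: 259
# after 20 iterations: 673
# after 30 iterations: 1,745  (the growth itself keeps accelerating)
```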

This is why Google’s AutoML excites me so much. We use machine learning in all sorts of domains to optimize processes in ways humans could never foresee, so why not recursively apply the same technique to machine learning itself? ML on ML on ML on ML. As Sundar Pichai elegantly stated, “We must go deeper.” Developing the actual techniques behind machine learning libraries is something very few people have the requisite expertise for, and even then, it’s an extremely difficult problem to tackle. But when a computer is given the task of optimizing its own machine learning algorithm, it can simply try all sorts of tweaks and observe how its success changes, seeking to minimize some sort of loss function. Recently, there’s been a lot of excitement over AlphaGo again, which has improved dramatically simply by playing against itself. The same idea is at play here.
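As a toy sketch of that loop, here is a random hyperparameter search that keeps whichever configuration yields the lowest validation loss. Google’s actual AutoML work uses far more sophisticated methods (such as neural architecture search), and the loss surface below is made up; this only illustrates the “try tweaks, observe, minimize” idea.

```python
import random

def validation_loss(learning_rate: float, hidden_units: int) -> float:
    """Stand-in for training a model and measuring its loss.
    A real system would train a network here; this surface is made up."""
    return (learning_rate - 0.01) ** 2 * 1e4 + (hidden_units - 128) ** 2 / 1e3

best = None
for _ in range(200):                       # the machine tries many tweaks...
    config = {"learning_rate": 10 ** random.uniform(-4, -1),
              "hidden_units": random.randint(16, 512)}
    loss = validation_loss(**config)       # ...and observes how each one does
    if best is None or loss < best[0]:
        best = (loss, config)

print("best loss:", round(best[0], 4), "with config:", best[1])
```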

Recent advances in AI bring up the perennial questions of whether superintelligent AI will be a boon to society or the death of all humankind. These questions devolve into sensationalized headlines all too often, and even in discussion, lead to endless philosophical debate loops that lose touch with reality. This debate blew into the public sphere this past year as a spat between fellow tech billionaires Elon Musk and Mark Zuckerberg. Musk has been warning about the dangers of uncontrolled AI for a while, and has recently adopted a focus on responsible AI research through OpenAI. Zuckerberg countered this darker assessment by pointing to the benefits that AI has already unleashed, ranging from better healthcare to safer roads. When it comes down to it, they’re debating AI at two different stages of evolution, one closer to 2017 and one closer to the singularity.

Frankly, I can’t make any predictions about how, when, or even whether the singularity will play out, but I think it’s critical that we adapt to the short-term consequences of AI. Much of this is economic. I did a year-long research project last year analyzing the policy implications of technological unemployment and inequality, with a focus on autonomous vehicles. In the US, truck driving is one of the largest professions, and yet one of the most vulnerable to automation. We already have Level 3 autonomous vehicles on the street, and it’s only a matter of time, with some regulatory approval, before we reach Level 5, or full autonomy. What happens when such a massive group of people lose their livelihoods to a computer? In the past, technology has supplanted the more mechanical tasks and enabled humans to move on to bigger and better things. But how much longer is that sustainable? Will there be a point at which AI can perform any task better than us?

Technology has democratized access to information, jobs, and other resources, and yet in many ways it has helped boost inequality, simply due to the massive scales at which corporations can operate. With automation, a scenario evolves in which fewer and fewer people own the machines and algorithms doing all the work, and they reap all the rewards. Without a system like universal basic income to redistribute the fruits of this newfound productivity, inequality is bound to keep growing. Eventually, this reaches a breaking point, either when the masses protest or when there are simply no more consumers able to purchase the goods and services being produced.

Regardless, all hail our new AI overlords.

“The Effects of the Internet on the Economy, from Working to Shopping to Finding Information”

This past week, the Equifax breach rose in the public consciousness as the latest example of corporations failing to properly secure consumer data. Leaks like this are becoming normalized in some sense. Target had one. Yahoo had one. Heck, even the Office of Personnel Management within our federal government suffered a massive data breach. Yet the consequences of losing control over our data, and the identity tied to it, are becoming increasingly severe as we move more parts of our lives into the digital realm. It’s clear that we need a new paradigm for building security into critical services, but that’s easier said than done. The Atlantic published a poignant piece today, titled “The Coming Software Apocalypse,” on how critical systems have failed over the years and what’s being done to build systems that live up to a higher standard.

Another trend we’re seeing with recent events is the close intermingling of technology with civic life and politics. Facebook’s sale of ads to Russian bot accounts demonstrates the threat targeted advertising and social media pose to our democratic systems. While Facebook has promised to make more significant efforts to block political manipulation on its platform, it remains to be defined what exactly the role of a giant social media platform is in today’s age. Sure, it’s a corporation designed to extract maximum profits through an advertising revenue model, but at what point does it become so big as to warrant further scrutiny and regulation? When two out of every seven people on the planet use a platform, it no longer behaves like a simple product; rather, it becomes almost a new public space for the world to gather in, albeit one that isn’t truly free or public. But what responsibilities does this place on a corporation that acts as a content gatekeeper? Does the government intervene, or just let the free market do its own thing? The concern with the latter approach is that the technology sector is increasingly becoming an oligopoly, with Facebook, Apple, Google, Amazon, and a few other firms controlling the major platforms and therefore serving as gatekeepers for any upstart entrants.

On another note, I’m fascinated with the future of work in a technological, AI-first society. The increasing encroachment of automation on occupations poses fundamental questions about what it means to be human. Do we work to live or live to work? Say it is possible for most jobs to be automated away, and somehow we manage to reap the benefits of this increased productivity equitably. Some may still argue that life would lose meaning in this scenario, with massive hordes left with no sense of purpose in their lives, relegated to beings who sleep and eat. Of course, this is a far-away scenario, but imagining it helps in analyzing proposals meant to address technological unemployment, like universal basic income. I worked on a year-long research project last year analyzing the implications of technological unemployment for public policy, specifically looking at case studies like the trucking industry. A cursory reading of opinions on the topic reveals a key difference at the root: will artificial intelligence be a substitute for or a complement to human labor? Throughout history, fear-mongering has accompanied the introduction of new technology, only for the fears to prove unfounded as people find new, more interesting things to work on with the extra time. Will it be any different this time? Our relation to work has already begun to shift with the rise of the gig economy, for better or worse, which is another topic that could be analyzed to no end.

The long tail model has shifted the way we consume goods and services, and how companies take a data-centric, targeted approach to finding new sales. The shift in consumer patterns has been felt most in retail. While 50 years ago it was the big-box stores obsoleting mom-and-pop outfits, now the same big-box retailers are feeling increasingly threatened by online giants like Amazon. Well, mostly Amazon. The ease of scaling infrastructure in the cloud, as compared to on the ground, is what enables companies like Amazon to employ a long tail approach. In fact, Amazon turned out to be so good at scaling that it launched Amazon Web Services as a standalone business, creating a new revenue stream beyond retail. With the rise of machine learning and predictive analytics, retailers can target us better than ever using carefully assembled profiles. I think the future of e-commerce is one in which the long tail is taken to its logical extreme, where instead of presenting personalized recommendations, a computer just orders what we need before we can be bothered to think about it. Amazon is already working toward this vision, with Dash buttons mounted in relevant places, subscriptions to products, and ties into smart devices with various sensors. Peak retail is when shopping is impossibly effortless.
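Here is a rough sketch of that “order it before you think about it” endpoint: predict the next purchase date from a customer’s past reorder intervals and trigger a shipment when it comes due. A real retailer would use far richer signals; the purchase history below is invented.

```python
from datetime import date, timedelta
from statistics import mean

# Invented purchase history for one household's coffee orders.
purchases = [date(2017, 8, 1), date(2017, 8, 30),
             date(2017, 9, 27), date(2017, 10, 26)]

# Predict the next order date from the average reorder interval.
intervals = [(b - a).days for a, b in zip(purchases, purchases[1:])]
predicted_next = purchases[-1] + timedelta(days=round(mean(intervals)))

today = date(2017, 11, 25)
if today >= predicted_next:
    print(f"Auto-ordering coffee (predicted need on {predicted_next})")
else:
    print(f"Next shipment scheduled for {predicted_next}")
```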

“The Technology of the Internet”

We started off our discussion with some food for thought regarding the current turbulent state of the tech industry. It seems as though Uber has been put through scandal after scandal over the past year, ranging from accusations of stealing Waymo’s intellectual property, to Susan Fowler’s recounting of systemic sexual harassment, to Travis Kalanick’s hostile encounter with his Uber driver. Ultimately, it was probably for the best for Kalanick to step down, both for himself and for the company’s return to normalcy. In other ways, however, this could prove to be a critical test for Uber, demonstrating whether the company can successfully move past its “bully” phase to become a mature, compliant enterprise without sacrificing the intense innovation that got it to market dominance. It remains to be seen how Dara Khosrowshahi will take the reins at Uber, and what direction he takes the company in.

I enjoyed our discussion of the past week’s turbulence within the world of cryptocurrency and blockchain. Crypto seems to have finally hit the mass market, perhaps inevitably, with the skyrocketing price of Bitcoin putting it in mainstream news outlets constantly. I believe the technology underlying blockchain has the potential to upend just about any industry where transparent, auditable, and distributed data is important. It’s often said that today we have the internet of information, but blockchain finally gives us access to an internet of value, perhaps fundamentally transforming our economic systems. I hope we continue to discuss cryptocurrencies and blockchain technology as we move into our discussion of the Internet economy next week.

Returning to the readings, the idea of the Internet seems obvious in hindsight. There are many different networks with unique features but much shared functionality, so why not link them together? Putting this in words is easy, but implementing the technology required far more thought and technical effort. The genius behind the Internet is that it doesn’t require much of a central body to set rules. Instead, each network can implement different features, and a “gateway” can facilitate cross-network communication simply by acting as a host on multiple networks. The early creators of the Internet had remarkable foresight, thinking about scaling toward a future they could scarcely imagine. The continued use of TCP/IP stands as a testament to the thought they put into scaling. I was amazed by how well thought out the “End-to-End Arguments in System Design” paper was. By keeping complexity out of the network and reducing it to dumb pipes, the designers enabled far more adaptive innovation on the host side, without requiring complicated network upgrades.
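A tiny sketch of the end-to-end argument in practice: even over a reliable transport, the endpoints still verify the data themselves, because only they know what “correct” means for the application, and the network can stay a dumb pipe. This is my own illustration, not an example from the paper.

```python
import hashlib

def send(payload: bytes):
    """The sender computes an end-to-end checksum; the network just moves bytes."""
    return payload, hashlib.sha256(payload).hexdigest()

def receive(payload: bytes, checksum: str) -> bytes:
    """The receiver re-verifies end to end, however reliable the pipe claims to be."""
    if hashlib.sha256(payload).hexdigest() != checksum:
        raise IOError("end-to-end check failed; request retransmission")
    return payload

data, digest = send(b"file contents")
print(receive(data, digest))   # the hosts, not the network, enforce correctness
```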

Finally, I thought the “Web Turns 28” letter accurately captured many of the challenges facing the Internet, and our relationship to it as a society, today. In many ways, the three issues raised are all interconnected. As governments and corporations control increasingly large portions of our online platforms, they gain access to incredible amounts of personal information. Whether this is a “fair tradeoff” for the free services we obtain is difficult to say, but with rapid advancements in big data and machine learning, a few pieces of information can be used to assemble a much larger predictive model, allowing for precise targeting of advertising, news (whether fake or true), and other information. Essentially, organizations get an inside view of our thoughts and actions in a way that would not have been possible a decade ago. In some ways, this loss of privacy seems inevitable as technology marches forward, but will consumers take a stand? I’m interested in where governments decide to go in regulating privacy and data collection on the internet, and in what they decide to collect themselves. One thing’s for sure: there’s no going back, so we might as well adapt our current frameworks.

“The Evolution of the Internet”

Last week, we collectively groaned as we learned about the struggles of creating a system of hardware and software that was seamlessly interoperable. Unfortunately, that story doesn’t change much as we move forward into the software implementation of host-to-host communication on the ARPANET. To be fair, standards and interoperability are something we still struggle with to this day. Why can’t you plug the USB-A cable that ships with your iPhone into your MacBook’s USB-C port? (Props to Apple for adopting the Qi wireless charging standard for the iPhone X announced today. As we mentioned in our discussion, once a standard reaches a certain level of penetration, it’s difficult for a company to resist adopting it.) If one company can’t maintain full interoperability within its own suite of devices, it’s a miracle we managed to settle on fundamental protocols like IP at all.

An interesting point from the reading was how much autonomy was granted to the select group of grad students who developed these communications protocols. Much of this was due to a lack of interest from the orthodoxy, but it turned out to be a blessing in disguise, allowing for bountiful experimentation and innovation. This seems to be a recurring pattern throughout history, where a lack of intervention can sometimes lead to the best outcomes. For example, the urban areas most targeted for renewal and removal of “blight” ironically became the most damaged by well-intentioned government efforts, while neighborhoods like Boston’s North End, left untouched, flourished organically. In the case of ARPANET, the lack of close supervision allowed for rapid evolution of the network in ways unforeseen by the general establishment.

As we touched on in our discussion, the freedom afforded to the developers of ARPANET also enabled the creation of a new set of cultural norms. Today, we still find ourselves pioneering cultural norms on different platforms. How are “green texts” perceived compared to “blue texts” (iMessage’s subtle classification might have an intended effect)? What belongs on your Instagram, and what goes on your “finsta”? How does anonymity color our interactions with our peers, and does it give license to be offensive? Technology develops in tandem with society, with new use cases emerging organically at first and eventually being integrated into the products we use. When a critical mass began downloading emoji fonts for use in messages, for example, companies were quick to integrate them more closely with the OS, further implanting them in our cultural vocabulary.

For those that do not adapt, however, demise seems to be a fairly inevitable conclusion. We mentioned DEC in our discussion, a company I would not recognize were it not for my readings of post-mortems. I was recently reading an Economist article on the company from 1984 in Lamont. The article described IBM’s embrace of personal computing, contrasted with DEC’s reluctance to leave behind its familiar world of profitable minicomputers. While this may have been a smart short-term business decision by whoever analyzed the potential costs and revenues, it’s clear that choices like this led to the company’s eventual downfall. When it comes to the internet and personal technology, those who do not move fast and break things tend to be broken themselves in no time.

“From Idea to Reality”

I found our discussion of the Alexa-Cortana integration to be a fitting introduction to the challenges of digital standardization, without which the Internet could not exist today. I’ll start with a discussion of the personal assistant sector before extrapolating to the broader Internet. AI-driven digital assistants that live on our phones, smart speakers, and just about anywhere imaginable have become an integral part of today’s digital landscape, creating a consumer-facing focal point for advances in natural language processing and artificial intelligence. For consumers, however, it’s still a bit of a Wild West. Sure, Alexa can talk to my lightbulb, but what about Siri? Who do I call to turn up the thermostat? The lack of a common protocol hinders usability and broader adoption of these assistants beyond early adopters.

In this context, it makes perfect sense why Amazon and Microsoft would want to tie their assistants closer together. By doing so, they create a virtual environment from which users can access all the services they want. They sacrifice a walled-garden approach to make a more usable product for consumers. Looking at the two companies’ prior actions, this isn’t too far out of the ordinary. Microsoft has shown itself willing to get its Office software on as many platforms as possible, including iOS and Android, in an effort to be wherever its users are. Amazon has opened up Alexa to third-party software and hardware developers through the Alexa Skills Kit and the Alexa Voice Service, respectively. It all fits into an ideology of moving fast and breaking things, being unafraid to cannibalize your own product lest technology leave you behind.

As we read in Where Wizards Stay Up Late, excessive competition and a lack of coordination hindered US defense capabilities until Eisenhower established ARPA to unify military R&D. Similar to Amazon and Microsoft, where competition from Google and Apple forced collaboration, Soviet competition forced the US to revamp its military research operations. I heard about the rivalry between military branches firsthand while working at the Naval Research Lab this past summer. My mentors would always keep in mind that competition for resources is intense, and developing systems that dynamically allocate those resources was part of my work there.

What’s fascinating about the book’s introduction to ARPANET is the collaboration between military, industry, and academia. ARPA had the resources and governmental authorization to set up such a complex network, BBN Technologies provided instrumental contracting in building the infrastructure, and universities comprised the major nodes in the network. Collaboration like this is what allowed the US to become the birthplace of the Internet. Likewise, the idea of IMPs sharing a common protocol for sending packets is ultimately a precursor to today’s Internet routers and protocols like TCP/IP. Somehow, the Internet managed to establish base protocols that enabled boundless innovation on top of them in the future.

 https://xkcd.com/927/

Nonetheless, we still see many issues today that stem from the early days of the network. We mentioned in our discussion that security was not an integral consideration when the Internet was originally designed. Since then, the scope of the Internet has expanded exponentially. How do we keep billions of people and their data safe? How can we create a secure online voting system? Much of the work involves adapting to use the network in clever new ways, but often we also need to revamp fundamental parts of the Internet without breaking existing protocols. Moving from IPv4 to IPv6 has been one such transition. One thing’s for sure: somehow we agreed on this standard, and it’s here to stay, touching everything in the process.
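As a small aside on why the IPv4-to-IPv6 transition was unavoidable, here is a quick sketch using Python’s standard ipaddress module to compare the two address spaces.

```python
import ipaddress

ipv4 = ipaddress.ip_network("0.0.0.0/0")   # the entire IPv4 address space
ipv6 = ipaddress.ip_network("::/0")        # the entire IPv6 address space

print(f"IPv4 addresses: {ipv4.num_addresses:,}")        # about 4.3 billion
print(f"IPv6 addresses: {ipv6.num_addresses:.2e}")      # about 3.4e38
print(f"IPv6 offers {2**96:,} addresses for every single IPv4 address")
```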