One Last Time


Hey everyone! My math final is over, so I’m finally going to wrap up the semester with one more blog post that should have been posted two weeks ago. This week’s topic was twofold and almost disjoint: the influence of the Internet on the US Presidential election and the Internet in developing countries. It was a fitting end to the semester: candy, a lively discussion, and plenty of good cheer to go around. That said, there were a few things said that really pushed my buttons.

If you haven’t heard about Free Basics, it’s a program Facebook was trying to institute in India that would have given limited Internet access to millions of Indians, particularly in rural areas. India soundly rejected it. Why, you ask? Well, there were various logistical difficulties. For example, many of the people using it were people who had gone over their data limits, rather than the first-time Internet users Facebook was trying to reach. These are reasonable issues to be concerned about. However, some people seem to take issue with it on a philosophical, not just logistical, level. They seem to be of the impression that because Facebook could not provide full Internet access, it should instead provide no Internet access.


The idea underlying this (to me cognitively jarring) statement, I think, is net neutrality. Everyone should have equal access to the Internet because the Internet’s nature is to be open, not restricted by the amount of money you have. I am totally on board with this. I get off the train when someone says that this means we should deny disadvantaged people wifi if we can’t give them the entire Internet. Restricted access to the Internet isn’t as good as full access to the Internet. You know what’s worse? No access to the Internet. People who use net neutrality, or any sort of equality argument, to reject Free Basics are actively working against their own goal of a more egalitarian society. And in a world where the technological divide may be the most important divide of all, that’s really bad.

I think what is going on here is something called the Copenhagen Interpretation of Ethics. If you’re not a hyperlink person, it’s the (fallacious) idea that by interacting with a situation, you become responsible for it. That even if you improve the situation, you become responsible for the fact that it’s not as good as it should be, even though you weren’t before. (For example: Apple isn’t responsible for people starving in China. Then it opens a factory there, employing people in conditions that are poor but still better than starving, and now it’s the devil and should definitely close the sweatshop.) If you accept this argument, I can take that up with you later. For now, I’ll assume you agree with me that this seems pretty silly.

I think something like the Copenhagen Interpretation of Ethics is at work with Free Basics: we can’t make their situation equal to ours, so we shouldn’t make it any better at all. This is, in my opinion, one of the ways that well-intentioned people end up doing harm. As they say in the hyperlink that you definitely followed: Almost no one is evil. Almost everything is broken.

Happy holidays.

(P.S. I promise I’m an idealist)

Online Identity


There is a common complaint that online interaction is less authentic than personal interaction. We can take as long as we want perfecting a Facebook post, and to a lesser extent a text, whereas conversation requires an immediate response. We can choose which photos to upload to Instagram to show our lives in the coolest light possible, whereas it’s much harder to fake it in real life. When we interact with others online, there’s a pressure to seem like nothing is going wrong, that we’re always having the time of our lives.

These are all valid points, but I think they fail to capture the full complexity of what is going on. I make the counterargument that in many ways online interaction is more authentic than personal (physically proximate) interaction.

Sometimes people have fleeting thoughts that, if expressed, would distress and maybe frighten those around them. But if they never voice those thoughts, we are none the wiser, and most people would not hold others to thoughts that they actively rejected. Basically, we choose who we are. By a similar process, we choose who we are online; it’s just a question of how much filtering goes on. Just because a presentation is more controlled doesn’t mean it’s less authentic. If that were true, the most authentic form of a person would be projecting every thought they had on a screen above their head at all times. And if you think that is authenticity, then I don’t think we should be prioritizing authenticity above all else. So you have to strike a balance somewhere between showing what we are now and what we aspire to be, and I’d be very suspicious of anyone who said that our current filtering level (of personal interaction) is exactly the right balance. Online, we show what we strive to be, but this is just as important a part of our personality as what is observable from moment to moment. Does it capture the full picture of us? No. But it is important, and shouldn’t be dismissed out of hand as “inauthentic”.

But say you really dislike the intentionality of such interaction. I still hold that because many people don’t realize the audiences they’re addressing, some online interactions are more revealing than personal interactions. I have many friends who I knew as normal, nice people, and would never have known that they support Trump, or are outspokenly pro-gun, or are anti-science if I hadn’t seen their posts on Facebook. I think this is important because more than making my day a little sadder when I see these things, it shows just how tailored personal interactions are. Almost everyone changes their representation of themself to some degree when interacting with different people or groups. Some barely do it at all, some are silver-tongued snake charmers, but almost everyone does it, even if only subconsciously. But if a person isn’t very conscious of the audiences that can see what they do online, they may end up revealing a side of themself to audiences they wouldn’t normally. This could take the form of seeing a really thoughtful and heartwarming post that awkward kid in your class made to one of his close friends. Or it could take the form of seeing your friend quote false statistics in the name of causes you despise. The point is, not everyone is a social media master, and you can learn a lot about a person if they weren’t thinking about you when they liked or commented on someone else’s post.

I’m not saying that social media is the be-all-end-all of human interaction. It has real and constraining limits. But to blindly categorize it as lesser, as “less authentic” than personal interaction, is to miss the larger picture of how our online and physical selves relate to the world around us. In this day and age, you need to understand both.

Cyber Bad Stuff


The theme of last week was that no one governs the Internet. That raises the question, “How do you stop people from doing bad things on the Internet?” Assume, for the purposes of this post, that people agree on what constitutes a “bad thing”. In class, Jonathan Zittrain outlined three methods of dealing with crime in general: prevention (stopping it from happening), rule and sanction (punishing it when it does happen), and resilience (mitigating damage). These strategies can be pursued through several levers: laws, social norms, technology/architecture, and manipulation of markets spring to mind. I’m a fan of markets, so let’s talk about those.

If you want to stop people from smoking cigarettes, taxing them into oblivion will over time reduce the number of people who smoke (even adjusting for chemical dependence). What is a cyber analog? If someone really wants, they can pirate music from the Internet, and it’s hard to track down and stop everyone who does. But, as Steve Jobs said, you’re working for less than minimum wage. It’s just such a hassle to download all those songs yourself, especially at a rate that won’t be noticed, when a Spotify subscription will give you access to the same songs. Or take CryptoLocker and other ransomware scams. If you make your system harder to penetrate, hackers will develop better malware. But if you keep doing it, hackers have to work harder and harder for the same payoff, making it less worth it for them to run scams. Eventually, they’ll stop because it just won’t be worth it. For a lot of economically motivated cybercrime, market manipulation may be the way to go in reducing it.

Market regulation has its limits. Ideological agents don’t respond to economic incentives in the ways you would expect them to. This can take two main forms: states and individuals. States have more resources and so are harder to prevent, but assuming you can trace the attack (which may not be as hard as people think) they respond better to rule and sanction because they can’t just disappear into the shadows. The sole remaining group is ideologically motivated individuals. But this has been a problem with terrorism forever. The strategy so far seems to be prevention and resilience. Try and make your systems secure enough to keep out most two-bit hackers and resilient enough (through backups and such) to survive most attacks. The really scary thing is when states fund cyberterrorist groups. That’s an unsolved problem with traditional security, but is perhaps exacerbated by the digital world. Maybe we should figure it out.


Internet Governance (Or Lack Thereof)


Several weeks ago we were reading “Where Wizards Stay Up Late: The Origins of the Internet”. One of the interesting tidbits was that ARPA offered to let AT&T run the network as a regulated monopoly, and AT&T said no because it thought the Internet would never work. At the time, we simply held it up like “Haha, boy were they wrong.” This week, though, we learned that the entire traditional infrastructure ignored it. IBM, AT&T, the FCC, the ITU, telecommunications companies and regulators alike thought it wouldn’t fly. There were no guarantees, no security, no carrier model. But, like the bumblebee, fly it did. With everyone ignoring it, it grew up largely unregulated.

This week we asked the question, “Who governs the Internet?” As our guest speaker, Scott Bradner, half-joked at the beginning of class: “no one”. I say joke because by this point regulators have seen the bumblebee fly, and there are certainly national attempts at Internet regulation. China springs to mind, with their Great Firewall and their censoring of content that might “confuse” the populace. Other countries have some amount of internal regulation. But I say half-joke because outside of China, there are no country borders on the Internet, and no good way of settling jurisdiction. And this can create conflict.

Take, for example, the EU’s right to be forgotten. For background, the right to be forgotten allows a citizen of an EU country to approach a company with a search engine, like Google, and demand it take down information that is true but no longer relevant (for some definition of relevant). For example, if I’m 70 years old and an upright citizen and don’t want my grandkids seeing that I was arrested at 14 for shoplifting, I can tell Google to take down any search results pointing to that information, and it has to do it. Regardless of whether this law is good or not, it raises the question “What happens if someone outside the EU makes the same search?” Whose law takes precedence? The EU’s law or the other country’s law? Unfortunately, there aren’t really any rules for this, because the Internet grew up as an unregulated upstart, ignored by the International Telecommunications Union (ITU), which regulates traditional international telecommunication. This has caused a not insignificant amount of drama. The situation as it stands is that Google will appropriately filter searches made within the EU, and will not for searches made outside the EU (this raises questions about EU citizens traveling outside the EU, or foreigners within it, but at first approximation it seems a good middle ground).
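To make the middle ground concrete, here’s a toy sketch of region-dependent filtering. Everything in it is made up for illustration (the delisted URL, the abbreviated country list, the function name); the point is only that the same query yields different results depending on where it originates.

```python
# Hypothetical sketch of region-dependent filtering for a "right to be
# forgotten" request: identical queries, different jurisdictions,
# different results.
RTBF_REMOVALS = {"old-shoplifting-article.example.com"}  # delisted in the EU
EU_COUNTRIES = {"FR", "DE", "IT"}  # abbreviated for the example

def search(query, results, origin_country):
    # Filter only when the search originates inside the EU.
    if origin_country in EU_COUNTRIES:
        return [r for r in results if r not in RTBF_REMOVALS]
    return list(results)

results = ["news.example.com", "old-shoplifting-article.example.com"]
print(search("john doe", results, "FR"))  # ['news.example.com']
print(search("john doe", results, "US"))  # both results
```

Even this caricature surfaces the open questions from above: the filter keys on where the query comes from, not on who is asking, which is exactly why traveling EU citizens complicate the policy.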

The Google situation seems to be in a stable place, but jurisdictional problems over the Internet are accelerating, not slowing. And this implies that there is a place for Internet governance. But what form should it take? There could be an international body that governs interactions and Internet relations between countries. This seems difficult, if for no other reason than that the scope of the Internet and all its possibilities for growth and change would make it hard for such a body to keep up. Or there could be a simple ruling that no country can make rules that interfere with the internal workings of another country. This seems more reasonable, and may be the way forward if the world is looking to rein in the chaos that is the Internet. Or maybe, like Snowden and Assange, the world will break down such structure. Who knows?

Week 8: Digital Government


Today we had a guest participant in our discussion, David Eaves. We covered a broad range of topics about government and technology, from failed government technology projects to the way that businesses optimize in the market for government tech contracts. But the thing that stuck in my mind? Apparently I have three degrees of separation from Edward Snowden (Snowden -> Bruce Schneier -> Professor Waldo -> me). Well, it’s a good thing the NSA doesn’t care about you at all unless you have three degrees of separation or less from a person of interest. Oh wait.

I have been concerned about the surveillance the US government does on its citizens, but so far it has been with a certain degree of removal. After all, no one at the NSA really cares what some random teenager is doing; they have much bigger fish to fry. Now, aside from having something new to say about myself the next time I do an icebreaker game, I am also on the list of people the NSA might take interest in. I realize that little of substance has changed; I will probably still fly under the radar, unless I do things like attend anti-NSA rallies or order yellowcake uranium or broadcast to the world that I have three degrees of separation with Edward Snowden. Oh wait.

It wasn’t until today that I “felt it in my bones”, as my physics professor would say. That I realized something as trivial as taking a course on the Internet could raise my surveillance status. That I considered censoring my writing (although it should be apparent I didn’t) to avoid the (admittedly mild) suspicion of the government. That when Professor Eaves said that the NSA mostly just keeps a log of calls, and only selectively listens in, the full weight of George Orwell’s writing sank in. Tomorrow I may go right back to my regularly (over)scheduled life of problem sets and fencing. But today I feel it. Today my personal privacy is, well, personal.

Online Voting, Security, and Privacy


This week in Freshwoman Seminar 50, we’re talking about the Internet and politics. My favorite of the topics is Internet voting. It’s an exciting idea: lower costs, better turnout, no ambiguity (a la Florida 2000), and a much more efficient system overall. So what’s the holdup?

Well, security. There are opportunities for fraud, both on the scale of individual votes and in databases of stored votes. In other online fraud situations, like credit card and bank fraud, resolution of the situation is based in a firm understanding of your identity. Voting is supposed to be anonymous, so there’s no record of your vote. This makes sorting out fraud much more difficult.

But what if the vote weren’t anonymous? What if there were a record in a database somewhere with your identity attached to your vote? You wouldn’t be able to access the official record (so you couldn’t sell your vote), nor would anyone else, so for most intents and purposes it would be anonymous. And yet in the case of fraud there would be a record of who voted which way, making the situation more easily rectifiable.
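The idea can be sketched in a few lines. This is purely illustrative (the class name, fields, and the honor-system “restricted” audit path are all my invention, and real systems would need cryptography, not a private attribute): identities are recorded so fraud can be untangled, but the public interface exposes only anonymous counts.

```python
from collections import Counter

class VoteEscrow:
    """Hypothetical vote store: identities are kept for auditing,
    but the public interface exposes only anonymous tallies."""

    def __init__(self):
        self._records = {}  # voter_id -> candidate (private, audit-only)

    def cast(self, voter_id, candidate):
        if voter_id in self._records:
            # A duplicate ID is exactly the kind of fraud signal
            # an identity-attached record makes detectable.
            raise ValueError("duplicate vote")
        self._records[voter_id] = candidate

    def tally(self):
        # Public result: counts only, no identities attached.
        return Counter(self._records.values())

    def audit(self, voter_id):
        # Restricted path, used only to rectify suspected fraud.
        return self._records.get(voter_id)

escrow = VoteEscrow()
escrow.cast("alice", "A")
escrow.cast("bob", "B")
escrow.cast("carol", "A")
print(escrow.tally())  # Counter({'A': 2, 'B': 1})
```

The design choice mirrors the argument above: anonymity is enforced by access control rather than by the absence of a record, which is precisely what makes it a privacy trade-off rather than a free lunch.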

But then important information about you is sitting in a database somewhere. Doesn’t that give up a huge amount of privacy? I wonder if there are other debates like this going on *cough* Facebook *cough* Google *cough*. There is already a huge amount of data about you stored in databases, and used in less kosher ways (sold to advertising companies) than the government would use it (do nothing). In fact, the likelihood is there’s enough data about you on the Internet to tell who you’re going to vote for. It’s possible that the fact that this data exists is bad, and we should be undertaking efforts to limit the amount of personal information out there. However, I expect that because people are becoming more nonchalant about their personal information being out there, they will care less about voter anonymity in the future. Perhaps this is how online voting will come about.

What the Heck is Intelligence?


Hi everyone! I doubt anyone outside of my class reads this (in fact, with what someone in class said today in mind, I doubt anyone outside of Professor Waldo and Dean Smith reads this), but if you exist, I didn’t post last week because Columbus Day is a Harvard holiday, so we had no class. But now we’re back and better than ever, moving into artificial intelligence and the Singularity.

If you’re unfamiliar with the Singularity, think about it like this: humans manage to create superhumanly intelligent beings. These beings create beings who are even more intelligent and who create beings… I hope you see where this is going. The resulting intelligence explosion is commonly called the Singularity, as it will propel us into a completely new era, one where human intelligence is virtually obsolete.

In class, we had a big debate about whether we could create a superhuman intelligence that was like a human in every aspect. I argue it’s possible (there’s nothing magical about the brain), but it seems like an absolutely awful idea. One person in class raised the point that we worry that more intelligent beings will want to subjugate us, just as we have subjugated all other life on this planet. He went on to ask, “Does that speak to the beings we will create or to ourselves?” Maybe that’s what we fear because that’s what we do. And in that case, why in the world would we want to create a human-like superintelligence? It would be forging our own shackles. Perhaps more intelligent beings will necessarily subjugate less intelligent beings. But we KNOW human-like intelligent beings do that. Why would we create them? People often try to explore new technological territory by recreating something familiar in it, but with the stakes this high, we can’t afford to mess up.

Of course, if we’re not creating human-like intelligence, what exactly is “intelligence”? I would define intelligence as the ability of an entity to function on its own, conducting various input-to-output functions that self-optimize as the entity takes in more input. For example, as humans we take in input (stove is hot) and produce output (take hand off stove). In the future, our decision-making functions change to prevent us from putting our hand on the stove in the first place. You could take issue with this definition—indeed, approximately half of today’s class was spent trying to nail down a definition of intelligence. But I like it. Intelligence in this way is not restricted to things like playing chess or solving math problems. It can apply to social interaction, emotions, literally anything you can imagine that an intelligent being would have to do. It also seems to fit well with ideas of “machine learning”, about computers changing their algorithms based on the input they get.
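My stove example can be written down as a toy program. This is a sketch of the definition, not a claim about how real learning systems work (the class, the pain scale, and the update rate are all made up): the agent is an input-to-output function whose mapping changes as experience accumulates.

```python
class Agent:
    """Toy model of intelligence as a self-optimizing
    input-to-output function. Illustrative only."""

    def __init__(self):
        self.expected_pain = {}  # action -> learned pain estimate

    def act(self, options):
        # Output: pick the option with the lowest expected pain.
        # Unknown options default to an estimate of 0.
        return min(options, key=lambda a: self.expected_pain.get(a, 0.0))

    def learn(self, action, pain, rate=0.5):
        # Self-optimization: nudge the estimate toward the observed outcome.
        old = self.expected_pain.get(action, 0.0)
        self.expected_pain[action] = old + rate * (pain - old)

agent = Agent()
options = ["touch_stove", "keep_hand_away"]
first = agent.act(options)            # no experience yet: touches the stove
agent.learn("touch_stove", pain=10)   # input: the stove is hot!
later = agent.act(options)            # the updated function avoids the stove
print(later)  # keep_hand_away
```

Nothing here is chess or math; it’s just a decision function that rewrites itself from input, which is the whole content of the definition.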

Of course, this definition opens the door to discussion of free will because if you believe in free will, you probably don’t accept my idea of decision-making, empathy, emotions, etc. as just input-output functions. But they certainly seem to be. The human brain is just a collection of neurons firing, or on a lower level, just a collection of chemical reactions. There is nothing magical about it. Which isn’t to say it isn’t a beautiful piece of machinery, just that it’s deterministic. If you put in some input, you will get a pre-determined output (note again that I don’t think recreating human brains is good—this is just to give an example of a well-known intelligence). So while free will is a white lie I tell myself to make my head hurt less when grappling with decision theory, it is certainly not true and shouldn’t enter discussions of intelligence.

So now we have this definition of intelligence. How do we build God with it? How do we not mess this up? I don’t know, but I believe these will be the most important questions of our generation.

The Internet of Things, and Various Related Ethical Musings


You wake up on a cold winter’s morning. The movement sensors in your bed register that you’re awake. Your water heater starts heating up water for your shower so that it’s hot from the moment you turn it on. Your coffee boiler turns on and starts brewing coffee, just the right strength for how much sleep you got last night. Your car turns on and starts to defrost the windshield. A morning like any other.

This is the Internet of Things at work. While the scenario is a bit exaggerated, it is not too far in the future. But what exactly is the Internet of Things? I would describe it as the point when our Internet-connected devices interact very tangibly with the world around us. Our laptops aren’t part of the Internet of Things, but things like the Nest Learning Thermostat are. These devices can act and interact and adapt largely without human intervention. At some point, according to some, they will become so connected that they will form a cohesive platform to be programmed. Imagine the potential. We have to think bigger than the opening scene of this post. We can automate our cars to drive themselves—this is already being done. No, think even bigger than that. We can leave everything rote and boring to machines, “automating the mundane” to borrow a phrase. Now think even bigger. One lesson I’ve learned so far from the Internet is that when you develop something, it will be used in ways you couldn’t even imagine when designing it. We can’t even begin to understand the potential of the Internet of Things. So I keep an open mind to the idea that it will represent a fundamental shift in the way we live our lives, in ways we can’t grasp yet.

So what is this post? Some kind of fluff popular science piece where I rave about how great some new technology is, even though I don’t really understand it? No (though it’s true that I’ve only scratched the surface of understanding). I’m here to talk about ethical implications. I went to an event last Friday hosted by the Harvard Computer Science Department called “The Internet of Things”. One of the speakers who struck me most was Jim Waldo, Professor of the Practice of Computer Science at Harvard. Professor Waldo introduced the trolley problem, explaining that there was no consensus among the general population: different schools of thought and slight changes to the circumstances change the decisions people think are correct to make about the tradeoff of human lives. And yet, somewhere in Silicon Valley, some designer of self-driving cars is building those trade-offs directly into their algorithms about which lives to prioritize if a crash is unavoidable. Engineers make policy decisions with their work.

It’s not enough to “leave it to the politicians”. Public policy is 5-20 years behind technological development, and politicians rarely understand technology well enough to make informed decisions about its regulation. If we as engineers aren’t thinking about how our technology will be used and abused, no one is.

So if the Internet of Things could represent a fundamental change in our way of life, drawing us ever closer to realizing the dream held oh so many years ago by J.C.R. Licklider of human-computer coupling, we need to be asking questions about how data will be gathered, stored, made secure, and used. What are the answers? I don’t know. Let’s find out.




…you didn’t really think I could make it through a post about the Internet of Things without some Wall-E reference, did you?


The Online Economy


Few people would deny that the Internet has changed the media by which industries like the music industry distribute product. There are jokes about how young kids today haven’t ever used a CD (and what the heck is a record?). Until this week, however, at least I did not fully appreciate how completely the Internet has transformed not just the way we get our product, but the operation of the entire industry.

It’s not just the music industry. Newspapers, books, hotels, cars, and many more industries are being turned upside-down by the unprecedented efficiency granted by the Internet. I could spend all day delving into just one of these industries, but instead I want to focus on a couple of trends that I see spanning the affected industries.

The first is that Internet companies are not looking to hire a lot of people. One of the main advantages granted by the Internet is that more can be done by fewer people. When Blockbuster turned over to Redbox in Chicago, the 1,323 employees of Blockbuster turned into 7 employees of Redbox, which was offering the same service. My question in class was “Where are these workers going? Even if 7 of them get retrained to be Redbox employees, what happens to the other 1,316?” The answer, it seems, is that the service industry is expanding. The opening markets sometimes create jobs. Think of Uber: someone has to drive the cars (at least until self-driving cars are developed further). Many workers, however, tend to get dropped into “Do you want fries with that?” positions. This creates a hollowing-out of the middle class, with a select few able to take advantage of the new technology and much of the working class getting the short end of the stick.

The second is that Internet companies know a lot about us, and because advertising is almost exclusively how websites make money, they will use our information to show us targeted ads. Originally, I had no idea why anyone would dislike this. Google or Facebook or whoever is going to show me ads no matter what, so they might as well be things that I might actually find useful. What’s the big deal? It’s helping me and the person trying to sell to me. The question someone posed, though, is “What if they offer you a higher price because they know you have more money or want the item more?” This doesn’t sit as well with me, perhaps because of society’s ingrained ideas of economic fairness and transparency. There’s certainly an argument that in a perfectly efficient market, companies will charge you more if they can based on your situation, and you’ll negotiate a better rate if you can based on the seller’s situation. The problem is that as a consumer you don’t know the seller’s situation, so the idea of negotiation and perfect efficiency seems to go out the window. It looks more like a scam, like when market vendors in foreign countries charge Americans more because we don’t know what their goods are worth.

I’m not certain about much with regards to what is right and wrong here. But these definitely seem like questions we should be asking about the Internet, rather than fighting over the privatization of domain name registrations.

Know your limits


I have thus far neglected to mention what source material we are reading in class. Over the past few weeks, the class has been reading “Where Wizards Stay Up Late: The Origins of the Internet” by Katie Hafner and Matthew Lyon. It was written in 1996, so at this point it’s twenty years old, but it’s mostly a history text. It doesn’t try to make a lot of predictions about the future of the Internet, so it has aged well. If it had tried to make predictions, it probably would have failed. In general, futurology is very difficult because there are so many variables that we don’t even consider that will affect the future in surprising ways. Thus most people who claim they have the answer are grossly overconfident.

By the way, the name of the class is “What is the Internet, and what will it become?” Which is not to say that either of my professors is “grossly overconfident”. We haven’t gotten to the “What will it become?” part, and I imagine our goal will be to explore different plausible futures and leave the matter unresolved but with a better understanding of it, not to settle on a specific future that will definitely happen.

I see similarities between this approach and our other reading for this week, “End-to-end arguments in system design” by J.H. Saltzer, D.P. Reed, and D.D. Clark. The authors don’t do much to promote particular ways to design systems, because the nature of your approach will be altered by what you want to maximize, the limits of your hardware, and so on. Instead, they raise a number of ways to think about designing systems so that you can understand the task better.

The one hard piece of advice they do give is what is now known as the end-to-end principle: basically, make the network simple and the endpoints complex. This localizes most problems, splitting up the workload and keeping the network flexible and with room to grow. In the case of the ARPANET, rather than requiring each host to learn how to communicate directly with every other host, the system only needed the hosts to communicate with their IMPs, which all spoke the same language and could easily talk to the other IMPs. The endpoints (host-IMP interaction) were complex, while the network (IMP to IMP interaction) was simple.
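The principle can be caricatured in a few lines of code. This is a toy, not how the ARPANET actually worked (the function names and the drop rate are invented): the middle of the network just forwards packets and is allowed to be unreliable, while the endpoints carry the complexity of checksums and retransmission.

```python
import random
import zlib

def flaky_network(packet):
    """The 'dumb' middle: forwards bytes and sometimes drops them.
    It knows nothing about reliability."""
    return packet if random.random() > 0.3 else None

def send_reliably(data, network, max_tries=50):
    """Endpoint complexity: the sender attaches a CRC32 checksum and
    retries; the receiver verifies. Reliability lives at the edges,
    not in the network."""
    packet = zlib.crc32(data).to_bytes(4, "big") + data
    for _ in range(max_tries):
        received = network(packet)
        if received is not None:
            checksum, payload = received[:4], received[4:]
            if int.from_bytes(checksum, "big") == zlib.crc32(payload):
                return payload  # verified at the endpoint
    raise TimeoutError("network too unreliable")

print(send_reliably(b"hello, IMP", flaky_network))
```

The payoff of this split is the one the paper describes: the network stays simple enough to grow and change, because any new guarantee an application needs can be layered on at the endpoints without touching the middle.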

The end-to-end principle has specific applications, but is very general and leaves room for adaptation. This is good on the part of the authors. Recognizing your argumentative limits is very important in the world, whether it be specificity of system design principles or the uncertainty of predicting the future or other situations.

That’s all for this week. Until next week!
