Allison's Reflections

Just another Weblogs at Harvard site

Blog #1011: Final Thoughts

Filed under: Uncategorized — allee at 12:01 am on Tuesday, November 29, 2016

Today was the last seminar of the year. First off, I’d like to thank my wonderful professors and peers; without them, I wouldn’t have learned so much. Whether it was agreeing vehemently about the recent political circumstances or arguing about what the “singularity” really entails, every discussion was an opportunity for me to better understand the Internet and the future of technology.

Now…onto the seminar itself. I found that one of the most interesting parts of today’s discussion was about Facebook’s Free Basics. For those who don’t necessarily know, Basics is an initiative that would make a limited selection of websites accessible, free of charge, to people across the globe. We had quite the heated discussion during our seminar today about whether it would be a good idea.

On one hand, there’s the argument that Basics goes directly against the philosophy of net neutrality. Because it would only allow people to access a limited variety of websites, it would be inherently restricting the knowledge that users could gain.

On the other hand, isn’t some information better than no information? One of my classmates made an analogy to sweatshops: if someone’s starving, isn’t providing them with a factory job, even one with harsh conditions, still better than nothing? Unfortunately, this only caused more controversy.

Personally, I think that Basics is a great idea. Unlike sweatshops, I don’t think that limited internet connectivity is inherently detrimental to one’s quality of life. If the websites were selected by the government, then brainwashing would be a viable threat. But Facebook is the one establishing what websites can be accessed. And I don’t think AccuWeather is about to be the next threat for brainwashing people.

BUT there are other concerns, as addressed in this article I found. It’s important to recognize that Basics is not a charity; as the article points out, there are obviously commercial interests at play. Basics could therefore disrupt the market as we know it, so I think it’s imperative to examine its potential repercussions more closely before actually implementing it. In general, I think that thinking ahead in technology is so, so important, and I hope that as I try to join the innovating global community, I’ll keep that in mind. This seminar has really ingrained that in me, and I really appreciate that. 🙂

Again, thank you for a wonderful semester! 😀

Blog #1010: “Take a Screenshot- It’ll Last Longer”?

Filed under: Uncategorized — allee at 4:37 pm on Sunday, November 13, 2016

Snapchat. Yik Yak. Zap. What do all of these social networking platforms have in common?

In my opinion, they are all attempted loopholes around the need for a “right to be forgotten”. With Snapchat or Zap, the content that you send “disappears” within a certain time frame. With Yik Yak, what you post is absolutely anonymous (or so they say). After all, if there’s no evidence that you ever posted anything, there’s no need to rely on a “right to be forgotten”, right?

Caitlin Dewey of the Washington Post makes this claim in her article, even confidently concluding that “Today’s kids don’t need eraser laws — they’re erasing themselves.” She cites the statistic that “the percentage of those officers who say social media has negatively impacted someone’s chances has fallen, in the past two years, from 35 to 16 percent.”

However, I’m not as confident as Dewey. I am admittedly one of the millennials hooked on Snapchat; I mean, who could resist those rainbow-vomiting filters and face swaps? As a frequent user, I am more than acquainted with how you can set timers (up to 10 seconds but as short as 1) for how long the receiver of your Snap can view your photo or video. I’ve sent goofy faces for a single second to friends, thinking that they’d laugh but not have the photo to keep.

But I’ve definitely mastered the art of screenshotting within a second, as have my friends. So every February, my Facebook wall is inundated by their collections of my silly selfies. They’re not harmful in any way, but they definitely aren’t something that I want future employers looking up.

So there’s an element of transferability between so-called “volatile social networking services” and other social networks such as Facebook or Twitter, where data is thought to be eternal. I know that Snapchat attempts to mitigate this by notifying senders when the receiver of a Snap takes a screenshot; however, if someone sends something truly regrettable, I imagine the incentive to eternalize it with a simple press of two buttons outweighs the con of having the sender know it was captured.
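The timer-and-notify mechanics I described can be boiled down to a toy model. This is just a sketch with hypothetical names (Snapchat’s actual implementation isn’t public), but it makes the loophole obvious: by the time the service notifies the sender, the receiver already has a permanent copy.

```python
import time

class EphemeralSnap:
    """Toy model (hypothetical names) of a view-once message with a
    sender-chosen timer and a screenshot notification."""

    def __init__(self, content, view_seconds):
        # Senders can pick 1-10 seconds, so clamp to that range.
        self.content = content
        self.view_seconds = max(1, min(10, view_seconds))
        self.opened_at = None
        self.sender_notified = False

    def open(self):
        # Viewing starts the countdown.
        self.opened_at = time.monotonic()
        return self.content

    def is_expired(self):
        if self.opened_at is None:
            return False
        return time.monotonic() - self.opened_at >= self.view_seconds

    def on_screenshot(self):
        # The service can only tell the sender after the fact; the
        # receiver already holds a permanent copy of the content.
        self.sender_notified = True

snap = EphemeralSnap("silly selfie", view_seconds=1)
snap.open()
snap.on_screenshot()
print(snap.sender_notified)  # True: the sender knows, but the copy exists anyway
```

Notice that the expiry logic and the notification are entirely independent; nothing in the timer can claw the pixels back off the receiver’s screen.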

I wonder how the creators of “volatile SNS” will engineer their products to deal with this; after all, if there’s an ability to preserve content, that defeats the purpose of the service. But at the same time, the ability to screenshot on a phone will always exist. In this published research paper I found, the authors propose that users will switch SNS when the new service provides increased “privacy protection, volatility, and system security”, which is consistent with my observations as a Snapchat user. (Perhaps this article is also relevant to the rise and fall of social networks in general and therefore the seminar our class is planning to put together.) I’d love to discuss this in seminar sometime.

Blog #1001: Cyber War with a New Commander-in-Chief

Filed under: Uncategorized — allee at 10:14 pm on Thursday, November 10, 2016

When the news hit of a hack into the Democratic National Committee, I remember feeling scared, concerned, and confused. I remember wondering who could have been responsible.

“It also could be somebody sitting on their bed that weighs 400 pounds.”

Thank you, Mr. Future President, for that enlightening answer.

With the recent election results, I couldn’t help but read this week’s assigned articles on cyber warfare with Donald Trump in mind. How will he deal with cyber warfare and crime?

It didn’t help that a suggested article on one of the assigned ones was “Trump’s Win Signals Open Season for Russia’s Political Hackers“. The author, Andy Greenberg, cites the spike in activity of Fancy Bear, also known as APT28. This political hacking group from Russia was identified as the culprit of the DNC hack by the security firm CrowdStrike. I feel like Greenberg’s argument is very extreme, and reading it, I’m not sure I 100% buy it. He attributes the increased activity to the hackers being encouraged by recent successes (for example, the DNC hack); isn’t this independent of whether Trump won or not? I would love some clarification on the article at some point. While I can see that Trump is more condoning of Russia than other political figures, I’m not sure that that is sufficient for hackers to go all-out.

During the first presidential debate (the same one where he made the aforementioned statement), Trump did express concern over cyber warfare; he explicitly declared that “we have to get very, very tough on cyber and cyber warfare. It is a huge problem.” But I feel like he doesn’t understand the subtleties of it. In the above Wired article, James Lewis of the Center for Strategic and International Studies even asserted that “Trump’s win may now delay America’s response or reduce its efficacy.” Quite honestly, I don’t really understand the implications of his presidency on cyber policy. Erik Gartzke contends that “Unless cyberwar can substitute for a physical surprise attack, there is no reason to believe that it will be used in place of conventional modes of warfare” and that for a cyberattack to be of great magnitude, it must be done in conjunction with a physical attack. I’m hoping that hackers won’t figure out a way to effectively execute this in the next four years; but if they do, what will be the US’s response? Will Trump prioritize dealing with cybersecurity? How would checks and balances change who really has a say in cyberpolicy? I’d love to discuss these questions in seminar next week. 🙂


Blog #1000: Human Rights in Cyberspace

Filed under: Uncategorized — allee at 5:44 pm on Friday, October 28, 2016

In “A Declaration of the Independence of Cyberspace”, Barlow asserts that “[Government’s] legal concepts of property, expression, identity, movement, and context do not apply to [Cyberspace]. They are all based on matter, and there is no matter here.” Moreover, he contends that “we are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”

Am I the only one who feels like this laissez-faire-esque approach to the internet is not necessarily as positive as Barlow makes it out to be?

As I’ve brought up in past blog posts, I’ve experienced “flaming” firsthand while playing eSports. Freedom of speech in real life is, of course, guaranteed by the First Amendment, but that doesn’t mean that anyone can say anything with no repercussions. Especially because of the anonymity granted in Cyberspace, I find that many people are willing to express hurtful or derogatory sentiments without fear of consequence. I think that especially as there have been tragic cases related to cyberbullying recently, there need to be some ground rules. The question is, do those need to be set by a country’s government, or is there another body that can take care of this? I looked into whether the IETF had anything to say about this, as it seems that they are a prevalent community when it comes to the evolution of the internet.

There seem to be quite a few groups within the IETF that handle matters relevant to human rights and freedom of speech. There’s the Human Rights Protocol Considerations Research Group, which is “chartered to research whether standards and protocols can enable, strengthen or threaten human rights included not limited to the right to freedom of expression and the right to freedom of assembly”. I found it intriguing that these two rights are actually guaranteed by Amendments to our Constitution. The next thing I wondered was how effective the IETF is in managing human rights-related issues.

As usual, a simple Google search turned up a multitude of results. One that I found particularly intriguing was this article. It recounts some of the key points made at IETF 91, which was held in Hawaii in November 2014. The Human Rights Protocol Considerations Research Group was created as a result of this meeting; however, the article mentions that considerable concerns were raised about the IETF becoming politicized if it even researched human rights. At the meeting, one respondent stated, “we have to stop pretending that technology is a nonpolitical decision”.

Talk about controversial.

I think one of the largest issues with the relatively decentralized structure of the IETF is that because it’s so open, there’s not really anyone making executive decisions regarding human rights. While that’s also a beauty of an open community, I think that it can be harmful. Clearly, flaming and cyberbullying continue to be unfortunately omnipresent. At what point, if any, would it be appropriate for the government to step in? Or would government intervention completely stifle the freedom associated with the internet, as Barlow would no doubt suggest? I’d love to hear everyone’s opinions on this in seminar soon 🙂


Blog #0111: e-America?

Filed under: Uncategorized — allee at 1:31 pm on Friday, October 28, 2016

After last week’s focus on voting and its potential to transition to becoming an online process, I was left with many doubts regarding the intersection of politics and technology. This week, I was introduced to a very intriguing concept– e-Residency. This is offered to every world citizen by the country of Estonia. It allows e-Residents to sign official documents, conduct monetary business, and even declare taxes all online. The website writes, “Estonia is proudly pioneering the idea of a country without borders”.

Would a similar system be implementable in the US?

While Estonia’s e-Residency program seems to be geared towards business owners, its existence led me to wonder whether other citizenship-related matters could be transferred online. For example, the process of naturalization is currently a multi-step process in the United States. One question that popped up for me was whether this process (or at least part of it) could be put online. According to the official website of the Department of Homeland Security, this is the current procedure for applying for citizenship:

  1. Prepare the Form N-400, the application for naturalization
  2. Send in the Form N-400 by snail mail to USCIS, located either in Phoenix, AZ or Dallas, TX (which state depends on individual’s state of residence)
  3. Physically go to a biometrics appointment if necessary
  4. Complete an in-person interview at a USCIS office, where a speaking, reading, writing, and civics test will also be administered

Considering this procedure, I see only step 2 as easily transferable to an online method. While not every applicant must go in for a biometrics appointment, those who do get their fingerprints collected, photo taken, and name signed for electronic capture. This doesn’t seem like something that could be done via the Internet without compromising the security of the current system. Moreover, the speaking, reading, writing, and civics tests seem like they could be cheated on if they took place online.

However, while naturalization may not seem feasible online, surely there are ways to implement some degree of digital citizenship in the US. Also, as the world becomes increasingly interconnected, I wonder if there will ever be a point where separate countries choose to pool some of their government data on citizens in one database (say, to check whether an applicant for naturalization is being truthful about their records), and if so, what information they’d be willing to put out. I’d love to discuss what could potentially be done online in terms of digital citizenship specifically within the US next week. 🙂

Blog #0110: Russia and the 2016 US Election

Filed under: Uncategorized — allee at 2:45 pm on Friday, October 14, 2016

I remember sitting with my fellow interns on a sweltering July afternoon, finishing up our soft serve ice creams with the TV on in the background. We were silent, as we usually were on days when the dessert was particularly good. But then we heard a recently all-too-familiar voice declare:

“Russia, if you’re listening, I hope you’re able to find the 30,000 emails that are missing.”

Needless to say, the rest of our lunch break was filled with exclamations of disbelief, wry laughter, and heated discussion. Admittedly, I’m not the most well-versed in politics. But from my peers’ reactions, I know I’m not the only person who did a double-take at Trump’s words. It was the first time that we had heard a political figure ask for a foreign power to breach our national security measures. (Here’s a video of the occurrence for anyone who feels like they haven’t heard Trump’s voice enough yet.)

Whether or not Trump was joking (which I really hope he was, as I do with most of what he says), his words emphasized to me the reality of Russia interfering in our politics via the Internet. In one of the articles we read– “When Will We Be Able to Vote Online?” by David Pogue— online voting was argued to be infeasible with current security measures. But I think it’s important to recognize that even without online voting, there are still ways for other countries to influence the 2016 elections. If Facebook is able to manipulate a statistically significant number of voters (as shown in the article by Micah Sifry we read), then there are surely ways for Russia to change the outcome of our elections even if we are not using the Internet to directly vote.

So how exactly would foreign powers do this? I decided that a little Google searching would yield the answer. I ended up on an article from the Huffington Post. In it, the author, Michael Gregg, asserts that Russia or any other hackers could change our election results in the following ways:

  • Hacking a voting machine
  • Shutting down the voting system or election agencies
  • Deleting or altering election records
  • Hijacking a candidate’s website
  • Organizational doxing (publishing private information– essentially what Trump encouraged Russia to do with Clinton’s emails)
  • Targeting campaign donors

I’m sure there are other ways for our election results to be changed, too. I explored other essays written by Bruce Schneier, the author of one of the essays we were assigned this week. In many of them (such as this article published around the same time the aforementioned Trump incident happened), he argues that such hacks are a national threat to our democratic country. I agree with this, but what might be done to protect our election? What measures are already in place, and why aren’t they effective? Are there new measures being formulated now? But would extra security measures compromise individual privacy? How might this conflict of interest play out? This intersection of politics and technology is fascinating but also scary to me, so I’d love to discuss them in seminar soon. 🙂

Blog #0101: 2045: A New Odyssey?

Filed under: Uncategorized — allee at 1:46 pm on Wednesday, October 12, 2016

“I’m sorry Dave, I’m afraid I can’t do that.”

Those are the words from the iconic movie 2001: A Space Odyssey that many of us sci-fi nerds reference. But I’ve never paused to consider what I would do if my phone or laptop consciously disobeyed a “command”, or user input. The Singularity seems to make that a very real possibility.

While whether the Singularity will actually happen in the near future is hugely up for debate– after all, Ray Kurzweil and Paul Allen’s opinions seem to be in direct opposition with each other– the potential repercussions of it taking place seem immeasurable. For me, it isn’t a HAL-esque situation of artificial intelligence consciously harming a human being that I realistically fear the most. Maybe it’s because my sci-fi-loving father has made me sit through too many robot attacks with him, but I feel like there is a substantial amount of distress regarding that threat already. Thus, when the point where artificial intelligence gains autonomy arrives, I think innovators and the public alike will be meticulously accounting for it. Because it’s such an obvious potential design flaw, wouldn’t it have been tirelessly addressed ahead of time?

The consequence I feel trepidation towards is perhaps more subtle, but more probable in my opinion. In my Expository Writing class, we are currently studying antimodernism. We read from T.J. Jackson Lears’ No Place of Grace, which discusses the rise of antimodernism during the turn of the 20th century. During this time, Lears asserts that it was common for people to feel “weightless”– driven only by a clock and the promise of capitalism, they were bound to a stifling mold of their own “commodified selves”. Many felt that autonomy, the fulfillment of self and risk-taking, and most of all, the intensity of life were gone. As the Singularity approaches, this same problem, which has continued to exist since, might be exacerbated.

After all, what do artificial intelligence bots know of spontaneity? They are programmed, and that makes everything seem predictable. Thus, “weightlessness” would be built into their systems. And even if AI bots were able to feel emotions, wouldn’t they inherently feel “commodified”, which is a root of weightlessness? If they became members of and therefore influences on human society, this feeling might permeate non-AI subjects– us– as well. Even now, our generation is constantly told to “unplug”. I always interpret that as an urge to truly live– to never let living vicariously through our screens become a substitute for genuine human connection and feeling. Would this even be an option with omnipresent AI in our lives?

Of course, it’s hard to judge how the Singularity (or even really advanced AI) might psychologically and emotionally impact people. But in a society that’s already so fixated on efficiency and the clock, I can easily imagine this problem worsening. Would we increasingly suffocate from a lack of meaningful living as we know it today? Or would there be a certain point where we would revert to some of our old ways, even if it meant regressing technologically? Would we even want to after establishing so much reliance on AI by that point in time? I’d love to discuss this idea in our seminar next week. 🙂 Until then!

Blog #0100: Avoiding a “WALL-E” Situation

Filed under: Uncategorized — allee at 8:10 pm on Wednesday, September 28, 2016

When I read about the “Internet of Things”, I immediately thought of the animated movie “WALL-E”. For those of you who haven’t seen it (please do— I definitely recommend it), it’s about a robot named WALL-E (short for Waste Allocation Load Lifter Earth-class), the last robot left on Earth. In the film, human beings have abandoned Earth after polluting it beyond habitability and now live aboard a spaceship. Everyone is overly reliant on technology and obese; many see this film largely as a satire on consumerism with a cautionary message.

Perhaps it was WALL-E’s influence on me, but I began considering what it means to be “overly reliant” on technology. Where do we draw the line between optimizing the world’s efficiency and allowing technology to do everything for us? Personally, when I was reading Wasik’s article, “In the Programmable World, All Our Objects Will Act As One”, I thought that some of the implementations were a little excessive. For example, is it really necessary to have a coffee pot that “talks” to the alarm clock? Anyone would agree that having a hot cup of coffee being served up to you automatically would be nice, but it just seems like such an insignificant task that you might as well just do it yourself.
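Under the hood, most of the programmable-world scenarios Wasik describes reduce to simple trigger-action rules: one device publishes an event, and other devices have actions registered against it. Here’s a minimal sketch in Python (all the device and event names are my own hypothetical examples, not from the article):

```python
class EventBus:
    """Routes named device events to registered actions
    (the trigger-action pattern behind 'the coffee pot talks
    to the alarm clock')."""

    def __init__(self):
        self.rules = {}

    def when(self, event, action):
        # Register an action to run whenever `event` fires.
        self.rules.setdefault(event, []).append(action)

    def fire(self, event):
        # Run every action registered for this event, in order.
        return [action() for action in self.rules.get(event, [])]

bus = EventBus()
bus.when("alarm.rang", lambda: "coffee_pot: start brewing")
bus.when("alarm.rang", lambda: "thermostat: warm the kitchen")

print(bus.fire("alarm.rang"))
# ['coffee_pot: start brewing', 'thermostat: warm the kitchen']
```

The mechanism itself is almost trivially simple, which is maybe part of my point: the engineering effort isn’t the hard question here, the desirability is.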

Besides such features of the programmable world being potentially extravagant, I also think they could come with negative effects on the users. The world likes to make fun of us millennials for being married to our phones and disconnecting from reality, and the internet of things might exacerbate this. Wasik includes an image in his article from Wired by Michael Wolf that provides specific examples of how the programmable world might work. One of those is that if a baby is crying, the room will first try to soothe him and if that doesn’t work, it would text the parent at the next-door neighbor’s cocktail party. I found this example at once sort of funny and depressing. To me, it seems ridiculous that a parent would leave his or her young child to go to a cocktail party. But a programmable world would make that more of an accepted option, and I think that this is only a small example of the variety of irresponsibilities and loss of human connection that could stem from the internet of things.

Going back to WALL-E, another theme in the movie is environmental health. If we were to make the majority of our world programmable, that means that many everyday objects would have to be embedded with sensors. How would the production of this technology and the energy required to sustain it affect the Earth? If people were careless with the disposal of old possessions (would embedded objects qualify as e-waste now?), what would happen? Or could “smart” objects actually help the environment? There are several objects that could turn off (for example, lights and heating) when not in use, ultimately conserving energy when it’s not needed. These aren’t questions that I have enough information to answer, but I’d definitely like to discuss this particular facet of the internet of things in seminar next week! 🙂

Blog #0011: Assembly Line 2.0? My Thoughts on Crowd Work

Filed under: Uncategorized — allee at 10:30 pm on Thursday, September 22, 2016

What could one do if 627 minutes were added onto each day?

While I can’t speak for everyone, I know that I’d probably go on a run, practice oboe, and sleep (as well as procrastinate away more of that precious time than I’d like to admit). But even with all of those activities factored in, I’d still probably have multiple hours to spare. 627 minutes is how much Henry Ford’s implementation of the assembly line cut from the production time of his famous Model T. When the new strategy was introduced, it revolutionized the industry and economy!

I personally see the same potential in paid, online crowd work, which was defined in one of our readings as “the performance of tasks online by distributed crowd workers who are financially compensated by requesters (individuals, groups, or organizations)” (Kittur 2). But as Kittur warns, there are many possible flaws with crowd work in practice. In this blog, I’d like to add to some of the specific points in Kittur’s argument both for and against crowd work. When doing so, I’d like to also compare and contrast some of the characteristics of crowd work to the kind of work I did at my internship at a .com company this past summer.


  • Flexible workforce and no shortage of experts in a certain geographic region: Because of the pool of available workers, crowd work would certainly have a very accommodating resource at its disposal. While Kittur does not explicitly state what this “flexibility” is in regards to, I interpreted his words to be referring to chronological, cultural, and linguistic constraints. These were obstacles that often detracted from efficiency at my workplace. The company’s website was run in over 50 different languages to appeal to a wide range of clients. However, this made it imperative for the company to hire employees to specialize in each language. The company had to expend money for recruiting; moreover, once hired, language specialists in the US would have to work undesirable hours to accommodate clients in the country they specialized in. If crowd work were introduced, surely there would be plenty of qualified individuals within the pool of workers, so economic inefficiencies and strains on workers’ lifestyles would be reduced.
  • Chances for income and social mobility in disadvantaged areas: This particular point had a lot of appeal to me. Especially in developing countries, perhaps crowd work would allow previously unemployed individuals to work. This could stimulate a lot of economic growth, given that these individuals wouldn’t be hugely displacing current workers (a con mentioned by Kittur).


  • Potential for super low pay: Attempts to implement crowd work on a major scale would certainly bring up issues with the current policies and regulations regarding employees’ rights. Who would be in charge of creating rules for crowd work employees and employers, and who could possibly enforce them? Would a minimum wage no longer be set? For individuals whose incomes come only from crowd work, should a set of benefits be promised? There are so many questions that would come with crowd work becoming mainstream, especially if people began working exclusively on crowd work projects.

Reading Kittur’s arguments also led me to think further about anonymity and accountability, two ideas that intertwine very interestingly in the online realm. There is a certain sense of anonymity on the Internet, which I see as a potential con for crowd work. Perhaps crowd work employers would not be as careful with background checks, especially because a requirement to provide too much information might deter potential applicants. SSNs aren’t something people just disclose on the Internet, and identity theft rates would surely increase if people began posing as employers and asking for personal information. There is an ongoing debate regarding ex-convicts’ employment rights, and crowd work might add another dimension to that. In addition, increased anonymity might lead to people feeling less accountable for their work. This could result in the inefficiencies some people associate with working from home, such as an increase in careless mistakes or shirked responsibilities. I’d like to discuss this in seminar next week to learn about my peers’ perspectives. Until then! 🙂

Blog #0010: IPRs and Open Source

Filed under: Where Wizards Stay Up Late — allee at 1:46 pm on Thursday, September 15, 2016

For me, one of the most frequently mentioned topics in high school was intellectual property. Lexington High School’s Honor Code addressed the necessity to respect individuals’ intellectual property; my economics class discussed the implications of IPRs on a country’s economic productivity; my US history class would often have debates around IPR policy as part of our current events section. Thus, when IPRs made an appearance in Where Wizards Stay Up Late, I was intrigued.

BBN’s initial refusal to release the IMP code was a blatant attempt to control every part of the current network— essentially, monopolize control of a unique resource. While the source code for the IMPs is not exactly a product being sold on the market, I find that many economic ideas are still relevant. If I may be visual for a second, here is a graph of a perfectly competitive firm:


We can see that economic welfare (basically the benefits society reaps due to the sale of the product) is quite plentiful. On the graph, the total economic welfare is represented by the sum of consumer and producer surplus.

Now, here is the graph for a monopoly:


(Source for graphs: Essential Foundations of Economics by Robin Bade and Michael Parkin, 7th Edition)

Because a monopoly will choose to produce below the competitive quantity in order to raise prices, the total economic welfare is reduced. In our reading, BBN would be the monopolistic firm in question. The text mentioned “deadweight loss” (harm to society) such as the Network Measurement Center at UCLA being unable to function efficiently.
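To make the welfare loss concrete, here’s a quick calculation with made-up numbers (a linear demand curve and constant marginal cost; these are illustrative figures of my own, not from the textbook):

```python
# Linear demand P = a - b*Q with constant marginal cost c.
# Compare total welfare under perfect competition vs. monopoly.

def welfare(a, b, c):
    q_comp = (a - c) / b         # competition: price = marginal cost
    q_mono = (a - c) / (2 * b)   # monopoly: marginal revenue = marginal cost
    p_mono = a - b * q_mono

    comp_total = 0.5 * (a - c) * q_comp        # all surplus is consumer surplus
    cs_mono = 0.5 * (a - p_mono) * q_mono      # consumer surplus under monopoly
    ps_mono = (p_mono - c) * q_mono            # producer surplus under monopoly
    dwl = comp_total - (cs_mono + ps_mono)     # the deadweight loss
    return comp_total, cs_mono + ps_mono, dwl

# e.g. demand P = 10 - Q, marginal cost 2:
total_comp, total_mono, dwl = welfare(a=10, b=1, c=2)
print(total_comp, total_mono, dwl)  # 32.0 24.0 8.0
```

With these numbers, a quarter of society’s potential welfare simply evaporates under monopoly pricing; that vanished 8.0 is the same kind of loss as UCLA’s Network Measurement Center sitting idle while BBN withheld the IMP code.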

In this specific case, it was quite clear what the benefits and detriments of BBN keeping the source code private were. However, this caused me to wonder about the converse scenario— open sourcing. I remember hearing about Google choosing to open source TensorFlow and decided to read more about it (here is the link, if anyone is interested). The basic idea of this particular article is that Jeff Dean believed that open-sourcing would make collaboration between Google’s researchers and other scientific communities easier and faster. In addition, individuals could improve the source code with few barriers.

Of course, as Satell includes as a caveat in his piece, total openness would harm a firm (hence Google keeping its search engine’s workings a secret). But generally, I see open source code as a great thing. Much like the RFCs at the beginning, open source projects feel like an invitation to join a larger community. They share a spirit with the ARPANET’s first users, who tinkered with the network on their own and contributed ideas freely. That’s how electronic mail came to be, and while I have a love-hate relationship with my inbox, it’s certainly connected the world in a new way. It’s evidence of how much this kind of innovative environment can improve society.

However, I’m sure there are even more subtleties to IPRs and the choice between privacy and open-sourcing. I’d love to examine TensorFlow or another case study next week in our seminar. Until next time, then!
