
Reflection

All too suddenly, our seminar has come to an end. I just wanted to take this final blog post to reflect a little on the class and express my gratitude for being a part of it.

I took this class because it was out of my comfort zone and, although most of the topics we discussed are still a bit out there for me, I am so glad that I had the opportunity to engage with this entirely new material with you all. Although I may still not know the intricacies of how the internet was founded or exactly how blockchain works, I now have a narrative in my mind for how it all came about and a better understanding of how incredible the creation of this “collective hallucination” was. I am grateful for the insights into cybersecurity that Dr. Michael Sulmeyer provided for us, and for how completely they upended my assumption that security breaches are infrequent and happen only because of a lack of due diligence. I also very much appreciated the detail with which we explored Bitcoin and cryptocurrency, as well as the broader technology of blockchain. I’d of course heard countless success stories of those who invested in Bitcoin early, and I knew that it was a digital currency not affiliated with any specific nation, but I did not know much about this mysterious moneymaker beyond that. This course, and our discussions, have clarified this cutting-edge topic for me and, possibly most rewarding of all, led to wonderful spontaneous conversations with friends and family. I love when course material is relevant enough to fuel regular conversation; indeed, that is one of the times when I most value my education.

Most of all, I am grateful to have spent this first semester of college with all of you. It was a tumultuous period of adjustment for all of us, I’m sure, and having the consistency and unrestrained excitement of this class to look forward to every Monday was an anchor for me. Thank you so much.


Second Life

One of the most intriguing parts of our final seminar, for me, was our brief discussion of the virtual platform Second Life before we dove into the internet of value. Although its heyday was in 2007, Second Life still has 600,000 regular users according to the article “The Digital Ruins of a Forgotten Future,” published by the Atlantic this month (thanks, Hannah, for recommending it in class!). The article describes how the platform serves as an escape for many people, offering the examples of a parent of children with serious developmental disorders and a woman diagnosed with multiple sclerosis, both of whom use the platform to live free from their limitations and obligations for a while. The article also described how Second Life can serve as a chance to pursue one’s unfulfilled dreams. For Jonas Tancred, who was interviewed for the article, that meant becoming a musician. He performed concerts on the platform in front of raving audiences (while in real life he played in his kitchen on a guitar plugged into his laptop) and grew so popular that he was eventually offered a real-life record deal in New York City. Not only that, but after one of his concerts he met a woman on Second Life who would eventually become the mother of his child.

This blurring of the lines between the virtual and the physical world reminded me of the science fiction book Ready Player One by Ernest Cline. In the book, the physical world is in a state of despair and chaos due to an energy crisis, and the only escape is the virtual reality game/society called the OASIS. The OASIS is more immersive than Second Life and is described as being accessible via a visor and haptic gloves. Wealthy players can even purchase entire haptic bodysuits to feel fully engaged in the virtual world. Although Second Life is of course not nearly as widespread nor as technologically advanced as the OASIS in the novel, I was intrigued to see that the fantastical ideas in a book I read just recently have in fact been toyed with for over a decade.

Though I can see many merits of a virtual world as an opportunity to escape the binds of prejudice, responsibility, and poverty, and to interact with people from all across the world, I worry that it is also a way to evade the challenges we face in the real world. Indeed, in Ready Player One, the world is in ruins, and the protagonist lives in a trailer park (the trailers precariously stacked on top of each other because the area is so overpopulated) that is riddled with violence. The world is past saving, and all people can do is log into the OASIS to escape it for a while.

Although Second Life was admittedly never widespread and currently has but 600,000 players, the Atlantic article argues that the desire to create a curated, ideal version of oneself has simply been played out on social media platforms like Facebook and Instagram instead. People may feel more comfortable because they are not creating a fully fabricated avatar, but the fundamental will to escape real life remains the same. In my opinion there are many problems with having a hand-picked online persona, as I described in my last post. In addition, though, it would be devastating and even apocalyptic to see people dive so far into the virtual world, so discouraged by the real one, that they abandon any effort to address its issues.

Friend or Foe?

This week, as we went around the table and gave our standard introductions, our esteemed guest Professor Latanya Sweeney asked us each to add whether we view the internet as a friend or a foe. Caught a bit off guard, I said on instinct that the internet is our foe. At least, that it already has the capability to be.

Just the day before, I had read an article that a friend had circulated on Facebook about a college freshman named Madison Holleran who, contrary to her outward online presentation as a happy, successful student-athlete, had battled depression and ultimately took her own life in 2014 (see this video report by ESPN). Though the focus of the tragedy is certainly not her social media accounts but her clinical depression and the difficulty of transitioning to college, it powerfully demonstrates how curated and untrue our online personas are; Madison’s Instagram presented a content, smiling undergraduate who dreamed of athletic glory out on the track, not one who battled inner demons. This veil can prevent users from receiving help, as may have happened in Madison’s case, because they seem to be thriving on the surface. In addition, these hand-picked, painstakingly assembled images of perfection that serve as our social media presences can be destructive for others.

A couple of weeks ago, my parents sent me this video made by a current freshman at Cornell portraying the loneliness of transitioning into college. She describes how, even though she knows that “social media is fake and stuff,” the constant stream of videos and pictures of her high school friends having the time of their lives in college only added to her sense of isolation. This sentiment was all too relatable for me, and of course it extends beyond the scope of college: in general, people use social media to show the best sides of themselves, and viewers perceive these curations as their daily lives and feel alone in their entirely normal, imperfect, bumpy lives.

On the flip side, the internet can do a whole lot of good for people socially. As we read in the article “Trust me, I’m your smartphone,” the internet can serve as a lifeline for minority demographics in particular who find support and comfort through connection with people of similar experiences. And admittedly I haven’t even touched on the purposes the internet serves beyond social connection, such as information collection and distribution, collaboration, business, and more. The internet does a whole lot of good and bad in those fields, as well, but I digress.

The conclusion I came to after our seminar and further reflection is to agree with Hannah—that the internet is neither a friend nor foe, but simply a tool. An incredibly powerful tool, that is, with which we have the capability to reach unimaginable heights… if we don’t destroy ourselves first.

The God of the Internet and Ownership

According to Wikipedia, Jon Postel was known as the “god” of the internet. Our guest in seminar last week, Professor Jonathan Zittrain, talked about him extensively, but until then I had never heard of him (I hope I’m not alone). But how can this be!? It’s hard to wrap my mind around the fact that I have used the internet for hours a day for most of my life and had no knowledge of one of the most important figures in the history of the internet until a week ago.

What struck me most about learning about Postel is that he was a single person with so much influence over the internet. I may not know exactly how extensive the internet was during his lifetime, and I know it has grown and developed since his passing in 1998, but it is mind-blowing to reconcile my image of the internet as this huge, seemingly ungoverned space with the fact that users depended on one person for IP address allocation and, as Professor Zittrain discussed extensively, root-zone management in DNS. It makes the internet feel far more personal, which was likely fitting in Postel’s time. Nevertheless, it also demonstrates the risks of having one person take on so much responsibility independently: when Postel died suddenly of heart complications during surgery, others had to scramble to pick up the pieces and standardize the work he had been doing.
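To make sense for myself of what “root-zone management” even means, I put together a toy sketch. It is not a real resolver, and the zones and address below are entirely made up, but it shows why everything hangs off the root: every lookup begins with the delegations recorded there, which is the file Postel oversaw.

    # A purely illustrative, made-up model of DNS delegation -- not a real resolver.
    # Every lookup begins at the root zone, which is why the person who maintained
    # that one file had such quiet influence over the whole namespace.
    TOY_DNS = {
        "edu": {                                  # delegation recorded in the root zone
            "harvard.edu": {                      # delegation recorded in the .edu zone
                "blogs.harvard.edu": "192.0.2.7", # placeholder address served by the domain's own nameserver
            }
        }
    }

    def toy_resolve(name: str) -> str:
        labels = name.split(".")
        tld_zone = TOY_DNS[labels[-1]]                 # step 1: the root zone says who runs .edu
        domain_zone = tld_zone[".".join(labels[-2:])]  # step 2: the .edu zone says who runs harvard.edu
        return domain_zone[name]                       # step 3: the domain's own server answers for the full name

    print(toy_resolve("blogs.harvard.edu"))            # 192.0.2.7

Today, of course, the real root zone is managed through ICANN’s IANA functions rather than resting on one person.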

To switch gears a bit, Professor Zittrain also discussed the move from unowned to owned when it comes to the internet. He illustrated the point with internet applications, which are a form of ownership because when using an app the user can only access the internet in the ways its designer intended. And apps are becoming increasingly prevalent, gradually pulling users away from open browsers. I worry that this threatens, or at least limits, free speech; at the very least, it limits how easily and how often users will be able to access the free internet. Apps curate the internet, which can be very effective when they are used for specific purposes. But if we go too far in this direction they may become like blinkers, blocking out any information the user doesn’t purposefully seek, just as blinkers block out peripheral vision.

Sources

https://en.wikipedia.org/wiki/Jon_Postel

https://en.wikipedia.org/wiki/Internet_Assigned_Numbers_Authority



Ignorance is bliss…until something goes wrong

This week in seminar, we had the pleasure of being joined by Dr. Michael Sulmeyer, director of the Cyber Security Project at the Harvard Kennedy School. In the past, he also served as the Director for Plans and Operations for Cyber Policy in the Office of the Secretary of Defense. Clearly an expert, he offered eye-opening insights into the world of cybersecurity and cyber warfare.

One thing that struck me most about this field is how uncharted cyberspace is. Dr. Sulmeyer described public service in cyber warfare as a particularly impactful field to enter, because our understanding and known capabilities are so limited. In one of our readings for the week, Florian Egloff makes a clarifying analogy between cybersecurity and historical maritime warfare; he compares contemporary conflicts in cyberspace to efforts “to capture problems of state action in a historically largely ungoverned space—the sea—in which quasi-state and non-state actors exerted significant influence on state interests and relations.” As the analogy suggests, it’s a bit frightening to consider how precarious cybersecurity is. Although Dr. Sulmeyer assured us that the most important cyber institutions, such as the networks of the U.S. Department of Defense and of our nuclear plants, are heavily safeguarded, the fact that our knowledge is so limited presents a very real potential for harm by those with malicious intent.

Nearly as striking was that most of the public (myself included until this week) is unaware of cyber warfare and the fact that it “happens all the time,” as Dr. Sulmeyer said during seminar. He also pointed out that even when cyber warfare makes international news, as it did when North Korean hackers attacked Sony Pictures and leaked private data over the planned release of “The Interview,” the public seemed relatively unconcerned by the fact that Sony could be hacked at all. Instead, people were preoccupied with the salacious emails that circulated and, understandably, with the terrorist threats directed at theaters that planned to screen the movie. We seem not to comprehend the gravity of this demonstration that security can be compromised to an unknowable degree. Sure, it’s easy to count on people like Dr. Sulmeyer in the Pentagon to sort all this out for us, but I worry that if something goes wrong, the average person will be completely immobilized and won’t even know what hit them.


Sources

https://www.belfercenter.org/person/michael-sulmeyer

https://en.wikipedia.org/wiki/Sony_Pictures_hack


A Case for Crowd-Sourcing

In our seminar on Monday, Professor Smith asked us each to name the news sources we trust the most. Aside from our parents, whom most of us trusted (or have trusted at some point!), many of us listed crowd-sourcing sites such as Quora and Reddit. Robert and Jacob highlighted the stock market as a more “objective” representation of public opinion, because people literally have to back their beliefs with money. As Professor Waldo pointed out in his latest blog post, surprisingly few of us (myself included) named legitimate news corporations. Assuming this wasn’t due to the tone set by the first few people who answered, but rather reflected a widespread lack of trust in news corporations and even experts, I have been reflecting on why crowd-sourced sites were the first to come to our minds.

To draw again from his blog post, Professor Waldo argues that society has gone too far in mistrusting experts. He writes: “…I find that there are people who simply know more than others, and are better able to solve certain problems. I trust climate scientists more than, say, Senators on the subject of climate change… It doesn’t mean that these people know about everything, or even that they are right in everything they say about their particular subject. But they are more likely to be right than someone randomly picked.” I am convinced by this argument and do not intend to disagree in this post. Indeed, it seems best to get the facts from those who are most qualified to present them. And although news corporations will present somewhat biased news, through cross-referencing and critical analysis one is more likely to glean accurate news from them than from a random post by John Doe on an obscure subreddit. One cannot deny the training journalists undergo to present a well-rounded story.

Nevertheless, crowd-sourcing has immense value in two ways. For one, it embodies the idea that the common person holds institutions accountable, even beyond journalism. Edward Snowden is an example; after working as a contractor at the NSA, he leaked classified documents to expose global surveillance programs that overstepped citizens’ privacy. If we put all our trust in establishments and experts, what’s to prevent us from being taken advantage of? This power of accountability is crucial to democracy.

Second, when it comes to the news itself, crowd-sourcing has value not so much in presenting or gathering facts (admittedly, established institutions are more qualified to do that) as in engaging with them. A relatively intellectual thread on Quora will be a debate among people who have done their research and back up their claims with citations of legitimate news sources that I can go and verify for myself. Such a thread serves as a curation of perspectives, with analysis by people who have far less of a stake in the news than those whose careers are built around the industry. This is why, ultimately, when asked what sources I trust the most, I thought of crowd-sourced sites. When done correctly, they wrap together verified facts, persuasive commentary, and, most importantly, multiple perspectives. Of course, this assumes that one can determine which claims are well-supported and factual, and admittedly that is not always easy.

Ultimately, trusting experts and institutions alone excludes the public from having a voice in the very matters that concern them. Sure, there will always be internet trolls and ignorant posts to sort through. But the net value of crowd-sourced sites far exceeds the trouble of filtering.

Defining Ourselves

Last week, my friends and I had one of those classic late-night, deep philosophical dorm room conversations that left us all mind-blown and starry-eyed. And it was all prompted by some thoughts I came away with after our seminar on the Intelligence Singularity on Monday.

I had been reflecting on something someone mentioned in class: that a couple of billionaires plan to pay to be preserved so that, if and when the technology exists, they can have their minds transferred to automated platforms and, in a sense, live forever (you can read more here). I asked my friends if they would be willing to do this, were the technology offered to them.

I was surprised to hear that very few of them would. Even with the guarantee that their families would also be preserved in this hypothetical situation, they didn’t believe one would be human anymore. I wondered whether the way I live my life today would be different without the subconscious urgency that comes from knowing our time here is limited. Another one of my friends told us that he believes death shouldn’t be feared: “When your time has come, there’s nothing you can or should do about it,” he argued. He reckoned that those billionaires were motivated ultimately by an unnecessary fear of death.

I had difficulty agreeing with that argument, but I tried to wrap my mind around what life would be like if I were just a network within a machine. Would it be comparable, or even tolerable, without the senses and mechanics of life in my body? How linked is our conception of our lives and our consciousness with being in our bodies?

Of course, this all gets at what it means to be human, a question we’ve tackled previously in this course as we’ve tried to determine what the true Turing Test, or benchmark for artificial intelligence, should be. One of our proposed definitions was whether you can fall in love with an AI; the 2013 movie Her comes to mind, in which the main character falls in love with his operating system and later finds that she is simultaneously talking to, and in love with, thousands of other humans around the world. Is that really love? And if it is, is it a good benchmark, or can AI achieve superior intelligence without emotional intelligence?

Ultimately, these are a jumble of questions that we may very well never answer concretely. But as Professor Smith wrote in a recent post, worrying about the future may in some ways not be as important as the present. It seems that one of the main outcomes of considering the future is that it pushes us to examine how we define ourselves.

Is Privacy Overrated?

In last week’s seminar, we discussed the internet of things and a future(/present?) in which an unprecedented amount of data is collected on us. Although the sensor-filled world envisioned by the prescient Embedded, Everywhere report has instead been realized through the capabilities of our smartphones, the amount of information collected on each one of us is immense. This, naturally, brings up privacy concerns.

One of our readings noted that people are bothered by new technology’s infringements on their privacy until they grow accustomed to, and even dependent on, the ease it brings. For example, during opening days I spent some time looking at 5’x7′ rugs from various sellers for our common room. Soon enough I started to see ads for rugs everywhere; indeed, to this day rugs will pop up in my Facebook ad bar. Sure, this is a bit “creepy,” but isn’t targeted advertising better for everyone? It connects sellers with interested buyers and shows buyers many options from different suppliers, saving everyone time in the end. And yes, I ended up buying one of the rugs that popped up on Facebook.

In addition, it’s not as if humans are sitting with these masses of data and directing the targeted ads themselves. It’s an automated process, and I doubt many people actually view most of the data. Personally, I don’t have much of an issue with that. Considering the sheer amount of data that has been collected on each of us, I find that there is a sense of anonymity in the crowd. That is, it seems unlikely that most of us will ever be individually tracked by someone who has access to this data.

We discussed this in class too—the idea that most of us are “safe” because we’re just not interesting enough. Admittedly, for someone like Professor Smith as Dean of the Faculty of Arts and Sciences, this doesn’t apply. Certain types of data will be interesting enough for people to dig up.

In addition, although it may be true that for most of us this data collection is harmless, there is still the question of our right to privacy. Perhaps there should be an explicit way of opting into this data collection. But I doubt many people would consciously be willing to forfeit their privacy, and then we would lose the benefits of the ways technology harnesses this data altogether.

A World Without Jobs?

In our fourth seminar, we finally launched into the unknown and discussed the future of the internet and its impact on the economy. We discussed how blue- and pink-collar jobs, like barista, bus driver, factory worker, and sales associate, may be the first to become obsolete as automation and AI replace relatively mundane work. Machines will likely be much better at these jobs, too, especially with big data that enables sites to tailor the shopping experience and recommendations to the consumer. My natural response to this, as for many people, was concern: automation will leave millions jobless!

Admittedly, there are strong reasons to think this will not be the case, at least not any time soon. Looking back historically, there have been many advances in technology that caused people to worry about jobs becoming dispensable. Think of jobs that no longer exist, like switchboard operator, milkman, and cobbler. According to this website, there was once even a job of being a “knocker-upper,” who would knock on people’s windows in the morning so that they would wake up for work on time. Yes, your profession could be that of a human alarm clock! Yet, despite the loss of these jobs, technology has created new fields of work, an obvious example being the job of a computer programmer.

The author of this WIRED article that we were asked to read for our fourth seminar is also skeptical of the idea of losing all or most jobs to automation. Apparently, unemployment rates are currently lower than 5%. In addition, we are less productive than we were during World War II, which would not make sense if jobs had been automated by machines that presumably would be more efficient than human workers. The article also included the quirky statistic that in 2016, the U.S. spent almost six times more on pets than on robots.

But let’s say we put all of this data aside and seriously consider a world in which few people work because everything is mechanized and programmed. Would this really be so bad? What if utilities and automated services were so abundant that they could be offered to everyone, like water at a public drinking fountain? Why would people need to work jobs to make money if everything were made available in this way? Do you need jobs and an economy if the economic “pie” of resources is large enough for everyone to get their share? This makes Elon Musk’s suggestion that a universal basic income will be necessary in the future quite believable. In short, if essentially everyone were jobless, we would have to reimagine our economic system. It’s not the idea of universal joblessness that worries me so much as the thought of the gradual process of getting there, and the plight of those who would be unemployed early on, before a new system is put in place.

There is also, of course, the philosophical side of what purpose people will find in life without needing to make a living. For many, jobs provide a social community, a structured routine, and even meaningful work. Without that, will humans find somewhere else to direct their time? Only time will tell–if we ever get to that point.

Technology’s Role in Education

In our third seminar of the semester, we discussed how the internet began to expand to reach more mainstream consumers throughout the 70s and 80s, and the important choice to build a simple, dumb network with smart endpoints. This decision made it more feasible to scale up the internet, because the endpoints (or hosts) check whether packets were corrupted in transit and correct the errors, so there is no need for the network itself to do the same (I sketch a toy version of this endpoint checking just below). As we all get into the swing of things in the new school year, this discussion of internet expansion got me thinking about the role the internet and technology do and will play in our education. Will students eventually go to school on the internet without leaving the comfort of their homes? Will handwriting notes become a relic of the past?
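As a quick aside, here is the minimal sketch I wrote to make that idea concrete for myself. It is toy Python, not how TCP actually implements its checksums, but it captures the division of labor: the network in the middle just moves bytes, while the endpoints attach and verify a checksum and ask for a resend if something got garbled.

    import hashlib

    def make_packet(payload: bytes) -> dict:
        # The sending host attaches a checksum so the receiving host can verify integrity.
        return {"payload": payload, "checksum": hashlib.sha256(payload).hexdigest()}

    def receive(packet: dict) -> bytes:
        # The receiving host recomputes the checksum; a mismatch means the data was
        # corrupted somewhere along the way and should simply be requested again.
        if hashlib.sha256(packet["payload"]).hexdigest() != packet["checksum"]:
            raise ValueError("corrupted packet -- ask the sender to retransmit")
        return packet["payload"]

    # The "dumb" network in between just forwards bytes; all of the checking
    # happens at the two "smart" endpoints.
    good = make_packet(b"hello, internet")
    print(receive(good))

    damaged = {"payload": b"hellX, internet", "checksum": good["checksum"]}
    try:
        receive(damaged)
    except ValueError as error:
        print(error)

Keeping that checking at the edges is what let the middle of the network stay simple and general-purpose, which is part of why it could grow so quickly.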

Many professors here at Harvard seem to be limiting the use of technology in their courses as they recognize its detrimental side effects. Although it has become common practice to record lectures so that students can view them on their own time rather than show up in person, no lectures are recorded in Economics 10a and lecture attendance is mandatory. Professor Mankiw explained during shopping week that the social aspect of being in a lecture together is conducive to an effective learning environment. Professor Malan has decided to take a similar approach with CS50: “Unlike last year, students are encouraged to attend all lectures in person this year; students with conflicts may watch later online” (http://docs.cs50.net/2017/fall/syllabus/cs50.html). Now that it is no longer a given, educators are increasingly valuing the face-to-face interaction of a classroom environment, which develops collaborative and social skills and helps students learn and analyze the material together.

Another surprising decision made in two of my courses, Economics 10a and USW 35, has been a no-laptop policy. Course heads Professor Mankiw and Professor Merseth give similar reasons: first, that laptops can be a distraction to both the user and the students sitting nearby. A study by the University of Michigan found a “significant, negative relationship between in-class laptop use and course grade” and that “higher levels of laptop use were associated with lower student-reported levels of attention, lecture clarity, and understanding of the course material.” The same study also reported that, by a narrow margin, students are slightly more distracted by their peers’ screens than by their own. The study did note, however, that learning was enhanced when laptop use was specifically integrated into the class (https://teachingcenter.wustl.edu/2015/08/laptop-use-effects-learning-attention/).

Both professors cited a second reason to restrict laptop use in class: handwriting notes leads to greater comprehension than typing them. An NPR report explains the research behind this: when students type notes, they tend to take down the professor’s words or slides verbatim. Since a student can’t possibly handwrite notes as quickly, he or she is forced to paraphrase and write down selective information. This process of filtering and personalizing information leads students to interact with the material more and enhances comprehension. In the same study, students were asked to type paraphrased notes to see if their comprehension would be comparable to that with handwritten notes. Even then, they could not fully overcome the urge to record the class verbatim, and the students who paraphrased less performed worse on tests (http://www.npr.org/2016/04/17/474525392/attention-students-put-your-laptops-away).

So, it seems that physical classrooms and notebooks are not yet obsolete. Although the internet is an incredible tool that, when deliberately integrated, can boost classroom learning, it cannot yet substitute for the learning we get through real, physical socialization and handwritten note-taking. So for now, I think it’s safe to venture that even if an omniscient dictator could digitize our education system with the wave of a wand, she wouldn’t.

