What’s In a Name?: Navigating the Internet with a Real Name

My more sensible fellow interns post here under their real names, whereas my WordPress username is a funny, nonsensical “kurquoise.” (Look up, it says “Posted by kurquoise” in light green letters.) This pseudonym is a holdover from when we were first launching the blog, and out of convenience, it has just stuck around, but it alludes to an interesting point.

As the Internet has “grown up” over the past few years, one of the major trends has been a move away from anonymity toward the credibility of real names. That’s not to say anonymous and pseudonymous corners of the web are not still thriving, but simply that there’s a greater movement toward aggregating our various online avatars into one coherent identity. More concretely, what do I mean by this? Services like Friend Connect or Facebook Connect or FriendFeed or DISQUS — essentially services that pull information from across the Internet and feed it into one name. Of course, there’s no digital Big Brother forcing us to use a real name, but if you’re going through all that trouble to aggregate everything – from blog entries to Flickr comments – aren’t you constructing a digital identity so complete as to almost mirror your nondigital one? In the world of social media, your identity – your real name – has value. I mean this completely uncynically: your name is a brand.

Facebook was one of the first social networks to capitalize on the credibility of real names, and it succeeded precisely because of this. The barriers to online interaction were drastically lowered; you no longer needed to exchange an email address or a phone number – just a name was enough. Facebook is a closed system though, and Facebook Connect is an attempt to extend it over the rest of the Internet. But people are already blogging, tweeting, and commenting under their real names – building a digital network that has real value. Using a real name undoubtedly adds credibility to what they do.

Nikki and Diana have both written fantastic posts here on the motivations, strategies, and occasional perils of using your real name online. Unlike Nikki and Diana, I don’t own my own domain name (alas, there is a Sarah Zhang more famous and accomplished than me), and I use my real name sparingly online. I’ve often been on the verge of purchasing my own domain or simply keeping a blog under my own name, but something has been holding me back.

Part of my reluctance to define myself online is related to my grappling with my own shifting identities as a young adult. Forgive me for being existential here, but how do I tell others who I am when I’m not even completely sure of it myself? One of the first legitimate Google hits on my real name is a bio page for an internship I did the summer after my freshman year. The bio, which I wrote as a freshman (less than a year ago), is now completely outdated, listing a different major, activities, etc. In the same vein, my Xanga posts from middle school, my Livejournal posts from high school, and the other various “blogs” I kept over the years reflect a very different person from who I am today. My attitudes and interests change, and I don’t necessarily want my teenage self to exist as a digital representation of me for future employers.

When are teenagers ready to manage their reputations? It’s a tricky question because my interactions with the Internet, even posting anonymously or pseudonymously, have shaped a large part of who I am. How do you feel about using your real name online? Would you have entrusted your teenage self with your real name?

-Sarah Zhang

Viewfinders: Digital Natives and the Documentation Impulse

Last week, Sarah wrote about experiencing President Obama’s inauguration online. “I wasn’t standing in the middle of an animated crowd,” she wrote, “but watching a stream of my friends’ statuses placed me amidst an equally excited digital crowd.” For Sarah, watching the inauguration online was a fitting end to an ambitiously digital presidential campaign and transition; a way to experience a historic moment in the digital company of friends. The screengrab she posted, of CNN’s live stream of the inauguration buffeted by friends’ Facebook status updates, tells one side of the story—the side where college students sit in their dorm rooms, watching up-to-the-minute footage; absorbing history as it happens, in a strange combination of visceral and vicarious experience.

But what about the digital natives who were “really” there? How did they traverse the liminal zone between their screens and the scenes around them?

Two indelible images from Inauguration Day involved young people with cameras, capturing history for instant posterity. It’s no surprise that digital natives would use technology to mediate their experiences, but seeing the mediation in action is confronting, inspiring, and somewhat curious.

As President Obama took the oath slightly after noon on January 20, cameras were evident everywhere in the audience. But right there, at the very front, Malia Obama foreshadowed what it will be like to have two tech-savvy young girls in the White House. Even as her father was making history, Malia was focused on her tiny pastel camera, documenting the moment for herself.

Incidentally, the photograph comes from an article on GearLive, in which the reporter revealed that Malia had been using a Kodak EasyShare M893 IS 8.1MP digital camera. Within minutes of Malia taking her own photographs, photographs of her with her camera were showing up on gadget blogs, complete with exorbitantly specific model information. A blog post from ABC News noted that “the first daughter is certainly not the first to document her experience at 1600 Pennsylvania Ave”; other White House residents have also followed the impulse. But the instantaneity of the Inauguration photography cycle underscored the ubiquity of the impulse, and, more importantly, its ubiquitous expression. When even the president’s daughter experiences her father’s inauguration “through the viewfinder,” this perspective transcends lark and becomes a force in its own right.

That force was in action again that evening, at a series of presidential balls. The Youth Ball saw the First Couple dancing “old school” to an instrumental version of “At Last”, but the crowd seemed not to see it at all. Unlike the sweet promise of Malia’s individual documenting impulse, there starts to be something almost sinister about the sea of outstretched camera phones that greeted the couple.

The Obamas at the Youth Ball

When I mentioned this curiosity to a friend this past weekend, he pointed me toward a blog post by Joanne McNeil over at Tomorrow Museum, exploring the meaning of this impenetrable mass of lenses and light. At one point, she asks “But what kinds of things don’t we photograph?” and suggests that

“You probably didn’t take a photo (you forgot to, didn’t think of it) during nearly all of your happiest memories. Why would you want to interrupt a blissful moment? Distancing yourself from the action taking place and denying yourself the opportunity to experience it with your full attention?”

No matter the meaning of the scene, though, the fact of it is remarkable enough. While millions of people were at home watching history happen on their screens, thousands of digital natives were right there, in the moment—watching history through their screens, too.

Lan Houses and Internet access!

The last time I traveled to Boston, I decided to leave my laptop in Montreal: first, I was only visiting the city for three days, and second, I expected to find Internet access at every corner, the same way I do in Brazil. This proved to be an unwise decision. Boston, unlike Brazil, does not have what we call “Lan Houses” – cyber cafes that provide Internet access on an hourly basis. I walked several blocks only to discover that the only way for me to get online would be to go to a public library.
When I finally arrived, there was a long waiting list I had to sign up on. As it was almost closing time, I was warned that I would probably not get online that day.

This raised some questions in my mind, questions that have already been present in my previous posts. Although Digital Natives live in a world characterized by ubiquitous computing, there is a whole other universe characterized by the absence of the technological tools that would enable people to engage with the world more deeply. It therefore seems imperative to think of mechanisms that will enable all Digital Natives to be digitally included; otherwise the gap between these two realities will only grow wider.

In Brazil, various initiatives have been implemented to investigate how cities can create Internet access points on a low budget. A person in São Paulo can access the Internet through Lan Houses, through the state program Acessa São Paulo (which offers computer access for free), or even through McDonald’s restaurants, which offer access to computers for a low price.

When I was in Boston and in Montreal, I always had the impression that everyone was wired. Computers of any sort could be seen anywhere. My question is: Is it difficult to access computers and the Internet in your country? If so, how is this issue being solved?

– andre valle

Inauguration Day Online

On Tuesday morning at 11:45, I ran out of my last final exam and plopped myself down in front of the nearest screen, determined not to miss a moment of Barack Obama’s inauguration. Televisions are harder to find around campus these days, but all I needed was a laptop with Internet access, and nearly everyone in the dining hall was congregated around one or another.

I was only one of millions who found themselves in front of a computer rather than a TV (or in DC in person). According to Akamai, which handles one-fifth of the world’s Internet traffic, Obama’s inauguration set a new record for the number of simultaneous data streams, most of them carrying live video: seven million streams, peaking at 2 terabits per second. (via Xconomy and VentureBeat)

The Google Blog looks at search data from this and previous inaugurations to tell a story of how far the Internet has evolved in the past eight years:

During the last nine years, the growth of the Internet has changed the way the world seeks information. From President Bush’s first inaugural address in 2001 to his second in 2005, the number of inauguration-related searches increased by more than a factor of ten. From 2005 to today’s address, the number grew even more. Few of the 2001 queries requested “video,” and none requested streaming. By 2005, a few queries such as inauguration audio and streaming video of inauguration appeared. Today, technology has become so prevalent that queries such as YouTube live inauguration, live blogging inauguration, inaugural podcast, and Obama inaugural speech mp3 formed one-third of all inauguration-related queries.

And if overall query volume at Google is any indication of online activity, there is also a fascinating graph of search patterns during Obama’s speech. It seems that as Obama was giving his speech, people on the Internet actually stopped to listen and watch:

It’s more than fitting that Obama’s inauguration would make waves around the Internet, as a kind of capstone to how well his campaign had leveraged the power of the Internet during the election. But watching the inauguration wasn’t all we were doing. I was impressed by how many websites had pulled out all the stops for their inauguration coverage. Bits at the New York Times had a list of the digital spaces where the inauguration would be watched and discussed. I watched the speech on CNN’s website, and when the video player first popped up, I was surprised to see not just a live stream from DC, but my Facebook friends smiling at me too:

I wasn’t standing in the middle of an animated crowd, but watching a stream of my friends’ statuses placed me amidst an equally excited digital crowd. It reminded me of watching the debates while perusing the streams on election.twitter, and unsurprisingly, Twitter too was a flurry of activity on Tuesday.

What do these changes mean for the Obama administration? For digital natives who are participating in this new world of politics? I don’t have any solid answers – if you have any insights, share in the comments! – but I would like to point to one thing: all the chatter surrounding the new White House website and blog. The simple fact that we care about whitehouse.gov is amazing enough. I can’t think of the last time I went to the site before Tuesday’s redesign, and now we’re even analyzing the website’s robots.txt file. “Change has come to America,” announces the White House website banner – true, but where change will lead us remains to be seen.
-Sarah Zhang

iPhone on the Brain: Technology and the Extended Mind

Like Diana, I too am in the middle of finals week. But as a science major, I am mired in exams instead of papers, and my brain has been a clutter of symbols and numbers — amino acid structures, Fourier series, formulas galore! Amid this memorization frenzy, the Extended Mind hypothesis is sounding mighty attractive.

David Chalmers and Andy Clark’s paper on The Extended Mind was first published in 1998, but a more recent interview in The Philosopher’s Magazine, in which Chalmers alludes to the iPhone, has brought their ideas into discussion again. The Extended Mind essentially states that the technology we utilize can be seen as an extension of our minds. In Chalmers’ own words:

A whole lot of my cognitive activities and my brain functions have now been uploaded into my iPhone. It stores a whole lot of my beliefs, phone numbers, addresses, whatever. It acts as my memory for these things. It’s always there when I need it…I have a list of all of my favorite dishes at the restaurant we go to all the time in Canberra. I say, OK, what are we going to order? Well, I’ll pull up the iPhone – these are the dishes we like here. It’s the repository of my desires, my plans. There’s a calendar, there’s an iPhone calculator, and so on. It’s even got a little decision maker that comes up, yes or no.

Of course, it’s not only trendy gadgets made by Apple that can become part of our minds. My humble non-touchscreen cell phone has freed my actual brain from memorizing phone numbers. Perhaps a little sadly, I’ve often referred to my own Facebook profile when asked about my favorite bands or movies. Even the paper notebook where I scribbled my math notes can be thought of as an extension of my mind. (Try that argument during an exam!)

When I shut down my personal blog during freshman year of college – goodbye to high school rambling – I made the leap to a less ambitious enterprise, a tumblr. In another way though, this was more ambitious because in the description I settled upon, my proclaimed goal was “Translating electrical impulses and molecular movements of the brain into words, images, and hypertext. Brain splatter, in byte-sized chunks!” I wanted to record the transient thoughts in my head – how successful I have been is debatable.

But it’s the effects of an extended digital mind that fascinate me. Through my delicious account, Google Reader, and tumblr, I’ve essentially outsourced the archives of my mind to an easily searchable, electronic database. This may sound a little cyborgian, but it has also totally exploded the number of things I can “think” about. The infallible ability to search and find – no digital tip of the tongue – makes these archives seem more powerful than my brain. And as technology becomes increasingly good at predicting what I like and making recommendations, it is becoming more than just an archive.

At the same time, I think there is still value to memorization, if not necessarily the brute force kind. The power of search eliminates the serendipity of a library or bookstore, and a search engine can’t make the initially random but ultimately meaningful connections that our brains do. It can’t synthesize multiple streams of information or make metaphors between unrelated concepts. (In the hours I’ve spent pondering physics problems, I’ve come up with far too many metaphors of physical laws describing social interactions.) Technology can augment our minds but, as it stands now, certainly cannot replace them.

Hat tip to Mind Hacks and The Frontal Cortex

Further reading:
How Google is Making Us Smarter – Discover Magazine

-Sarah Zhang

Unfriending, Pt. 2: Social Networks vs. Real Life

A few weeks ago, before winter break and also before certain unfortunate events took place, I wrote a post about “unfriending.” To my delight, this post elicited a number of extremely thoughtful comments. Since I think these comments touch on a lot of wider issues, I wanted to take a moment to address some common themes.

Reactions ranged from “social networks are too different from real life” to “social networks are too similar to real life” to “people should just get real lives.” The funny part is, each of those reactions has some element of truth. Taking them one by one:

1) Social networks are too different from real life.

The first comment came from Ben Turner, who noted that “It seems as though social networking sites shy away from providing real approximations of people’s real relationships with each other.” He then proposed two possible axes along which social networks deviated from the way real life works. The first was the idea that social networks establish strong binary labels, and in fact require them to organize their databases, in a way that real life just doesn’t. You’re not required to “confirm Jacob as a friend” before you can approach Jacob on a street corner to say hello. On some social networks, though, that’s exactly what happens. Likewise, the way you feel about someone changes dynamically over time; on a social network, you make a decision once (“I will let this person into my information sphere”) and then to change that decision requires a radical act. Ben’s suggestion was that, if we saw such simplistic binary representations of relationships and acquaintanceships in real life, things would get ugly very quickly. Somehow, social networks manage to get away with it…but that’s not to say that things don’t sometimes get ugly.

The second axis identified by Ben was the idea that social networks are actually invested in discouraging negative experiences, because negative social experiences cast a pall over the social network itself. (If a messy fight with one of my friends plays out over, say, Twitter, I might be disinclined to use Twitter for a while.) Especially for sites that use your friends’ updates to provide a constant stream of information (see: Facebook’s news feed, Twitter’s home page), the more “friends” the sites can draw from, the more information they can stream. And the more a site updates, the more people click “refresh,” with the nice side effect that the ads reload too, and oh! There’s another ad impression, which is definitely good for the bottom line of the social network. This leads to social networks being motivated to establish a low barrier to entry for “friendship,” leading to superficial “friending” and a loose net of online “friends” that may have little to no correlation with any real-world “friends.” For the same reason, most sites don’t notify you when you’ve been unfriended…you just have to go searching for clues yourself. The problem is that, even if the site doesn’t notify you, the concept of “friendship” is so embedded in the way a site works that the fact of unfriendship can’t easily be hidden. Social networks, according to Ben’s analysis, frustrate us because they seem to claim to approximate real life, and yet miss it by a long shot because of the way they’re structured.

2) Social networks are too similar to real life.

In the second comment, Britta Bohlinger suggested that “Now, what we seem to witness online in these days is perhaps nothing more dramatic than what happens offline…And yes, no matter how old you are: if things go wrong or you want to move on, unfriending might be a very healthy thing to do. It implies a moment of thinking, a rather conscious decision.” I definitely agree with this. Maybe what hurts about unfriending on social networks is that it so often does mirror what happens in real life. In the recent Whopper Sacrifice shenanigans on Facebook, where users could unfriend 10 friends (who would then be notified accordingly) and earn a Whopper, the ploy worked precisely because it dealt playfully with traditional expectations. If one of your friends got notice that she had been sacrificed for a Whopper, any sort of grave emotional reaction on her part would probably seem outsized. And likely, you’d add her back at the first opportunity.

In some ways, it’s the fact that these sites don’t notify you when you’re unfriended that makes the practice of unfriending so hurtful. Unfriending is a form of non-communication, which kind of precludes the possibility that it would be done playfully. Since it’s so secretive by design, any discovery of an unfriending has the attendant sting of betrayal. “You mean they didn’t even talk to me about it?” With the Sacrifice scheme, since the program let your friend know she’d been unfriended—in a somewhat ludicrous way—it became just another form of communication, out of the realm of passive aggression and betrayal and into the realm of teasing. In real life, people’s intentions and decisions aren’t always on constant display the way they are online, manifested in the user interfaces of social networks. But it wouldn’t hurt to be unfriended on social networks if it didn’t hurt to lose a friendship in real life.

3) People should just get real lives.

A little while later, b cut to the chase. “Being unfriended is like being dropped after having sex on the first date. What were you thinking in the first place? Post less and grow up more.” To which Ryan responded “I agree that removing a friend can be touchy and a sensitive thing, but its necessary sometimes. I had to cancel my time on twitter because of the time I was wasting following people’s tweets. I think that an occasional text message or phone call, or simply catching up with someone when I see them around, is the only way to really go for me. Too much to do.”

Are kids (and adults) overly obsessed with social networks to the detriment of their real-world social lives? This is a question I get asked a lot. Usually, my answer is “in some cases yes, but basically no.”

It’s definitely possible to become too preoccupied with the minute vagaries of friendships and acquaintanceships as represented on social networks. But in the end, they’re just communication mediums like any other. Text messages, phone calls, and Facebook messages are all ways of getting information across distance. And while each has its nuances, no one of them is inherently superior to the others. The difference with social networks might be primarily that they so transparently reveal the social graph, and your place within it. Seeing a visual representation of where you stand in relation to other people can spark our desire to be genuinely recognized, sought-after, admired, or just paid-attention-to. I strongly believe in the importance of facetime. But I also believe that vilifying one medium and elevating another might ultimately distract from the real issue. So many of my online friendships have transitioned into some of my richest real-life friendships; the challenge lies in making the transition from one realm to the other, safely and thoughtfully.

Because here’s the secret: there’s not “fake life” and “real life.” There’s just real life, running through multiple channels. We can switch between them to accomplish different things, but they’re all part of the same whole. Balancing kindness to others with the imperative for self-preservation is hard no matter what, and the answers are never easy. Understanding the ways that all these channels collide and collude is one of the great challenges of the digital age. Fortunately, the same truth that makes social networks compelling and frustrating is the truth that will save us from them and save them for us: as social creatures, we’re all obsessed with our social networks—online or off. It’s how we deal with that obsession, and integrate it into our lives in healthy ways, that matters.

Navigating Playgrounds of Choice: Working With Digital Distraction

It’s that time again: finals. While most colleges in the U.S. finished finals before winter break, Harvard’s a little slow. Though calendar reform is on its way, we have one last year of January finals.

As I’ve tried to focus on writing three separate papers over the past week, I’ve realized, once again, how distracting the Internet can be. I wrote a few months ago about the risks of “information overwhelm,” and I think that’s relevant here, too. I mentioned that “My friends and I often joke about the peril of Wikipedia—you fact-check one tiny thing, and before you know it you’re down the rabbit-hole.” As my paper-writing nights stretched to 4, 5, and 6 a.m., the rabbit-holes got ever more enticing.

I’ve developed a few strategies to help myself focus even in the face of difficult assignments and the infinite allure of the Internet. And so I was particularly happy to read, today, Cory Doctorow’s latest column on “Writing in the Age of Distraction.” Though Doctorow is focused on writing major things, like articles and novels, his strategies work just as well for calculus homework or chemistry problem sets.

My favorite out of the strategies he mentions is his suggestion to use text editors rather than word processing programs. He writes,

Kill your word-processor
Word, Google Office and OpenOffice all come with a bewildering array of typesetting and automation settings that you can play with forever. Forget it. All that stuff is distraction, and the last thing you want is your tool second-guessing you, “correcting” your spelling, criticizing your sentence structure, and so on. The programmers who wrote your word processor type all day long, every day, and they have the power to buy or acquire any tool they can imagine for entering text into a computer. They don’t write their software with Word. They use a text-editor, like vi, Emacs, TextPad, BBEdit, Gedit, or any of a host of editors. These are some of the most venerable, reliable, powerful tools in the history of software (since they’re at the core of all other software) and they have almost no distracting features — but they do have powerful search-and-replace functions. Best of all, the humble .txt file can be read by practically every application on your computer, can be pasted directly into an email, and can’t transmit a virus.

Ever since I started using OS X’s TextEdit program—it comes with the computer out of the box!—I’ve been really happy with how my writing has changed. Free from the distracting options of MS Word, I’m able to focus on the real work: writing. Not font-fiddling; not margin-adjusting. Writing.

I use TextEdit in concert with another strategy: timers. I’ll set a timer on my desktop, and negotiate with myself to work only on a given document for a certain segment of time. It might be 10 minutes, it might be 30, but no matter how long or short the time segment, something amazing happens reliably: I’m further along at the end of it than I was at the beginning.

The Internet is an infinite playground of choice. In order to focus, sometimes the best strategy is the simplest: remove a few choices. If all I can do is write, then all I do is write.

Cory Doctorow’s other suggestions for combating distraction are equally great. My favorite part about his column is that Doctorow loves the Internet, too. He writes that “the Internet has been very good to me. It’s informed my creativity and aesthetics, it’s benefited me professionally and personally, and for every moment it steals, it gives back a hundred delights. I’d no sooner give it up than I’d give up fiction or any other pleasurable vice.” Internet distraction isn’t an evil to be stamped out. It’s an environmental factor to be dealt with. Strategies like Doctorow’s can help us deal.

The Internet as a City: Thoughts on the Connected Brain

With finals coming up all too soon, I’ve been barricading myself in my room trying to study. But however successfully I limit myself to a physical space, there’s a 13-inch laptop screen in front of me lending access to a universe of infinite distraction online.

In one distracted online spurt, I came across this unexpectedly relevant article about the effects of overstimulation on the brain. Jonah Lehrer’s article, entitled “How the City Hurts Your Brain,” uses the urban setting to explore the brain’s cognitive functions in a dense, stimuli-filled environment. But isn’t the Internet a lot like a city? Vast expanses to explore, anonymity, a nebulous web of connections, and of course, the many possible distractions. Take this quote from Lehrer’s article and replace “flashing neon sign” with “flashing pop-up ad” and “cellphone conversation” with “IM conversation” – the analogy holds remarkably well.

A city is so overstuffed with stimuli that we need to constantly redirect our attention so that we aren’t distracted by irrelevant things, like a flashing neon sign or the cellphone conversation of a nearby passenger on the bus.

Of course, I’m hardly the first to point out a connection between the Internet and the city. In effect, digital natives are like urban dwellers, having to process and navigate a maze of information on a daily basis. What kind of effect does this have on our brains?

But the density of city life doesn’t just make it harder to focus: It also interferes with our self-control. In that stroll down Newbury, the brain is also assaulted with temptations…Resisting these temptations requires us to flex the prefrontal cortex, a nub of brain just behind the eyes. Unfortunately, this is the same brain area that’s responsible for directed attention, which means that it’s already been depleted from walking around the city. As a result, it’s less able to exert self-control, which means we’re more likely to splurge on the latte and those shoes we don’t really need.…Related research has demonstrated that increased “cognitive load” — like the mental demands of being in a city — makes people more likely to choose chocolate cake instead of fruit salad, or indulge in an unhealthy snack. This is the one-two punch of city life: It subverts our ability to resist temptation even as it surrounds us with it, from fast-food outlets to fancy clothing stores. The end result is too many calories and too much credit card debt.

Again, replace the material temptations of chocolate cake or high-heeled shoes in the above quote with “YouTube videos, inbox unread counts, or Twitter.” I think it’s especially interesting to examine the effects of digital overstimulation on the brain – indulge me here, I am a neurobio major – especially the brains of young digital natives, which are perhaps literally being shaped by the time we spend on the Internet.

Dr. Gary Small, author of the book iBrain: Surviving the Technological Alteration of the Modern Mind, calls the mental stress of dealing with digital distractions “techno brain burnout.” What are the neurobiological effects of this?

Under this kind of stress, our brains instinctively signal the adrenal gland to secrete cortisol and adrenaline. In the short run, these stress hormones boost energy levels and augment memory, but over time they actually impair cognition, lead to depression, and alter the neural circuitry in the hippocampus, amygdala and prefrontal cortex—the brain regions that control mood and thought. Chronic and prolonged techno-brain burnout can even reshape the underlying brain structure.

But the prospects need not be so sobering. Returning to our metaphor of the Internet as a city: a first visit to New York City is utterly disorienting – the cars, the people, the constant cacophony – but give newcomers a few months and they’ll be able to navigate the city like any seasoned urban dweller. And as digital natives, haven’t we essentially grown up in the “city”? Small also cites another study that suggests we can successfully adapt to the demands of the Internet/city.

According to cognitive psychologist Pam Briggs of Northumbria University in England, Web surfers looking for facts on health spend two seconds or less on any particular site before moving on to the next one. She found that when study subjects did stop and focus on a particular
site, that site contained data relevant to the search, whereas those they skipped over contained almost nothing relevant to the search. This study indicates that our brains learn to swiftly focus attention, analyze information and almost instantaneously decide on a go or no-go action. Rather than simply catching “digital ADD,” many of us are developing neural circuitry that is customized for rapid and incisive spurts of directed concentration.

Perhaps it’s apt to call digital natives savvy navigators of the web. Like navigating a large city, the ability to sift through volumes of information and pick out the most salient pieces requires the convergence of many streams of thought as well as quick but informed decision-making. And even in providing distractions, the Internet and the city both expose us to a broad swath of otherwise unavailable intellectual and cultural opportunities.

Related: While Googling some keywords in writing this post, I came across Steven Johnson’s excellent TED Talk entitled, “The Web and the City.” The talk was originally given in 2003, but was only recently posted online. The points he makes about the emergent properties of the web and the city are still valid, but it’s also interesting to see just how far the Internet community has evolved in only five years.

– Sarah Zhang

Laptopless, or: Adventures Without Milo

On Sunday morning, I woke up, blinked blearily, and opened my laptop. Milo—a 12″ PowerBook G4, from way back in mid-2005—has been known to be ornery, but he usually gets his act together after a few minutes of beach-ball death-spinning. Sunday, though, he hung for even longer than usual; impatient, and perhaps too trusting, I pressed the power button to turn the computer off, then pressed it again to turn it back on. And that is when my computer finally bit the dust.

There’s nothing like becoming laptopless during Paper-Writing Season. With a term paper due on Tuesday and no computer on which to write it, I felt bewildered and bereft. Even my well-intentioned plans to cheer myself up hit a series of formidable dead ends. Download an episode of Gossip Girl on iTunes and watch it in my dorm room? Can’t, need a computer. Listen to some music on last.fm? Can’t, need a computer. I spent most of Sunday trying to offload files from my ailing laptop and wondering what to do next. After a few rounds of increasingly drastic salvage attempts, I determined that Milo was really, truly a goner.

Over the next 48 hours, I would realize two things. One: I’m pretty dependent on computers. Two: computers are everywhere.

My dependence on computers is not particularly unusual, at least for a college student. As my failed cheer-up ideas demonstrate, I depend on my laptop because it’s my primary conduit to both work and entertainment. Though I spend plenty of time offline (though the definition of “plenty” is, perhaps, up for grabs), most of my activities depend on a computer, or the internet, at least peripherally. When I realized that I would have to write my paper on litigious women in colonial Latin America, laptop or not, it wasn’t just word processing I knew I’d miss. Most of my readings for the class were in PDF form, scattered across my hard drive. Though I was able to recover them in time to finish up my research for the paper, doing so involved lugging a gigantic external hard drive from library to computer lab to dorm room, and back again. I probably could have used a thumb drive, it’s true. But it didn’t occur to me in time, because that’s just not a situation I ever face—a disconnect between my place of work and my source of information.

In the library and computer lab, though, I acquainted myself with a fact of college life: because we depend so heavily on computers, there have to be contingency plans. Computers break all the time. In my year and a half working at the school’s personal computer clinic, failed hard drives and corrupt software have become part of the regular pulse of my daily life. So there are computers everywhere, because no one can do their work if they don’t have their tools. I camped out in a miniature computer lab on the fourth floor of my dorm for, by my count, seven hours on Tuesday morning. The night before, I’d settled down in the library with a loaner laptop for no less than eight hours; the library keeps them in metal file drawers behind the front desk, ready to be loaned out for library use to temporarily laptopless students. Since I could only borrow the laptop for three hours at a time, I’d trundle down the stairs every few hours, a mess of documents still open on the desktop, and lift up the laptop so its bar code could be scanned again.

In the past year, I’ve thought a lot about computer dependence. I even gave a presentation this summer—stick figures drawn in autoshapes—on the pervasive role of computing in the lives of college students. We use our cell phones as alarm clocks; we get out our laptops even before we get out of bed in the morning. I knew all of this, in large part because I experience it every day. Even so, I’ve never understood the pervasive role of computing in my life better than when, all of a sudden, it stopped pervading.

Yesterday, I got a new laptop. Milo is gone for good; I think it was probably his time. Part of me already misses him. He was a big part of my life.