
Our Corporate Lords

I don’t know about you, but I am amazed at the amount of mainstream media attention on topics related to our class. I can’t keep up with all the stories, and it feels significantly greater than at this time last year (or any other point in my lifetime). This is just an observation. I don’t know what it means.

As you might expect from this observation, I had a hard time picking a single topic for this week’s blog post. In the end, I thought I might try to meld ideas from the last two weeks: big data from the growing emphasis on the Internet of Things, and the public and private sectors’ obsession with AI everything. I hope the following makes sense and causes you to think. If it does, I’d love to hear your reactions, since I certainly don’t have the answers.

I’ll start with a Wired article I came across this week, although the article was written last February. It contains an interview with Yuval Harari, who had just finished his book titled Homo Deus: A Brief History of Tomorrow. I haven’t had a chance to read the book, but as the interview points out, it touches upon the topics of the last two weeks: artificial intelligence, big data, algorithms and hardware, and extending human capabilities through computational and networking enhancements (i.e., what Harari refers to as techno-humanism).

Techno-humanism is one of the two “new religions” that Harari believes will emerge going forward. The other is dataism, a fascinatingly complex topic. One example of dataism, mentioned in class and in the article, is Google Maps (or Waze), which amplifies our ability to get from point A to point B while also reducing the need to navigate between the same two points using older technologies (e.g., AAA maps, compasses, or the stars). The question is not whether this advance is a good thing or a bad thing, but whether the quality-of-life improvements it brings outweigh our increasing dependence upon technology.

Dataism is not a new topic. I can remember reading about it in 2013 in an editorial by David Brooks. Brooks begins the editorial by talking about our increasing desire to measure everything we do, the assumption that analyses based on data are objective decisions free from bias and ideology, and dataism as a reliable way to foretell the future. Today, we see even more of the first (e.g., biosensors and health apps), while we’ve learned that the second was a flawed assumption. And interestingly, the first and the third are leading Alphabet’s Sidewalk Labs to undertake the latest effort in planned cities.

The announcement about Toronto’s waterfront feels, to me, like technology corporations, especially those betting on AI, are quickly replacing religion and government as the biggest influence on our day-to-day lives. For many of us, the extensive and detailed view these corporations have into our online and physical lives is greater than anything the U.S. government gleans, or anything I’ve shared with my minister and the people my family regularly sees through our church.

So, connecting this train of thought back to the topic that consumed most of our class last week, I guess I’m less worried about the possibility of an AI singularity than I am about the creeping change described above. My view is certainly colored by my belief that we are quite good at creating things for specific tasks, but we’ve been less successful (certainly less successful than evolution) at creating technologies for general tasks. Just because we can construct lots of something, it doesn’t mean that those lots of something can work together to be more than the sum of their parts. Lots of cells do not make a human, which is a masterpiece of differentiation, coordination, and resilience.

In conclusion, I think our irrelevance to machines is farther off than we think, but the threat of other people/corporations manipulating our everyday lives to their benefit is closer than we think. While I think about the future, I’m going to keep an eye on the very real present.

1 Comment

  1. cindizzle4

    October 23, 2017 @ 2:30 am


    I have some thoughts about the mainstream media attention on topics related to our class. I am pessimistic about how media affects people’s opinions, because it seems to take advantage of people’s ignorance and manipulate them into having the seemingly mainstream, “correct” opinion.

I’ve heard about Harari’s book, and I read about Yuval Harari’s ideas in high school for a project I did on finding happiness. What I took away from his arguments was that happiness is affected by subjective expectations and objective conditions. When expectations are lower, it is much easier to feel happy. I found his arguments very intriguing and compelling, and in relation to technology, when we become accustomed to and dependent on technology, we build higher expectations for ourselves in terms of being efficient and productive. We stop appreciating amazing inventions like the internet and WiFi and instead get annoyed when the network lags for a few extra seconds. Although technology opens up possibilities for, perhaps, a greater/higher sense of happiness, that doesn’t mean it WILL make people happier; I believe that because it has opened up so many new possibilities, people are often less happy because they get too caught up in what they don’t have.

I totally agree that the threat of corporations manipulating our everyday lives is very pressing and that it is important to focus on technology’s present. While the future of technology and the singularity is important to think about, it can be easy to overlook the present when everyone is so fearful of, or invested in, the future.
