Social Media and Congress

Apologies for the late post; it’s been a crazy week to say the least. But at least I got to have lunch with Prof. Waldo!

I wrote last week about online voting, so I’ll focus on something else this week. In the past few days we’ve seen representatives from several big internet companies testify before Congress. Facebook, Twitter, and Google sent lawyers to speak about Russian meddling in the 2016 presidential election.

A WIRED article summarizes the most revealing pieces of testimony from the hearing; I really recommend you all read it. Here is their introduction to the current situation:

“Russians have been conducting information warfare for decades,” said Democratic Sen. Mark Warner in his opening remarks. “But what is new is the advent of social-media tools with the power to magnify propaganda and fake news on a scale that was unimaginable back in the days of the Berlin Wall. Today’s tools seem almost purpose-built for Russian disinformation techniques.”

The hearing revealed new and startling insight into the ways in which Russians pitted Americans against each other, and reinforced the notion that social-media ads are only a portion of the threat from foreign actors. Senators also forced the tech execs to explain how they police content on their platforms in different parts of the world.

Much of the hearing consisted of congresspeople telling executives that it’s their responsibility to get misinformation on their platforms under control; Sen. Dianne Feinstein said:

“You’ve created these platforms, and now, they’re being misused, and you have to be the ones to do something about it. Or we will.”

What do you think? Should platforms be held responsible for misinformation campaigns, or is it a violation of free speech? Should platforms be punished for working with foreign governments, and how? See you Monday.

2 Comments

  1. Mike Smith

    November 4, 2017 @ 7:57 pm

I’ve read some of the testimony. I was struck during it by something you mentioned in your post: “Facebook, Twitter, and Google sent lawyers to speak about Russian meddling in the 2016 presidential election.” In other words, these technology companies sent lawyers to talk to Congress, whose members are themselves mostly trained as lawyers. Lawyers speaking to lawyers about technology. Maybe this is the right thing to happen in a Senate hearing, but I worry it doesn’t get us much further than Senator Feinstein’s comment. (I will note that Senator Warner is probably better informed than most senators given his business background. He was trained as a lawyer and has some technology experience, which is better than none, but is it enough?)

You’ve asked interesting questions, and I’ll give you my opinion. It’s worth what you’re paying for it. Every piece of technology I know has good and bad uses. I don’t think you can hold a technology company responsible for the bad uses if it created the technology for good purposes (i.e., it actually succeeds in selling the technology in a manner that achieves good). The courts have held technology companies responsible for successfully selling technology for bad purposes even when good purposes were also possible (e.g., Napster). And I’d expect such technology companies to be honest about the potential bad uses of which they know (but not to be responsible for things they didn’t imagine). By these principles, then, should the platforms in this case be held responsible? I’ll focus on Twitter and Facebook in my answer, since I’m not as familiar with the Google case. I think these two get dangerously close to, if not over, the principled line I’ve drawn. The companies are not just a wire over which bits flow. They’ve built specific types of communities and done so with the intention of making them an integral part of how we live our lives. Although I’m not a lawyer, it seems to me that that opens one up to a greater range of negligence claims.

  2. Jim Waldo

    November 4, 2017 @ 8:13 pm

    Great post with great questions; I’ll enter into the fray with a slightly different viewpoint.

    I agree that the creation of these communication platforms puts some responsibility on the tech companies to do something about the fake news and information warfare that the platforms made possible. But do we really want Facebook, Google, and Twitter to decide what can and can’t be said? Who can and can’t advertise? What is a proper use of their technology, and what is not? I’m not much happier with that position than I am with the one that says that they are just platforms and not responsible for any of the content.

One case in point: last week, people were outraged to find that Google was scanning entries in Google Docs and freezing some accounts when the content was determined to be objectionable. The practice came to light when a bug caused some innocuous accounts to be frozen, and the outrage centered on the fact that Google was scanning these documents at all.

We can’t have it both ways. We can’t insist that Google honor our privacy while also demanding that it police the use of its platform. I think this is a much thornier problem than most people realize, and the solution isn’t at all clear to me.
