To Theo, No. 3

In principle, I agree with Theo’s statement that we have to regulate all or none of online speech.  A patchwork of regulation won’t work—things will inevitably slip through the cracks, and we’ll end up with a system with as much potential for abuse as we have now.  However, I’m not really certain how one goes about regulating online speech better than we do now.  As Section 230 stands, you can’t hold websites responsible for the content users post—they can’t be treated as “publisher[s] or speaker[s]” of content provided by people other than the site owner.  The result of that legislation is that websites don’t have to worry about libel as long as they aren’t posting the libel themselves.  That means that those who do want to take users to task for libel have to go after the individual users themselves.  And while that’s all very good for keeping websites free and open, it’s not so good for those affected by libel.

Why?  Let’s look at the infamous AutoAdmit debacle, where two female Yale Law students had their reputations defamed by scores of anonymous online commenters.  On that website, the two students were defamed at least as badly as Perez’s contractor, with false claims of sexual promiscuity and venereal disease being bandied about freely.  Because of Section 230, the founders of the site are largely immune from legal obligation, allowing them to keep up the libelous material for as long as they want.  The two law students have absolutely no bargaining power with the website—they can only target the individual posters, a task made rather difficult by the website’s lack of IP logging.  In this case, it seems clear to me that Section 230 proved immensely detrimental to the two students’ attempts to defend their reputations—they couldn’t get the website to take down the offending material, and they suffered the consequences.  For them, Section 230 was the enemy, a wall that kept them from achieving a just objective.

And yet, for most of us, Section 230 is a shield.  It prevents disgruntled, powerful individuals from taking down speech they don’t like.  It’s an essential part of keeping the Internet open, of ensuring the continued freedom of the Web.  It’s why the Internet is what the Internet is—without the immunity it promises websites, how many of them would dare to allow untrammeled user content?  There’d be significantly more self-censorship, with individual sites taking it upon themselves to regulate the comments users post.  For some sites, like YouTube or Reddit, that’d be an impossible task.  But even with all these benefits, the downsides are clear: defending yourself against all of that anonymous speech is nigh impossible; if you’ve got a dedicated group of malicious individuals, there’s not a whole lot you can do to stop them.  And now we’re talking about regulating the entirety of Internet speech with regard to libel—the question is: how?

If the current system of individual takedown notices and individual suits against individual users isn’t working (and it isn’t), where do we go from here?  If it’s impossible for YouTube to regulate its own site, how is the government supposed to regulate the entire Internet?  With our current infrastructure, the government is no better equipped to regulate online speech than it is to regulate offline speech; as Theo said, the Internet is just too big.  There’s too much content for efficient regulation, so we’re left with the system we have today: one where individual lawsuits are costly, ineffective, and rare.  In the majority of cases, the individual whose reputation is damaged doesn’t have the resources to go after the person doing the damage; instead, they have to just sit there and take it.

For many people, that’s not enough.  There has to be another way, right?  And yet I can’t see a way that won’t drastically tip the scales toward either greater freedom or greater restriction.  If we force websites to self-regulate, we risk an era of self-censorship, of a Web without any kind of inflammatory comment, a Web without freedom.  If the government tries to regulate it, it’ll fail, unless drastic measures like SOPA are taken.  Or we can simply throw up our hands and declare all online speech to be immune—the “nothing” option—which will only exacerbate the problems already in existence.  Given these options, I’m forced to conclude that the only way to continue regulating speech on the Internet is the system we have now: patchwork, largely ineffective, more a Band-Aid than a cure, but still better than the alternatives.
