In response to John Post 4

I agree with much of what John just posted. Freedom of speech policy regarding the internet is immensely complicated because it is imperative that websites like Reddit, YouTube, and even Facebook not be held entirely responsible for the content that users post on their sites. It would be impossible for them to vet every single post or video, and attempting to do so would be potentially catastrophic to their functionality. Users do not want their content edited by some arbitrary editor who works for the website they are using; they want to maintain the integrity of their individual posts and wholly reject any sort of editing. Imagine if Facebook were required to check the veracity of each individual post. Section 230 is essential. However, that does not mean we cannot encourage websites to make every effort to monitor what people post. For instance, YouTube uses software to take down copyrighted material that is illegally posted on its site. In addition, the site makes every effort to accommodate users who believe that their content has been infringed upon by another user. YouTube has received criticism for removing content prematurely, but that is to be expected given its high volume of content.

While libelous material is very different from copyrighted material, perhaps websites like Yelp could implement a policy similar to YouTube's. For example, in the Perez case, if Yelp had notified Dietz about her review, and Dietz had been able to submit a formal request to Yelp claiming that the review contained false statements of fact, then Yelp could have properly evaluated the claim and taken down the post if it was in fact false. Currently, these websites have the option to do so, but very little incentive. While Section 230 is necessary to protect these websites, perhaps its scope does need to be limited so that certain types of websites have at least some incentive to police their own content.

Perhaps, then, user-contributed information websites should be divided into two large categories: those that host information on users' pages, and those that host it on subjects' pages. For example, Reddit and Facebook posts and profiles are created by users, and comments are directed onto the pages for those posts and profiles. A Facebook profile acts as the container for all information pertaining to that user; a Reddit post stores all comments regarding its content, but the original poster has complete control over what appears on their profile or post. On the other hand, Yelp and IMDb reviews function somewhat differently. They allow viewers to post on the pages of their subjects, but do not allow the subjects to have any impact on the content. Yelp reviews are not, and must not be, editable by the business being reviewed. This is essential because readers must be able to post negative reviews when they choose to, but it is also dangerous. Because the only party with editing power in this case is the host itself, Yelp or IMDb, the burden falls upon the host to edit when necessary. Until now, these sites have shirked this responsibility, claiming that they cannot edit users' posts without compromising the integrity of their sites.

The burden of editing needs to be reinforced for these types of sites. I am not suggesting that websites should be required to sift through each individual post before it goes up, but it should be possible to report false claims that appear on the Yelp page of your business, and Yelp should be inclined to act on behalf of the business. In our legal system, the traditional axiom is that we would rather let ten guilty people go free than convict one innocent person. This ideal should be extended to review policy: we should value removing libelous reviews far more highly than we fear accidentally removing legitimate posts. If a user's post is deleted because its material was believed to be false, the user might be upset, but chances are they will be largely unaffected. On the other hand, one false review can be so caustic that it ruins a business. Review sites should make it easy for businesses to report false statements, and should evaluate these reports critically, but with an inclination to help the business.

If a user posts a comment under their own name or account, on their own profile or blog post, then the website that hosts it should not have to be involved in the process, as is already customary. The legal system can still get involved, but the host should remain free from involvement. The difference arises when the comment appears on the page of someone else, who has no ability to remove a possibly defamatory statement; then someone needs to be responsible for removing it. Short of going to court, the only candidate I can see is the website itself. These websites must be incentivized to take responsibility.
