In the process of writing these blog posts, I have noticed a few criteria that I believe should be considered when making legislation to regulate speech on the internet.

My opinion is that the law should provide basic guidelines and some concrete rules, but should allow each case to be considered individually. As we have seen in how the American Section 230 has been applied, no single concrete rule fits all situations. Thus, I believe the law should be left somewhat open-ended, with a margin for interpretation. Because the internet grows rapidly, as we have seen in how SNS services became popular in just a few years, we need to understand that in this realm it is difficult to predict what will happen even a year ahead. Rapid changes in technology could easily provide new ways for people to express their opinions and thoughts on the internet. Thus, instead of changing the law every few years to accommodate new technology and new forms of communication, it is more realistic to keep the law flexible and have judges decide on a case-by-case basis. Once some cases have been dealt with, courts would then be able to make decisions based on those precedents.

However, I do not mean that the law should be left completely vague. I do believe there should be a law made especially for speech on the internet, and that it should provide some insight into how such cases should be ruled. For example, it should give judges general guidelines to consider, such as the size and number of users of the website, the content of the website, and how the website is administered. Whether the owner of the website took preventive measures against such criminal acts should also be a factor. The law might, for example, require large websites and web services that collect user-generated content to draw up a deletion policy and post it on the website.

One point the law definitely has to make clear is the relationship between the users and the owner of a website. That is, does the owner have total control over what is posted on the website (and thus a responsibility to delete problematic posts), or should owners be free from such responsibilities (and immune from being sued)? The owner cannot always be held responsible for every post on a website, so the general direction should be toward decreasing the responsibility of internet service providers (in contrast to the owners of traditional media, such as the press and newspapers). However, it would go too far to say that service providers are immune from all responsibility, as that would make them lose any incentive to keep their websites free of infringing or defamatory content. As a basic rule, I think we should assume that the owners of a web service run and administer that service because they believe it will benefit its users and not harm society. To that extent, it is natural to expect the owners of web services to take action to minimize the negative effects their service may have on the internet community.

One idea would be to require web services to show who is in charge of maintaining the quality of the service's content. If the scale is too large for one individual, as it is with 2ch, the owner should do his best to find a few other members (volunteers) who can take responsibility for particular sections of the site. Although this mandatory display of responsibility probably would not be followed by every website on the internet, larger websites attract a large volume of traffic and thus face social pressure to correctly compile and display such a list. If trouble occurs on a website, the court could grant immunity to the owners and the people listed as responsible only if they have compiled the list and have followed reasonable steps to prevent the problems. This system would encourage larger websites (and thus the websites where problems are most likely to occur) to take more preventive measures against content that infringes the rights of others.

In terms of actually implementing the law, there may be cultural factors to consider when enacting these laws in individual countries. The cultural factors I am talking about are not so much differences in nationality or geographical location as the culture of the internet community that uses the website. The difference between the Japanese 2ch and the English AutoAdmit is not a difference of nationality, but of how the user community is committed to using that website. Websites with a strong base of committed users (in other words, websites with "geeky" users) would have to tailor their deletion policies and rules to the needs of their community and culture. Hence, it is crucial that courts take these aspects into consideration when making final judgments (for example, the culture of 2ch makes it harder for its owner to prevent infringing posts than it is for AutoAdmit).

In conclusion, I hope that future laws will protect both free speech and the rights of others by not being too specific about the details. By looking at past cases related to 2ch, and at the Japanese laws on speech on the web, I believe I have gained some insight into how such problems are dealt with in Japan, and how they could be solved.

During the process of writing this blog, I strongly felt that current information on these issues in Japan is not sufficiently covered in English-language material. It is therefore my hope that more research will be done comparing several countries, as the internet itself is largely a borderless medium.
