Peer-produced “Code” (the book)
ø
Fascinating: Wiki company JotSpot announced that it is working with Larry Lessig on a peer-produced update of his 1999 classic Code and Other Laws of Cyberspace. The project overview reads as follows:
obtaining a better understanding of the information society and law’s role in it.
ø
David Weinberger and John Palfrey, among others, have posted impressive general (as opposed to specific) disclosure statements on their weblogs. Currently, I think that’s a good way to address some of the credibility issues related to weblogs. Probably I should follow suit, although this blog (and blogger) is certainly of much less interest than the two mentioned above.
In any event, let me play devil’s advocate for a moment: What lies down the road if we take general (as opposed to specific, case-by-case) disclosure seriously as an approach and compare it to areas of practice where we’ve been working with somewhat similar approaches? Do we face a future where disclosure statements (just imagine such statements from some of our highly networked colleagues!) get as long and complicated as package inserts for drugs, end user license agreements, or terms of service? Will we one day click on “I agree” boxes to accept disclosure statements before we read a blog? Or will we build aggregators that collect and analyze disclosure profiles of bloggers, where one can check boxes to exclude, for instance, the RSS feed of a philosopher who does consulting work on the side? If the importance of disclosure statements increases under such a scenario, are we likely to see, in the long run (as in traditional media law), legislation and regulation establishing disclosure rules and/or standards?
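To make the aggregator scenario slightly more concrete, here is a purely hypothetical sketch in Python. Everything in it is invented for illustration: the profile fields, the feed URLs, and the “exclude consultants” option do not refer to any existing service or standard.

```python
# Hypothetical "disclosure aggregator" sketch. All names and feeds below are
# placeholders invented for illustration, not real services or real bloggers.

from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class DisclosureProfile:
    """Machine-readable summary of a blogger's general disclosure statement."""
    author: str
    feed_url: str
    affiliations: Set[str] = field(default_factory=set)  # e.g. {"consulting"}


def filter_feeds(profiles: List[DisclosureProfile],
                 excluded: Set[str]) -> List[str]:
    """Keep only feeds whose authors declare none of the excluded affiliations."""
    return [p.feed_url for p in profiles if not (p.affiliations & excluded)]


profiles = [
    DisclosureProfile("philosopher-blogger", "https://example.org/phil.rss",
                      {"consulting"}),
    DisclosureProfile("law-blogger", "https://example.org/law.rss"),
]

# A reader who ticked an "exclude consultants" box would see only the second feed.
print(filter_feeds(profiles, {"consulting"}))
```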
ø
A French Court of Appeals ruled in favor of a student — sued by the movie industry — who downloaded copyrighted movies from the Internet, burned them onto CD-ROMs, and watched them with one or two friends. (The student admitted that one third of the content of his 488-CD collection was downloaded from the Internet.)
The Montpellier Court applied a provision of the French Intellectual Property Act, which, in essence, states that authors, once a work has been released, may not prohibit private and non-commercial performances carried out within the family circle, and cannot control the making of copies for the strictly private use of the copier and not intended for collective use. [Thanks to Cédric Manara for the pointer and the translation via the cyberlaw list.]
As far as I can tell, there was no circumvention of technological protection measures involved. In any event, a case to be included in a potential update of this report.
ø
Interesting post and pointers by CyberBug on Human Rights and the Internet — more to come, stay tuned. Check it out.
EDRI-gram, a bi-weekly newsletter about digital civil rights in Europe, draws our attention to a report published a couple of days ago by the German online newsletter Heise, according to which all major search engines in Germany (Google, Lycos Europe, MSN Deutschland, AOL Deutschland, Yahoo, T-Online, and t-info) have reached an agreement to filter content that is harmful to minors, which will make it much more difficult for German users to access such content. For this purpose, the search engines agreed to establish and run a self-regulatory organization that will block websites considered harmful based on a list of URLs provided by a government agency in charge of media content classification. According to the Heise report, the search engines are taking these steps because they fear that European legislators might become active if the harmful-to-minors problem isn’t addressed by the industry itself.
Among many interesting details: (1) The search engines are not allowed to make public which sites are filtered. (2) It seems unclear how content considered to be harmful to minors can be searched and accessed by adults under the regime. Again, clash of cultures. For a much earlier (2002) analysis of Google content filtering in Germany, see this report by Professor Jonathan Zittrain and former Berkmaniac Ben Edelman.
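For readers curious what such list-based blocking might look like, here is a minimal, purely illustrative Python sketch. Neither the Heise report nor EDRI-gram describes the actual implementation, so the blocklist entries and function names below are assumptions for illustration only.

```python
# Minimal sketch of list-based result filtering (assumption: this is NOT the
# search engines' actual implementation, which has not been made public).

from urllib.parse import urlparse

# Hypothetical blocklist standing in for the (non-public) list of URLs supplied
# by the classification agency; these entries are placeholders, not real sites.
BLOCKED_HOSTS = {"blocked-example.de", "another-blocked-example.com"}


def is_blocked(result_url: str) -> bool:
    """True if the result's host appears on the blocklist."""
    return urlparse(result_url).hostname in BLOCKED_HOSTS


def filter_results(results):
    """Drop blocked results before the result page is shown to the user."""
    return [url for url in results if not is_blocked(url)]


print(filter_results([
    "https://blocked-example.de/some-page",
    "https://example.org/harmless-page",
]))  # only the second URL survives
```

The sketch also makes the second point above concrete: the filtering happens on the search engine’s side, so the list itself never has to be shown to users.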
ø
As some of you might know by now, I’m co-teaching with John Palfrey a course called Internet & Society: Technologies and Politics of Control at Harvard Extension School. On tonight’s menu is a rather indigestible topic: harmful speech on the Net. John has the lead, and he starts where the last class ended: the shift from consumers to active users/creators — a shift many of us think is great. Tonight, however, John takes a different route and focuses on the downside and dark side of the new information environment.
The starting place is the fact that Internet speech is different. John makes three points:
* Net creates potential for aggregation of data where none was possible/economically feasible before
* Internet has made it easier (although it might get much more difficult in the future) to speak anonymously.
* Access becomes possible over great distance at any time – speech that is posted here can be heard around the world.
John now describes the growth of online communities back in the days (i.e., the mid-1990s) when ISPs offered not only access to the Internet, but were also in the business of creating online communities and providing content for their users. The communities were idea- and issue-focused and town-meeting-like, with a benevolent-dictator style of governance (i.e., the ability to exclude or terminate access).
Fade. John tells us the story of (Ken) Zeran v. America Online (in this context, we briefly discuss CDA sec. 230) and Jake Baker to illustrate how things, at some point, turned from good to tricky. The in-class discussion is now on how the results of the two cases can (if at all) be reconciled. Break.
Back to second half of class 6.
Oops, we get a cold call from JP. He asks his TAs and co-teacher how the First Amendment and its equivalents work in the U.S., Canada, and the EU. Tim gives us a great one-minute overview of First Amendment law in the U.S. and makes clear that it is primarily a right that protects against governmental viewpoint censorship. Courts, at the outset of a case, have to determine what standard of review applies to restrictions on free speech – strict scrutiny (e.g., for content-based restrictions on political speech) or lower standards of review.
Susie talks about Canadian law, which is similar with regard to the state-actor requirement. The presumption is free speech: everything is protected. The only exception is violent action (expression through action). But the government is allowed to restrict fundamental rights if doing so seems legitimate in a democratic society. There’s a five-step balancing test that looks, inter alia, into individual rights and state interests. The approach is different in Canada, but the principles are similar.
I now talk a bit about European approaches. My point is, I guess, that the U.S. approach to regulating harmful speech on the Internet is, roughly, “more speech,” i.e. a pro-speech approach. Europe has taken an alternative approach, i.e. an anti-hate approach. Measures have been taken at the national level (e.g. the German Penal Code), but also at the level of international law (e.g. the European Convention on Human Rights and, Internet-specific, the Convention on Cybercrime with its Additional Protocol). How can we explain these different approaches? There are many elements, e.g. historical facts (e.g. Nazi propaganda in Europe vs. the imprisonment of Americans during WWI for criticizing U.S. participation in the war); the political/cultural system (i.e. the relativistic conception of democracy in the U.S.); trust/distrust in courts; and different interpretations of individualism.
John now presents a couple of examples of what we can find on the Net – a slide entitled “The Good, The Bad, and the Ugly” (literally). The list includes the Nuremberg Files, Babes of the Web, free porn, free music, how to make a bomb, how to make/grow various drugs, etc. — The point is certainly that there are downsides to the shift from passive receivers to active users and creators.
John now introduces a new set of themes, asking what we cannot avoid on the web: spam (potentially a form of protected commercial speech), pornography, viruses and their effects, and advertising. Possible solutions: the resurgence of online communities reformulated around social networks, the ability to exclude, and the feeling of a living room rather than an information bazaar; social software such as Friendster and The Facebook; technological approaches such as filtering, pop-up blockers, spam blockers, and family-friendly user agreements.
We end with a list of issues on the current agenda (“What are the problems?”):
* US: 1996 – today: Protecting children online
* Companies: trade secrets (Apple, Diebold)
* Protecting citizens from seeing harmful information: religious; moral (porn); politics; drugs/alcohol; women’s issues.
Finally, John presents not-yet-released Berkman research (sorry, can’t blog: censorship) and, in different context, circulates the Grokster amicus submitted by HLS faculty members.
Interesting class, thanks to all. Have a safe ride home.
ø
Fascinating report and commentary. Here’s the announcement:
I’m sure we’ll hear much more about this here at the Berkman Center in the weeks to come.
ø
Rik Lambers has published on INDICARE a short-but-nice overview of the main arguments against the implementation of the broadcast flag — including some interesting observations on the possible export of the U.S. approach to other countries and continents.
ø
The EFF has made available, among other documents, the Grokster amicus brief submitted by Harvard Law School faculty members and Berkman Center directors Professor William Fisher, Professor Jonathan Zittrain, and John Palfrey.
ø
I’m co-teaching with John Palfrey a course at Harvard Extension School called Internet & Society: The Technologies and Politics of Control. Tonight, we will be discussing how digitization, in tandem with the emergence of electronic communication networks such as the Internet, has changed the ways in which we use media. More specifically, we will look at the shift from passive receivers of information to active users and creators. This class will be more of a conversation than a lecture.
As my students must have realized by now, I have a tendency to think and — worse — talk in abstract concepts (call it the European blind spot). Tonight, however, I won’t talk much about theoretical frameworks, I promise. Rather, I would like to present a couple of examples illustrating the above-mentioned shift from passive receivers to active users and discuss them in an open format. While looking at the examples, please keep the following questions in mind:
Okay, that being said, here are the examples that we will use in class tonight. Please note that I provide positive examples and nice stories, but — of course — also some problematic examples, a few of which you might find disturbing. (Again, we’ll discuss these examples in class and provide enough context to make sense of these illustrations; however, I want to include the examples here so that our distance students can easily access them.)
(1) Research and Knowledge
(2) News Reporting & Journalism
(3) Entertainment
(4) Social & Corporate Criticism
(5) Commerce
We’ll end the class with some big-picture questions, including:
The teaching team is looking forward to discussing these and other questions with you tonight.