New Paper from Odlyzko

Don’t have time to comment right now, but here’s a new paper (via Politech) from the always fascinating Andrew Odlyzko.  It’s entitled “Pricing and Architecture of the Internet: Historical Perspectives from Telecommunications and Transportation.”  The basic theme: price discrimination hasn’t happened in telecom, but it has in transportation; price discrimination doesn’t fit with the Internet’s e2e architecture, though there are some economic arguments in favor of it; incentives to price discriminate will continue to rise, threatening e2e, but there is reason to believe that e2e will survive.

John Perry Barlow Has a Blog

Nuff said. (via DTM)

P2P Infringement, 512(i), and Verizon

On the pho list today, there was a discussion about how Verizon will affect the way colleges (and other ISPs) treat notices about P2P infringement – not subpoenas, but 512(c)(3)(A) notice-and-takedown-like letters.  It’s somewhat unclear, but my answer is that it generally won’t.


In reaching its interpretation of 512(h), the Verizon court resolved that notice-and-takedown cannot apply to P2P infringement because ISPs “can not remove or disable one user’s access to infringing material resident on another user’s computer because [ISPs do] not control the content on [their] subscribers’ computers” (10). This point might seem obvious, since 512(c)(3)(A) only applies to 512(b)-(d) providers, not to 512(a) ISP routing. However, because leaving a file in a P2P shared folder has a permanence that ostensibly (though not actually) mimics putting a file on the ISP’s servers, one could argue that a takedown can still occur; in that case, an ISP could still, as the RIAA argued, disconnect Internet access entirely to prevent access to the file.  To the contrary, the court affirmed that the statute distinguishes between files residing on an ISP’s computers and those residing on a user’s computer, and that it treats disconnecting Internet access as a separate, unrelated remedy.


Many universities (e.g., Brown) have ignored these distinctions, immediately disconnecting Internet access until a student verifies that a file has been removed from a shared folder.  So, doesn’t the court’s ruling render these policies unnecessary?


Unnecessary for 512(a), certainly, but for the 512(i) repeat infringer clause, that’s a matter of interpretation.  At most, Verizon would allow ISPs to forgo the temporary disconnection of service when infringement has first been alleged; however, given that 512(i) is still in play here, I doubt many ISPs – universities in particular – will change their policies.  Notice-and-takedown letters can still be relevant for 512(i).  Temporarily disconnecting access precludes someone from becoming a repeat infringer by ensuring that the file is not downloaded again and that more files are not added to the shared folder – at least, so the argument would go. In this way, temporary disconnection acts as a precautionary measure. Bad interpretation or not, 512(i) is so vague, and universities are so worried about students’ infringing, that, as a practical matter, I don’t think P2P policies will change much.  And, if they don’t, 512(i) will allow for a de facto application of 512(c)(3)(A) to 512(a) providers.


If I’m wrong, I’ll be pleased.  But I doubt Verizon fixes this particular problem.  Even if it does, this points to how the decision generally does not fix 512’s many problems.  Subpoenas can still be sent to 512(b)-(d) providers, notice-and-takedown generally remains unchanged – the list goes on.

Two Cool Items (warning: semi-vanity)

1.  CS Monitor article about Berklee Shares – I’m quoted somewhere along the way.


2.  Hope we get to read their blog, because this law school class apparently will be reading ours.

More on Verizon

News.com has a nice recap; Donna, as usual, has everything you’ll need.  More assorted thoughts:


Declan notes that an appeal might be difficult and unlikely, and Cary Sherman plays down how significant this will be to the RIAA.  Somehow, I’m not quite convinced that there won’t be further legal action here.  Maybe rather than an appeal, the RIAA will try to get another forum, but, as Ernest argues, that might be tricky.


I wonder how this will change the RIAA’s strategy.  Sure, they could go through with John Doe suits, but the pace of such suits and subpoenas will not be nearly as fast.  Will that make the suits less of a deterrent for downloaders?  Is the RIAA really willing to let these suits drag on?  Are they willing to go through with suits against more 12-year-olds?


Another interesting twist: if you can’t force Verizon to hand over the name, then you can’t force an anonymizing proxy to do it either, right? 


It’s worth reading all of the District Court’s opinion, too, but, if you don’t have time, just check out the RIAA’s brag page.  The key part of the lower court’s ruling was Judge Bates’ view that 512(h) covers all service providers as defined in 512(k)(1)(B), a definition broad enough to reach 512(a) conduit providers like Verizon.  That’s the premise from which everything else followed, and that same reasoning is what the appeals court called “silly.”


Again, I find it fascinating when opinions contrast in this way – when they see the same issue clearly, unambiguously, but oppositely.  Judge Bates, just like Ginsburg, claims to stick to the statute’s text and go no further, yet their opinions are night and day. 


BTW, returning to the other side of my post below, Judge Bates, too, could be seen as interpreting the statute to achieve a particular result.  Bates is incredibly dismissive of Verizon’s arguments and goes out of his way to say that the subpoena process will actually be good for users.  


Maybe neither of them is being result-oriented, maybe both are, maybe one is and one isn’t – that’s not really what I’m getting at.  What interests me is how the timeline lines up with the shift in interpretations.  It’s just a correlation, but it’s interesting.  What does it mean for there to be a trend in the law beyond trends in analytic and interpretive methods?


This is a similar question to the one I confronted when Judge Bates’ ruling on the constitutional issues came down, followed by Grokster:



“[The day of the Grokster decision] began with Frank wondering whether Judge Bates understood or cared about how digitization has impacted copyright. I’ve often wondered if progress in the copyfight would require waiting for a generation of judges and politicians that grew up with widespread use of personal computers and the Internet.  We need judges who have enough technical understanding to tackle these tricky issues and who understand that the copyfight has broader implications for speech, privacy, and innovation.”

Verizon Wins

Verizon is victorious.  Some quick thoughts here, and more above:


The first important thing to note is that the judgment was reached on statutory, not constitutional, grounds.  The latter were not discussed by the court, so they would still be live issues if the judgment were reversed en banc or by the Supreme Court.


Next, let’s remember (roughly) the timeline of this case.  The District Court made its judgment before the RIAA had started subpoenaing people, and the court seemed very unconcerned with the potential for abuse.  The appeals court rejected the stay before the RIAA had really gotten going. But at oral arguments, when the RIAA had started, the court seemed concerned.


That concern doesn’t translate into this judgment.  If anything, the potential for abusing the statute would probably be more present in their minds when considering the constitutional issues.  However, though Judge Ginsburg (the former Reagan nominee to the SC, right?) says that this judgment is all based on the reading of the statute, I get the feeling that there might be something else going on here. 


Maybe I’m getting that feeling just because I’m always suspicious of opinions that say that the text is unambiguous and, if you just follow the text strictly, the answer is obvious. Interpretations are rarely that easy. I agree with the court’s reading of the statute, but I’m not sure it’s as unambiguous as Ginsburg makes it out to be.  I’m going to have to go back and read the briefs and the district court’s opinion for comparison.


I’m not saying that this opinion is based on politics – it isn’t. But politics might have helped point in the right direction.


Finally, what happens next?  We should expect an appeal, but, in the meantime, that won’t do the RIAA much good.  This gives another push to Congress to step in.  If they open up 512 to debate, it will be on far different terms than when the statute was first passed.  I bet, if they open that up, it’ll be in the context of a broader rethinking of what they’re going to do to fix copyright’s current problems.  Those broader problems won’t necessarily be dealt with, but they’ll probably be considered and discussed, perhaps in a more serious way than the P2P hearings that have been happening over the last several months.

Quick Hits

1. Looks like Loudeye is trying to become the U.S. version of OD2. OD2 has, so far as I know, been very successful, dominating the European digital music market.  But what will Loudeye contribute to a market that already has plenty of other services?  What will Loudeye add aside from a Walmart/Coca-Cola/Virgin Records sticker on top of an iTunes look-alike?


2.  A couple of weeks ago, I mentioned Mark Lemley and R. Anthony Reese’s Stopping Digital Copyright Infringement Without Impeding Innovation.  They suggest three approaches for doing just that.  Though they say all three approaches should be used together, I doubt that will be the case.


The first approach I have not run across before.  Here’s how it’d work: say you’re a centralized P2P service.  You can elect to let an arbitration panel handle cases of alleged infringement.  Whatever the panel rules, you have to enforce. The panel would mainly decide cases that are straightforward infringements.


This sounds sorta like an extension of the 9th Circuit’s original ruling in Napster.  The court said that, when Napster received knowledge of an infringement, it had to block that infringement. At the same time, it repeated that that blocking must only be done within the system’s architecture. It recognized that Napster could not read the content of files and only had access to the file titles listed in its search index.  Why, then, was Napster shut down? Because, during the remedial stage, the district court basically forced Napster to do anything in its power, including adding filters that could read the MP3 files, to block all infringement. Napster could not block all infringement given the architecture of its system.  On appeal, the 9th Circuit did not disturb the district court’s ruling.


Lemley and Reese’s arbitration system would only lead to blocking of particular infringement as sorted out by the arbitration panel.  Furthermore, it would make sure that service providers do not have to judge infringement themselves, so close cases – ones that might or might not be fair use – can be sorted out by a neutral third party.


But this seems like a solution in search of a problem to some extent.  Yes, we’d be able to preserve innovation in centralized P2P tools and such. But we wouldn’t stop infringement on services that have no centralized control and can’t block infringers.  So, while this prong of attack seems helpful, I doubt it’ll ultimately be that useful.


To remedy this, the authors suggest using something like Netanel’s system.  But their implementation seems impractical – they would make it opt-in for services; decentralized P2P would still preserve its immunity under Sony.  Moreover, because the service would only apply to a narrow class of technologies, the levy would have to be high, thus forfeiting the benefit of the way Netanel (and Fisher) design these tax schemes.


So we’re left with one final prong: civil and criminal penalties for direct infringers.  My point here is not to discuss whether that should be pursued. Rather, I want to point out that the multipronged attack discussed in the article won’t really be that useful in stopping infringement. I agree that we should find a solution that does not impede innovation, but, as far as policy solutions go, it seems more likely that we will either have to fully pursue an ACS or rely on something like direct infringement suits.


3.  So, has anyone sorted out what this Philips DRM is all about? Here’s what I don’t get: if you know how to unlock the DRM, can’t you create a generic decryption utility?  What can Philips do to make sure that you only implement it as part of a software player or portable player?  And if Philips does force you to do so, is this DRM really “open”? Yes, they’re going to license it to whoever, but there will still be many restrictions on that license.


4.  From pho: Kevin Doran has an awesome digital media news site.

Look Over There!

I am busy gearing up for winter break (which, sadly, occurs BEFORE finals for me).  Donna has everything you need for now.  One link that is not to be missed: Napsterization.org, Mary Hodder’s newest incarnation (with, it appears, a little help from Eddan Katz). And check out Berkman’s recent ICANN research.

New/Old Music, DRM, & P2P Model

There are a couple of interesting elements in this LA Times article (via biplog) (see also News.com) about the Content Reference Forum.

1. This sounds like mediAgora with a couple of limbs chopped off. In mediAgora, there’s no DRM, and sharing of the actual files is allowed. It also sounds a lot like Altnet, as mentioned in the article, except that in Altnet you share the file itself. Here, it sounds like there’s still a bunch of DRM and you’re downloading from a central server, even though you might find the references on P2P. It takes advantage of P2P only inasmuch as Napster 2.0’s letting you look at other people’s songs is P2P. The Forum’s white paper seems to treat P2P as a marketing tool rather than a distribution tool.

2. This model seems to add flexibility, in that you can get a file that suits your system. But think about it: how does this change the situation of a Linux user who cannot play MS DRM or Apple FairPlay files? And isn’t it a little funny that flexibility in this context means that some central server must have, or be able to refer you to, 12 different versions of a song to fit all the different compatibility requirements?

And isn’t this the trusted computing dream? Take this example from the Forum’s white paper:

“For example if Joe’s friend Pierre in France clicks on the content reference, Pierre’s identity, location(France) and possibly hardware & software configuration will be sent to the Reference Service along with the CR.”
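As a rough illustration, the resolution step that example describes might work something like the sketch below. To be clear, everything here – the catalog entries, field names, and URLs – is my own invention for illustration; I have no idea what the Forum’s actual data model looks like.

```python
# Hypothetical sketch of a Content Reference Forum-style lookup: a "content
# reference" (CR) names a work abstractly, and the reference service uses the
# requester's region and DRM support to pick a concrete variant to point at.
# All names and URLs below are invented for illustration.

CATALOG = {
    "cr:song-123": [
        {"drm": "ms-wma",   "region": "US", "url": "https://store.example/song123.wma"},
        {"drm": "fairplay", "region": "US", "url": "https://store.example/song123.m4p"},
        {"drm": "ms-wma",   "region": "FR", "url": "https://store.example/fr/song123.wma"},
    ]
}

def resolve(cr, client):
    """Return the URL of the first variant matching the client's region and DRM support."""
    for variant in CATALOG.get(cr, []):
        if variant["region"] == client["region"] and variant["drm"] in client["drm_support"]:
            return variant["url"]
    return None  # no compatible variant for this client

# Pierre in France, on a system that can play MS DRM files:
pierre = {"region": "FR", "drm_support": ["ms-wma"]}
print(resolve("cr:song-123", pierre))  # prints the French WMA variant's URL
```

The sketch also makes the compatibility worry concrete: a client whose configuration supports neither DRM format – the Linux user above, say – resolves to nothing at all.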

Now go back and read Seth Schoen’s piece on trusted computing and remote attestation.

What does this all mean? I don’t really know. I’m glad that they’re thinking about P2P and how to take advantage of it. But this doesn’t really sound like a genuine step in that direction.

Diebold Report on KPFT

I think I’m very briefly quoted in a report by Pacifica Radio’s Houston affiliate, KPFT, regarding Diebold. I say I think because I haven’t actually been able to download the damn thing. If you’d like to try to do so, just click this link.

What Constitutes A “Digital Copyright” Solution?

Mary Hodder makes some interesting criticisms of Creative Commons licenses and ACSs.  I’ve heard variants on both accounts before, but I’m still not quite sure I get the concerns.  Mainly, the concern seems to be that these particular tools do not fit the need for “digital copyright.” A related concern is that these tools are “band-aids” – temporary solutions to complex problems – that will be counterproductive. 


I view CC and ACS as direct attempts to make copyright conform to the Internet.  ACS does not fight the Internet and the benefits of costless copying and distribution; instead, it tries to build around that while remunerating artists.   CC obviously takes a different tack, but the problems addressed are still roughly the same – it tries to shift the balance of rights such that people can have “true sharing” to a substantial extent.  Neither tool necessitates a specific way of dealing with derivative works, but both point the way towards more balanced directions (Fisher would allow derivatives but split compensation; for CC, no-derivs is only an option).


I suppose “band-aids” like CC could help entrench the current system by making people think that it solves all of copyright’s problems. But isn’t it more likely that CC’s “some rights reserved” approach will demonstrate to people the need for reform? Doesn’t it give people a temporary solution while pointing the way towards a new copyright? Criticisms that CC is “a second-best solution” seem to amount to complaining about it being a license and not a law.


That’s not really so with the ACS, which is often proposed as a new legal regime for copyright. Unlike with CC, the intent is for ACSs to actually be a “complete overhaul” of copyright – Fisher’s model is a significant restructuring of copyright, attempting to “reconcile how to give creators the right amount of … compensation … and the public (and other creators) the right amount of access to the public domain, sharing, fair use” et al.


Calling an ACS a band-aid thus seems to mean something different here. I do recognize that the model, if mandated by the government, could prove inflexible and ineffective, and in that sense it might be a kluge.  So, to an extent, I agree that patience and more consideration are needed. But, at some point, we can’t keep waiting for the solution, and we must attempt to make that overhaul within current constraints.


Bottom line: what, more specifically, makes these “analog solutions”?  What makes the CC type of “band-aid” ultimately bad?  And when is it time to say that we need to attempt a complete overhaul like ACS even if it might end up being merely a temporary and inefficient fix?

Christian Science Monitor Tech Blog

Am I the last person to find out about this?  Technorati doesn’t think so, so I figured it was worth a post.

ACS Meeting Report, Pt 1

Still quite busy, but wanted to start to formulate my thoughts on the Development of an Alternative Compensation System (ACS) for Digital Media in a Global Environment meeting. It’s going to take some time, because it was quite overwhelming – between dinner at pho and the 9-hour conference, all the incredible participants with a multitude of viewpoints (lawyers, artists, computer scientists, e-experts, market researchers, and on and on, from the US and numerous other countries), it was a lot of ACS.  And, as conference scribe, I spent most of my time just trying to digest what people were saying without interpreting or responding to it. I’m hoping someone posts some notes from it, so I can try to remember all the interesting points that caught my attention.  Professor Felten has a nice description of the discussion’s general flow.


I came out of the meeting a lot more interested in what comes in-between where we are now and a mandatory compulsory licensing model – that is, something like Professor Fisher’s voluntary co-op idea.  Right now, there is little chance the record industry will voluntarily offer blanket licenses.  Digital music services will continue to evolve, but it will be a long time before they attempt to realize the potential of the Internet and create a model that does not depend on controlling all copying and distribution.  So someone outside the current industry would have to step up to demonstrate the model’s potential.  The value in that demonstration, whether it leads to a government-mandated or market model, is significant.


Who could be such an intermediary?  As Professor Felten suggests, ISPs and universities seem like a fit, and they’re the most likely to eventually get some record industry interest (though I have leveled criticisms at university involvement before).  Another option could be a tech company oriented towards music and/or searching for content – someone who already has some infrastructure and a related business interest.  A third option could be a group interested in promoting and sustaining culture and creativity – someone who perhaps has a connection to the artistic community already and enough resources to take the project on.


I’m not sure what it will take to get a service off the ground or whether it will definitely be able to compete with free any better. What I do know is there’s a far better chance of this helping as compared to, say, Coca-Cola opening up yet another iTunes look-alike (they’re doing so in the U.K.).


Even though people were quite intrigued by this voluntary model, there was a sense that it would still require something like Professor Nesson’s technodefense as well as heightened enforcement against direct infringers.  Slowly and quietly, support at least for the latter option is coming out.  In the meeting’s discussion, this was rarely made explicit, but, between breaks, I heard several people support it.


I’m looking forward to what Donna has to say on the subject, because I’m not so convinced that direct infringer suits are really a solution, even if we limited the disproportionate penalties and privacy invasions of the current system.  A part of me looks at my limited experience speaking to students about their P2P file-sharing habits – many people I talk to have stopped or significantly cut back on file-sharing.  Even some who know that the RIAA is only going after uploaders refrain from downloading.  And then I think, if we were to get a no-DRM, no-technology-mandates, no-P2P-infringement future, aren’t the lawsuits worth the cost?  At the same time, I read articles about Earthstation 5 and question whether the lawsuits are a futile exercise.


That’s why, despite the many problems with a mandatory model that were discussed in the morning session, I still feel like we’re headed in that direction.  The problems make me very nervous about that direction, but they don’t persuade me that the mandatory model should be off the table or that it won’t happen.


One key thread that came up during both this conference and the Gartner/Berkman conference is that norms will factor in to a large extent.  The voluntary co-op model would comport with people’s norms, in that it would allow people to download and upload content on P2P, so long as that content was authorized by the voluntary ACS.  However, that is not a guarantee that it can compete with free.  People will have to want to buy into legit services and see artists get paid.  Even in the mandatory model, norms against cheating the system are needed in order to reduce that problem.  The effect of this norm won’t have to be comprehensive, but it will have to affect many more people than it does today.  Allowing people to copy and distribute content, ease of use, and special features can help compensate for weak norms and shift them.


So I began to think, what role, if any, could the government have in creating these norms? I’m not talking about education campaigns.  They’d need something that would be more pull than push.  Could the government create rewards for compliance?  Such a policy would distort the market, encouraging over-consumption of copyrighted goods.  But, in an already distorted market, would that be worth it?  I’m not trying to make a concrete policy suggestion here; rather, I want to point to a broader possibility: the government can do things other than create a mandatory ACS or harsher penalties for infringement in order to promote legitimate digital media services and, perhaps, particular types of models like ACSes.  Such solutions can focus on the consumer side or on the copyright holder side (for example, prohibiting or providing disincentives for using DRM).


More on all this later.  But, before I go, let me make one footnote: it’s very important to separate conceptually the ACS models found in Fisher’s book and Netanel’s article from other sorts of digital music services.  Napster 2.0 is not an ACS.  ACSes do not try to stop P2Pization of content acquired through the system; they are blanket licenses that remove restrictions on digital media use.  Napster is a DRMed streaming service. 

Alternative Compensation System Meeting

Tomorrow, I will be at the Berkman Center’s Development of an Alternative Compensation System (ACS) for Digital Media in a Global Environment meeting.  I’m not sure why the Berkman Center hasn’t posted a participants list, so I’ll ask permission and try to post a list later.  Perhaps I’ll also be able to post some sort of write-up, but I’m not quite sure about those constraints yet either. In any case, more on this later.

JHU Continues to Prohibit Posting of Diebold Memos

According to Asheesh Laroia, JHU never received a C+D regarding the Diebold memos.  Yet, JHU disconnected access to the files. Even after Asheesh told the University that Diebold had folded, the University still refuses to let him post the memos.  In a recent email, the University said that it “cannot allow its resources to be used in violation of copyright law, whether or not the holder of the copyright (in this case Diebold) plans to prosecute.”


Outrageous.  Expect more info on this later.
