Sue the Services, Not the People?

(Update: I’ve now linked to a copy of the article here at my site (reprinted with permission). Also, see Lawmeme for more here and here and Professor Solum’s post – see my additional comments at the very end of this post.)


Today’s Wall Street Journal features an article by Professor Douglas Lichtman, in which he argues that the RIAA should be allowed to sue only the P2P services, not individual users. Suing individuals, he argues, is inequitable: it randomly hits certain people with hefty penalties while leaving others untouched, and it benefits only the implicated copyright owners. Though that inequity is tolerable in some cases, it is not when assigning indirect liability would be more efficient. Lichtman suggests that services like KaZaA and Morpheus should be liable; read an expanded view of his indirect liability approach here.


Lichtman’s plan is not unlike Judge Posner’s argument in Aimster.  Under a negligence rule (which apparently is quite common in tort law), technology creators would be expected to restrain infringement when doing so is inexpensive, the level of infringement is high, and legitimate use would not be adversely affected.  It boils down to an economics argument, weighing the costs and benefits of a technology’s uses against how costly changing the technology would be. Unlike in Aimster (where the issue wasn’t relevant), Lichtman would force copyright holders to choose between suing direct and indirect infringers, even mandating that they sue only indirect infringers in some circumstances.


Though I do like his idea about choosing whom to sue, I’m not too fond of the negligence rule approach because it creates uncertainty, curbing technological innovation. If a judge were to take the general purpose functionality of these P2P programs seriously when considering future benefits, we’d end up with something much closer to the Sony rule, which I assume is not what Lichtman’s trying to do with the negligence rule.  By focusing more on present benefits, the rule will harm service providers in their infancy, when they lack a substantial userbase and lack non-infringing uses that, to a judge, seem significant. This will become particularly problematic when judges analyze whether altering the system would be disproportionately costly. Surely, the cost seems much lower when the service is new and lacks a full complement of non-infringing users. The key, then, is for content industries to target new technologies before they have a chance to seem valuable and worth protecting.


Under this rule, every technology creator has to know specifically how the service should be used for legitimate purposes and design around those specific purposes, because putting out general purpose technologies will leave a company open to huge damages.  In turn, users won’t be able to come up with new, innovative, legitimate uses of new technology because they’ll be strictly cabined within the uses the technology creator was thinking of. That, too, will hinder technology creation. (I suppose this is an extension of my general thinking on DRM and fair use, too. Sure, DRM can do a decent job of allowing for certain specified fair uses. But as far as allowing for user spontaneity, forget about it – we can get closer, but it’ll never capture what we have now.)


In an email conversation with Professor Lichtman, he suggested that this chilling effect is tolerable if we can ensure that new technologies actually “[get] out of the womb and onto the radar screen.” So, if you argue that we must protect P2P because of its substantial non-infringing uses, then what’s the problem with the negligence rule as long as we aren’t so harsh that we ignore the value of protecting legitimate uses? The uncertainty problem decreases if we can still protect those legitimate uses while filtering out the infringing ones. If the legitimate uses truly are substantial, they can stand for themselves; they don’t need to (and shouldn’t be allowed to) stand on the shoulders of infringement. Moreover, once the idea is out there, Lichtman argues, people will be able to continue to create versions that allow new legitimate uses.


That’s somewhat true, but I see a few problems.


Certainly, the creation of new legitimate uses won’t completely stop. But the number of people innovating will greatly decrease, because it will be limited to those who created a given technology or have the means to add features to it. In that way, the value of general purpose software – both to producer and consumer – is diminished.


Even if P2P itself can survive, many other technologies won’t be created because it won’t be worth the trouble to think of how to satisfy every possible content owner who might have a problem with one’s technology. And to the extent that adding a legitimate purpose risks allowing some more illegitimate uses, people will be hesitant to stray outside the lines of what’s already accepted.


In addition, this rule only works optimally when it is clearly possible to restrain infringement at very low cost and without much reduction in legitimate uses. P2P might provide such an example because, from what I know, it might be possible to filter certain files without significant cost to the P2P provider. Such filtering would undershoot (missing misspelled and purposely altered files) and overshoot (hindering certain fair uses of copyrighted files), but it could be done with some accuracy, and debating how bad the under- and overshooting is would be a narrower debate that could lead to some constructive solutions.
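To make the under/overshoot point concrete, here is a toy sketch (my own illustration, not anything from Lichtman’s article) of a naive title-blocklist filter of the sort a P2P service might deploy; all names and titles are invented:

```python
# Hypothetical title-blocklist filter, illustrating why such filtering
# both undershoots and overshoots. All names here are invented examples.

BLOCKLIST = {"hit song"}  # normalized titles a copyright owner wants filtered


def is_blocked(filename: str) -> bool:
    """Block a shared file if its normalized name matches the blocklist."""
    name = filename.lower().rsplit(".", 1)[0].replace("_", " ")
    return name in BLOCKLIST


# Undershoot: a deliberate misspelling slips past the filter entirely.
print(is_blocked("hitt_song.mp3"))  # False

# Overshoot: a fair-use parody that happens to share the title is
# blocked anyway, because the filter can't see the file's actual content.
print(is_blocked("hit_song.mp3"))  # True
```

The narrower debate the post describes would then be over how often each kind of error occurs and at what cost either side can tolerate it.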


Regardless, the problem comes when the cost of restraining infringement isn’t as clear and you have to distinguish P2P from other systems. Let’s compare Morpheus to AIM and Google. Morpheus and AIM both allow 1-to-1 file transfers, but only Morpheus allows for searching of indexes, so let’s say Morpheus has to filter but AIM only has to terminate people who are repeatedly infringing. Now, Google does do indexing, but they don’t do the same sort of file transfers and filtering for all sorts of files would be difficult because Google only indexes pages, not individual media files. So let’s say Google has to do notice and takedown but Morpheus still has to filter.


If we don’t draw these distinctions, then we really will cripple general purpose software.


If we do draw these distinctions, we still create more uncertainty. As every distinction becomes more and more fine-grained, technology creators will not know precisely how they should go about restraining infringement in their new creations. They won’t know what’s enough, so they’ll have to err continually on the side of caution, eliminating more features from their programs that could be used for infringing purposes. This fine-grained analysis is better than nothing (and, as I’ve said, would be helpful at the remedy stage using the Grokster analysis), but when we’re talking about having people redesign their software and create monitoring functions they don’t currently have, it could get very messy.


Yes, I know that a lot of the law has to do with balancing tests like this. All balancing tests create uncertainty. But, with technology changing rapidly, it seems like there will be far less certainty than in other contexts. I’m not sure judges are in a particularly good position to make this sort of analysis with each changing technology.


Lichtman does recognize the value of how Sony dealt with the uncertainty problem and suggests a partial solution: safe harbors. But that is a very limited solution that would likely encourage people to remain conservative in their technological designs.  The distinctions I made above are incredibly difficult to design safe harbors around, particularly when you’re doing so without knowing what technology will develop in the future.  Again, better than nothing, but not really adequate.


Furthermore, Lichtman suggests that in some situations, a regime like the AHRA’s might be useful and cites Neil Netanel’s plan. Here, I can agree with him.


I must concede that Lichtman does have some things right here – it’s a far clearer statement of this approach than Posner’s. He’s right that, if you can ensure that the technology gets out of the womb, having a narrower debate about what manageable solutions to infringement can be expected might lead to some constructive solutions. The narrower the cost-benefit analysis gets, the more likely it is to be accurate. Then again, the narrower we get, the more we might lose sight of potential uses we haven’t even imagined. So, I’m still rather skeptical of this approach.


Update: Professor Solum has commented on the article here. He points out that the P2P genie may be out of the bottle, because people can continue to use current versions of Gnutella clients (like Morpheus) forever. You can sue the creators and make them change new versions (or, where clients support it, push auto-updates), but that will have only a limited impact. Of course, that’s not necessarily relevant to whether this is a good liability standard. It is, however, relevant to the overall digital music debate. Professor Lichtman thinks that if we imposed the negligence standard and had more options like iTunes, we could see a large degree of change. If Solum’s right that the genie’s out of the bottle and we then infer that people will never have enough incentive to switch to iTunes-like services, then we should start considering more drastic solutions. If that takes the form of compulsory licensing, it seems Lichtman wouldn’t be entirely opposed – it just wouldn’t be his first solution.
