Cyberlaw Instructors File Comment in WIPO AI and IP Debate

Artificial intelligence is making real waves. With machine-learning programs teaching themselves to walk, beating humans at their own games, and even generating convincing Rembrandt lookalikes, lawmakers and policymakers are looking to the horizon to figure out what the present-day renaissance of AI means for the future of intellectual property. To that end, Jessica Fjeld and Mason Kortz of the Cyberlaw Clinic just responded (pdf) to a call for comments by the World Intellectual Property Organization (WIPO) on an issues paper regarding AI and its implications for IP. The comment focuses primarily on patent, copyright, and the policy implications of AI.

In the patent realm, who bears liability when an AI-generated invention infringes an existing patent, or when a human-created invention infringes an AI-generated one? Should the law recognize AIs as patent owners, and if so, what mechanisms should the law consider for enforcing their rights? Recognizing AIs as inventors has broader implications for patent law as a whole. For example, patent law uses a test of "non-obviousness" to determine whether to grant a patent for an invention. The comment considers how the decision to treat AIs as inventors would affect that standard, and whether patent law should create separate legal tests of non-obviousness for human versus AI inventors.

Regarding copyright, the comment calls for policymakers to give greater emphasis to questions surrounding the creative status of AI-generated works. These include whether a given creator's humanity (or lack thereof) should affect their eligibility for moral rights. The comment also considers the rise of "deep fakes" and calls for greater emphasis on finding the best approach to regulating AI-assisted fabrications. Copyright already offers a means of enforcement by giving people falsely represented by deep fakes standing to sue, but it is worth considering whether alternative mechanisms, including the possibility of creating a new and distinct body of law for deep fakes, would be both more effective and better suited to the underlying policy interest in fighting disinformation.

Andrew Mettry (HLS JD ‘21) and Jonathan Iwry (HLS JD ‘20) — both students in the Spring 2020 Cyberlaw Clinic — drafted the comment, working with Jessica Fjeld and Mason Kortz on the Clinic team.
