Artificial Intelligence and Law

Daniel Seng (National University of Singapore)

Associate Professor of Law
Daniel Seng teaches and researches information technology law and infocommunications law. Between 2001 and 2003, he was concurrently the Director of Research, Technology Law Development Group at the Singapore Academy of Law. He graduated with firsts from NUS and Oxford, where he received the Rupert Cross Prize in 1994. He received his doctoral degree from Stanford Law School, where he used machine learning, natural language processing and big data techniques to conduct research on copyright takedown notices. While he was at Stanford, he was a non-residential fellow with the Center for Legal Informatics (CodeX).

November 26-30, 2018

600 Renwen Building
Renmin University of China
Time: 2-5 pm

Lecture 1: AI and Creativity (November 26)

Rules that recognize creations and inventions give their authors and inventors special privileges. As improvements in AI techniques and algorithms allow machines to progress beyond their role as mere tools in the creative process and to take on greater and more significant roles of their own, how, if at all, should society deal with such creations? Does AI upend humanistic concepts of creativity and discovery, or does it give us greater insights into ourselves as a creative species?

Readings:
Naruto v. Slater (9th Cir., Jul. 12, 2017)
Christie’s Sells AI-Generated Art for $432,500 As Controversy Swirls Over Creators’ Use of Copied Code (Oct. 29, 2018)

Discussion 1 (November 27) – 500 Renwen Building

Lecture 2: AI, Personal Data and Property (November 28)

The ad-driven business model on the Internet has led companies and institutions to collect and use increasingly large amounts of information about individuals for their private gain. Improvements in AI algorithms have made it increasingly easy to build extremely accurate profiles of individuals and predict their likes and dislikes. But should such data be owned by companies as their private property, or should it be seen as belonging to the individuals themselves? If the future is a data-driven society, how would we as a society feel if companies and institutions control data about us? What is the optimal treatment of personal data in society, given that such data also inherits characteristics of public and communal property?

Readings:
Scholars Have Data on Millions of Facebook Users. Who’s Guarding It? (May 6, 2018)

Discussion 2 (November 29)

Lecture 3: AI, Personality and Liability (November 30)

If an autonomous vehicle causes an accident or kills a pedestrian, it has been argued that AI machines cannot be liable because they are not moral agents and have no human values embedded in them. But is it even true that machines are, or must be, amoral? Recent research convincingly suggests otherwise. Yet the other extreme is to ascribe “personality” to AI machines in order to resolve liability issues. Arguably, a more introspective approach is necessary, one that explores the roots of liability and the role of laws and rules in resolving issues of liability.

Readings:
Teaching robots right from wrong (June 2017)
Who Humans Would Rather Kill in Autonomous Car Crashes (Oct 24, 2018)

This course is free, but we require all participants to register ahead of time. If you would like to register, please email the following address:

renmin.philosophy@gmail.com

In your email, be sure to include your (i) full name, (ii) program of study (B.A., M.A., Ph.D.), and (iii) institution.

We have funds to support travel and accommodation fees for select students in China who reside outside of Beijing. Please specify if you would like to be considered for a travel/accommodation grant, and include an explanation of how this short course relates to and will benefit your current research, along with your most recent CV.

The deadline for travel grants is November 1, 2018.