Is AI Good or Evil? Exclusive Interview with a Distinguished Research Fellow at the Institute of Law, Academia Sinica

From image-based license plate recognition in parking lots to real-time chatbots for online Q&A, from smart healthcare systems that identify lesions in X-rays to self-driving cars that can automatically follow other vehicles, AI technology has been rapidly advancing in various fields in recent years. While these new technologies are constantly changing society and daily life at an unprecedented pace, many people are concerned about the potential threats posed by AI and the ethical and legal issues it raises. These issues may be beyond the scope of existing laws and regulations. How should the law and public authorities intervene to ensure responsible technological development and prevent serious social problems? In this interview, we speak with Dr. Li Jianliang, a distinguished research fellow at the Institute of Law, Academia Sinica, to discuss the potential concerns arising from the rise of AI technology from a legal perspective.

AI Technology: A Double-Edged Sword Whose Effects Hinge on the “Human” Factor

“For a technology that benefits humanity, the law should assist and amplify its positive effects rather than focus solely on prevention.” Departing from the common impression that law is primarily about “prevention,” Li Jianliang, a researcher who has studied “legal issues in artificial intelligence” since 2018, emphasizes the immense potential of new technology when discussing AI. In his view, AI is essentially a neutral new technology that, when properly utilized, brings unprecedented benefits. Self-driving cars, for instance, eliminate problems like drunk driving and driver fatigue while reacting faster and maintaining sharper road awareness than human drivers, demonstrating significant positive impacts.

Of course, one of the crucial functions of the law remains regulation and prevention. When addressing AI issues, Li Jianliang stresses that AI is no different from any other emerging technology: most instances of technology-related wrongdoing are simply new manifestations of preexisting problems. “New technology may amplify preexisting malevolent behavior, making individuals more susceptible to harm, but the harm is still caused by the users.” He points to deepfake technology, which a certain internet celebrity used to create illicit videos; similar content existed before deepfakes emerged. The difference is that the offenses now exploit AI, taking advantage of its speed and the difficulty of detection, and consequently drawing more attention once exposed.

“However, fundamentally, these are crimes that existing laws already address.” Li Jianliang believes that not every crime or problem involving new technology requires new regulations; many existing laws suffice. Nonetheless, as human society gradually digitizes, law enforcement must also change significantly to cope with crimes that now span both the digital and physical realms, as society ultimately moves toward what he calls a “redwood forest society.”

The AI Technology Explosion: Challenging Various Fields to Rethink Ethics and Legal Standards

Of course, with the explosion of AI technology, the legal field faces evident challenges of its own. Currently, no jurisdiction has enacted legislation specifically governing the development of AI technology. According to Li Jianliang, legislating and regulating AI presents at least two significant practical difficulties. First, AI spans a wide range of fields, including healthcare, autonomous vehicles, robotics, and social media, making it difficult for any single set of regulations to cover all aspects. Second, it is hard to define the objects and scope of AI regulation without jeopardizing individual freedoms. Li Jianliang cites the example of social media platforms using AI algorithms to manipulate the information users receive; as such technology advances, it could be used to further manipulate users’ thoughts and behavior for malicious purposes. AI applications like these create significant inequalities and call for legal and governmental intervention. Yet determining the appropriate level and timing of intervention, without unduly restricting the development of related technologies, requires extensive discussion and research.


Considering the broad scope of AI technology, Li Jianliang suggests that in the end it may not be a single specialized law that regulates AI; rather, each field will develop its own restrictions and requirements, a gradual process that will continue until the legal framework is fully established. In the meantime, ethical guidelines for AI development become even more crucial. In 2019, the Ministry of Science and Technology (since restructured as the National Science and Technology Council) unveiled the “Artificial Intelligence Research and Development Guidelines,” which set out eight target directions as an official reference for AI research. These guidelines provide only general directions, however, and each field must explore further how to implement the specifics.


In response, Li Jianliang proposes a problem-oriented approach to regulation: identify the conditions under which AI threatens fundamental principles, and establish regulations accordingly. For example, if AI technology must not violate “human dignity,” discussions should pinpoint the situations in which AI may endanger human dignity and establish corresponding limitations and safeguards. Li Jianliang emphasizes that each principle must be combined with the characteristics of its field to determine what can be considered trustworthy.


In addition to legislative efforts, ongoing discussions and continuous improvement in various fields will be the only path for humanity to responsibly harness the benefits of AI technology in the future.

Source:

Interview with Li Jianliang, Distinguished Research Fellow at the Institute of Law, Academia Sinica.

Artificial Intelligence Research and Development Guidelines.

Full article from Sci-Tech Vista by Chen Ting-wei.
