Abstract: Artificial Intelligence (AI) logic formalizes the reasoning of intelligent agents. With the rise of artificial intelligence and machine learning, questions arise about how to describe the reasoning patterns of autonomous robots and other intelligent systems, especially their capacities to reason in social interaction scenarios. When the intelligent system of a self-driving car is required to reason according to traffic rules in a social context, it can make judgments about whether or not its planned driving path will work. Reasoning about how to balance actions, desires, and goals with the legal and ethical norms of society has become a new challenge for the development of intelligent systems, and a new research program of social AI logic is therefore required to respond to such demands for social reasoning in intelligent agents. In this paper, we discuss how an argumentation-based AI logic can be employed to formalize important aspects of social reasoning. Besides reasoning about the knowledge and actions of individual agents, social AI logic can also reason about social dependence among multiple agents, together with the rights and permissions of those agents. We discuss four aspects of social AI logic. First, how rights represent relations between the obligations and permissions of intelligent agents. Second, how to argue about the right to know, a central issue in the recent discussion of privacy and ethics. Third, how a wide variety of conflicts among intelligent agents can be identified and (sometimes) resolved by comparing formal arguments, including fallacious arguments, which can also be represented and reasoned about. Fourth, how to argue about free movement for intelligent agents. Examples from social, legal, and ethical reasoning are used to highlight the challenges in developing social AI logic. The discussion of these four challenges leads to a research program for an argumentation-based social AI logic, contributing to the future development of AI logic. This paper also makes two technical contributions. First, we propose a systematic way to express epistemic rights in a modal language of AI logic, covering actions, obligations, permissions, and knowledge. The modal language is essential for the formal representation of social reasoning: it provides a simple and compact method to formalize the information available to the intelligent system, via the axioms and inference rules possibly employed in its reasoning. For instance, the language can capture the differences and interactions between ‘permission to know’ and ‘permission to do’, which are key to understanding the privacy debate in law and ethics. Based on these modal representations, we further develop an argumentation framework to reason about the various facets of social dependence; the result is a deontic epistemic action logic (DEAL) for social reasoning in AI. The second technical contribution is to model reasoning among social intelligent agents and to argue that argumentation-based AI logic is an effective formal tool of representation. It can clearly present the conflicts within complex social phenomena, for example by pointing out in which situations one’s sensitive or classified information may be known without violating legal and ethical norms. We further demonstrate how argumentation-based AI logic (DEAL) naturally explains each resolution of a social conflict by picking out the decisive elements of the argumentation procedure.
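To make the modal notation concrete, the following sketch shows one way the contrast between epistemic and practical permission could be written, using a permission operator P, an obligation operator O, a knowledge operator K_i, and an action operator do_i. These operators are standard in deontic epistemic action logics, but the exact syntax and the two bridging axioms below are illustrative assumptions, not the paper's official definitions.

$P\, K_i\, \varphi$ : agent $i$ is permitted to know that $\varphi$ (‘permission to know’).
$P\, \mathit{do}_i(\alpha)$ : agent $i$ is permitted to perform action $\alpha$ (‘permission to do’).
$P\, \mathit{do}_i(\mathit{read}_i(\varphi)) \rightarrow P\, K_i\, \varphi$ : a hypothetical interaction axiom saying that permission to read a document stating $\varphi$ carries permission to know $\varphi$.
$O\, \mathit{do}_j(\mathit{inform}_{j,i}(\varphi)) \rightarrow P\, K_i\, \varphi$ : one Hohfeldian-style reading of the right to know, relating agent $j$'s duty to inform agent $i$ to $i$'s permission to know.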
A conclusion an agent draws when reasoning about a social phenomenon may then need to be withdrawn when the hypothetical information or the inferential rules behind it are defeated. The case for protecting an individual’s privacy can similarly be dismissed when a stronger and more prudent legal or ethical reason attacks its background information or its reasoning procedure; protecting public safety is one such reason. The DEAL logic can thus be viewed as a computational and logical system that models explainable AI for social reasoning.
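How a defeated conclusion is withdrawn can be illustrated with a minimal Dung-style abstract argumentation framework. The sketch below is a hedged illustration in Python: the grounded-extension computation is the standard one, while the argument names (A_public_safety, B_privacy) and the single attack between them are hypothetical and not taken from the paper.

```python
# Minimal sketch of Dung-style abstract argumentation, assuming only the
# standard notions of attack and the grounded extension.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function from the empty set to a fixed point."""
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if every one
        # of its attackers is itself attacked by some argument in the extension.
        acceptable = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Hypothetical arguments: A argues that location data must be disclosed to
# protect public safety; B argues that the data must stay private; A attacks B.
arguments = {"A_public_safety", "B_privacy"}
attacks = {("A_public_safety", "B_privacy")}

print(grounded_extension(arguments, attacks))
# Expected output: {'A_public_safety'}
```

Because the public-safety argument is unattacked, the grounded extension accepts it and rejects the privacy argument, mirroring how an argumentation-based logic such as DEAL withdraws the privacy conclusion once the reasons supporting it are defeated.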
Huimin Dong, Réka Markovich (Luxembourg), Leendert van der Torre (Luxembourg). Developing AI Logic for Social Reasoning. Journal of Zhejiang University (Humanities and Social Sciences), 2020, 6(5): 20-.