Norms can be considered from different perspectives in artificial intelligence, the discipline that seeks to create computers and computer software capable of intelligent behaviour.
In artificial intelligence and law, legal norms are represented in computational tools so that they can be reasoned upon automatically. In multi-agent systems (MAS), a branch of artificial intelligence (AI), a norm is a guide for the common conduct of agents, easing their decision-making, coordination and organization.
Since most problems concerning regulation of the interaction of autonomous agents are linked to issues traditionally addressed by legal studies, and since law is the most pervasive and developed normative system, efforts to account for norms in artificial intelligence and law and in normative multi-agent systems often overlap.
With the arrival of computer applications in the legal domain, and especially of artificial intelligence applied to it, logic has been used as the major tool to formalize legal reasoning, and it has been developed in many directions, ranging from deontic logics to formal systems of argumentation.
The knowledge base of legal reasoning systems usually includes legal norms (such as governmental regulations and contracts), and as a consequence, legal rules are the focus of knowledge representation and reasoning approaches that automate and solve complex legal tasks. Legal norms are typically represented in a logic-based formalism, such as deontic logic.
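As a minimal sketch, such a norm might be encoded as a conditional deontic rule and checked against the facts of a case; the Norm class, rule format, and sample contract clause below are illustrative assumptions rather than any standard formalism.

```python
# A minimal sketch of representing legal norms as deontic rules and
# checking a set of facts against them. The rule format and the
# Norm class are illustrative assumptions, not a standard formalism.

from dataclasses import dataclass

@dataclass
class Norm:
    modality: str          # "obligation" or "prohibition"
    condition: frozenset   # facts that trigger the norm
    action: str            # the regulated action

def check_compliance(norms, facts, performed_actions):
    """Report which triggered norms are violated by the recorded actions."""
    violations = []
    for norm in norms:
        if not norm.condition <= facts:
            continue  # norm not triggered in this situation
        done = norm.action in performed_actions
        if norm.modality == "obligation" and not done:
            violations.append(norm)
        elif norm.modality == "prohibition" and done:
            violations.append(norm)
    return violations

# Example: a contract clause "if goods are delivered, payment is obligatory".
norms = [Norm("obligation", frozenset({"goods_delivered"}), "pay_invoice")]
facts = {"goods_delivered"}
print(check_compliance(norms, facts, performed_actions=set()))
# -> the obligation to pay is reported as violated
```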
Artificial intelligence and law applications using an explicit representation of norms range from checking the compliance of business processes and automatically executing smart contracts to legal expert systems that advise people on legal matters.
Norms in multi-agent systems may appear with different degrees of explicitness, ranging from fully unambiguous written prescriptions to implicit unwritten norms or tacit emerging patterns. Computer scientists’ studies mirror this polarity. Explicit norms are typically investigated with formal logics (e.g. deontic logics and argumentation) to represent and reason upon them, eventually leading to architectures for cognitive agents, while implicit norms are accounted for as patterns emerging from repeated interactions amongst agents (typically reinforcement learning agents). Explicit and implicit norms can be used together to coordinate agents.[1]
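As a toy illustration of the implicit side, the sketch below shows a convention emerging among simple reinforcement learners that repeatedly play a pairwise coordination game; the payoffs and learning rule are invented for illustration.

```python
# Toy illustration of an implicit norm emerging from repeated interaction:
# agents repeatedly play a two-option coordination game (e.g. which side
# of a corridor to walk on) and reinforce whichever choice succeeded.
# The payoff and learning rule are illustrative assumptions.

import random

N_AGENTS, ROUNDS = 20, 2000
# Each agent keeps a propensity weight for each of the two conventions.
weights = [{"left": 1.0, "right": 1.0} for _ in range(N_AGENTS)]

def choose(w):
    total = w["left"] + w["right"]
    return "left" if random.random() < w["left"] / total else "right"

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)   # random pairwise encounter
    ca, cb = choose(weights[a]), choose(weights[b])
    if ca == cb:                               # coordination succeeded:
        weights[a][ca] += 1.0                  # reinforce the shared choice
        weights[b][cb] += 1.0

# Typically most agents end up favouring the same side: an unwritten
# norm has emerged without any explicit prescription.
lefties = sum(w["left"] > w["right"] for w in weights)
print(f"{lefties}/{N_AGENTS} agents now prefer 'left'")
```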
Explicit norms are typically represented as deontic statements that aim at regulating the life of software agents and the interactions among them. Such a statement can be an obligation, a permission or a prohibition, and is often represented in some dialect or extension of deontic logic. By contrast, implicit norms are unwritten social norms, which usually emerge from the repeated interactions of agents.
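To make the explicit side concrete, the sketch below shows an agent filtering its candidate actions through a set of deontic norms, dropping prohibited actions and preferring obligatory ones; the Norm tuple and action names are invented for illustration.

```python
# A sketch of how an agent might use explicit deontic norms to filter
# its available actions; the Norm tuple and the action names are
# invented for illustration.

from typing import NamedTuple

class Norm(NamedTuple):
    modality: str   # "obligation", "permission" or "prohibition"
    action: str

def deliberate(candidate_actions, norms):
    """Prefer obligatory actions and drop prohibited ones."""
    prohibited = {n.action for n in norms if n.modality == "prohibition"}
    obligatory = {n.action for n in norms if n.modality == "obligation"}
    allowed = [a for a in candidate_actions if a not in prohibited]
    return [a for a in allowed if a in obligatory] or allowed

norms = [Norm("prohibition", "transmit_unencrypted"),
         Norm("obligation", "log_request")]
print(deliberate(["transmit_unencrypted", "log_request", "idle"], norms))
# -> ['log_request']
```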
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
In philosophy, practical reason is the use of reason to decide how to act. It contrasts with theoretical reason, often called speculative reason, the use of reason to decide what to believe. For example, agents use practical reason to decide whether to build a telescope, but theoretical reason to decide which of two theories of light and optics is the best.
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: the theoretical foundations and analysis of computation, the use of computer technology to aid logicians, and the use of concepts from logic for computer applications.
Deontic logic is the field of philosophical logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts. It can be used to formalize imperative logic, or directive modality in natural languages. Typically, a deontic logic uses OA to mean it is obligatory that A, and PA to mean it is permitted that A, where PA is defined as ¬O¬A.
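A minimal sketch of this in Python, assuming the standard possible-worlds reading in which OA holds at a world when A is true in every deontically ideal alternative; the worlds and valuation below are invented examples.

```python
# A minimal Kripke-style model for deontic logic: O A holds at a world
# iff A holds in every deontically ideal world accessible from it, and
# P A is defined as ¬O¬A, i.e. A holds in some ideal alternative.

worlds = {"w0", "w1", "w2"}
ideal = {"w0": {"w1", "w2"}}            # worlds considered ideal from w0
valuation = {"w1": {"pay_taxes"}, "w2": {"pay_taxes", "drive"}}

def holds(atom, world):
    return atom in valuation.get(world, set())

def obligatory(atom, world):
    """O atom: atom is true in every ideal alternative."""
    return all(holds(atom, v) for v in ideal.get(world, set()))

def permitted(atom, world):
    """P atom, defined as ¬O¬atom: atom is true in some ideal alternative."""
    return any(holds(atom, v) for v in ideal.get(world, set()))

print(obligatory("pay_taxes", "w0"))  # True: holds in every ideal world
print(obligatory("drive", "w0"))      # False: 'drive' fails in ideal world w1
print(permitted("drive", "w0"))       # True: 'drive' holds in ideal world w2
```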
A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge-based systems is an attempt to represent knowledge explicitly, together with a reasoning system that allows them to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine.
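These two features can be illustrated with a minimal sketch: a knowledge base of facts and if-then rules, plus a forward-chaining inference engine that derives new knowledge; the rules themselves are invented examples.

```python
# A minimal knowledge-based system: an explicit knowledge base of facts
# and if-then rules, and a forward-chaining inference engine that keeps
# applying rules until no new fact can be derived.

facts = {"contract_signed", "goods_delivered"}
rules = [
    ({"contract_signed", "goods_delivered"}, "payment_due"),
    ({"payment_due", "deadline_passed"}, "in_default"),
]

def forward_chain(facts, rules):
    """Derive the closure of the facts under the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# -> includes 'payment_due' but not 'in_default' (no 'deadline_passed' fact)
```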
Robert Anthony Kowalski is an American-British logician and computer scientist, whose research is concerned with developing both human-oriented models of computing and computational models of human thinking. He has spent most of his career in the United Kingdom.
Legal informatics is an area within information science.
The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans and executing those plans. A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
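A skeletal sketch of that separation, with deliberation committing to a plan and execution running it one step at a time; the class, plan library, and action names are illustrative rather than taken from any specific BDI platform.

```python
# A skeletal BDI control loop illustrating the separation between
# selecting a plan (deliberation) and executing the currently active
# plan step by step; all names here are illustrative.

class BDIAgent:
    def __init__(self, beliefs, plan_library):
        self.beliefs = set(beliefs)        # what the agent holds true
        self.plan_library = plan_library   # goal -> list of action names
        self.intention = []                # steps of the committed plan

    def deliberate(self, goal):
        """Select (not yet execute) a plan that achieves the goal."""
        self.intention = list(self.plan_library.get(goal, []))

    def step(self):
        """Execute one step of the active plan."""
        if self.intention:
            action = self.intention.pop(0)
            print("executing:", action)

agent = BDIAgent({"door_closed"}, {"enter_room": ["open_door", "walk_in"]})
agent.deliberate("enter_room")   # deliberation: commit to a plan
while agent.intention:
    agent.step()                 # execution: run the committed plan
```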
In philosophical logic, defeasible reasoning is a kind of reasoning that is rationally compelling, though not deductively valid. It usually occurs when a rule is given, but there may be specific exceptions to the rule, or subclasses that are subject to a different rule. Defeasibility is found in literatures that are concerned with argument and the process of argument, or heuristic reasoning.
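A toy sketch of a defeasible rule with a more specific exception, using the classic "birds fly, but penguins do not" example; the encoding is illustrative only.

```python
# Defeasible reasoning in miniature: the default rule "birds fly" is
# rationally compelling but not deductively valid, because a more
# specific exception ("penguins do not fly") can defeat it.

def flies(properties):
    if "penguin" in properties:   # specific exception defeats the default
        return False
    if "bird" in properties:      # default rule: birds normally fly
        return True
    return None                   # no applicable rule

print(flies({"bird"}))            # True, by the default rule
print(flies({"bird", "penguin"})) # False, the default is defeated
```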
In artificial intelligence, an intelligent agent (IA) is anything that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance through learning or by acquiring knowledge. Intelligent agents may be simple or complex: a thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.
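The thermostat example can be made concrete as a minimal perceive-act loop; the goal temperature and deadband below are invented values.

```python
# The thermostat as a minimal intelligent agent: it perceives the
# temperature and selects an action to achieve its goal setting.

def thermostat_agent(perceived_temp, goal_temp=20.0, deadband=0.5):
    """Map the current percept to an action in pursuit of the goal."""
    if perceived_temp < goal_temp - deadband:
        return "heat_on"
    if perceived_temp > goal_temp + deadband:
        return "heat_off"
    return "no_op"

for temp in (17.0, 20.2, 23.1):
    print(temp, "->", thermostat_agent(temp))
```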
Keith Leonard Clark is an Emeritus Professor in the Department of Computing at Imperial College London, England.
Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.
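A minimal sketch of a frame as a slot-filler structure whose defaults are inherited from a more general frame; the Frame class and slot names are invented for illustration.

```python
# A sketch of a frame: a structured record of slots with defaults
# inherited from a more general parent frame.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default."""
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

# A "stereotyped situation": a generic room, specialised to an office.
room = Frame("room", walls=4, has_door=True)
office = Frame("office", parent=room, has_desk=True)

print(office.get("has_desk"))  # True, from the office frame itself
print(office.get("walls"))     # 4, inherited default from the room frame
```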
In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviours in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and it should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.
A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law. Legal expert systems employ a rule base or knowledge base and an inference engine to accumulate, reference and produce expert knowledge on specific subjects within the legal domain.
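A minimal sketch of this rule-base-plus-inference-engine architecture, using goal-driven (backward-chaining) inference; the rules paraphrase an invented statute purely for illustration.

```python
# A sketch of a legal expert system's architecture: a rule base and a
# goal-driven (backward-chaining) inference engine. The rules below
# paraphrase an invented consumer-protection statute.

rules = {
    "entitled_to_refund": [{"contract_cancelled", "within_cooling_off"}],
    "contract_cancelled": [{"cancellation_notice_sent"}],
}
known_facts = {"cancellation_notice_sent", "within_cooling_off"}

def prove(goal, facts, rules):
    """Backward chaining: reduce a goal to sub-goals until facts are hit."""
    if goal in facts:
        return True
    return any(all(prove(g, facts, rules) for g in body)
               for body in rules.get(goal, []))

print(prove("entitled_to_refund", known_facts, rules))  # True
```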
The Knowledge Engineering and Machine Learning group (KEMLg) is a research group belonging to the Technical University of Catalonia (UPC) – BarcelonaTech. It was founded by Prof. Ulises Cortés. The group has been active in the Artificial Intelligence field since 1986.
Franciscus Petrus Maria (Frank) Dignum is a Dutch computer scientist. He is currently a Professor of Socially-Aware AI at Umeå University and an associate professor at the Department of Information and Computing Sciences of Utrecht University. Dignum is best known for his work on software agents, multi-agent systems and fundamental aspects of social agents.
Leendert (Leon) van der Torre is a professor of computer science at the University of Luxembourg and head of the Individual and Collective Reasoning (ICR) group, part of the Computer Science and Communication (CSC) Research Unit. Van der Torre is a prolific researcher in deontic logic and multi-agent systems, a member of the Ethics Advisory Committee of the University of Luxembourg, and founder of the CSC Robotic research laboratory. Since March 2016 he has been the head of the CSC Research Unit.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
Argument technology is a sub-field of artificial intelligence that focuses on applying computational techniques to the creation, identification, analysis, navigation, evaluation and visualisation of arguments and debates. In the 1980s and 1990s, philosophical theories of arguments in general, and argumentation theory in particular, were leveraged to handle key computational challenges, such as modeling non-monotonic and defeasible reasoning and designing robust coordination protocols for multi-agent systems. At the same time, mechanisms for computing the semantics of argumentation frameworks were introduced as a way of providing a calculus of opposition for computing what it is reasonable to believe in the context of conflicting arguments.
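As a concrete sketch, one such semantics, the grounded extension of an abstract argumentation framework, can be computed by repeatedly accepting every argument all of whose attackers are already defeated; the three-argument framework below is an invented example.

```python
# Computing the grounded extension of an abstract argumentation
# framework: iterate, accepting every argument each of whose attackers
# is itself attacked by an already-accepted argument.

arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}     # c attacks b, b attacks a

def grounded_extension(arguments, attacks):
    accepted = set()
    changed = True
    while changed:
        changed = False
        for arg in arguments - accepted:
            attackers = {x for (x, y) in attacks if y == arg}
            # an attacker is defeated if some accepted argument attacks it
            if all(any((z, att) in attacks for z in accepted)
                   for att in attackers):
                accepted.add(arg)
                changed = True
    return accepted

print(grounded_extension(arguments, attacks))
# -> {'c', 'a'}: c is unattacked, so it defends a against b
```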