Distributed artificial intelligence (DAI), also called Decentralized Artificial Intelligence,[1] is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to, and a predecessor of, the field of multi-agent systems.
Multi-agent systems and distributed problem solving are the two main DAI approaches. There are numerous applications and tools.
Distributed Artificial Intelligence (DAI) is an approach to solving complex learning, planning, and decision-making problems. It is embarrassingly parallel, and thus able to exploit large-scale computation and the spatial distribution of computing resources. These properties allow it to solve problems that require the processing of very large data sets. DAI systems consist of autonomous learning processing nodes (agents) that are distributed, often at a very large scale. DAI nodes can act independently, and partial solutions are integrated by communication between nodes, often asynchronously. By virtue of their scale, DAI systems are robust and elastic, and by necessity, loosely coupled. Furthermore, DAI systems are built to be adaptive to changes in the problem definition or underlying data sets, since their scale makes redeployment difficult.
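The asynchronous integration of partial solutions can be illustrated with a short sketch. The example below is purely illustrative and not tied to any particular DAI framework: several worker agents each process a disjoint shard of a data set in parallel, and a coordinator merges their partial results in whatever order they arrive.

```python
# Illustrative sketch only: autonomous worker "agents" compute partial
# solutions on disjoint data shards; a coordinator merges the results as
# they arrive, without waiting for the agents to finish in any fixed order.
from concurrent.futures import ProcessPoolExecutor, as_completed

def agent_solve(shard):
    """Hypothetical local task: each agent sums its own shard of the data."""
    return sum(shard)

def coordinate(dataset, n_agents=4):
    # Split the data set into one shard per agent (spatial distribution).
    shards = [dataset[i::n_agents] for i in range(n_agents)]
    total = 0
    with ProcessPoolExecutor(max_workers=n_agents) as pool:
        futures = [pool.submit(agent_solve, shard) for shard in shards]
        # Partial solutions are integrated asynchronously, in completion order.
        for future in as_completed(futures):
            total += future.result()
    return total

if __name__ == "__main__":
    print(coordinate(list(range(1_000_000))))  # 499999500000
```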
DAI systems do not require all the relevant data to be aggregated in a single location, in contrast to monolithic or centralized Artificial Intelligence systems which have tightly coupled and geographically close processing nodes. Therefore, DAI systems often operate on sub-samples or hashed impressions of very large datasets. In addition, the source dataset may change or be updated during the course of the execution of a DAI system.
In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents.[2] Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into multi-agent systems and distributed problem solving.[3] In multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized.
The objectives of Distributed Artificial Intelligence are to solve the reasoning, planning, learning and perception problems of artificial intelligence, especially if they require large data, by distributing the problem to autonomous processing nodes (agents). To reach the objective, DAI requires:
There are many reasons for wanting to distribute intelligence or cope with multi-agent systems. Mainstream problems in DAI research include the following:
Two types of DAI have emerged: multi-agent systems and distributed problem solving.
DAI can apply a bottom-up approach to AI, similar to the subsumption architecture as well as the traditional top-down approach of AI. In addition, DAI can also be a vehicle for emergence.
The challenges in Distributed AI are:
Areas where DAI has been applied are:
DAI integration in tools has included:
Notion of Agents: Agents can be described as distinct entities with standard boundaries and interfaces designed for problem solving.
Notion of Multi-Agents: A multi-agent system is defined as a network of loosely coupled agents that work together as a single entity, like a society, to solve problems that an individual agent cannot solve.
The key concept used in distributed problem solving (DPS) and multi-agent-based simulation (MABS) is the abstraction called software agents. An agent is a virtual (or physical) autonomous entity that has an understanding of its environment and acts upon it. An agent is usually able to communicate with other agents in the same system to achieve a common goal that one agent alone could not achieve. This communication system uses an agent communication language.
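The following minimal sketch illustrates the message-passing idea only; the `Message` fields loosely mimic speech-act style communication and are not an implementation of any standardized agent communication language such as FIPA ACL. The agent names and behaviours are invented for the example.

```python
# Minimal illustrative sketch of agents cooperating via message passing.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    receiver: str
    performative: str   # e.g. "request", "inform"
    content: object

class Agent:
    def __init__(self, name, mailbox):
        self.name = name
        self.mailbox = mailbox           # shared dict: agent name -> pending messages
        self.mailbox[name] = []

    def send(self, receiver, performative, content):
        self.mailbox[receiver].append(Message(self.name, receiver, performative, content))

    def handle(self, message):
        # Hypothetical behaviour: answer a "request" with an "inform".
        if message.performative == "request":
            self.send(message.sender, "inform", f"result of {message.content}")
        elif message.performative == "inform":
            print(f"{self.name} received: {message.content}")

    def step(self):
        pending, self.mailbox[self.name] = self.mailbox[self.name], []
        for message in pending:
            self.handle(message)

mailbox = {}
alice, bob = Agent("A", mailbox), Agent("B", mailbox)
alice.send("B", "request", "task-1")
bob.step()     # B answers the request with an inform
alice.step()   # prints: A received: result of task-1
```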
A first classification that is useful is to divide agents into:
Well-recognized agent architectures that describe how an agent is internally structured are:
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
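As a hedged illustration of the production-rule style mentioned above, the toy forward-chaining loop below derives new symbolic facts from rules whose premises are satisfied; the rules and symbols are invented for the example and do not come from any particular symbolic AI system.

```python
# Toy forward-chaining production system, for illustration only.
# Rules are (premises, conclusion) pairs over human-readable symbols.
rules = [
    ({"bird"}, "has_feathers"),
    ({"bird", "can_fly"}, "builds_nests"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules whose premises are satisfied until no rule fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"bird", "can_fly"}, rules))
# {'bird', 'can_fly', 'has_feathers', 'builds_nests'}
```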
In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency.
Collaborative intelligence characterizes multi-agent, distributed systems where each agent, human or machine, is autonomously contributing to a problem solving network. Collaborative autonomy of organisms in their ecosystems makes evolution possible. Natural ecosystems, where each organism's unique signature is derived from its genetics, circumstances, behavior and position in its ecosystem, offer principles for design of next generation social networks to support collaborative intelligence, crowdsourcing individual expertise, preferences, and unique contributions in a problem solving process.
Soar is a cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University.
A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning. With advancements in large language models (LLMs), LLM-based multi-agent systems have emerged as a new area of research, enabling more sophisticated interactions and coordination among agents.
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
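A minimal sketch of the blackboard control cycle is given below; the problem, knowledge sources, and preconditions are hypothetical toy examples, chosen only to show how specialists contribute partial solutions when their constraints match the blackboard state.

```python
# Illustrative blackboard sketch: specialist knowledge sources watch a shared
# blackboard and contribute a partial solution when their precondition holds.
blackboard = {"problem": "greet the world", "words": [], "solution": None}

def propose_greeting(bb):
    if not bb["words"]:                      # precondition: nothing proposed yet
        bb["words"].append("hello")
        return True
    return False

def propose_target(bb):
    if "hello" in bb["words"] and "world" not in bb["words"]:
        bb["words"].append("world")
        return True
    return False

def assemble_solution(bb):
    if len(bb["words"]) == 2 and bb["solution"] is None:
        bb["solution"] = " ".join(bb["words"])
        return True
    return False

knowledge_sources = [propose_greeting, propose_target, assemble_solution]

# Control loop: keep activating knowledge sources whose preconditions match
# the current blackboard state, until none of them can contribute further.
while any(ks(blackboard) for ks in knowledge_sources):
    pass

print(blackboard["solution"])   # "hello world"
```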
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. These formalized models can be used to further refine comprehensive theories of cognition and serve as the frameworks for useful artificial intelligence programs. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990.
In computer science multi-agent planning involves coordinating the resources and activities of multiple agents.
In intelligence and artificial intelligence, an intelligent agent (IA) is an agent that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or acquiring knowledge.
In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988, and is synonymous with the more modern term dynamic planning.
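The sketch below shows reactive action selection in miniature: a priority-ordered list of condition-action pairs (a stand-in for a stored reactive plan) maps the current percept to exactly one next action per time step. The robot-vacuum scenario and predicate names are invented for illustration.

```python
# Sketch of reactive action selection: a fixed, priority-ordered "reactive
# plan" maps the current context to exactly one next action per time step.
reactive_plan = [
    # (condition on the current percept, action) -- highest priority first
    (lambda p: p["battery_low"],    "return_to_dock"),
    (lambda p: p["obstacle_ahead"], "turn_left"),
    (lambda p: p["dirt_detected"],  "vacuum"),
    (lambda p: True,                "move_forward"),   # default behaviour
]

def select_action(percept):
    """Return a single next action based only on the current context."""
    for condition, action in reactive_plan:
        if condition(percept):
            return action

print(select_action({"battery_low": False, "obstacle_ahead": True, "dirt_detected": True}))
# "turn_left": the higher-priority obstacle rule wins over vacuuming
```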
Norms can be considered from different perspectives in artificial intelligence to create computers and computer software that are capable of intelligent behaviour.
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
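A minimal sketch of the hierarchical idea, assuming an invented factory scenario: controllers form a tree, and a command issued at the root is refined and delegated down to the leaf devices.

```python
# Sketch of a hierarchical control system: controllers form a tree, and a
# command issued at the root is propagated down toward the leaves.
class ControlNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def command(self, order, depth=0):
        print("  " * depth + f"{self.name}: executing '{order}'")
        for child in self.children:
            # Each level may refine the order before delegating it downward.
            child.command(f"{order} (delegated by {self.name})", depth + 1)

plant = ControlNode("plant_supervisor", [
    ControlNode("line_controller_1", [ControlNode("robot_arm_A"), ControlNode("robot_arm_B")]),
    ControlNode("line_controller_2", [ControlNode("conveyor")]),
])

plant.command("start production")
```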
Grammar systems theory is a field of theoretical computer science that studies systems of finite collections of formal grammars generating a formal language. Each grammar works on a string, a so-called sequential form that represents an environment. Grammar systems can thus be used as a formalization of decentralized or distributed systems of agents in artificial intelligence.
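The toy sketch below, with invented rewrite rules, shows the basic mechanism: two component grammars take turns rewriting a shared string (the sequential form), so the result is generated jointly rather than by any single grammar.

```python
# Toy cooperating grammar system: two component grammars take turns rewriting
# a shared string (the "sequential form"). The rules are invented examples.
grammar_1 = {"S": "aXb"}        # component 1 introduces the outer structure
grammar_2 = {"X": "ab"}         # component 2 finishes the inner part

def apply_once(grammar, form):
    """Apply the first applicable rule of one component grammar, if any."""
    for lhs, rhs in grammar.items():
        if lhs in form:
            return form.replace(lhs, rhs, 1), True
    return form, False

form = "S"
components = [grammar_1, grammar_2]
changed = True
while changed:
    changed = False
    for grammar in components:   # components take turns on the shared form
        form, applied = apply_once(grammar, form)
        changed = changed or applied

print(form)   # "aabb" -- generated jointly by the two component grammars
```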
In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom-up" by focussing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.
A deliberative agent is a type of software agent used mainly in multi-agent system simulations. According to Wooldridge's definition, a deliberative agent is "one that possesses an explicitly represented, symbolic model of the world, and in which decisions are made via symbolic reasoning".
Shlomo Zilberstein is an Israeli-American computer scientist. He is a Professor of Computer Science and Associate Dean for Research and Engagement in the College of Information and Computer Sciences at the University of Massachusetts, Amherst. He graduated with a B.A. in Computer Science summa cum laude from Technion – Israel Institute of Technology in 1982, and received a Ph.D. in Computer Science from University of California at Berkeley in 1993, advised by Stuart J. Russell. He is known for his contributions to artificial intelligence, anytime algorithms, multi-agent systems, and automated planning and scheduling algorithms, notably within the context of Markov decision processes (MDPs), Partially Observable MDPs (POMDPs), and Decentralized POMDPs (Dec-POMDPs).
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
Collaborative Control Theory (CCT) is a collection of principles and models for supporting the effective design of collaborative e-Work systems. Beyond human collaboration, advances in information and communications technologies, artificial intelligence, multi-agent systems, and cyber physical systems have enabled cyber-supported collaboration in highly distributed organizations of people, robots, and autonomous systems. The fundamental premise of CCT is: without effective augmented collaboration by cyber support, working in parallel to and in anticipation of human interactions, the potential of emerging activities such as e-Commerce, virtual manufacturing, telerobotics, remote surgery, building automation, smart grids, cyber-physical infrastructure, precision agriculture, and intelligent transportation systems cannot be fully and safely materialized. CCT addresses the challenges and emerging solutions of such cyber-collaborative systems, with emphasis on issues of computer-supported and communication-enabled integration, coordination and augmented collaboration. CCT is composed of eight design principles: (1) Collaboration Requirement Planning (CRP); (2) e-Work Parallelism (EWP); (3) Keep It Simple, System (KISS); (4) Conflict/Error Detection and Prevention (CEDP); (5) Fault Tolerance by Teaming (FTT); (6) Association/Dissociation (AD); (7) Dynamic Lines of Collaboration (DLOC); and (8) Best Matching (BM).
Federated learning is a machine learning technique focusing on settings in which multiple entities collaboratively train a model while ensuring that their data remains decentralized. This stands in contrast to machine learning settings in which data is centrally stored. One of the primary defining characteristics of federated learning is data heterogeneity. Due to the decentralized nature of the clients' data, there is no guarantee that data samples held by each client are independently and identically distributed.
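A minimal federated-averaging sketch is shown below, assuming a toy linear-regression task and synthetic client data; it is not based on any particular federated learning framework. Each client performs a few local gradient steps on data that never leaves it, and the server only averages the resulting parameters.

```python
# Minimal federated-averaging sketch (illustrative only): each client trains
# on its own local data, and only model parameters -- never raw data -- are
# sent to the server and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights):
    """Server step: average the clients' locally trained parameters."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Heterogeneous client data sets that never leave their owners.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates)

print(global_w)   # close to [2.0, -1.0]
```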
Parallel Intelligence in distributed artificial intelligence (AI) systems allows multiple agents to work concurrently on a problem, enhancing the speed and efficacy of the solution.