In artificial intelligence, artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.
The field of artificial immune systems (AIS) is concerned with abstracting the structure and function of the immune system to computational systems, and investigating the application of these systems towards solving computational problems from mathematics, engineering, and information technology. AIS is a sub-field of biologically inspired computing and natural computation, with close ties to machine learning, and belongs to the broader field of artificial intelligence.
Artificial immune systems (AIS) are adaptive systems, inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving.[1]
AIS is distinct from computational immunology and theoretical biology, which are concerned with simulating immunology using computational and mathematical models towards better understanding the immune system, although such models initiated the field of AIS and continue to provide a fertile ground for inspiration. Finally, the field of AIS is not concerned with the investigation of the immune system as a substrate for computation, unlike other fields such as DNA computing.
AIS emerged in the mid-1980s with articles authored by Farmer, Packard and Perelson (1986) and Bersini and Varela (1990) on immune networks. However, it was only in the mid-1990s that AIS became a field in its own right. Forrest et al. (on negative selection) and Kephart et al.[2] published their first papers on AIS in 1994, and Dasgupta conducted extensive studies on negative selection algorithms. Hunt and Cooke began work on immune network models in 1995; Timmis and Neal continued this work and made some improvements. De Castro & Von Zuben's and Nicosia & Cutello's work on clonal selection became notable in 2002. The first book on artificial immune systems was edited by Dasgupta in 1999.
Currently, new ideas along AIS lines, such as danger theory and algorithms inspired by the innate immune system, are also being explored. Some believe, however, that these new ideas do not yet offer any truly 'new' abstraction beyond the existing AIS algorithms. This is hotly debated, and the debate provides one of the main driving forces for AIS development at the moment. Other recent developments involve the exploration of degeneracy in AIS models,[3][4] which is motivated by its hypothesized role in open-ended learning and evolution.[5][6]
Originally, AIS set out to find efficient abstractions of processes found in the immune system; more recently, the field has become interested in modelling the biological processes themselves and in applying immune algorithms to bioinformatics problems.
In 2008, Dasgupta and Nino [7] published a textbook on immunological computation which presents a compendium of up-to-date work related to immunity-based techniques and describes a wide variety of applications.
The common techniques are inspired by specific immunological theories that explain the function and behavior of the mammalian adaptive immune system.
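For example, the negative selection algorithm pioneered by Forrest et al. generates random detectors and discards any that match 'self' samples, so that the surviving detectors flag anomalous ('non-self') data. Below is a minimal Python sketch of this idea; the real-valued representation, Euclidean matching rule, and threshold values are illustrative assumptions rather than a canonical implementation.

```python
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def generate_detectors(self_samples, n_detectors, radius, dim=2, seed=0):
    """Negative selection: keep only random detectors that do NOT match self."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        # Censoring step: discard candidates within `radius` of any self sample.
        if all(euclidean(candidate, s) > radius for s in self_samples):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors, radius):
    """Monitoring step: a sample is non-self if any detector matches it."""
    return any(euclidean(sample, d) <= radius for d in detectors)

# Toy usage: "self" is a small cluster near the origin.
self_data = [[0.10, 0.10], [0.20, 0.15], [0.15, 0.25]]
detectors = generate_detectors(self_data, n_detectors=50, radius=0.2)
print(is_anomalous([0.15, 0.20], detectors, radius=0.2))  # self-like point
print(is_anomalous([0.90, 0.80], detectors, radius=0.2))  # far from self
```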
Neural Darwinism is a biological, and more specifically Darwinian and selectionist, approach to understanding global brain function, originally proposed by the American biologist, researcher and Nobel Prize recipient Gerald Maurice Edelman. Edelman's 1987 book Neural Darwinism introduced the public to his theory of neuronal group selection (TNGS).
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
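To illustrate the population-based trial-and-error character described above, here is a minimal sketch of an evolutionary algorithm in Python: a simple (mu + lambda) strategy minimizing the sphere function. The objective and all parameter values are arbitrary choices for illustration.

```python
import random

def fitness(x):
    # Sphere function: minimum value 0.0 at the origin.
    return sum(v * v for v in x)

def evolve(dim=5, mu=10, lam=40, sigma=0.1, generations=100, seed=1):
    rng = random.Random(seed)
    # Initial parent population of random real-valued vectors.
    parents = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Variation: each offspring is a Gaussian mutation of a random parent.
        offspring = [[v + rng.gauss(0, sigma) for v in rng.choice(parents)]
                     for _ in range(lam)]
        # Selection: keep the mu best of parents + offspring.
        parents = sorted(parents + offspring, key=fitness)[:mu]
    return parents[0]

best = evolve()
print(fitness(best))  # approaches 0 as the number of generations grows
```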
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use backpropagation with a fixed topology.
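A minimal Python sketch of this idea, assuming a fixed 2-2-1 network topology whose weights are evolved against a single task score (error on XOR) with no backpropagation; the topology and hyperparameters here are invented for illustration.

```python
import math, random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Fixed 2-2-1 network; w packs 9 weights: 6 hidden (incl. biases), 3 output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def score(w):
    # Only an overall task score is needed -- no labeled gradients.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

rng = random.Random(0)
pop = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=score, reverse=True)
    # Keep the 10 best weight vectors, refill with mutated copies of survivors.
    pop = pop[:10] + [[v + rng.gauss(0, 0.3) for v in rng.choice(pop[:10])]
                      for _ in range(40)]
best = max(pop, key=score)
print([round(forward(best, x)) for x, _ in XOR])  # ideally [0, 1, 1, 0]
```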
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants; the pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
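The following is a compact, illustrative Python sketch of the basic ACO loop on a toy travelling-salesman instance. The distance matrix and the parameters (evaporation rate rho, influence exponents alpha and beta) are arbitrary assumptions, not values from any particular ACO variant.

```python
import random

# Symmetric distance matrix for a 5-city toy instance.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
N = len(D)

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

def aco(n_ants=20, iterations=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    tau = [[1.0] * N for _ in range(N)]  # pheromone level on each edge
    best = list(range(N))
    for _ in range(iterations):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, N))
            while unvisited:
                i, cands = tour[-1], sorted(unvisited)
                # Edge attractiveness: pheromone^alpha * (1/distance)^beta.
                weights = [tau[i][j] ** alpha * (1.0 / D[i][j]) ** beta
                           for j in cands]
                j = rng.choices(cands, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        # Evaporation, then deposit pheromone proportional to tour quality.
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for tour in tours:
            deposit = 1.0 / tour_length(tour)
            for i in range(N):
                a, b = tour[i], tour[(i + 1) % N]
                tau[a][b] += deposit
                tau[b][a] += deposit
        best = min(tours + [best], key=tour_length)
    return best, tour_length(best)

print(aco())  # prints the best tour found and its length
```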
The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.
In immunology, clonal selection theory explains the functions of cells of the immune system (lymphocytes) in response to specific antigens invading the body. The concept was introduced by Australian doctor Frank Macfarlane Burnet in 1957, in an attempt to explain the great diversity of antibodies formed during initiation of the immune response. The theory has become the widely accepted model for how the human immune system responds to infection and how certain types of B and T lymphocytes are selected for destruction of specific antigens.
Vasant G. Honavar is an Indian-American computer scientist and professor whose research spans artificial intelligence, machine learning, big data, data science, causal inference, knowledge representation, bioinformatics, and health informatics.
The following outline is provided as an overview of and topical guide to artificial intelligence.
In artificial immune systems, clonal selection algorithms are a class of algorithms inspired by the clonal selection theory of acquired immunity, which explains how B and T lymphocytes improve their response to antigens over time, a process called affinity maturation. These algorithms focus on the Darwinian attributes of the theory, where selection is inspired by the affinity of antigen-antibody interactions, reproduction is inspired by cell division, and variation is inspired by somatic hypermutation. Clonal selection algorithms are most commonly applied to optimization and pattern recognition domains; some of them resemble parallel hill climbing or a genetic algorithm without the recombination operator.
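A minimal CLONALG-style sketch in Python, maximizing a toy affinity function: the highest-affinity candidates are cloned in proportion to their rank and mutated with a step size that grows as affinity falls, mimicking somatic hypermutation. The objective and all hyperparameters are invented for illustration.

```python
import random

def affinity(x):
    # Toy objective to maximize; peak affinity at x = 0.5.
    return -(x - 0.5) ** 2

def clonalg(pop_size=20, n_select=5, clones_per_rank=4, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)
        selected = pop[:n_select]
        clones = []
        for rank, cell in enumerate(selected):
            # Higher-affinity cells get more clones and smaller mutations
            # (the hypermutation rate is inversely related to affinity).
            n_clones = clones_per_rank * (n_select - rank)
            step = 0.05 * (rank + 1)
            clones += [cell + rng.gauss(0, step) for _ in range(n_clones)]
        # Keep the best of the pool, plus a couple of random newcomers
        # for diversity (receptor editing).
        pool = sorted(pop + clones, key=affinity, reverse=True)
        pop = pool[:pop_size - 2] + [rng.random() for _ in range(2)]
    return max(pop, key=affinity)

print(clonalg())  # converges toward 0.5
```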
The immune network theory is a theory of how the adaptive immune system works, that has been developed since 1974 mainly by Niels Jerne and Geoffrey W. Hoffmann. The theory states that the immune system is an interacting network of lymphocytes and molecules that have variable (V) regions. These V regions bind not only to things that are foreign to the vertebrate, but also to other V regions within the system. The immune system is therefore seen as a network, with the components connected to each other by V-V interactions.
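In AIS, this theory inspired immune network algorithms (for example, the models of Hunt and Cooke and later refinements), in which candidate antibodies are stimulated by data but suppressed by similarity to one another, so the network regulates its own size. A minimal Python sketch of one such suppression step follows; the distance measure and threshold are chosen purely for illustration.

```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def suppress(antibodies, threshold):
    """Network suppression: antibodies too similar to an already-kept one
    (an antibody-antibody 'V-V' interaction) are removed."""
    kept = []
    for ab in antibodies:
        if all(distance(ab, k) > threshold for k in kept):
            kept.append(ab)
    return kept

# Three near-duplicate antibodies and one distinct antibody.
antibodies = [[0.10, 0.10], [0.12, 0.10], [0.11, 0.09], [0.80, 0.90]]
print(suppress(antibodies, threshold=0.1))  # duplicates collapse to one
```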
Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.
Within biological systems, degeneracy occurs when structurally dissimilar components/pathways can perform similar functions under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behavior of two or more components. In particular, if degeneracy is present in a pair of components, then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
Rule-based machine learning (RBML) is a term in computer science intended to encompass any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate or apply. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system.
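As a sketch of what such a rule set can look like in practice (the rule format and the toy weather domain here are invented for illustration): each rule pairs a condition over attributes with a predicted label, and prediction applies the first rule whose condition matches.

```python
# Each rule: (condition as attribute -> required value, predicted label).
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "overcast"}, "yes"),
    ({"outlook": "rainy", "windy": False}, "yes"),
]

def predict(example, rules, default="no"):
    # Apply the first rule whose condition fully matches the example.
    for condition, label in rules:
        if all(example.get(k) == v for k, v in condition.items()):
            return label
    return default

print(predict({"outlook": "overcast", "humidity": "high"}, rules))  # "yes"
```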
Professor Emma Hart, FRSE is an English computer scientist known for her work in artificial immune systems (AIS), evolutionary computation and optimisation. She is a professor of computational intelligence at Edinburgh Napier University, editor-in-chief of the journal Evolutionary Computation, and coordinator of the Future & Emerging Technologies (FET) Proactive Initiative, Fundamentals of Collective Adaptive Systems.
Soft computing is an umbrella term for families of algorithms that produce approximate solutions to problems that are too hard or too imprecisely specified for exact methods, whereas traditional hard-computing algorithms rely heavily on precise data and exact mathematical models. The term was coined in the late 20th century, a period in which revolutionary research in three fields shaped soft computing: fuzzy logic, a computational paradigm that handles uncertainty in data by using degrees of truth rather than rigid binary 0s and 1s; neural networks, computational models influenced by the functioning of the human brain; and evolutionary computation, a group of algorithms that mimic natural processes such as evolution and natural selection.
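To make the 'degrees of truth' idea concrete, here is a minimal Python sketch of a fuzzy membership function, with the temperature ranges chosen arbitrarily: instead of a crisp hot/not-hot boundary, membership in the fuzzy set 'hot' varies continuously between 0 and 1.

```python
def hot(temp_c):
    """Degree of membership in the fuzzy set 'hot' (ramps from 20C to 30C)."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 30:
        return 1.0
    return (temp_c - 20) / 10.0

for t in (15, 22, 27, 35):
    print(t, hot(t))  # 0.0, 0.2, 0.7, 1.0
```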
Atulya K. Nagar is a mathematical physicist, academic and author. He holds the Foundation Chair as Professor of Mathematics and is the Pro-Vice-Chancellor for Research at Liverpool Hope University.