Artificial immune system

In artificial intelligence, artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.

Definition

The field of artificial immune systems (AIS) is concerned with abstracting the structure and function of the immune system to computational systems, and with investigating the application of these systems to solving computational problems in mathematics, engineering, and information technology. AIS is a sub-field of biologically inspired computing and natural computation, with close ties to machine learning, and belongs to the broader field of artificial intelligence.

Artificial immune systems (AIS) are adaptive systems, inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving.[1]

AIS is distinct from computational immunology and theoretical biology, which are concerned with simulating immunology using computational and mathematical models in order to better understand the immune system, although such models initiated the field of AIS and continue to provide a fertile ground for inspiration. Finally, the field of AIS is not concerned with investigating the immune system as a substrate for computation, unlike other fields such as DNA computing.

History

AIS emerged in the mid-1980s, beginning with articles by Farmer, Packard and Perelson (1986) and, later, Bersini and Varela (1990) on immune networks. However, it was only in the mid-1990s that AIS became a field in its own right. Forrest et al. (on negative selection) and Kephart et al.[2] published their first papers on AIS in 1994, and Dasgupta conducted extensive studies on negative selection algorithms. Hunt and Cooke started work on immune network models in 1995; Timmis and Neal continued this work and made some improvements. De Castro & Von Zuben's and Nicosia & Cutello's work (on clonal selection) became notable in 2002. The first book on artificial immune systems was edited by Dasgupta in 1999.

Currently, new ideas along AIS lines, such as danger theory and algorithms inspired by the innate immune system, are also being explored, although some believe that these new ideas do not yet offer any truly 'new' abstraction over and above existing AIS algorithms. This, however, is hotly debated, and the debate provides one of the main driving forces for AIS development at the moment. Other recent developments involve exploring degeneracy in AIS models,[3][4] which is motivated by its hypothesized role in open-ended learning and evolution.[5][6]

Originally AIS set out to find efficient abstractions of processes found in the immune system, but more recently the field has also become interested in modelling the biological processes themselves and in applying immune algorithms to bioinformatics problems.

In 2008, Dasgupta and Nino[7] published a textbook on immunological computation which presents a compendium of up-to-date work related to immunity-based techniques and describes a wide variety of applications.

Techniques

The common techniques are inspired by specific immunological theories that explain the function and behavior of the mammalian adaptive immune system; notable examples include the clonal selection algorithm,[8] the negative selection algorithm,[9] immune network algorithms[10] and the dendritic cell algorithm.[11]
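
As a concrete illustration, the following is a minimal, self-contained sketch in the spirit of the negative selection idea described by Forrest et al.:[9] random binary-string detectors are generated, detectors that match any 'self' string are discarded, and the surviving detectors are then used to flag non-self (anomalous) strings. The matching rule, string length, parameter values and function names here are illustrative choices, not a reference implementation.

    import random

    def matches(a, b, r):
        # r-contiguous-bits rule: two equal-length bit strings match if they
        # agree on at least r contiguous positions.
        run = 0
        for x, y in zip(a, b):
            run = run + 1 if x == y else 0
            if run >= r:
                return True
        return False

    def generate_detectors(self_set, n_detectors, string_len, r, max_tries=100000):
        # Censoring phase: keep only random candidates that match no 'self' string.
        detectors = []
        for _ in range(max_tries):
            if len(detectors) == n_detectors:
                break
            candidate = ''.join(random.choice('01') for _ in range(string_len))
            if not any(matches(candidate, s, r) for s in self_set):
                detectors.append(candidate)
        return detectors

    def is_anomalous(sample, detectors, r):
        # Monitoring phase: a sample is flagged as non-self if any detector matches it.
        return any(matches(sample, d, r) for d in detectors)

    # Toy usage: 'self' is a small set of normal 8-bit patterns.
    self_set = {'00001111', '00110011', '01010101'}
    detectors = generate_detectors(self_set, n_detectors=20, string_len=8, r=4)
    print(is_anomalous('00001111', detectors, r=4))  # False by construction (self)
    print(is_anomalous('11110000', detectors, r=4))  # may be True if a detector covers it

In practice the detector representation, matching rule and coverage analysis are the main design choices, and they vary considerably across published negative selection algorithms.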

Notes

  1. de Castro, Leandro N.; Timmis, Jonathan (2002). Artificial Immune Systems: A New Computational Intelligence Approach. Springer. pp. 57–58. ISBN 978-1-85233-594-6.
  2. Kephart, J. O. (1994). "A biologically inspired immune system for computers". Proceedings of Artificial Life IV: The Fourth International Workshop on the Synthesis and Simulation of Living Systems. MIT Press. pp. 130–139.
  3. Andrews and Timmis (2006). "A Computational Model of Degeneracy in a Lymph Node". Artificial Immune Systems. Lecture Notes in Computer Science. Vol. 4163. pp. 164–177. doi:10.1007/11823940_13. ISBN 978-3-540-37749-8. S2CID 2539900.
  4. Mendao; et al. (2007). "The Immune System in Pieces: Computational Lessons from Degeneracy in the Immune System". 2007 IEEE Symposium on Foundations of Computational Intelligence. pp. 394–400. doi:10.1109/FOCI.2007.371502. ISBN 978-1-4244-0703-3. S2CID 5370645.
  5. Edelman and Gally (2001). "Degeneracy and complexity in biological systems". Proceedings of the National Academy of Sciences of the United States of America. 98 (24): 13763–13768. Bibcode:2001PNAS...9813763E. doi:10.1073/pnas.231499798. PMC 61115. PMID 11698650.
  6. Whitacre (2010). "Degeneracy: a link between evolvability, robustness and complexity in biological systems". Theoretical Biology and Medical Modelling. 7 (6): 6. doi:10.1186/1742-4682-7-6. PMC 2830971. PMID 20167097.
  7. Dasgupta, Dipankar; Nino, Fernando (2008). Immunological Computation: Theory and Applications. CRC Press. p. 296. ISBN 978-1-4200-6545-9.
  8. de Castro, L. N.; Von Zuben, F. J. (2002). "Learning and Optimization Using the Clonal Selection Principle" (PDF). IEEE Transactions on Evolutionary Computation. 6 (3): 239–251. doi:10.1109/tevc.2002.1011539.
  9. Forrest, S.; Perelson, A.S.; Allen, L.; Cherukuri, R. (1994). "Self-nonself discrimination in a computer" (PDF). Proceedings of the 1994 IEEE Symposium on Research in Security and Privacy. Los Alamitos, CA. pp. 202–212.
  10. Timmis, J.; Neal, M.; Hunt, J. (2000). "An artificial immune system for data analysis" (PDF). BioSystems. 55 (1): 143–150. Bibcode:2000BiSys..55..143T. doi:10.1016/S0303-2647(99)00092-1. PMID 10745118.
  11. Greensmith, J.; Aickelin, U. (2009). "Artificial Dendritic Cells: Multi-faceted Perspectives". Human-Centric Information Processing Through Granular Modelling (PDF). Studies in Computational Intelligence. Vol. 182. pp. 375–395. CiteSeerX 10.1.1.193.1544. doi:10.1007/978-3-540-92916-1_16. ISBN 978-3-540-92915-4. S2CID 11661259. Archived from the original (PDF) on 2011-08-09. Retrieved 2009-06-19.

Related Research Articles

Neural Darwinism is a biological, and more specifically Darwinian and selectionist, approach to understanding global brain function, originally proposed by American biologist, researcher and Nobel Prize recipient Gerald Maurice Edelman. Edelman's 1987 book Neural Darwinism introduced the public to the theory of neuronal group selection (TNGS), a theory that attempts to explain global brain function.

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use backpropagation with a fixed topology.
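
As a rough sketch of the idea (not any specific published neuroevolution method), the loop below evolves the weights of a small fixed-topology feed-forward network using only a scalar fitness score; the 2-2-1 topology, the XOR task and all parameter values are illustrative assumptions.

    import math
    import random

    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table

    def forward(w, x):
        # Fixed 2-2-1 network: 6 hidden weights/biases plus 3 output weights/bias.
        h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

    def fitness(w):
        # Only a performance measure is needed -- no labelled gradients.
        return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

    def evolve(pop_size=50, generations=300, sigma=0.4):
        population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 5]              # truncation selection
            offspring = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
            population = parents + offspring                  # elitism plus mutated clones
        return max(population, key=fitness)

    best = evolve()
    print([round(forward(best, x), 2) for x, _ in CASES])     # should approach [0, 1, 1, 0]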

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
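
For illustration, a minimal ant-colony-style search for the shortest path on a toy weighted graph is sketched below; the graph, parameter names (alpha, beta, rho, q) and their values are assumptions chosen for the example rather than a reference ACO implementation.

    import random

    GRAPH = {                      # node -> {neighbour: edge length}
        'A': {'B': 2.0, 'C': 4.5},
        'B': {'C': 1.0, 'D': 7.0},
        'C': {'D': 3.0},
        'D': {},
    }

    def run_aco(start='A', goal='D', n_ants=20, n_iters=50,
                alpha=1.0, beta=2.0, rho=0.5, q=1.0):
        # Pheromone on each directed edge, initialised uniformly.
        tau = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}
        best_path, best_len = None, float('inf')

        for _ in range(n_iters):
            paths = []
            for _ in range(n_ants):
                node, path, length = start, [start], 0.0
                while node != goal and GRAPH[node]:
                    nbrs = list(GRAPH[node])
                    # Transition probability ~ pheromone^alpha * (1/distance)^beta.
                    weights = [tau[(node, v)] ** alpha * (1.0 / GRAPH[node][v]) ** beta
                               for v in nbrs]
                    nxt = random.choices(nbrs, weights=weights)[0]
                    length += GRAPH[node][nxt]
                    path.append(nxt)
                    node = nxt
                if node == goal:
                    paths.append((path, length))
                    if length < best_len:
                        best_path, best_len = path, length

            # Evaporation, then pheromone deposit proportional to path quality.
            tau = {e: (1 - rho) * t for e, t in tau.items()}
            for path, length in paths:
                for u, v in zip(path, path[1:]):
                    tau[(u, v)] += q / length

        return best_path, best_len

    print(run_aco())   # expected to converge on A -> B -> C -> D (length 6.0)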

The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.

In immunology, clonal selection theory explains the functions of cells of the immune system (lymphocytes) in response to specific antigens invading the body. The concept was introduced by Australian doctor Frank Macfarlane Burnet in 1957, in an attempt to explain the great diversity of antibodies formed during initiation of the immune response. The theory has become the widely accepted model for how the human immune system responds to infection and how certain types of B and T lymphocytes are selected for destruction of specific antigens.

Vasant G. Honavar is an Indian-American computer scientist and professor whose research spans artificial intelligence, machine learning, big data, data science, causal inference, knowledge representation, bioinformatics and health informatics.

The following outline is provided as an overview of and topical guide to artificial intelligence:

In artificial immune systems, clonal selection algorithms are a class of algorithms inspired by the clonal selection theory of acquired immunity, which explains how B and T lymphocytes improve their response to antigens over time, a process called affinity maturation. These algorithms focus on the Darwinian attributes of the theory, where selection is inspired by the affinity of antigen-antibody interactions, reproduction is inspired by cell division, and variation is inspired by somatic hypermutation. Clonal selection algorithms are most commonly applied to optimization and pattern recognition domains, some of which resemble parallel hill climbing and the genetic algorithm without the recombination operator.
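
As a sketch of these ingredients (selection by affinity, cloning, and hypermutation inversely related to affinity), the toy optimiser below follows the general clonal selection recipe but is not the algorithm from the cited paper; the one-dimensional objective and all parameter values are illustrative assumptions.

    import random

    def affinity(x):
        # Toy objective to maximise, with a single optimum at x = 3.
        return -(x - 3.0) ** 2

    def clonal_selection(pop_size=20, n_select=5, clones_per_ab=10,
                         generations=100, mutation_scale=1.0):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the highest-affinity antibodies.
            selected = sorted(population, key=affinity, reverse=True)[:n_select]
            clones = []
            for rank, ab in enumerate(selected):
                # Hypermutation: better-ranked antibodies are mutated less.
                rate = mutation_scale * (rank + 1) / n_select
                clones += [ab + random.gauss(0, rate) for _ in range(clones_per_ab)]
            # Reselection, plus a few random newcomers to maintain diversity.
            pool = population + clones
            population = sorted(pool, key=affinity, reverse=True)[:pop_size - 2]
            population += [random.uniform(-10, 10) for _ in range(2)]
        return max(population, key=affinity)

    print(round(clonal_selection(), 3))   # expected to be close to 3.0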

The immune network theory is a theory of how the adaptive immune system works, which has been developed since 1974 mainly by Niels Jerne and Geoffrey W. Hoffmann. The theory states that the immune system is an interacting network of lymphocytes and molecules that have variable (V) regions. These V regions bind not only to things that are foreign to the vertebrate, but also to other V regions within the system. The immune system is therefore seen as a network, with the components connected to each other by V-V interactions.

Natural computing, also called natural computation, is terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.

Within biological systems, degeneracy occurs when structurally dissimilar components/pathways can perform similar functions under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behavior of two or more components. In particular, if degeneracy is present in a pair of components, then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Rule-based machine learning (RBML) is a term in computer science intended to encompass any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate or apply. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system.

Professor Emma Hart, FRSE, is an English computer scientist known for her work in artificial immune systems (AIS), evolutionary computation and optimisation. She is a professor of computational intelligence at Edinburgh Napier University, editor-in-chief of the journal Evolutionary Computation, and coordinator of the Future & Emerging Technologies (FET) Proactive Initiative on Fundamentals of Collective Adaptive Systems.

Soft computing is an umbrella term for algorithms that produce approximate solutions to problems in computer science that are too hard to solve exactly. Traditional hard-computing algorithms rely heavily on concrete data and exact mathematical models to produce solutions. The term soft computing was coined in the late 20th century, a period in which revolutionary research in three fields greatly shaped the area. Fuzzy logic is a computational paradigm that handles uncertainty in data by using degrees of truth rather than the rigid 0s and 1s of binary logic. Neural networks are computational models influenced by the functioning of the human brain. Finally, evolutionary computation describes groups of algorithms that mimic natural processes such as evolution and natural selection.

Atulya K. Nagar is a mathematical physicist, academic and author. He holds the Foundation Chair as Professor of Mathematics and is the Pro-Vice-Chancellor for Research at Liverpool Hope University.
