Adaptive system

An adaptive system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole that is able to respond to environmental changes or to changes in its interacting parts, in a way analogous to continuous physiological homeostasis or to evolutionary adaptation in biology. Feedback loops are a key feature of adaptive systems, such as ecosystems and individual organisms; or, in the human world, communities, organizations, and families. Adaptive systems can be organized into a hierarchy.

Artificial adaptive systems include robots with control systems that utilize negative feedback to maintain desired states.
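As a concrete sketch of such a control system, the following implements a minimal proportional (negative-feedback) controller driving a simple first-order plant toward a desired state; the plant model, gain, and setpoint are illustrative assumptions, not a description of any particular robot.

```python
# Minimal sketch of negative feedback: a proportional controller
# drives a simple first-order plant toward a desired setpoint.
# Plant model, gain, and setpoint are illustrative assumptions.

def simulate(setpoint=25.0, gain=0.5, steps=50):
    state = 10.0                         # initial state (e.g., temperature)
    for _ in range(steps):
        error = setpoint - state         # deviation from the desired state
        control = gain * error           # negative feedback: act against error
        state += control - 0.1 * state   # plant dynamics with natural decay
    return state

print(simulate())  # settles near (not exactly at) the setpoint
```

The residual offset from the setpoint is the classic steady-state error of proportional-only control; practical controllers add integral action to remove it.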

The law of adaptation

The law of adaptation may be stated informally as:

Every adaptive system converges to a state in which all kinds of stimulation cease. [1]

Formally, the law can be defined as follows:

Given a system $S$, we say that a physical event $E$ is a stimulus for the system $S$ if and only if the probability that the system suffers a change or is perturbed (in its elements or in its processes) when the event occurs is strictly greater than the prior probability that $S$ suffers a change independently of $E$:

$$P(S \rightarrow S' \mid E) > P(S \rightarrow S')$$

Let $S$ be an arbitrary system subject to changes in time $t$ and let $E$ be an arbitrary event that is a stimulus for the system $S$: we say that $S$ is an adaptive system if and only if, as $t$ tends to infinity, the probability that the system $S$ changes its behavior in a time step given the event $E$ becomes equal to the probability that the system changes its behavior independently of the occurrence of the event $E$. In mathematical terms:

  1. $P_{t}(S \rightarrow S' \mid E) > P_{t}(S \rightarrow S') > 0$
  2. $\lim_{t \to \infty} P_{t}(S \rightarrow S' \mid E) = P_{t}(S \rightarrow S')$

Thus, for each instant $t$ there will exist a temporal interval $h$ such that:

$$P_{t+h}(S \rightarrow S' \mid E) - P_{t+h}(S \rightarrow S') < P_{t}(S \rightarrow S' \mid E) - P_{t}(S \rightarrow S')$$
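The convergence demanded by the law can be illustrated numerically. The sketch below uses a toy geometric-decay model invented here for illustration (it is not taken from [1]): the probability of a state change given the stimulus starts above the baseline and approaches it as $t$ grows.

```python
# Toy illustration of the law of adaptation (assumed decay model,
# not from [1]): the probability of a state change given the
# stimulus E decays toward the baseline change probability.

P_BASE = 0.05                      # P_t(S -> S'): spontaneous change rate

def p_change_given_stimulus(t, decay=0.9):
    # The extra responsiveness to E shrinks geometrically as t grows,
    # so the limit equality in condition 2 holds.
    return P_BASE + 0.5 * decay ** t

for t in (0, 10, 50, 100):
    print(t, round(p_change_given_stimulus(t), 6))  # gap above P_BASE vanishes
```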

Benefit of self-adjusting systems

In an adaptive system, a parameter changes slowly and has no preferred value. In a self-adjusting system, though, the parameter value "depends on the history of the system dynamics". One of the most important qualities of self-adjusting systems is their "adaptation to the edge of chaos", or ability to avoid chaos. Practically speaking, by heading to the edge of chaos without going further, a leader may act spontaneously yet without disaster. A March/April 2009 Complexity article further explains these self-adjusting systems and their practical implications. [2] Physicists have shown that adaptation to the edge of chaos occurs in almost all systems with feedback. [3]
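To make this concrete, a common example in this literature is the self-adjusting logistic map. The sketch below uses a simplified feedback rule (a running local Lyapunov estimate, chosen here for illustration rather than taken from [2] or [3]) so that the parameter drifts as a function of the system's own history:

```python
import math

# Sketch of a self-adjusting logistic map (illustrative feedback rule):
# the parameter a is not fixed but drifts in response to a local estimate
# of the Lyapunov exponent, which is positive in chaotic motion and
# negative in orderly motion.

def self_adjusting_logistic(steps=200000, a=4.0, x=0.3, eta=1e-4):
    for _ in range(steps):
        lyap = math.log(abs(a * (1.0 - 2.0 * x)) + 1e-12)  # local stretch rate
        a -= eta * lyap            # chaotic (lyap > 0): lower a; orderly: raise a
        a = min(max(a, 3.0), 4.0)  # keep the map in its interesting range
        x = a * x * (1.0 - x)
    return a

print(self_adjusting_logistic())  # a hovers near a boundary between order and chaos
```

Because the drift rule pushes the averaged exponent toward zero, the parameter settles near a boundary where order meets chaos rather than at any preferred value.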

Hierarchy of adaptations: Practopoiesis

Figure: The feedback loops and poietic interaction in hierarchical adaptations.

How do the various types of adaptation in a living system interact? Practopoiesis, [4] a term due to its originator Danko Nikolić, [5] refers to a hierarchy of adaptation mechanisms answering this question. The adaptive hierarchy forms a kind of self-adjusting system in which autopoiesis of the entire organism or cell occurs through a hierarchy of allopoietic interactions among components. [6] This is possible because the components are organized into a poietic hierarchy: the adaptive actions of one component result in the creation of another component. The theory proposes that living systems exhibit a hierarchy of four such adaptive poietic operations:

  (i) evolution
  (ii) gene expression
  (iii) non-gene-involving homeostatic mechanisms (anapoiesis)
  (iv) final cell function

As the hierarchy moves toward higher levels of organization, the speed of adaptation increases: evolution is the slowest, gene expression is faster, and the final cell function is the fastest. A schematic sketch of this nesting follows below. Ultimately, practopoiesis challenges current neuroscience doctrine by asserting that mental operations occur primarily at the homeostatic, anapoietic level (iii), i.e., that minds and thought emerge from fast homeostatic mechanisms poietically controlling the cell function. This contrasts with the widespread assumption that thinking is synonymous with computations executed at the level of neural activity (i.e., with the 'final cell function' at level iv).
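Purely as a schematic (the update rules below are invented for illustration and carry no biological detail), the hierarchy can be pictured as nested loops in which each slower level sets the parameters the next faster level operates with:

```python
# Schematic sketch of a four-level poietic hierarchy (invented update
# rules): each slower level shapes the machinery of the next faster
# level, and adaptation speeds up toward the innermost loop.

target = 1.0                        # environmental demand to be met
gene_pool = 0.2                     # (i) evolution: slowest-changing knowledge

for generation in range(10):
    expression = gene_pool          # (ii) gene expression, shaped by genes
    for episode in range(10):
        setpoint = expression       # (iii) anapoiesis: homeostatic setpoint
        for step in range(10):
            activity = setpoint     # (iv) final cell function: fastest level
            setpoint += 0.5 * (target - activity)    # fast correction
        expression += 0.05 * (setpoint - expression)  # slower retuning
    gene_pool += 0.005 * (expression - gene_pool)     # slowest: selection

print(round(gene_pool, 3))  # drifts only slowly toward the demand
```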

Sharov proposed that only eukaryotic cells can achieve all four levels of organization. [7]

Each slower level contains knowledge that is more general than the faster level; for example, genes contain more general knowledge than anapoietic mechanisms, which in turn contain more general knowledge than cell functions. This hierarchy of knowledge enables the anapoietic level to implement concepts, which are the most fundamental ingredients of a mind. Activation of concepts through anapoiesis is suggested to underlie ideasthesia. Practopoiesis also has implications for understanding the limitations of Deep Learning. [8]

Empirical tests of practopoiesis require learning on double-loop tasks: one needs to assess how the learning capability itself adapts over time, i.e., how the system learns to learn (adapts its adapting skills). [9] [10]
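A hedged sketch of what such a double-loop measurement could look like (the task and the adaptation rule are hypothetical): an inner loop learns an ordinary task, an outer loop adapts the learning rate itself, and one tracks whether later episodes converge faster.

```python
# Hypothetical double-loop sketch: the inner loop learns a one-parameter
# task; the outer loop adapts the learning rate itself. The system
# "learns to learn" if later episodes accumulate less error.

def run_episode(lr, target=3.0, steps=30):
    w, total_error = 0.0, 0.0
    for _ in range(steps):
        err = target - w
        w += lr * err                  # inner loop: ordinary learning
        total_error += abs(err)
    return total_error                 # lower total = faster learning

lr = 0.05
for episode in range(10):
    score = run_episode(lr)
    lr *= 1.2 if score > 5.0 else 1.0  # outer loop: adapt the adapting skill
    print(f"episode {episode}: lr={lr:.3f} total_error={score:.2f}")
```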

It has been proposed that anapoiesis is implemented in the brain by metabotropic receptors and G protein-gated ion channels. [11] These membrane proteins are suggested to transiently select subnetworks and, by doing so, give rise to cognition.
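The flavor of this proposal can be sketched as a toy gating model (invented here; it is not the formalism of [11]): a slow modulatory signal, standing in for metabotropic-receptor activity, selects which units of a fixed network participate in fast processing, so the same anatomy transiently realizes different functional subnetworks.

```python
import numpy as np

# Toy subnetwork-selection sketch (not the formalism of [11]): a slow
# modulatory signal gates which units of a fixed network take part in
# fast processing.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # fixed "anatomical" connectivity
x = rng.normal(size=8)               # fast neural activity

def step(x, modulator):
    gate = (modulator > 0.5).astype(float)   # which units are "on"
    W_eff = W * np.outer(gate, gate)         # mask rows and columns
    return np.tanh(W_eff @ x) * gate         # gated units compute; rest silent

context_a = rng.random(8)            # two slow modulatory contexts
context_b = rng.random(8)
print(step(x, context_a))            # different subnetworks yield different
print(step(x, context_b))            # computations from the same anatomy
```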

Notes

  1. José Antonio Martín H., Javier de Lope and Darío Maravall (2009). "Adaptation, Anticipation and Rationality in Natural and Artificial Systems: Computational Paradigms Mimicking Nature". Natural Computing. 8(4): 757–775.
  2. Hübler, A. & Wotherspoon, T. (2008). "Self-Adjusting Systems Avoid Chaos". Complexity. 14(4): 8–11.
  3. Wotherspoon, T.; Hubler, A. (2009). "Adaptation to the edge of chaos with random-wavelet feedback". J. Phys. Chem. A. 113(1): 19–22. Bibcode:2009JPCA..113...19W. doi:10.1021/jp804420g. PMID 19072712.
  4. "Practopoiesis".
  5. "Danko Nikolić (Max Planck Institute for Brain Research, Frankfurt am Main) on ResearchGate - Expertise: Artificial Intelligence, Quantitative Psychology, Cognitive Psychology". Archived from the original on 2015-07-23.
  6. Nikolić, D. (2015). "Practopoiesis: Or how life fosters a mind". Journal of Theoretical Biology. 373: 40–61. arXiv:1402.5332. Bibcode:2015JThBi.373...40N. doi:10.1016/j.jtbi.2015.03.003. PMID 25791287. S2CID 12680941.
  7. Sharov, A. A. (2018). "Mind, agency, and biosemiotics". Journal of Cognitive Science. 19(2): 195–228.
  8. Nikolić, D. (2017). "Why deep neural nets cannot ever match biological intelligence and what to do about it?". International Journal of Automation and Computing. 14(5): 532–541.
  9. El Hady, A. (2016). Closed Loop Neuroscience. Academic Press.
  10. Dong, X., Du, X., & Bao, M. (2020). "Repeated contrast adaptation does not cause habituation of the adapter". Frontiers in Human Neuroscience. 14: 569. https://www.frontiersin.org/articles/10.3389/fnhum.2020.589634/full
  11. Nikolić, D. (2023). "Where is the mind within the brain? Transient selection of subnetworks by metabotropic receptors and G protein-gated ion channels". Computational Biology and Chemistry. 107820.
