Evolving classification function

Evolving classification functions (ECFs), also called evolving classifier functions or evolving classifiers, are used for classification and clustering in the fields of machine learning and artificial intelligence. They are typically employed for data stream mining tasks in dynamic and changing environments.
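
As an illustration, the core idea (incrementally updated prototypes plus on-the-fly structure growth) can be sketched in a few lines. This is a toy sketch of the general principle, not an implementation of any specific published ECF; the class name, the fixed radius threshold, and the prototype representation are all assumptions made for illustration.

```python
import math

class EvolvingClassifier:
    """Toy evolving classifier: per-class prototypes are updated one
    sample at a time, and a new prototype is spawned whenever a sample
    is far from every existing prototype of its class (so the model's
    structure itself evolves with the stream)."""

    def __init__(self, radius=1.0):
        self.radius = radius      # beyond this distance, a new prototype is created
        self.prototypes = []      # list of (centre, label, sample_count)

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict(self, x):
        if not self.prototypes:
            return None
        centre, label, _ = min(self.prototypes, key=lambda p: self._dist(p[0], x))
        return label

    def learn(self, x, label):
        same_class = [p for p in self.prototypes if p[1] == label]
        if same_class:
            nearest = min(same_class, key=lambda p: self._dist(p[0], x))
            if self._dist(nearest[0], x) <= self.radius:
                # close enough: fold the sample into the running mean
                centre, lab, n = nearest
                new_centre = tuple(c + (xi - c) / (n + 1) for c, xi in zip(centre, x))
                self.prototypes[self.prototypes.index(nearest)] = (new_centre, lab, n + 1)
                return
        # no nearby prototype: evolve the structure by adding one
        self.prototypes.append((tuple(x), label, 1))
```

Samples are presented one at a time, so the model keeps adapting as the stream changes without ever revisiting past data.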


See also

Dynamic Evolving Neuro-Fuzzy Inference Systems (DENFIS)
Evolving Fuzzy Neural Networks (EFuNN)
Evolving Self-Organising Maps

Related Research Articles

Artificial neural network: Computational model used in machine learning, based on connected, hierarchical functions

Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets, are computing systems inspired by the biological neural networks that constitute animal brains.

Fuzzy logic is a form of many-valued logic in which the truth value of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. By contrast, in Boolean logic, the truth values of variables may only be the integer values 0 or 1.
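Zadeh's original operators make the contrast with Boolean logic concrete: conjunction and disjunction become minimum and maximum over real-valued truth degrees (a standard textbook formulation).

```python
def fuzzy_and(a, b):
    return min(a, b)      # Zadeh t-norm: as true as the least-true operand

def fuzzy_or(a, b):
    return max(a, b)      # Zadeh s-norm: as true as the most-true operand

def fuzzy_not(a):
    return 1.0 - a        # standard complement

hot, humid = 0.7, 0.4             # partial truths rather than 0 or 1
muggy = fuzzy_and(hot, humid)     # 0.4
```

With 0/1 inputs these operators reduce exactly to Boolean AND, OR and NOT, which is why fuzzy logic is described as a generalization of Boolean logic rather than a replacement for it.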

Self-organizing map: Machine learning technique useful for dimensionality reduction

A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with many variables measured across many observations could be represented as clusters of observations with similar values for the variables. These clusters could then be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.
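
The update step behind that topology-preserving map can be sketched briefly: each input pulls its best-matching node and, to a lesser degree, that node's grid neighbours. This is a minimal 1-D sketch with a decaying learning rate and neighbourhood radius; the function name and parameter values are illustrative assumptions, not a reference implementation.

```python
import math
import random

def train_som(data, grid_size=4, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 1-D self-organizing map over a list of equal-length tuples."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(grid_size)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                  # learning rate decays toward 0
        sigma = max(sigma0 * (1.0 - frac), 0.2)  # neighbourhood radius shrinks
        for x in data:
            # best-matching unit: the grid node whose weights are closest to x
            bmu = min(range(grid_size),
                      key=lambda i: sum((w - xi) ** 2 for w, xi in zip(weights[i], x)))
            for i in range(grid_size):
                # Gaussian neighbourhood: nodes near the BMU move almost as much
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] = [w + lr * h * (xi - w) for w, xi in zip(weights[i], x)]
    return weights
```

Because grid neighbours are dragged toward the same inputs, nearby nodes end up representing nearby regions of the data, which is the topology preservation described above.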

Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts the connection strengths based on the relative timing of a particular neuron's output and input action potentials. The STDP process partially explains the activity-dependent development of nervous systems, especially with regard to long-term potentiation and long-term depression.
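
The timing dependence is often modelled with a pair-based exponential window. The rule below is a common textbook form rather than a single canonical model, and the constants are illustrative.

```python
import math

def stdp_delta_w(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for one spike pair.
    dt = t_post - t_pre (milliseconds).
    Pre-before-post (dt > 0) strengthens the synapse (potentiation);
    post-before-pre (dt <= 0, boundary chosen arbitrarily) weakens it
    (depression). Either effect fades as |dt| grows."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```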

In statistics and related fields, a similarity measure or similarity function (also called a similarity metric) is a real-valued function that quantifies the similarity between two objects. Although no single definition of similarity exists, such measures are usually in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. In broader terms, however, a similarity function may also satisfy metric axioms.
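
Cosine similarity is a standard concrete example: it is large for similar (parallel) vectors and zero or negative for dissimilar ones, matching the inverse-of-distance intuition above.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors:
    1 for parallel, 0 for orthogonal, -1 for opposite directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```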

Christof Koch: American neurophysiologist

Christof Koch is a German-American neurophysiologist and computational neuroscientist best known for his work on the neural basis of consciousness. He is the president and chief scientist of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

Neuro-fuzzy

In the field of artificial intelligence, neuro-fuzzy refers to combinations of artificial neural networks and fuzzy logic.

In predictive analytics and machine learning, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.
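
A small simulation makes the failure mode visible: a model fitted to the old concept and never updated becomes useless the moment the target relation changes. The stream generator and threshold concept here are synthetic assumptions chosen purely for illustration.

```python
import random

def stream(n, drift_at, seed=0):
    """Synthetic stream with sudden concept drift: before `drift_at`
    the label is 1 iff x > 0.5; afterwards the concept flips."""
    rng = random.Random(seed)
    for t in range(n):
        x = rng.random()
        y = int(x > 0.5) if t < drift_at else int(x <= 0.5)
        yield x, y

def static_model_accuracy(samples):
    # a model fitted to the old concept (predict 1 iff x > 0.5), never updated
    return sum(int(int(x > 0.5) == y) for x, y in samples) / len(samples)

data = list(stream(2000, drift_at=1000))
before = static_model_accuracy(data[:1000])   # 1.0: the old concept still holds
after = static_model_accuracy(data[1000:])    # 0.0: every prediction is now wrong
```

Evolving and incremental classifiers address exactly this situation by continuing to adapt their parameters, and sometimes their structure, after deployment.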

Meta learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; the main goal, however, is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself (hence the alternative term learning to learn).

Continuous spatial automata, unlike cellular automata, have a continuum of locations, while the state of a location is still drawn from a finite set of real numbers. Time can also be continuous, in which case the state evolves according to differential equations.

Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology, as well as engineering.

Kunihiko Fukushima: Japanese computer scientist

Kunihiko Fukushima is a Japanese computer scientist, most noted for his work on artificial neural networks and deep learning. He is currently working part-time as a Senior Research Scientist at the Fuzzy Logic Systems Institute in Fukuoka, Japan.

In computer science, an evolving intelligent system is a fuzzy logic system that improves its own performance by evolving its rules. The technique is known from machine learning, where external patterns are learned by an algorithm. Fuzzy-logic-based machine learning works with neuro-fuzzy systems.

An artificial neural network's learning rule or learning process is a method, mathematical logic, or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network: the weights and bias levels of the network are updated as the network is simulated in a specific data environment. A learning rule may take the existing conditions of the network and compare its expected and actual results to produce new, improved values for the weights and biases. Depending on the complexity of the actual model being simulated, the learning rule can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.
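
A concrete instance is the delta rule (Widrow-Hoff), which nudges the weights and bias in proportion to the prediction error; the learning rate and toy training data below are illustrative choices.

```python
def delta_rule_step(weights, bias, x, target, lr=0.1):
    """One delta-rule update for a single linear unit:
    move each weight by lr * error * input, and the bias by lr * error."""
    predicted = sum(w * xi for w, xi in zip(weights, x)) + bias
    error = target - predicted
    new_weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return new_weights, bias + lr * error

# repeatedly applying the rule fits y = 2x from just two examples
w, b = [0.0], 0.0
for _ in range(1000):
    for x, target in [((1.0,), 2.0), ((2.0,), 4.0)]:
        w, b = delta_rule_step(w, b, x, target)
```

Applying the same simple step over and over is exactly the "rule applied repeatedly over the network" described above, here for a one-weight network.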

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. It represents a dynamic technique of supervised or unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that facilitate incremental learning are known as incremental machine learning algorithms.
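
As a building block, Welford's online algorithm shows the pattern: summary statistics of the data seen so far are extended with each new observation, so past data never has to be stored or re-read.

```python
class RunningStats:
    """Welford's online mean/variance: the model's 'knowledge' (mean
    and spread of the data) is extended one observation at a time."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # sample variance; defined once at least two observations have arrived
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

An incremental classifier built from such updates (e.g. a naive Bayes model with per-class running statistics) can keep training indefinitely on a stream.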

Outline of machine learning: Overview of and topical guide to machine learning

The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.

Plamen P. Angelov is a computer scientist. He is a professor and Chair in Intelligent Systems and Director of Research at the School of Computing and Communications of Lancaster University, Lancaster, United Kingdom, and founding Director of the Lancaster Intelligent, Robotic and Autonomous systems (LIRA) research centre. Angelov was Vice President of the International Neural Networks Society, of which he is now Governor-at-large, and is the founder of the Intelligent Systems Research group and the Data Science group at the School of Computing and Communications. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for contributions to neuro-fuzzy and autonomous learning systems, and is also a Fellow of ELLIS and the IET. He has been a founding co-Editor-in-Chief of the Evolving Systems journal since 2009, as well as an associate editor of the IEEE Transactions on Cybernetics, IEEE Transactions on Fuzzy Systems, IEEE Transactions on AI, Complex and Intelligent Systems, and other scientific journals. He is a recipient of the 2020 Dennis Gabor Award, IEEE and INNS awards for Outstanding Contributions, The Engineer 2008 special award, and others. He is the author of over 350 publications, including 3 research monographs, 3 granted US patents, over 100 articles in peer-reviewed scientific journals, and over 150 papers in peer-reviewed conference proceedings.

Charles C. Pugh: American mathematician

Charles Chapman Pugh is an American mathematician who researches dynamical systems. Pugh received his PhD under Philip Hartman of Johns Hopkins University in 1965, with the dissertation The Closing Lemma for Dimensions Two and Three. He has since been a professor, now emeritus, at the University of California, Berkeley.

The history of artificial neural networks (ANN) began with Warren McCulloch and Walter Pitts (1943) who created a computational model for neural networks based on algorithms called threshold logic. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. This work led to work on nerve networks and their link to finite automata.
