Evolving classification function

Evolving classification functions (ECF), also known as evolving classifier functions or evolving classifiers, are used for classifying and clustering in the field of machine learning and artificial intelligence. They are typically employed for data stream mining tasks in dynamic and changing environments.
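
The sketch below illustrates the general idea in Python: a classifier that starts empty and evolves its structure (here, class prototypes) as samples arrive one at a time from a stream. It is a deliberately simplified, hypothetical example; published evolving classifiers such as eClass or DENFIS use density or potential criteria and fuzzy rule bases rather than the fixed distance radius assumed here.

```python
import numpy as np

class EvolvingPrototypeClassifier:
    """Toy evolving classifier: grows per-class prototypes from a data stream.

    Illustrative only; the radius-based rule for adding prototypes stands in
    for the density/potential criteria used by published evolving classifiers.
    """

    def __init__(self, radius=1.0):
        self.radius = radius   # distance beyond which a new prototype is created
        self.prototypes = []   # list of (centre, label, sample count)

    def predict(self, x):
        if not self.prototypes:
            return None
        dists = [np.linalg.norm(x - c) for c, _, _ in self.prototypes]
        return self.prototypes[int(np.argmin(dists))][1]

    def learn_one(self, x, y):
        """Single-pass update: adapt the nearest same-class prototype or add a new one."""
        same_class = [(i, np.linalg.norm(x - c))
                      for i, (c, label, _) in enumerate(self.prototypes) if label == y]
        if same_class:
            i, d = min(same_class, key=lambda t: t[1])
            if d <= self.radius:
                c, label, n = self.prototypes[i]
                self.prototypes[i] = (c + (x - c) / (n + 1), label, n + 1)  # incremental mean
                return
        self.prototypes.append((np.asarray(x, dtype=float), y, 1))  # structure evolves

# Usage: interleaved test-then-train over a synthetic two-class stream
rng = np.random.default_rng(0)
stream = [(rng.normal(loc=3.0 * label, size=2), int(label)) for label in rng.integers(0, 2, 200)]
clf, correct = EvolvingPrototypeClassifier(radius=2.0), 0
for x, y in stream:
    correct += int(clf.predict(x) == y)   # predict first ...
    clf.learn_one(x, y)                   # ... then learn from the labelled sample
print(f"prequential accuracy: {correct / len(stream):.2f}, prototypes: {len(clf.prototypes)}")
```

Because prediction happens before each training update, the reported accuracy follows the prequential protocol commonly used to evaluate data stream classifiers.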


See also

Dynamic Evolving Neuro-Fuzzy Inference Systems (DENFIS)
Evolving Fuzzy Neural Networks (EFuNN)
Evolving Self-Organising Maps

Related Research Articles

<span class="mw-page-title-main">Neural network (machine learning)</span> Computational model used in machine learning, based on connected, hierarchical functions

In machine learning, a neural network is a model inspired by the structure and function of biological neural networks in animal brains.

Fuzzy logic is a form of many-valued logic in which the truth value of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. By contrast, in Boolean logic, the truth values of variables may only be the integer values 0 or 1.
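
As a small illustration, the Python snippet below (using made-up membership functions for "warm" and "humid") shows truth values drawn from the whole interval [0, 1] and combined with the standard Zadeh operators: AND as minimum, OR as maximum, and NOT as the complement.

```python
def warm(temp_c):
    """Illustrative membership function: degree to which a temperature is 'warm'."""
    return max(0.0, min(1.0, (temp_c - 15.0) / 10.0))   # 0 at 15 °C, 1 at 25 °C and above

def humid(rel_humidity):
    """Illustrative membership function: degree to which the air is 'humid'."""
    return max(0.0, min(1.0, (rel_humidity - 40.0) / 40.0))

mu_warm, mu_humid = warm(22.0), humid(70.0)

# Standard (Zadeh) fuzzy operators: AND = min, OR = max, NOT = 1 - x
print(f"warm: {mu_warm:.2f}, humid: {mu_humid:.2f}")
print(f"warm AND humid: {min(mu_warm, mu_humid):.2f}")
print(f"warm OR humid:  {max(mu_warm, mu_humid):.2f}")
print(f"NOT warm:       {1.0 - mu_warm:.2f}")
```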

<span class="mw-page-title-main">Neuro-fuzzy</span>

In the field of artificial intelligence, the designation neuro-fuzzy refers to combinations of artificial neural networks and fuzzy logic.

In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in the fields that involve dynamically changing data and data models.
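
A minimal sketch of drift detection is shown below, assuming a very simple scheme: the error rate over a recent window is compared with a longer-term reference rate, and drift is flagged when the gap exceeds a fixed margin. The class name, window sizes, and margin are illustrative; practical detectors such as DDM or ADWIN rely on proper statistical tests instead.

```python
from collections import deque

class SimpleDriftMonitor:
    """Toy drift monitor: compares the recent error rate with a longer-term reference."""

    def __init__(self, window=50, margin=0.2):
        self.recent = deque(maxlen=window)        # most recent prediction errors (0/1)
        self.reference = deque(maxlen=10 * window)
        self.margin = margin

    def add_error(self, err):
        self.recent.append(err)
        self.reference.append(err)
        if len(self.recent) < self.recent.maxlen:
            return False                           # not enough evidence yet
        recent_rate = sum(self.recent) / len(self.recent)
        reference_rate = sum(self.reference) / len(self.reference)
        return recent_rate > reference_rate + self.margin   # True = drift suspected

# Usage: errors become frequent halfway through the stream, simulating drift
monitor = SimpleDriftMonitor()
for t in range(400):
    err = int(t > 200 and t % 2 == 0)   # the error pattern changes after t = 200
    if monitor.add_error(err):
        print(f"drift suspected at t = {t}")
        break
```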

Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.

Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology, as well as engineering.

Kunihiko Fukushima is a Japanese computer scientist, most noted for his work on artificial neural networks and deep learning. He is currently working part-time as a senior research scientist at the Fuzzy Logic Systems Institute in Fukuoka, Japan.

Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. This article aims to provide an overview of the most definitive models of neuro-biological computation as well as the tools commonly used to construct and analyze them.

There are many types of artificial neural networks (ANN).

In computer science, an evolving intelligent system is a fuzzy logic system that improves its own performance by evolving its rules. The technique is known from machine learning, in which external patterns are learned by an algorithm. Fuzzy-logic-based machine learning works with neuro-fuzzy systems.

An artificial neural network's learning rule or learning process is a method, mathematical logic, or algorithm that improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating the weights and bias levels of a network when the network is simulated in a specific data environment. A learning rule may accept the existing conditions of the network and compare the expected result with the actual result of the network to give new and improved values for the weights and biases. Depending on the complexity of the actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.
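
As a concrete example, the snippet below applies the perceptron learning rule, one of the simplest learning rules: the expected and actual outputs are compared for each sample, and the weights and bias are nudged in proportion to the error. The synthetic data and learning-rate value are illustrative.

```python
import numpy as np

# Perceptron learning rule: repeatedly compare expected vs actual output and
# update weights and bias in proportion to the error.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # target the network should learn

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                       # the rule is applied repeatedly
    for xi, target in zip(X, y):
        actual = 1.0 if xi @ w + b > 0 else 0.0
        error = target - actual               # expected result minus actual result
        w += lr * error * xi                  # new, improved weights
        b += lr * error                       # new, improved bias

predictions = (X @ w + b > 0).astype(float)
print("training accuracy:", (predictions == y).mean())
```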

<span class="mw-page-title-main">Feature learning</span> Set of learning techniques in machine learning

In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
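
A very simple linear instance of this idea is principal component analysis, sketched below in NumPy: the projection directions are learned from the raw data themselves rather than hand-engineered, and the resulting low-dimensional features can then be fed to a downstream task. Deep networks and autoencoders are the more typical nonlinear examples in practice; the synthetic data here is purely illustrative.

```python
import numpy as np

# PCA as a minimal form of unsupervised feature learning: the feature
# directions are discovered from the raw data instead of being hand-crafted.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                        # true low-dimensional factors
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))   # observed raw 10-D data

X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
components = Vt[:2]                        # learned feature directions
features = X_centered @ components.T       # 2-D learned representation

print("raw dimensionality:", X.shape[1])
print("learned feature dimensionality:", features.shape[1])
print("variance captured:", (S[:2] ** 2).sum() / (S ** 2).sum())
```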

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. It represents a dynamic technique of supervised learning and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms.
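
A brief sketch of this workflow, assuming scikit-learn is available, uses the partial_fit interface so that each newly arriving chunk of data extends the existing model instead of triggering a full retrain; the synthetic data and chunk sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental learning: the model is extended chunk by chunk with partial_fit
# rather than retrained on the full dataset each time new data arrives.
rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])                       # must be declared on the first call

for chunk in range(10):                          # data becomes available gradually
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=classes if chunk == 0 else None)

X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] - X_test[:, 1] > 0).astype(int)
print("accuracy after 10 increments:", model.score(X_test, y_test))
```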

The following outline is provided as an overview of and topical guide to machine learning:

Plamen P. Angelov is a computer scientist. He is a chair professor in Intelligent Systems and Director of Research at the School of Computing and Communications of Lancaster University, Lancaster, United Kingdom, and founding Director of the Lancaster Intelligent, Robotic and Autonomous systems (LIRA) research centre. Angelov was Vice President of the International Neural Networks Society, of which he is now Governor-at-large, and is the founder of the Intelligent Systems Research group and the Data Science group at the School of Computing and Communications. He has also served on the Board of Governors of the IEEE Systems, Man and Cybernetics Society for two terms (2015-2017 and 2022-2024). Angelov was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for contributions to neuro-fuzzy and autonomous learning systems, and he is also a Fellow of ELLIS and the IET. He has been a founding co-Editor-in-Chief of the Evolving Systems journal since 2009, as well as an associate editor of the IEEE Transactions on Cybernetics, IEEE Transactions on Fuzzy Systems, IEEE Transactions on AI, Complex and Intelligent Systems, and other scientific journals. He is a recipient of the 2020 Dennis Gabor Award, IEEE and INNS awards for Outstanding Contributions, The Engineer 2008 special award, and others. He is the author of over 400 publications, including 3 research monographs, 3 granted US patents, over 120 articles in peer-reviewed scientific journals, and over 160 papers in peer-reviewed conference proceedings; these publications have been cited over 15,000 times (h-index 63). His research contributions centre on autonomous learning systems (Wiley, 2012), dynamically self-evolving systems, and the empirical approach to machine learning (Springer Nature, 2012). Most recently, his research addresses the problems of interpretability and explainability (xDNN, 2020), catastrophic forgetting, continual learning, the ability to adapt, and the computational and energy costs of deep foundation models and their whole life cycle.

Machine learning in bioinformatics is the application of machine learning algorithms to bioinformatics, including genomics, proteomics, microarrays, systems biology, evolution, and text mining.

Soft computing is an umbrella term used to describe types of algorithms that produce approximate solutions to high-level problems in computer science that cannot be solved exactly or efficiently. By contrast, traditional hard-computing algorithms rely heavily on concrete data and mathematical models to produce solutions to problems. The term soft computing was coined in the late 20th century, a period in which revolutionary research in three fields greatly shaped the area. Fuzzy logic is a computational paradigm that accommodates uncertainty in data by using degrees of truth rather than the rigid 0s and 1s of binary logic. Neural networks are computational models influenced by the functioning of the human brain. Finally, evolutionary computation describes families of algorithms that mimic natural processes such as evolution and natural selection.

Javier Andreu-Perez is a British computer scientist and a Senior Lecturer and Chair in Smart Health Technologies at the University of Essex. He is also associate editor-in-chief of Neurocomputing for the area of Deep Learning and Machine Learning. Andreu-Perez's research is mainly focused on Human-Centered Artificial Intelligence (HCAI), and he chairs an interdisciplinary lab in this area, HCAI-Essex.

Nikola Kirilov Kasabov also known as Nikola Kirilov Kassabov is a Bulgarian and New Zealand computer scientist, academic and author. He is a professor emeritus of Knowledge Engineering at Auckland University of Technology, Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), George Moore Chair of Data Analytics at Ulster University, as well as visiting professor at both the Institute for Information and Communication Technologies (IICT) at the Bulgarian Academy of Sciences and Dalian University in China. He is also the Founder and Director of Knowledge Engineering Consulting.
