Extreme learning machine


Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they form a random projection but with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are learned in a single step, which essentially amounts to learning a linear model.


The name "extreme learning machine" (ELM) was given to such models by Guang-Bin Huang, who originally proposed such networks with any type of nonlinear piecewise continuous hidden nodes, including biological neurons and different types of mathematical basis functions. [1] [2] The idea of artificial neural networks goes back to Frank Rosenblatt, who not only published a single-layer perceptron in 1958, [3] but also introduced a multilayer perceptron with three layers: an input layer, a hidden layer with randomized weights that did not learn, and a learning output layer. [4] [5]

According to some researchers, these models are able to produce good generalization performance and learn thousands of times faster than networks trained using backpropagation. [6] The literature also shows that these models can outperform support vector machines in both classification and regression applications. [7] [1] [8]

History

From 2001 to 2010, ELM research mainly focused on the unified learning framework for "generalized" single-hidden layer feedforward neural networks (SLFNs), including but not limited to sigmoid networks, RBF networks, threshold networks, [9] trigonometric networks, fuzzy inference systems, Fourier series, [10] [11] Laplacian transforms, wavelet networks, [12] etc. One significant achievement made in those years is the rigorous proof of the universal approximation and classification capabilities of ELM. [10] [13] [14]

From 2010 to 2015, ELM research extended to the unified learning framework for kernel learning, SVM and a few typical feature learning methods such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF). It is shown that SVM actually provides suboptimal solutions compared to ELM, and ELM can provide the whitebox kernel mapping, which is implemented by ELM random feature mapping, instead of the blackbox kernel used in SVM. PCA and NMF can be considered as special cases where linear hidden nodes are used in ELM. [15] [16]

From 2015 to 2017, increased focus was placed on hierarchical implementations [17] [18] of ELM. Additionally, since 2011, significant biological studies have been made that support certain ELM theories. [19] [20] [21]

From 2017 onwards, to overcome the low-convergence problem during training, approaches based on LU decomposition, Hessenberg decomposition and QR decomposition with regularization have begun to attract attention. [22] [23] [24]

In 2017, Google Scholar Blog published a list of "Classic Papers: Articles That Have Stood The Test of Time". [25] Among these are two papers on ELM, appearing as entries 2 and 7 in the "List of 10 classic AI papers from 2006". [26] [27] [28]

Algorithms

Given a single hidden layer of ELM, suppose that the output function of the $i$-th hidden node is $h_i(\mathbf{x}) = G(\mathbf{a}_i, b_i, \mathbf{x})$, where $\mathbf{a}_i$ and $b_i$ are the parameters of the $i$-th hidden node. The output function of the ELM for single hidden layer feedforward networks (SLFN) with $L$ hidden nodes is:

$f_L(\mathbf{x}) = \sum_{i=1}^{L} \beta_i h_i(\mathbf{x})$, where $\beta_i$ is the output weight of the $i$-th hidden node.

$\mathbf{h}(\mathbf{x}) = [h_1(\mathbf{x}), \ldots, h_L(\mathbf{x})]$ is the hidden layer output mapping of ELM. Given $N$ training samples, the hidden layer output matrix $\mathbf{H}$ of ELM is given as:

$\mathbf{H} = \begin{bmatrix} \mathbf{h}(\mathbf{x}_1) \\ \vdots \\ \mathbf{h}(\mathbf{x}_N) \end{bmatrix} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N) \end{bmatrix}$

and $\mathbf{T}$ is the training data target matrix:

$\mathbf{T} = \begin{bmatrix} \mathbf{t}_1^{\mathsf{T}} \\ \vdots \\ \mathbf{t}_N^{\mathsf{T}} \end{bmatrix}$

Generally speaking, ELM is a kind of regularization neural network but with non-tuned hidden layer mappings (formed by either random hidden nodes, kernels or other implementations); its objective function is:

Minimize: $\|\boldsymbol{\beta}\|_p^{\sigma_1} + C \|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|_q^{\sigma_2}$

where $\sigma_1 > 0$, $\sigma_2 > 0$, $p, q = 0, \tfrac{1}{2}, 1, 2, \ldots, +\infty$.

Different combinations of $\sigma_1$, $\sigma_2$, $p$ and $q$ can be used, resulting in different learning algorithms for regression, classification, sparse coding, compression, feature learning and clustering.
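For example, choosing $\sigma_1 = \sigma_2 = 2$ and $p = q = 2$ turns the objective into a ridge regression problem with a closed-form solution. A minimal NumPy sketch (the function name and argument shapes are our own; $C$ is the trade-off constant from the objective above):

```python
import numpy as np

def elm_ridge_output_weights(H, T, C=1.0):
    """Closed-form output weights for the ridge special case
    (sigma1 = sigma2 = 2, p = q = 2):
        minimize ||beta||^2 + C * ||H beta - T||^2.
    Setting the gradient to zero gives
        beta = (I / C + H^T H)^{-1} H^T T.
    H: (N, L) hidden layer output matrix; T: (N, m) target matrix.
    """
    L = H.shape[1]
    return np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
```

Because only the output weights are optimized, this is a single linear solve rather than an iterative training loop.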

As a special case, the simplest ELM training algorithm learns a model of the form (for single hidden layer sigmoid neural networks):

$\hat{\mathbf{Y}} = \mathbf{W}_2 \, \sigma(\mathbf{W}_1 \mathbf{x})$

where $\mathbf{W}_1$ is the matrix of input-to-hidden-layer weights, $\sigma$ is an activation function, and $\mathbf{W}_2$ is the matrix of hidden-to-output-layer weights. The algorithm proceeds as follows:

  1. Fill $\mathbf{W}_1$ with random values (e.g., Gaussian random noise);
  2. estimate $\mathbf{W}_2$ by least-squares fit to a matrix of response variables $\mathbf{Y}$, computed using the pseudoinverse $(\cdot)^{+}$, given a design matrix $\mathbf{X}$: $\mathbf{W}_2 = \sigma(\mathbf{W}_1 \mathbf{X})^{+} \mathbf{Y}$
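The two steps can be sketched in a few lines of NumPy (a minimal illustration, not a reference implementation; the function names are ours, and a random hidden-layer bias vector is added, as is common in practice):

```python
import numpy as np

def elm_fit(X, Y, n_hidden=100, seed=0):
    """Basic single-hidden-layer ELM training.
    X: (n_samples, n_features) design matrix; Y: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    # Step 1: random, never-updated input weights (and biases).
    W1 = rng.standard_normal((n_hidden, X.shape[1]))
    b1 = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W1.T + b1)          # hidden layer output matrix
    # Step 2: least-squares output weights via the Moore-Penrose pseudoinverse.
    W2 = np.linalg.pinv(H) @ Y
    return W1, b1, W2

def elm_predict(X, W1, b1, W2):
    return np.tanh(X @ W1.T + b1) @ W2
```

All learning happens in the single `pinv` solve for `W2`; no gradient descent or backpropagation is involved, which is the source of the training-speed claims quoted above.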

Architectures

In most cases, ELM is used as a single hidden layer feedforward network (SLFN), including but not limited to sigmoid networks, RBF networks, threshold networks, fuzzy inference networks, complex neural networks, wavelet networks, Fourier transforms, Laplacian transforms, etc. Owing to its different learning algorithm implementations for regression, classification, sparse coding, compression, feature learning and clustering, multiple ELMs have also been used to form multi-hidden-layer networks, deep learning or hierarchical networks. [17] [18] [29]

A hidden node in ELM is a computational element, which need not be regarded as a classical neuron. A hidden node in ELM can be a classical artificial neuron, a basis function, or a subnetwork formed by some hidden nodes. [13]

Theories

Both the universal approximation and classification capabilities [7] [1] of ELM have been proved in the literature. Notably, Guang-Bin Huang and his team spent almost seven years (2001–2008) on the rigorous proofs of ELM's universal approximation capability. [10] [13] [14]

Universal approximation capability

In theory, any nonconstant piecewise continuous function can be used as an activation function in ELM hidden nodes; such an activation function need not be differentiable. If tuning the parameters of hidden nodes could make SLFNs approximate any target function $f(\mathbf{x})$, then hidden node parameters can be randomly generated according to any continuous probability distribution, and $\lim_{L \to \infty} \|f_L(\mathbf{x}) - f(\mathbf{x})\| = 0$ holds with probability one for appropriate output weights $\boldsymbol{\beta}$.
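This can be illustrated empirically: drawing hidden-node parameters from a continuous distribution and fitting only the output weights by least squares, the approximation error of a fixed target typically shrinks as hidden nodes are added (an illustrative experiment with our own choice of target function and distributions, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
f = np.sin(x) + 0.5 * np.cos(3 * x)     # fixed target function

def train_error(L):
    # Hidden node parameters a_i, b_i drawn from a continuous distribution.
    a = rng.standard_normal((L, 1))
    b = rng.standard_normal(L)
    H = np.tanh(x @ a.T + b)            # hidden layer output matrix
    beta, *_ = np.linalg.lstsq(H, f, rcond=None)
    return float(np.linalg.norm(H @ beta - f))

errors = [train_error(L) for L in (5, 20, 100)]   # error shrinks as L grows
```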

Classification capability

Given any nonconstant piecewise continuous function as the activation function in SLFNs, if tuning the parameters of hidden nodes can make SLFNs approximate any target function $f(\mathbf{x})$, then SLFNs with random hidden layer mapping $\mathbf{h}(\mathbf{x})$ can separate arbitrary disjoint regions of any shape.

Neurons

A wide range of nonlinear piecewise continuous functions can be used in hidden neurons of ELM, for example:

Real domain

Sigmoid function: $G(\mathbf{a}, b, \mathbf{x}) = \dfrac{1}{1 + \exp(-(\mathbf{a} \cdot \mathbf{x} + b))}$

Fourier function: $G(\mathbf{a}, b, \mathbf{x}) = \sin(\mathbf{a} \cdot \mathbf{x} + b)$

Hardlimit function: $G(\mathbf{a}, b, \mathbf{x}) = \begin{cases} 1, & \text{if } \mathbf{a} \cdot \mathbf{x} - b \geq 0 \\ 0, & \text{otherwise} \end{cases}$

Gaussian function: $G(\mathbf{a}, b, \mathbf{x}) = \exp(-b \|\mathbf{x} - \mathbf{a}\|^2)$

Multiquadrics function: $G(\mathbf{a}, b, \mathbf{x}) = \left( \|\mathbf{x} - \mathbf{a}\|^2 + b^2 \right)^{1/2}$

Wavelet: $G(\mathbf{a}, b, \mathbf{x}) = \|a\|^{-1/2} \, \Psi\!\left( \dfrac{\mathbf{x} - \mathbf{a}}{b} \right)$, where $\Psi$ is a single mother wavelet function.
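For concreteness, these real-domain node types can be written as plain functions in the ELM convention $G(\mathbf{a}, b, \mathbf{x})$ (our own sketch; the wavelet node is omitted since it depends on the chosen mother wavelet $\Psi$):

```python
import numpy as np

# Hidden node output functions G(a, b, x) for real-valued inputs.
def sigmoid(a, b, x):       # additive node
    return 1.0 / (1.0 + np.exp(-(np.dot(a, x) + b)))

def fourier(a, b, x):       # additive node
    return np.sin(np.dot(a, x) + b)

def hardlimit(a, b, x):     # additive node, binary output
    return 1.0 if np.dot(a, x) - b >= 0 else 0.0

def gaussian(a, b, x):      # RBF node; b > 0 acts as the impact factor
    return np.exp(-b * np.sum((x - a) ** 2))

def multiquadric(a, b, x):  # RBF node
    return np.sqrt(np.sum((x - a) ** 2) + b ** 2)
```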

Complex domain

Circular functions: $\sin(z)$, $\tan(z)$

Inverse circular functions: $\arcsin(z)$, $\arccos(z)$, $\arctan(z)$

Hyperbolic functions: $\sinh(z)$, $\tanh(z)$

Inverse hyperbolic functions: $\operatorname{arcsinh}(z)$, $\operatorname{arctanh}(z)$

Reliability

The black-box character of neural networks in general and extreme learning machines (ELM) in particular is one of the major concerns that deters engineers from applying them in unsafe automation tasks. This particular issue has been approached by means of several different techniques. One approach is to reduce the dependence on the random input. [30] [31] Another approach focuses on the incorporation of continuous constraints into the learning process of ELMs, [32] [33] which are derived from prior knowledge about the specific task. This is reasonable because machine learning solutions have to guarantee safe operation in many application domains. The cited studies revealed that the special form of ELMs, with their functional separation and linear read-out weights, is particularly well suited for the efficient incorporation of continuous constraints in predefined regions of the input space.

Controversy

There are two main complaints from the academic community concerning this work: the first is about "reinventing and ignoring previous ideas", and the second is about "improper naming and popularizing", as shown in some debates in 2008 and 2015. [34] In particular, it was pointed out in a letter [35] to the editor of IEEE Transactions on Neural Networks that the idea of using a hidden layer connected to the inputs by random untrained weights had already been suggested in the original papers on RBF networks in the late 1980s; Guang-Bin Huang replied by pointing out subtle differences. [36] In a 2015 paper, [1] Huang responded to complaints about his invention of the name ELM for already-existing methods, complaining of "very negative and unhelpful comments on ELM in neither academic nor professional manner due to various reasons and intentions" and an "irresponsible anonymous attack which intends to destroy harmony research environment", arguing that his work "provides a unifying learning platform" for various types of neural nets, [1] including hierarchical structured ELM. [29] In 2015, Huang also gave a formal rebuttal to what he considered "malign and attack". [37] Recent research replaces the random weights with constrained random weights. [7] [38]

Open sources

See also


References

  1. Huang, Guang-Bin (2015). "What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt's Dream and John von Neumann's Puzzle" (PDF). Cognitive Computation. 7 (3): 263–278. doi:10.1007/s12559-015-9333-0. S2CID 13936498.
  2. Huang, Guang-Bin (2014). "An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels" (PDF). Cognitive Computation. 6 (3): 376–390. doi:10.1007/s12559-014-9255-2. S2CID 7419259.
  3. Rosenblatt, Frank (1958). "The Perceptron: A Probabilistic Model For Information Storage And Organization in the Brain". Psychological Review. 65 (6): 386–408. CiteSeerX 10.1.1.588.3775. doi:10.1037/h0042519. PMID 13602029. S2CID 12781225.
  4. Rosenblatt, Frank (1962). Principles of Neurodynamics. Spartan, New York.
  5. Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning". arXiv:2212.11279 [cs.NE].
  6. Huang, Guang-Bin; Zhu, Qin-Yu; Siew, Chee-Kheong (2006). "Extreme learning machine: theory and applications". Neurocomputing. 70 (1): 489–501. CiteSeerX 10.1.1.217.3692. doi:10.1016/j.neucom.2005.12.126. S2CID 116858.
  7. Huang, Guang-Bin; Zhou, Hongming; Ding, Xiaojian; Zhang, Rui (2012). "Extreme Learning Machine for Regression and Multiclass Classification" (PDF). IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 42 (2): 513–529. CiteSeerX 10.1.1.298.1213. doi:10.1109/tsmcb.2011.2168604. PMID 21984515. S2CID 15037168.
  8. Huang, Guang-Bin (2014). "An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels" (PDF). Cognitive Computation. 6 (3): 376–390. doi:10.1007/s12559-014-9255-2. S2CID 7419259.
  9. Huang, Guang-Bin; Zhu, Qin-Yu; Mao, K. Z.; Siew, Chee-Kheong; Saratchandran, P.; Sundararajan, N. (2006). "Can Threshold Networks Be Trained Directly?" (PDF). IEEE Transactions on Circuits and Systems-II: Express Briefs. 53 (3): 187–191. doi:10.1109/tcsii.2005.857540. S2CID 18076010.
  10. Huang, Guang-Bin; Chen, Lei; Siew, Chee-Kheong (2006). "Universal Approximation Using Incremental Constructive Feedforward Networks with Random Hidden Nodes" (PDF). IEEE Transactions on Neural Networks. 17 (4): 879–892. doi:10.1109/tnn.2006.875977. PMID 16856652. S2CID 6477031.
  11. Rahimi, Ali; Recht, Benjamin (2008). "Weighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning" (PDF). Advances in Neural Information Processing Systems. 21.
  12. Cao, Jiuwen; Lin, Zhiping; Huang, Guang-Bin (2010). "Composite Function Wavelet Neural Networks with Extreme Learning Machine". Neurocomputing. 73 (7–9): 1405–1416. doi:10.1016/j.neucom.2009.12.007.
  13. Huang, Guang-Bin; Chen, Lei (2007). "Convex Incremental Extreme Learning Machine" (PDF). Neurocomputing. 70 (16–18): 3056–3062. doi:10.1016/j.neucom.2007.02.009.
  14. Huang, Guang-Bin; Chen, Lei (2008). "Enhanced Random Search Based Incremental Extreme Learning Machine" (PDF). Neurocomputing. 71 (16–18): 3460–3468. CiteSeerX 10.1.1.217.3009. doi:10.1016/j.neucom.2007.10.008.
  15. He, Qing; Jin, Xin; Du, Changying; Zhuang, Fuzhen; Shi, Zhongzhi (2014). "Clustering in Extreme Learning Machine Feature Space" (PDF). Neurocomputing. 128: 88–95. doi:10.1016/j.neucom.2012.12.063. S2CID 30906342.
  16. Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou (2016). "Dimension Reduction With Extreme Learning Machine" (PDF). IEEE Transactions on Image Processing. 25 (8): 3906–3918. Bibcode:2016ITIP...25.3906K. doi:10.1109/tip.2016.2570569. PMID 27214902. S2CID 1803922.
  17. Huang, Guang-Bin; Bai, Zuo; Kasun, Liyanaarachchi Lekamalage Chamara; Vong, Chi Man (2015). "Local Receptive Fields Based Extreme Learning Machine" (PDF). IEEE Computational Intelligence Magazine. 10 (2): 18–29. doi:10.1109/mci.2015.2405316. S2CID 1417306.
  18. Tang, Jiexiong; Deng, Chenwei; Huang, Guang-Bin (2016). "Extreme Learning Machine for Multilayer Perceptron" (PDF). IEEE Transactions on Neural Networks and Learning Systems. 27 (4): 809–821. doi:10.1109/tnnls.2015.2424995. PMID 25966483. S2CID 206757279.
  19. Barak, Omri; Rigotti, Mattia; Fusi, Stefano (2013). "The Sparseness of Mixed Selectivity Neurons Controls the Generalization-Discrimination Trade-off". Journal of Neuroscience. 33 (9): 3844–3856. doi:10.1523/jneurosci.2753-12.2013. PMC 6119179. PMID 23447596.
  20. Rigotti, Mattia; Barak, Omri; Warden, Melissa R.; Wang, Xiao-Jing; Daw, Nathaniel D.; Miller, Earl K.; Fusi, Stefano (2013). "The Importance of Mixed Selectivity in Complex Cognitive Tasks". Nature. 497 (7451): 585–590. Bibcode:2013Natur.497..585R. doi:10.1038/nature12160. PMC 4412347. PMID 23685452.
  21. Fusi, Stefano; Miller, Earl K.; Rigotti, Mattia (2015). "Why Neurons Mix: High Dimensionality for Higher Cognition" (PDF). Current Opinion in Neurobiology. 37: 66–74. doi:10.1016/j.conb.2016.01.010. PMID 26851755. S2CID 13897721.
  22. Kutlu, Yakup; Yayık, Apdullah; Yıldırım, Esen; Yıldırım, Serdar (2017). "LU triangularization extreme learning machine in EEG cognitive task classification". Neural Computing and Applications. 31 (4): 1117–1126. doi:10.1007/s00521-017-3142-1. S2CID 6572895.
  23. Yayık, Apdullah; Kutlu, Yakup; Altan, Gökhan (12 July 2019). "Regularized HessELM and Inclined Entropy Measurement for Congestive Heart Failure Prediction". arXiv:1907.05888 [cs.LG].
  24. Altan, Gökhan; Kutlu, Yakup; Pekmezci, Adnan Özhan; Yayık, Apdullah (2018). "Diagnosis of Chronic Obstructive Pulmonary Disease using Deep Extreme Learning Machines with LU Autoencoder Kernel". International Conference on Advanced Technologies.
  25. "Classic Papers: Articles That Have Stood The Test of Time". University of Nottingham. 15 June 2017. Retrieved 21 December 2023.
  26. "List of 10 classic AI papers from 2006". 2017. Retrieved 21 December 2023.
  27. Huang, G. B.; Zhu, Q. Y.; Siew, C. K. (December 2006). "Extreme learning machine: theory and applications". Neurocomputing. 70 (1–3): 489–501. doi:10.1016/j.neucom.2005.12.126. ISSN 0925-2312. S2CID 116858. Retrieved 21 December 2023.
  28. Liang, N. Y.; Huang, G. B.; Saratchandran, P.; Sundararajan, N. (November 2006). "A fast and accurate online sequential learning algorithm for feedforward networks". IEEE Transactions on Neural Networks. 17 (6): 1411–1423. doi:10.1109/TNN.2006.880583. PMID 17131657. S2CID 7028394. Retrieved 21 December 2023.
  29. Zhu, W.; Miao, J.; Qing, L.; Huang, G. B. (2015-07-01). "Hierarchical Extreme Learning Machine for unsupervised representation learning". 2015 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. doi:10.1109/IJCNN.2015.7280669. ISBN 978-1-4799-1960-4. S2CID 14222151.
  30. Neumann, Klaus; Steil, Jochen J. (2011). "Batch intrinsic plasticity for extreme learning machines". Proc. of International Conference on Artificial Neural Networks: 339–346.
  31. Neumann, Klaus; Steil, Jochen J. (2013). "Optimizing extreme learning machines via ridge regression and batch intrinsic plasticity". Neurocomputing. 102: 23–30. doi:10.1016/j.neucom.2012.01.041.
  32. Neumann, Klaus; Rolf, Matthias; Steil, Jochen J. (2013). "Reliable integration of continuous constraints into extreme learning machines". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. 21 (supp02): 35–50. doi:10.1142/S021848851340014X. ISSN 0218-4885.
  33. Neumann, Klaus (2014). Reliability. University Library Bielefeld. pp. 49–74.
  34. "The Official Homepage on Origins of Extreme Learning Machines (ELM)". Retrieved 15 December 2018.
  35. Wang, Lipo P.; Wan, Chunru R. (2008). "Comments on "The Extreme Learning Machine"". IEEE Transactions on Neural Networks. 19 (8): 1494–5, author reply 1495–6. CiteSeerX 10.1.1.217.2330. doi:10.1109/TNN.2008.2002273. PMID 18701376.
  36. Huang, Guang-Bin (2008). "Reply to "Comments on 'The Extreme Learning Machine'"". IEEE Transactions on Neural Networks. 19 (8): 1495–1496. doi:10.1109/tnn.2008.2002275. S2CID 14720232.
  37. Huang, Guang-Bin (2015). "WHO behind the malign and attack on ELM, GOAL of the attack and ESSENCE of ELM" (PDF). www.extreme-learning-machines.org.
  38. Zhu, W.; Miao, J.; Qing, L. (2014-07-01). "Constrained Extreme Learning Machine: A novel highly discriminative random feedforward neural network". 2014 International Joint Conference on Neural Networks (IJCNN). pp. 800–807. doi:10.1109/IJCNN.2014.6889761. ISBN 978-1-4799-1484-5. S2CID 5769519.
  39. Akusok, Anton; Bjork, Kaj-Mikael; Miche, Yoan; Lendasse, Amaury (2015). "High-Performance Extreme Learning Machines: A Complete Toolbox for Big Data Applications". IEEE Access. 3: 1011–1025. Bibcode:2015IEEEA...3.1011A. doi:10.1109/access.2015.2450498.