Long short-term memory

The Long Short-Term Memory (LSTM) cell can process data sequentially and keep its hidden state through time.

Long short-term memory (LSTM) [1] is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem [2] commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide a short-term memory for RNNs that can last thousands of timesteps (thus "long short-term memory"). [1] The name is made in analogy with long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century.

An LSTM unit is typically composed of a cell and three gates: an input gate, an output gate, [3] and a forget gate. [4] The cell remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state, by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output, by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.
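
As a rough illustration of this gating mechanism, the following toy NumPy sketch (all names and sizes are made up for illustration, not taken from the cited papers) computes a forget gate from the previous hidden state and the current input and applies it element-wise to the cell state:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, h = 4, 3                       # toy sizes: input features, hidden units
x_t = rng.normal(size=d)          # current input
h_prev = rng.normal(size=h)       # previous hidden state
c_prev = rng.normal(size=h)       # previous cell state

# Forget gate: maps the previous state and current input to values in (0, 1).
W_f, U_f, b_f = rng.normal(size=(h, d)), rng.normal(size=(h, h)), np.zeros(h)
f_t = sigmoid(W_f @ x_t + U_f @ h_prev + b_f)

# Element-wise gating: entries of f_t near 1 keep the corresponding cell
# values, entries near 0 discard them.
print(f_t, f_t * c_prev)
```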

LSTM has wide applications in classification, [5] [6] data processing, time series analysis tasks, [7] speech recognition, [8] [9] machine translation, [10] [11] speech activity detection, [12] robot control, [13] [14] video games, [15] [16] and healthcare. [17]

Motivation

In theory, classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem. [18]

The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information. [4] In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies. [19] An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that this information is no longer important after the verb is.

Variants

In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^{h}$ is not just one unit of one LSTM cell, but contains $h$ LSTM cell units.

See [20] for an empirical study of 8 architectural variants of LSTM.

LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are: [1] [4]

$f_t = \sigma_g(W_f x_t + U_f h_{t-1} + b_f)$
$i_t = \sigma_g(W_i x_t + U_i h_{t-1} + b_i)$
$o_t = \sigma_g(W_o x_t + U_o h_{t-1} + b_o)$
$\tilde{c}_t = \sigma_c(W_c x_t + U_c h_{t-1} + b_c)$
$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$
$h_t = o_t \odot \sigma_h(c_t)$

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\odot$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.

Variables

Letting the superscripts $d$ and $h$ refer to the number of input features and number of hidden units, respectively:

  • $x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
  • $f_t \in (0,1)^{h}$: forget gate's activation vector
  • $i_t \in (0,1)^{h}$: input/update gate's activation vector
  • $o_t \in (0,1)^{h}$: output gate's activation vector
  • $h_t \in (-1,1)^{h}$: hidden state vector, also known as the output vector of the LSTM unit
  • $\tilde{c}_t \in (-1,1)^{h}$: cell input activation vector
  • $c_t \in \mathbb{R}^{h}$: cell state vector
  • $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters which need to be learned during training

Activation functions

  • $\sigma_g$: sigmoid function.
  • $\sigma_c$: hyperbolic tangent function.
  • $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper [21] [22] suggests, $\sigma_h(x) = x$.
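
The following is a minimal NumPy sketch of the forward pass defined above. The variable names mirror the symbols in this section; the random parameter values are placeholders standing in for learned weights, so the sketch only makes the equations concrete rather than reproducing any trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One time step of an LSTM cell with a forget gate (vector notation)."""
    W, U, b = params["W"], params["U"], params["b"]
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])      # forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])      # input gate
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])      # output gate
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # cell input activation
    c_t = f_t * c_prev + i_t * c_tilde                          # new cell state
    h_t = o_t * np.tanh(c_t)                                    # new hidden state / output
    return h_t, c_t

d, h = 5, 4                                   # number of input features / hidden units
rng = np.random.default_rng(42)
params = {
    "W": {g: rng.normal(scale=0.1, size=(h, d)) for g in "fioc"},
    "U": {g: rng.normal(scale=0.1, size=(h, h)) for g in "fioc"},
    "b": {g: np.zeros(h) for g in "fioc"},
}

h_t, c_t = np.zeros(h), np.zeros(h)           # initial values h_0 = 0 and c_0 = 0
for x_t in rng.normal(size=(10, d)):          # a toy sequence of 10 time steps
    h_t, c_t = lstm_step(x_t, h_t, c_t, params)
print(h_t)
```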

Peephole LSTM

A peephole LSTM unit with input (i.e. $i$), output (i.e. $o$), and forget (i.e. $f$) gates

The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM). [21] [22] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state. [21] Here $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places:

$f_t = \sigma_g(W_f x_t + U_f c_{t-1} + b_f)$
$i_t = \sigma_g(W_i x_t + U_i c_{t-1} + b_i)$
$o_t = \sigma_g(W_o x_t + U_o c_{t-1} + b_o)$
$c_t = f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c)$
$h_t = o_t \odot \sigma_h(c_t)$

Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates at time step $t$.

The 3 exit arrows from the memory cell $c$ to the 3 gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell at time step $t-1$, i.e. the contribution of $c_{t-1}$ (and not $c_t$, as the picture may suggest). In other words, the gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e., respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell at time step $t-1$, i.e. $c_{t-1}$.

The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_t$.

The little circles containing a $\times$ symbol represent an element-wise multiplication between their inputs. The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
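
For comparison with the forget-gate cell above, a sketch of one peephole step is given below, assuming the common formulation in which the gates read $c_{t-1}$, the cell update has no recurrent $U_c$ term, and the output activation $\sigma_h$ is the identity; only the recurrent term inside each gate changes relative to the earlier sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, params):
    """One step of a peephole LSTM: the gates see c_{t-1} instead of h_{t-1}."""
    W, U, b = params["W"], params["U"], params["b"]
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ c_prev + b["f"])   # forget gate (peephole)
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ c_prev + b["i"])   # input gate (peephole)
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ c_prev + b["o"])   # output gate (peephole)
    c_t = f_t * c_prev + i_t * np.tanh(W["c"] @ x_t + b["c"])
    h_t = o_t * c_t                                          # sigma_h taken as the identity
    return h_t, c_t
```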

Peephole convolutional LSTM

Peephole convolutional LSTM: [23]

$f_t = \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \odot c_{t-1} + b_f)$
$i_t = \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \odot c_{t-1} + b_i)$
$c_t = f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c)$
$o_t = \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \odot c_t + b_o)$
$h_t = o_t \odot \sigma_h(c_t)$

where $*$ denotes the convolution operator.

Training

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, so that each weight of the LSTM network is changed in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is because $\lim_{n\to\infty} W^{n} = 0$ if the spectral radius of the recurrent weight matrix $W$ is smaller than 1. [2] [24]
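
A small numerical illustration of this point (a toy example, not drawn from the cited works): repeatedly applying a matrix whose spectral radius is below 1 drives the back-propagated signal towards zero.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale so the spectral radius is 0.9

g = np.ones(8)                               # stand-in for a back-propagated error signal
for n in (1, 10, 100, 1000):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n) @ g))
# The norm shrinks roughly like 0.9**n, i.e. exponentially in the time lag n.
```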

However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.

CTC score function

Many applications use stacks of LSTM RNNs [25] and train them by connectionist temporal classification (CTC) [5] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
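
For illustration only, here is a minimal sketch of such a setup using PyTorch's nn.LSTM and nn.CTCLoss; the shapes, label alphabet, and hyperparameters are arbitrary placeholders rather than values from the cited systems.

```python
import torch
import torch.nn as nn

T, N, C, F = 50, 4, 20, 13        # time steps, batch, classes (incl. blank=0), input features

lstm = nn.LSTM(input_size=F, hidden_size=64, num_layers=2, bidirectional=True)
proj = nn.Linear(128, C)          # 2 * hidden_size -> per-frame class scores
ctc = nn.CTCLoss(blank=0)
optim = torch.optim.Adam(list(lstm.parameters()) + list(proj.parameters()), lr=1e-3)

x = torch.randn(T, N, F)                                   # dummy input sequences
targets = torch.randint(1, C, (N, 10))                     # dummy label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

out, _ = lstm(x)                                           # (T, N, 128)
log_probs = proj(out).log_softmax(dim=-1)                  # (T, N, C), as required by CTCLoss
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                            # gradients via backprop through time
optim.step()
```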

Alternatives

Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution [7] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).

Applications

Applications of LSTM include:

2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice. [50] [51] According to the official blog post, the new model cut transcription errors by 49%. [52]

2016: Google started using an LSTM to suggest messages in the Allo conversation app. [53] In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%. [10] [54] [55]

Apple announced at its Worldwide Developers Conference that it would start using LSTM for QuickType [56] [57] [58] in the iPhone and for Siri. [59] [60]

Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology. [61]

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks. [11]

Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory". [62]

2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2, [15] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity. [14] [63]

2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II. [16] [63]

History

Development

Aspects of LSTM were anticipated by "focused back-propagation" (Mozer, 1989), [64] cited by the LSTM paper. [1]

Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method. [2] His supervisor, Jürgen Schmidhuber, considered the thesis highly significant. [65]

An early version of LSTM was published in 1995 in a technical report by Sepp Hochreiter and Jürgen Schmidhuber, [66] and was then published at the NIPS 1996 conference. [3]

The most commonly used reference point for LSTM was published in 1997 in the journal Neural Computation. [1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells, input gates and output gates. [20]

(Felix Gers, Jürgen Schmidhuber, and Fred Cummins, 1999) [67] introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state. [20] This is the most commonly used version of LSTM nowadays.

(Gers, Schmidhuber, and Cummins, 2000) added peephole connections. [21] [22] Additionally, the output activation function was omitted. [20]

Development of variants

(Graves, Fernandez, Gomez, and Schmidhuber, 2006) [5] introduced a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences.

(Graves, Schmidhuber, 2005) [26] published LSTM with full backpropagation through time and bidirectional LSTM.

(Kyunghyun Cho et al., 2014) [68] published a simplified variant of the forget gate LSTM [67] called Gated recurrent unit (GRU).

(Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber, 2015) used LSTM principles [67] to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. [69] [70] [71] Concurrently, the ResNet architecture was developed. It is equivalent to an open-gated or gateless highway network. [72]

A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter (Beck et al., 2024). [73] [74] One of the two block types (mLSTM) is parallelizable like the Transformer architecture, while the other (sLSTM) allows state tracking.

Applications

2004: First successful application of LSTM to speech recognition, by Alex Graves et al. [75] [63]

2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models. [21] [63]

Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm). [76]

2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher. [7]

Mayer et al. trained LSTM to control robots. [13]

2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher. [77]

Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology. [37]

2009: Justin Bayer et al. introduced neural architecture search for LSTM. [78] [63]

2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. [79] One was the most accurate model in the competition and another was the fastest. [80] This was the first time an RNN won international competitions. [63]

2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset. [28]

2017: Researchers from Michigan State University, IBM Research, and Cornell University published a study at the Knowledge Discovery and Data Mining (KDD) conference. [81] [82] [83] Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.

References

  1. 1 2 3 4 5 Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation . 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID   9377276. S2CID   1915014.
  2. 1 2 3 Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science.
  3. 1 2 Hochreiter, Sepp; Schmidhuber, Jürgen (1996-12-03). "LSTM can solve hard long time lag problems". Proceedings of the 9th International Conference on Neural Information Processing Systems. NIPS'96. Cambridge, MA, USA: MIT Press: 473–479.
  4. 1 2 3 Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation . 12 (10): 2451–2471. CiteSeerX   10.1.1.55.5709 . doi:10.1162/089976600300015015. PMID   11032042. S2CID   11598600.
  5. 1 2 3 Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Jürgen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX   10.1.1.75.6306 .
  6. Karim, Fazle; Majumdar, Somshubra; Darabi, Houshang; Chen, Shun (2018). "LSTM Fully Convolutional Networks for Time Series Classification". IEEE Access. 6: 1662–1669. arXiv: 1709.05206 . doi:10.1109/ACCESS.2017.2779939. ISSN   2169-3536.
  7. 1 2 3 4 Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). "Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858.
  8. Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). "Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling" (PDF). Archived from the original (PDF) on 2018-04-24.
  9. Li, Xiangang; Wu, Xihong (2014-10-15). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv: 1410.4281 [cs.CL].
  10. 1 2 Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin (2016-09-26). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv: 1609.08144 [cs.CL].
  11. 1 2 Ong, Thuy (4 August 2017). "Facebook's translations are now powered completely by AI". www.allthingsdistributed.com. Retrieved 2019-02-15.
  12. Sahidullah, Md; Patino, Jose; Cornell, Samuele; Yin, Ruiking; Sivasankaran, Sunit; Bredin, Herve; Korshunov, Pavel; Brutti, Alessio; Serizel, Romain; Vincent, Emmanuel; Evans, Nicholas; Marcel, Sebastien; Squartini, Stefano; Barras, Claude (2019-11-06). "The Speed Submission to DIHARD II: Contributions & Lessons Learned". arXiv: 1911.02388 [eess.AS].
  13. 1 2 3 Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). "A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks". 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 543–548. CiteSeerX   10.1.1.218.3399 . doi:10.1109/IROS.2006.282190. ISBN   978-1-4244-0258-8. S2CID   12284900.
  14. 1 2 "Learning Dexterity". OpenAI. July 30, 2018. Retrieved 2023-06-28.
  15. 1 2 Rodriguez, Jesus (July 2, 2018). "The Science Behind OpenAI Five that just Produced One of the Greatest Breakthrough in the History of AI". Towards Data Science. Archived from the original on 2019-12-26. Retrieved 2019-01-15.
  16. 1 2 Stanford, Stacy (January 25, 2019). "DeepMind's AI, AlphaStar Showcases Significant Progress Towards AGI". Medium ML Memoirs. Retrieved 2019-01-15.
  17. Schmidhuber, Jürgen (2021). "The 2010s: Our Decade of Deep Learning / Outlook on the 2020s". AI Blog. IDSIA, Switzerland. Retrieved 2022-04-30.
  18. Calin, Ovidiu (14 February 2020). Deep Learning Architectures. Cham, Switzerland: Springer Nature. p. 555. ISBN   978-3-030-36720-6.
  19. Lakretz, Yair; Kruszewski, German; Desbordes, Theo; Hupkes, Dieuwke; Dehaene, Stanislas; Baroni, Marco (2019). "The emergence of number and syntax units in LSTM language models" (PDF). Association for Computational Linguistics. pp. 11–20. doi:10.18653/v1/N19-1002. hdl:11245.1/16cb6800-e10d-4166-8e0b-fed61ca6ebb4. S2CID 81978369.
  20. 1 2 3 4 Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; Bas R. Steunebrink; Jürgen Schmidhuber (2015). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems. 28 (10): 2222–2232. arXiv: 1503.04069 . Bibcode:2015arXiv150304069G. doi:10.1109/TNNLS.2016.2582924. PMID   27411231. S2CID   3356463.
  21. 1 2 3 4 5 6 Gers, F. A.; Schmidhuber, J. (2001). "LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages" (PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769. PMID   18249962. S2CID   10192330.
  22. 1 2 3 4 Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). "Learning precise timing with LSTM recurrent networks" (PDF). Journal of Machine Learning Research. 3: 115–143.
  23. Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-kin Wong; Wang-chun Woo (2015). "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting". Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810. arXiv: 1506.04214 . Bibcode:2015arXiv150604214S.
  24. Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies (PDF Download Available)". In Kremer and, S. C.; Kolen, J. F. (eds.). A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
  25. Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in structured domains with hierarchical recurrent neural networks". Proc. 20th Int. Joint Conf. On Artificial Intelligence, Ijcai 2007: 774–779. CiteSeerX   10.1.1.79.1887 .
  26. 1 2 Graves, A.; Schmidhuber, J. (2005). "Framewise phoneme classification with bidirectional LSTM and other neural network architectures". Neural Networks. 18 (5–6): 602–610. CiteSeerX   10.1.1.331.5800 . doi:10.1016/j.neunet.2005.06.042. PMID   16112549. S2CID   1856462.
  27. Fernández, S.; Graves, A.; Schmidhuber, J. (9 September 2007). "An Application of Recurrent Neural Networks to Discriminative Keyword Spotting". Proceedings of the 17th International Conference on Artificial Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag: 220–229. ISBN   978-3540746935 . Retrieved 28 December 2023.
  28. 1 2 Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). "Speech recognition with deep recurrent neural networks". 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 6645–6649. arXiv: 1303.5778 . doi:10.1109/ICASSP.2013.6638947. ISBN   978-1-4799-0356-6. S2CID   206741496.
  29. Kratzert, Frederik; Klotz, Daniel; Shalev, Guy; Klambauer, Günter; Hochreiter, Sepp; Nearing, Grey (2019-12-17). "Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets". Hydrology and Earth System Sciences. 23 (12): 5089–5110. arXiv: 1907.08456 . Bibcode:2019HESS...23.5089K. doi: 10.5194/hess-23-5089-2019 . ISSN   1027-5606.
  30. Eck, Douglas; Schmidhuber, Jürgen (2002-08-28). "Learning the Long-Term Structure of the Blues". Artificial Neural Networks — ICANN 2002. Lecture Notes in Computer Science. Vol. 2415. Springer, Berlin, Heidelberg. pp. 284–289. CiteSeerX   10.1.1.116.3620 . doi:10.1007/3-540-46084-5_47. ISBN   978-3540460848.
  31. Schmidhuber, J.; Gers, F.; Eck, D.; Schmidhuber, J.; Gers, F. (2002). "Learning nonregular languages: A comparison of simple recurrent networks and LSTM". Neural Computation. 14 (9): 2039–2041. CiteSeerX   10.1.1.11.7369 . doi:10.1162/089976602320263980. PMID   12184841. S2CID   30459046.
  32. Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets". Neural Networks. 16 (2): 241–250. CiteSeerX   10.1.1.381.1992 . doi:10.1016/s0893-6080(02)00219-8. PMID   12628609.
  33. A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
  34. Graves, A.; Fernández, S.; Liwicki, M.; Bunke, H.; Schmidhuber, J. (3 December 2007). "Unconstrained Online Handwriting Recognition with Recurrent Neural Networks". Proceedings of the 20th International Conference on Neural Information Processing Systems. NIPS'07. USA: Curran Associates Inc.: 577–584. ISBN   9781605603520 . Retrieved 28 December 2023.
  35. Baccouche, M.; Mamalet, F.; Wolf, C.; Garcia, C.; Baskurt, A. (2011). "Sequential Deep Learning for Human Action Recognition". In Salah, A. A.; Lepri, B. (eds.). 2nd International Workshop on Human Behavior Understanding (HBU). Lecture Notes in Computer Science. Vol. 7065. Amsterdam, Netherlands: Springer. pp. 29–39. doi:10.1007/978-3-642-25446-8_4. ISBN   978-3-642-25445-1.
  36. Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2018-01-30). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv: 1801.10111 [cs.CV].
  37. 1 2 Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). "Fast model-based protein homology detection without alignment". Bioinformatics. 23 (14): 1728–1736. doi: 10.1093/bioinformatics/btm247 . PMID   17488755.
  38. Thireou, T.; Reczko, M. (2007). "Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID   17666763. S2CID   11787259.
  39. Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short Term Memory Networks for Anomaly Detection in Time Series" (PDF). European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015. Archived from the original (PDF) on 2020-10-30. Retrieved 2018-02-21.
  40. Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). "Predictive Business Process Monitoring with LSTM Neural Networks". Advanced Information Systems Engineering. Lecture Notes in Computer Science. Vol. 10253. pp. 477–492. arXiv: 1612.02130 . doi:10.1007/978-3-319-59536-8_30. ISBN   978-3-319-59535-1. S2CID   2192354.
  41. Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks". JMLR Workshop and Conference Proceedings. 56: 301–318. arXiv: 1511.05942 . Bibcode:2015arXiv151105942C. PMC   5341604 . PMID   28286600.
  42. Jia, Robin; Liang, Percy (2016). "Data Recombination for Neural Semantic Parsing". arXiv: 1606.03622 [cs.CL].
  43. Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018-05-22). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation" (PDF). Sensors. 18 (5): 1657. Bibcode:2018Senso..18.1657W. doi: 10.3390/s18051657 . ISSN   1424-8220. PMC   5982167 . PMID   29789447.
  44. Duan, Xuhuan; Wang, Le; Zhai, Changbo; Zheng, Nanning; Zhang, Qilin; Niu, Zhenxing; Hua, Gang (2018). "Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". 2018 25th IEEE International Conference on Image Processing (ICIP). 25th IEEE International Conference on Image Processing (ICIP). pp. 918–922. doi:10.1109/icip.2018.8451692. ISBN   978-1-4799-7061-2.
  45. Orsini, F.; Gastaldi, M.; Mantecchini, L.; Rossi, R. (2019). Neural networks trained with WiFi traces to predict airport passenger behavior. 6th International Conference on Models and Technologies for Intelligent Transportation Systems. Krakow: IEEE. arXiv: 1910.14026 . doi:10.1109/MTITS.2019.8883365. 8883365.
  46. Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.Y.; Liu, J. (2017). "LSTM network: A deep learning approach for Short-term traffic forecast". IET Intelligent Transport Systems. 11 (2): 68–75. doi:10.1049/iet-its.2016.0208. S2CID   114567527.
  47. Gupta A, Müller AT, Huisman BJH, Fuchs JA, Schneider P, Schneider G (2018). "Generative Recurrent Networks for De Novo Drug Design". Mol Inform. 37 (1–2). doi:10.1002/minf.201700111. PMC 5836943. PMID 29095571.
  48. Saiful Islam, Md.; Hossain, Emam (2020-10-26). "Foreign Exchange Currency Rate Prediction using a GRU-LSTM Hybrid Network". Soft Computing Letters. 3: 100009. doi: 10.1016/j.socl.2020.100009 . ISSN   2666-2221.
  49. Martin, Abbey; Hill, Andrew J.; Seiler, Konstantin M.; Balamurali, Mehala (2023). "Automatic excavator action recognition and localisation for untrimmed video using hybrid LSTM-Transformer networks". International Journal of Mining, Reclamation and Environment. doi:10.1080/17480930.2023.2290364.
  50. Beaufays, Françoise (August 11, 2015). "The neural networks behind Google Voice transcription". Research Blog. Retrieved 2017-06-27.
  51. Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan (September 24, 2015). "Google voice search: faster and more accurate". Research Blog. Retrieved 2017-06-27.
  52. "Neon prescription... or rather, New transcription for Google Voice". Official Google Blog. 23 July 2015. Retrieved 2020-04-25.
  53. Khaitan, Pranav (May 18, 2016). "Chat Smarter with Allo". Research Blog. Retrieved 2017-06-27.
  54. Metz, Cade (September 27, 2016). "An Infusion of AI Makes Google Translate More Powerful Than Ever | WIRED". Wired. Retrieved 2017-06-27.
  55. "A Neural Network for Machine Translation, at Production Scale". Google AI Blog. 27 September 2016. Retrieved 2020-04-25.
  56. Efrati, Amir (June 13, 2016). "Apple's Machines Can Learn Too". The Information. Retrieved 2017-06-27.
  57. Ranger, Steve (June 14, 2016). "iPhone, AI and big data: Here's how Apple plans to protect your privacy". ZDNet. Retrieved 2017-06-27.
  58. "Can Global Semantic Context Improve Neural Language Models? – Apple". Apple Machine Learning Journal. Retrieved 2020-04-30.
  59. Smith, Chris (2016-06-13). "iOS 10: Siri now works in third-party apps, comes with extra AI features". BGR. Retrieved 2017-06-27.
  60. Capes, Tim; Coles, Paul; Conkie, Alistair; Golipour, Ladan; Hadjitarkhani, Abie; Hu, Qiong; Huddleston, Nancy; Hunt, Melvyn; Li, Jiangchuan; Neeracher, Matthias; Prahallad, Kishore (2017-08-20). "Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System". Interspeech 2017. ISCA: 4011–4015. doi:10.21437/Interspeech.2017-1798.
  61. Vogels, Werner (30 November 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS. – All Things Distributed". www.allthingsdistributed.com. Retrieved 2017-06-27.
  62. Xiong, W.; Wu, L.; Alleva, F.; Droppo, J.; Huang, X.; Stolcke, A. (April 2018). "The Microsoft 2017 Conversational Speech Recognition System". 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. pp. 5934–5938. doi:10.1109/ICASSP.2018.8461870. ISBN   978-1-5386-4658-8.
  63. 1 2 3 4 5 6 Schmidhuber, Juergen (10 May 2021). "Deep Learning: Our Miraculous Year 1990-1991". arXiv: 2005.05744 [cs.NE].
  64. Mozer, Mike (1989). "A Focused Backpropagation Algorithm for Temporal Pattern Recognition". Complex Systems.
  65. Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning". arXiv: 2212.11279 [cs.NE].
  66. Sepp Hochreiter; Jürgen Schmidhuber (21 August 1995), Long Short Term Memory, Wikidata   Q98967430
  67. 1 2 3 Gers, Felix; Schmidhuber, Jürgen; Cummins, Fred (1999). "Learning to forget: Continual prediction with LSTM". 9th International Conference on Artificial Neural Networks: ICANN '99. Vol. 1999. pp. 850–855. doi:10.1049/cp:19991218. ISBN   0-85296-721-7.
  68. Cho, Kyunghyun; van Merrienboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". arXiv: 1406.1078 [cs.CL].
  69. Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2 May 2015). "Highway Networks". arXiv: 1505.00387 [cs.LG].
  70. Srivastava, Rupesh K; Greff, Klaus; Schmidhuber, Juergen (2015). "Training Very Deep Networks". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.: 2377–2385.
  71. Schmidhuber, Jürgen (2021). "The most cited neural networks all build on work done in my labs". AI Blog. IDSIA, Switzerland. Retrieved 2022-04-30.
  72. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE. pp. 770–778. arXiv: 1512.03385 . doi:10.1109/CVPR.2016.90. ISBN   978-1-4673-8851-1.
  73. Beck, Maximilian; Pöppel, Korbinian; Spanring, Markus; Auer, Andreas; Prudnikova, Oleksandra; Kopp, Michael; Klambauer, Günter; Brandstetter, Johannes; Hochreiter, Sepp (2024-05-07). "xLSTM: Extended Long Short-Term Memory". arXiv: 2405.04517 [cs.LG].
  74. NX-AI/xlstm, NXAI, 2024-06-04, retrieved 2024-06-04
  75. Graves, Alex; Beringer, Nicole; Eck, Douglas; Schmidhuber, Juergen (2004). Biologically Plausible Speech Recognition with LSTM Neural Nets. Workshop on Biologically Inspired Approaches to Advanced Information Technology, Bio-ADIT 2004, Lausanne, Switzerland. pp. 175–184.
  76. Hochreiter, S.; Younger, A. S.; Conwell, P. R. (2001). "Learning to Learn Using Gradient Descent". Artificial Neural Networks — ICANN 2001 (PDF). Lecture Notes in Computer Science. Vol. 2130. pp. 87–94. CiteSeerX   10.1.1.5.323 . doi:10.1007/3-540-44668-0_13. ISBN   978-3-540-42486-4. ISSN   0302-9743. S2CID   52872549.
  77. Wierstra, Daan; Foerster, Alexander; Peters, Jan; Schmidhuber, Juergen (2005). "Solving Deep Memory POMDPs with Recurrent Policy Gradients". International Conference on Artificial Neural Networks ICANN'07.
  78. Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Juergen (2009). "Evolving memory cell structures for sequence learning". International Conference on Artificial Neural Networks ICANN'09, Cyprus.
  79. Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. CiteSeerX   10.1.1.139.4502 . doi:10.1109/tpami.2008.137. ISSN   0162-8828. PMID   19299860. S2CID   14635907.
  80. Märgner, Volker; Abed, Haikal El (July 2009). "ICDAR 2009 Arabic Handwriting Recognition Competition". 2009 10th International Conference on Document Analysis and Recognition. pp. 1383–1387. doi:10.1109/ICDAR.2009.256. ISBN   978-1-4244-4500-4. S2CID   52851337.
  81. "Patient Subtyping via Time-Aware LSTM Networks" (PDF). msu.edu. Retrieved 21 Nov 2018.
  82. "Patient Subtyping via Time-Aware LSTM Networks". Kdd.org. Retrieved 24 May 2018.
  83. "SIGKDD". Kdd.org. Retrieved 24 May 2018.

Further reading