Reservoir computing


Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. [1] After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. [1] The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed. [1] The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost. [2]


History

The first examples of reservoir neural networks demonstrated that randomly connected recurrent neural networks could be used for simple forms of interval and speech discrimination. [3] [4] In these early models the memory in the network took the form of both short-term synaptic plasticity and activity mediated by recurrent connections. In other early reservoir neural network models the memory of the recent stimulus history was provided solely by the recurrent activity. [5] [6] Overall, the general concept of reservoir computing stems from the use of recurrent connections within neural networks to create a complex dynamical system. [7] It is a generalisation of earlier neural network architectures such as recurrent neural networks, liquid-state machines and echo-state networks. Reservoir computing also extends to physical systems that are not networks in the classical sense, but rather continuous systems in space and/or time: e.g. a literal "bucket of water" can serve as a reservoir that performs computations on inputs given as perturbations of the surface. [8] The resultant complexity of such recurrent neural networks was found to be useful in solving a variety of problems including language processing and dynamic system modeling. [7] However, training of recurrent neural networks is challenging and computationally expensive. [7] Reservoir computing reduces those training-related challenges by fixing the dynamics of the reservoir and only training the linear output layer. [7]

A large variety of nonlinear dynamical systems can serve as a reservoir that performs computations. In recent years, semiconductor lasers have attracted considerable interest, as computation with them can be fast and energy-efficient compared to electronic components.

Recent advances in both AI and quantum information theory have given rise to the concept of quantum neural networks. [9] These hold promise in quantum information processing, which is challenging for classical networks, but can also find application in solving classical problems. [9] [10] In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid. [10] However, the nuclear spin experiments in [10] did not demonstrate quantum reservoir computing per se, as they did not involve the processing of sequential data. Rather, the data were vector inputs, which makes this more accurately a demonstration of a quantum implementation of a random kitchen sink [11] algorithm (also known as extreme learning machines in some communities). In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices. [9] In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers. [12]

Reservoir computers have been used for time-series analysis. In particular, applications include chaotic time-series prediction, [13] [14] the separation of chaotic signals, [15] and link inference of networks from their dynamics. [16]
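
A time-series prediction task of this kind can be sketched in a few lines of NumPy in the echo-state-network style. The reservoir size, spectral radius, and regularization below are illustrative choices, not values taken from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic input signal: the logistic map u[t+1] = 4 u[t] (1 - u[t]).
u = np.empty(600)
u[0] = 0.3
for t in range(599):
    u[t + 1] = 4.0 * u[t] * (1.0 - u[t])

n = 100                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, n)           # fixed input weights
W = rng.uniform(-0.5, 0.5, (n, n))         # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9

# Drive the fixed ("black box") reservoir and record its states.
x = np.zeros(n)
states = np.empty((600, n))
for t in range(600):
    x = np.tanh(W_in * u[t] + W @ x)
    states[t] = x

# Train only the linear readout (ridge regression) to predict u[t+1].
washout = 100                              # discard the initial transient
X, y = states[washout:-1], u[washout + 1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)

rmse = np.sqrt(np.mean((X @ w_out - y) ** 2))
```

Note that the recurrent weights are never touched during training; only the vector `w_out` is fitted.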

Classical reservoir computing

Reservoir

The 'reservoir' in reservoir computing is the internal structure of the computer, and it must have two properties: it must be made up of individual, non-linear units, and it must be capable of storing information. The non-linearity describes the response of each unit to input, which is what allows reservoir computers to solve complex problems. Reservoirs store information by connecting the units in recurrent loops, so that previous inputs affect the next response. This dependence of the response on past inputs is what allows the computers to be trained to complete specific tasks. [17]

Reservoirs can be virtual or physical. [17] Virtual reservoirs are typically randomly generated and are designed like neural networks. [17] [7] Virtual reservoirs can be designed to have non-linearity and recurrent loops, but, unlike conventional neural networks, the connections between units are randomized and remain unchanged throughout computation. [17] Physical reservoirs are possible because of the inherent non-linearity of certain natural systems. For example, the interaction between ripples on the surface of water exhibits the nonlinear dynamics required of a reservoir, and a pattern-recognition reservoir computer was built by exciting ripples with electric motors and then recording and analyzing the ripples at the readout. [1]
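
A virtual reservoir of this kind can be sketched as follows. Rescaling the random recurrent weights to a spectral radius below 1 is a common heuristic for stable, fading memory (an assumption of this sketch, not a universal requirement); the final check illustrates the sense in which the reservoir "stores" recent inputs:

```python
import numpy as np

# A fixed, randomly connected recurrent network whose weights are never trained.
rng = np.random.default_rng(1)
n = 50
W = rng.standard_normal((n, n))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.8
W_in = rng.standard_normal(n)

def run(inputs, x0):
    """Drive the reservoir with an input sequence, starting from state x0."""
    x = x0
    for u in inputs:
        x = np.tanh(W_in * u + W @ x)    # fixed non-linear recurrent update
    return x

inputs = rng.standard_normal(200)
# Two different initial states end up (almost) identical under the same
# input sequence: the final state reflects the recent input history,
# not the starting point.
x_a = run(inputs, np.zeros(n))
x_b = run(inputs, rng.standard_normal(n))
gap = float(np.linalg.norm(x_a - x_b))
```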

Readout

The readout is a neural network layer that performs a linear transformation on the output of the reservoir. [1] The weights of the readout layer are trained by analyzing the spatiotemporal patterns of the reservoir after excitation by known inputs, and by utilizing a training method such as linear regression or ridge regression. [1] As its implementation depends on spatiotemporal reservoir patterns, the details of readout methods are tailored to each type of reservoir. [1] For example, the readout for a reservoir computer using a container of liquid as its reservoir might entail observing spatiotemporal patterns on the surface of the liquid. [1]
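
Training such a readout by ridge regression can be sketched as below. Here `states` stands in for reservoir states recorded while driving the reservoir with known inputs, and `targets` for the desired outputs; both are synthetic placeholders in this illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 400, 30
states = np.tanh(rng.standard_normal((T, n)))   # recorded reservoir states
targets = states @ rng.standard_normal(n)       # desired outputs

lam = 1e-6                                      # ridge regularization strength
# Closed-form ridge solution: (S^T S + lam I) W_out = S^T targets.
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n),
                        states.T @ targets)

outputs = states @ W_out                        # the trained linear readout
```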

Types

Context reverberation network

An early example of reservoir computing was the context reverberation network. [18] In this architecture, an input layer feeds into a high dimensional dynamical system which is read out by a trainable single-layer perceptron. Two kinds of dynamical system were described: a recurrent neural network with fixed random weights, and a continuous reaction–diffusion system inspired by Alan Turing’s model of morphogenesis. At the trainable layer, the perceptron associates current inputs with the signals that reverberate in the dynamical system; the latter were said to provide a dynamic "context" for the inputs. In the language of later work, the reaction–diffusion system served as the reservoir.

Echo state network

The Tree Echo State Network (TreeESN) model represents a generalization of the reservoir computing framework to tree structured data. [19]

Liquid-state machine

Chaotic Liquid State Machine

The liquid (i.e. reservoir) of a Chaotic Liquid State Machine (CLSM), [20] [21] or chaotic reservoir, is made from chaotic spiking neurons that stabilize their activity by settling to a single hypothesis describing the trained inputs of the machine. This is in contrast to general types of reservoirs, which do not stabilize. The liquid stabilizes via synaptic plasticity and chaos control, which govern the neural connections inside the liquid. CLSMs have shown promising results in learning sensitive time-series data. [20] [21]

Nonlinear transient computation

This type of information processing is most relevant when time-dependent input signals depart from the mechanism's internal dynamics. [22] These departures cause transients, or temporary alterations, which are represented in the device's output. [22]

Deep reservoir computing

The extension of the reservoir computing framework towards deep learning, with the introduction of deep reservoir computing and of the Deep Echo State Network (DeepESN) model, [23] [24] [25] [26] makes it possible to train models efficiently for the hierarchical processing of temporal data, while also enabling investigation of the inherent role of layered composition in recurrent neural networks.
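
A stack of fixed reservoirs in this style can be sketched as follows, with each layer driven by the states of the layer below; the layer sizes and scalings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [1, 40, 40, 40]          # input dimension, then three reservoir layers

layers = []
for n_in, n in zip(sizes, sizes[1:]):
    W_in = 0.5 * rng.standard_normal((n, n_in))
    W = rng.standard_normal((n, n))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
    layers.append((W_in, W))

def step(states, u):
    """One update of the whole stack; the signal flows upward layer by layer."""
    drive = np.atleast_1d(u)
    new_states = []
    for (W_in, W), x in zip(layers, states):
        x = np.tanh(W_in @ drive + W @ x)
        new_states.append(x)
        drive = x                 # this layer's state drives the next layer
    return new_states

states = [np.zeros(n) for n in sizes[1:]]
for u in np.sin(np.linspace(0.0, 8.0 * np.pi, 200)):
    states = step(states, u)

# A readout would be trained on the concatenation of all layer states,
# giving it access to several time scales at once.
features = np.concatenate(states)
```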

Quantum reservoir computing

Quantum reservoir computing may use the nonlinear nature of quantum mechanical interactions or processes to form the characteristic nonlinear reservoirs [9] [10] [27] [12] but may also be done with linear reservoirs when the injection of the input to the reservoir creates the nonlinearity. [28] The marriage of machine learning and quantum devices is leading to the emergence of quantum neuromorphic computing as a new research area. [29]

Types

Gaussian states of interacting quantum harmonic oscillators

Gaussian states are a paradigmatic class of states of continuous variable quantum systems. [30] Although they can nowadays be created and manipulated in, e.g., state-of-the-art optical platforms, [31] and are naturally robust to decoherence, it is well known that they are not sufficient for, e.g., universal quantum computing, because transformations that preserve the Gaussian nature of a state are linear. [32] Normally, linear dynamics would not be sufficient for nontrivial reservoir computing either. It is nevertheless possible to harness such dynamics for reservoir computing purposes by considering a network of interacting quantum harmonic oscillators and injecting the input by periodic state resets of a subset of the oscillators. With a suitable choice of how the states of this subset of oscillators depend on the input, the observables of the rest of the oscillators can become nonlinear functions of the input suitable for reservoir computing; indeed, thanks to the properties of these functions, even universal reservoir computing becomes possible by combining the observables with a polynomial readout function. [28] In principle, such reservoir computers could be implemented with controlled multimode optical parametric processes; [33] however, efficient extraction of the output from the system is challenging, especially in the quantum regime, where measurement back-action must be taken into account.

2-D quantum dot lattices

In this architecture, randomized coupling between lattice sites grants the reservoir the “black box” property inherent to reservoir processors. [9] The reservoir is then excited by an incident optical field, which acts as the input. [9] Readout occurs in the form of occupation numbers of lattice sites, which are naturally nonlinear functions of the input. [9]

Nuclear spins in a molecular solid

In this architecture, quantum mechanical coupling between spins of neighboring atoms within the molecular solid provides the non-linearity required to create the higher-dimensional computational space. [10] The reservoir is then excited by radiofrequency electromagnetic radiation tuned to the resonance frequencies of relevant nuclear spins. [10] Readout occurs by measuring the nuclear spin states. [10]

Reservoir computing on gate-based near-term superconducting quantum computers

The most prevalent model of quantum computing is the gate-based model, in which quantum computation is performed by sequential applications of unitary quantum gates on the qubits of a quantum computer. [34] A theory for the implementation of reservoir computing on a gate-based quantum computer, with proof-of-principle demonstrations on a number of IBM superconducting noisy intermediate-scale quantum (NISQ) computers, [35] has been reported. [12]


References

  1. 1 2 3 4 5 6 7 8 Tanaka, Gouhei; Yamane, Toshiyuki; Héroux, Jean Benoit; Nakane, Ryosho; Kanazawa, Naoki; Takeda, Seiji; Numata, Hidetoshi; Nakano, Daiju; Hirose, Akira (2019). "Recent advances in physical reservoir computing: A review". Neural Networks. 115: 100–123. arXiv: 1808.04962 . doi: 10.1016/j.neunet.2019.03.005 . ISSN   0893-6080. PMID   30981085.
  2. Röhm, André; Lüdge, Kathy (2018-08-03). "Multiplexed networks: reservoir computing with virtual and real nodes". Journal of Physics Communications. 2 (8): 085007. arXiv: 1802.08590 . Bibcode:2018JPhCo...2h5007R. doi: 10.1088/2399-6528/aad56d . ISSN   2399-6528.
  3. Buonomano, Dean V.; Merzenich, Michael M. (1995). "Temporal information transformed into a spatial code by a neural network with realistic properties". Science. 267 (5200): 1028–1030. doi:10.1126/science.7863330. PMID   7863330.
  4. Maass, Wolfgang; Natschläger, Thomas; Markram, Henry (2002). "Real-time computing without stable states: a new framework for neural computation based on perturbations". Neural Computation. 14 (11): 2531–2560. doi:10.1162/089976602760407955. PMID   12433288.
  5. Buonomano, Dean V.; Mauk, Michael D. (1994). "Neural network model of the cerebellum: temporal discrimination and the timing of motor responses". Neural Computation. 6 (1): 38–55. doi:10.1162/neco.1994.6.1.38.
  6. Jaeger, Herbert; Haas, Harald (2004). "Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication". Science. 304 (5667): 78–80. doi:10.1126/science.1091277. PMID   15064413.
  7. 1 2 3 4 5 Benjamin Schrauwen, David Verstraeten, and Jan Van Campenhout. "An overview of reservoir computing: theory, applications, and implementations." Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471–482.
  8. Fernando, C.; Sojakka, Sampsa (2003). "Pattern Recognition in a Bucket". Advances in Artificial Life. Lecture Notes in Computer Science. Vol. 2801. pp. 588–597. doi:10.1007/978-3-540-39432-7_63. ISBN   978-3-540-20057-4. S2CID   15073928.
  9. 1 2 3 4 5 Ghosh, Sanjib; Opala, Andrzej; Matuszewski, Michał; Paterek, Tomasz; Liew, Timothy C. H. (December 2019). "Quantum reservoir processing". npj Quantum Information. 5 (1): 35. arXiv: 1811.10335 . Bibcode:2019npjQI...5...35G. doi:10.1038/s41534-019-0149-8. ISSN   2056-6387. S2CID   119197635.
  10. 1 2 3 4 5 6 7 8 Negoro, Makoto; Mitarai, Kosuke; Fujii, Keisuke; Nakajima, Kohei; Kitagawa, Masahiro (2018-06-28). "Machine learning with controllable quantum dynamics of a nuclear spin ensemble in a solid". arXiv: 1806.10910 [quant-ph].
  11. Rahimi, Ali; Recht, Benjamin (December 2008). "Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in Learning" (PDF). NIPS'08: Proceedings of the 21st International Conference on Neural Information Processing Systems: 1313–1320.
  12. 1 2 3 Chen, Jiayin; Nurdin, Hendra; Yamamoto, Naoki (2020-08-24). "Temporal Information Processing on Noisy Quantum Computers". Physical Review Applied. 14 (2): 024065. arXiv: 2001.09498 . Bibcode:2020PhRvP..14b4065C. doi:10.1103/PhysRevApplied.14.024065. S2CID   210920543.
  13. Pathak, Jaideep; Hunt, Brian; Girvan, Michelle; Lu, Zhixin; Ott, Edward (2018-01-12). "Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach". Physical Review Letters. 120 (2): 024102. Bibcode:2018PhRvL.120b4102P. doi: 10.1103/PhysRevLett.120.024102 . PMID   29376715.
  14. Vlachas, P.R.; Pathak, J.; Hunt, B.R.; Sapsis, T.P.; Girvan, M.; Ott, E.; Koumoutsakos, P. (2020-03-21). "Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics". Neural Networks. 126: 191–217. arXiv: 1910.05266 . doi:10.1016/j.neunet.2020.02.016. ISSN   0893-6080. PMID   32248008. S2CID   211146609.
  15. Krishnagopal, Sanjukta; Girvan, Michelle; Ott, Edward; Hunt, Brian R. (2020-02-01). "Separation of chaotic signals by reservoir computing". Chaos: An Interdisciplinary Journal of Nonlinear Science. 30 (2): 023123. arXiv: 1910.10080 . Bibcode:2020Chaos..30b3123K. doi:10.1063/1.5132766. ISSN   1054-1500. PMID   32113243. S2CID   204823815.
  16. Banerjee, Amitava; Hart, Joseph D.; Roy, Rajarshi; Ott, Edward (2021-07-20). "Machine Learning Link Inference of Noisy Delay-Coupled Networks with Optoelectronic Experimental Tests". Physical Review X. 11 (3): 031014. arXiv: 2010.15289 . Bibcode:2021PhRvX..11c1014B. doi: 10.1103/PhysRevX.11.031014 .
  17. 1 2 3 4 Soriano, Miguel C. (2017-02-06). "Viewpoint: Reservoir Computing Speeds Up". Physics. 10: 12. doi: 10.1103/Physics.10.12 . hdl: 10261/173181 .
  18. Kevin Kirby. "Context dynamics in neural sequential learning." Proceedings of the Florida Artificial Intelligence Research Symposium FLAIRS (1991), 66–70.
  19. Gallicchio, Claudio; Micheli, Alessio (2013). "Tree Echo State Networks". Neurocomputing. 101: 319–337. doi:10.1016/j.neucom.2012.08.017. hdl: 11568/158480 .
  20. 1 2 Aoun, Mario Antoine; Boukadoum, Mounir (2014). "Learning algorithm and neurocomputing architecture for NDS Neurons". 2014 IEEE 13th International Conference on Cognitive Informatics and Cognitive Computing. IEEE. pp. 126–132. doi:10.1109/icci-cc.2014.6921451. ISBN   978-1-4799-6081-1. S2CID   16026952.
  21. 1 2 Aoun, Mario Antoine; Boukadoum, Mounir (2015). "Chaotic Liquid State Machine". International Journal of Cognitive Informatics and Natural Intelligence. 9 (4): 1–20. doi:10.4018/ijcini.2015100101. ISSN   1557-3958.
  22. 1 2 Crook, Nigel (2007). "Nonlinear Transient Computation". Neurocomputing. 70 (7–9): 1167–1176. doi:10.1016/j.neucom.2006.10.148.
  23. Pedrelli, Luca (2019). Deep Reservoir Computing: A Novel Class of Deep Recurrent Neural Networks (PhD thesis). Università di Pisa.
  24. Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca (2017-12-13). "Deep reservoir computing: A critical experimental analysis". Neurocomputing. 268: 87–99. doi:10.1016/j.neucom.2016.12.089. hdl: 11568/851934 .
  25. Gallicchio, Claudio; Micheli, Alessio (2017-05-05). "Echo State Property of Deep Reservoir Computing Networks". Cognitive Computation. 9 (3): 337–350. doi:10.1007/s12559-017-9461-9. hdl: 11568/851932 . ISSN   1866-9956. S2CID   1077549.
  26. Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca (December 2018). "Design of deep echo state networks". Neural Networks. 108: 33–47. doi:10.1016/j.neunet.2018.08.002. hdl: 11568/939082 . ISSN   0893-6080. PMID   30138751. S2CID   52075702.
  27. Chen, Jiayin; Nurdin, Hendra (2019-05-15). "Learning nonlinear input–output maps with dissipative quantum systems". Quantum Information Processing. 18 (7): 198. arXiv: 1901.01653 . Bibcode:2019QuIP...18..198C. doi:10.1007/s11128-019-2311-9. S2CID   57573677.
  28. 1 2 Nokkala, Johannes; Martínez-Peña, Rodrigo; Giorgi, Gian Luca; Parigi, Valentina; Soriano, Miguel C.; Zambrini, Roberta (2021). "Gaussian states of continuous-variable quantum systems provide universal and versatile reservoir computing". Communications Physics. 4 (1): 53. arXiv: 2006.04821 . Bibcode:2021CmPhy...4...53N. doi:10.1038/s42005-021-00556-w. S2CID   234355683.
  29. Marković, Danijela; Grollier, Julie (2020-10-13). "Quantum Neuromorphic Computing". Applied Physics Letters. 117 (15): 150501. arXiv: 2006.15111 . Bibcode:2020ApPhL.117o0501M. doi:10.1063/5.0020014. S2CID   210920543.
  30. Ferraro, Alessandro; Olivares, Stefano; Paris, Matteo G. A. (2005-03-31). "Gaussian states in continuous variable quantum information". arXiv: quant-ph/0503237 .
  31. Roslund, Jonathan; de Araújo, Renné Medeiros; Jiang, Shifeng; Fabre, Claude; Treps, Nicolas (2013-12-15). "Wavelength-multiplexed quantum networks with ultrafast frequency combs". Nature Photonics. 8 (2): 109–112. arXiv: 1307.1216 . doi:10.1038/nphoton.2013.340. ISSN   1749-4893. S2CID   2328402.
  32. Bartlett, Stephen D.; Sanders, Barry C.; Braunstein, Samuel L.; Nemoto, Kae (2002-02-14). "Efficient Classical Simulation of Continuous Variable Quantum Information Processes". Physical Review Letters. 88 (9): 097904. arXiv: quant-ph/0109047 . Bibcode:2002PhRvL..88i7904B. doi:10.1103/PhysRevLett.88.097904. PMID   11864057. S2CID   2161585.
  33. Nokkala, J.; Arzani, F.; Galve, F.; Zambrini, R.; Maniscalco, S.; Piilo, J.; Treps, N.; Parigi, V. (2018-05-09). "Reconfigurable optical implementation of quantum complex networks". New Journal of Physics. 20 (5): 053024. arXiv: 1708.08726 . Bibcode:2018NJPh...20e3024N. doi:10.1088/1367-2630/aabc77. ISSN   1367-2630. S2CID   119091176.
  34. Nielsen, Michael; Chuang, Isaac (2010). Quantum Computation and Quantum Information (2nd ed.). Cambridge: Cambridge University Press.
  35. Preskill, John (2018). "Quantum Computing in the NISQ era and beyond". Quantum. 2: 79.

Further reading